Carl-Richard

Conflating knowledge with intelligence

109 posts in this topic

Posted (edited)

21 minutes ago, Nemra said:

@Carl-Richard

Me:

Give me AI hallucination rates

Gemini:

AI hallucination rates vary depending on the model, the complexity of the task, and the quality of the training data. However, recent studies have shown that even the most advanced models can still hallucinate a significant portion of the time.

Here are some estimates of AI hallucination rates:

  • OpenAI's GPT-4: Has the lowest hallucination rate among the tested models, estimated at around 3%.
  • Other large language models: Hallucination rates can range from 3% to 10% of responses.
  • Chatbots: Analysts estimate that chatbots hallucinate as much as 27% of the time, with factual errors present in 46% of their responses.

It's important to note that these are just estimates, and the actual hallucination rate can vary depending on the specific use case. However, it's clear that AI hallucinations are a significant issue that needs to be addressed in order to improve the reliability and trustworthiness of AI systems.

Ironic that you asked an AI for that. Those are general estimates, and apparently they're not very good: 
https://gleen.ai/blog/the-3-percent-hallucination-fallacy/

The AI did not give you that information ;)

Edited by Carl-Richard

Intrinsic joy is revealed in the marriage of meaning and being.


Posted (edited)

2 hours ago, Leo Gura said:

AI is more truthful than humans.

You are definitely wrong.

 

The level of AI glazing on this forum is getting crazy

Edited by zurew

5 minutes ago, Carl-Richard said:

Those are general estimates, and apparently they're not very good

Well, it warned me.

You have to figure out what and how to ask. 😉

As Gemini told me:

"It's important to note that these are just estimates, and the actual hallucination rate can vary depending on the specific use case. However, it's clear that AI hallucinations are a significant issue that needs to be addressed in order to improve the reliability and trustworthiness of AI systems."

To be clear, I'm not saying it's not a problem.

13 minutes ago, Carl-Richard said:

Ironic that you asked an AI for that.

I don't trust humans. 😋


Posted (edited)

7 hours ago, Nemra said:

Well, it warned me.

You have to figure out what and how to ask. 😉

I know how to use an AI. I was one of the first ones to use it 😉

Edited by Carl-Richard


@Carl-Richard

4 hours ago, Carl-Richard said:

At this stage, at least more than an AI. Look up the hallucination rates in AI language models. They're staggering. And again, you have more ways to uncover inaccuracies or untruthfulness in humans. Humans generally care about being truthful — AIs don't (they simply happen to be generally truthful if they're coded and trained well). And when humans aren't being truthful, you have many ways to uncover the untruthfulness. A person might stumble in their words, make awkward pauses, blush, avert their gaze, change their posture in a weird way, start fidgeting, become restless or uneasy, become blunt or defensive, change their vocal tone, become emotional or insecure, etc. An AI doesn't do that. I already mentioned markers like variations in fluency and verbal richness (untruthfulness often decreases these things). AI doesn't have such variations. Additionally, you can check which biases and incentives the person has (e.g. ideological affiliations, professional affiliations, economic incentives), and you can judge their character and past actions (e.g. positions of authority which require trust, general reputation, times caught lying). AI doesn't have that (except for past actions).

AKA body language analysis, AKA social/interpersonal intelligence.


Posted (edited)

3 hours ago, Danioover9000 said:

@Carl-Richard

AKA body language analysis, AKA social/interpersonal intelligence.

Well, there is "body language analysis", and then there is body language analysis. Two very different things.

Edited by Carl-Richard


Posted (edited)

10 hours ago, Nemra said:

Can you trust humans to be accurate?

You can trust a human to know what it’s like to be a human… the joy, the sorrow; the feeling of being in love for the first time; what it’s like to belong and to be a misfit; the wonder that comes from gazing into the night sky; the sensation of hot summer wind on your skin; how it feels to witness injustice and the fire it lights up within oneself to do something about it, and perhaps not knowing what to do about it at all; the utter importance of a hug… and any "intelligence" that doesn’t have these experiences as the basis of its agency is not ever to be fully trusted.

Edited by Nilsi

“Did you ever say Yes to a single joy? O my friends, then you said Yes to all woe as well. All things are chained and entwined together, all things are in love; if ever you wanted one moment twice, if ever you said: ‘You please me, happiness! Abide, moment!’ then you wanted everything to return!” - Friedrich Nietzsche
 


Posted (edited)

@zurew memorization would only get you so far in Chess. You could memorize previous games or even a pre-thought-out attack - but Chess involves being in the present moment with that particular game. It wouldn't use memorization at all. It's a different aspect.

Edited by Inliytened1

 

Wisdom.  Truth.  Love.


Posted (edited)

On 5/21/2024 at 5:54 AM, Carl-Richard said:

This assumes the humans were truthful in how they conducted and reported that study.

But I have not tested AI on the topic of law, so perhaps it makes more mistakes there. But on the stuff I did test it on, it made no mistakes. In fact, I have only seen Claude 3 make one mistake. But I see humans making dozens of mistakes.

If you go to NYC and just interview humans off the street, and ask those same questions to Claude 3, you will see that Claude 3 gives more intelligent answers than 90% of humans.

The only people who can really outshine Claude 3 would be advanced polymaths like a Ken Wilber or a Daniel Schmachtenberger.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


Posted (edited)

32 minutes ago, Leo Gura said:

This assumes the humans were truthful in how they conducted and reported that study.

But I have not tested AI on the topic of law, so perhaps it makes more mistakes there. But on the stuff I did test it on, it made no mistakes. In fact, I have only seen Claude 3 make one mistake.

Then the question is: are you trusting yourself as a flawed human to be able to identify the mistakes? Would it be wise to be generally careful with things that you know can possibly deceive you without you knowing it?

I sometimes use Perplexity, an AI that provides the sources it used to create an answer. If you actually cross-check the sources, the general rule is that you'll find a lot of factual inaccuracies. It's the best way to use an AI imo, but if you have to cross-check the sources all the time, it basically works just like a sophisticated Google Search. It's useful for that. Regardless, if you use AIs like Perplexity, it becomes even clearer how inaccurate AIs can be.
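The cross-checking workflow described above can be sketched in a few lines. This is a hypothetical illustration, not Perplexity's actual API: the source texts are inlined as plain strings (a real check would fetch the cited URLs), and a claimed quote counts as "supported" only if it appears verbatim in the cited source.

```python
# Hypothetical sketch of cross-checking an AI answer against its cited
# sources. All source ids, texts, and snippets below are made up.

sources = {
    "src1": "GPT-4 hallucinated in roughly 3% of the summarization tasks tested.",
}

# (source id, snippet the AI attributed to that source)
claims = [
    ("src1", "roughly 3% of the summarization tasks"),  # appears verbatim
    ("src1", "never hallucinates"),                     # fabricated
]

# A claim passes only if the quoted snippet occurs in the cited source text.
results = [(src, quote, quote in sources[src]) for src, quote in claims]

for src, quote, ok in results:
    print(f"{src}: {'supported' if ok else 'NOT FOUND'} - {quote!r}")
```

Verbatim matching is deliberately strict; a paraphrased but accurate claim would be flagged too, which is why this is only a first filter before reading the source yourself.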

Edited by Carl-Richard


Posted (edited)

8 hours ago, Inliytened1 said:

@zurew memorization would only get you so far in Chess. You could memorize previous games or even a pre-thought-out attack - but Chess involves being in the present moment with that particular game. It wouldn't use memorization at all. It's a different aspect.

I meant actually infinite memory, so knowing all the possible chess games that can be played, and within those, all the possible moves that can be played.

The reason I brought that up is that it seems to me both of these individuals would have exactly the same chance of winning: a person with infinite memory going against a person with infinite intelligence (in the context of chess, of course).
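The equivalence zurew describes can be illustrated in a toy game: in a finite perfect-information game, a complete table of every position ("infinite memory") and exhaustive search ("intelligence") recommend equally strong play. A minimal sketch using simple Nim (take 1-3 stones, the player who takes the last stone wins) rather than chess, whose state space is intractable:

```python
from functools import lru_cache

# Toy illustration: lookup of all positions vs. on-the-fly search agree.
# Game: simple Nim - take 1-3 stones, taking the last stone wins.

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win, found by search."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

# "Infinite memory": precompute the outcome of every reachable position.
table = {n: wins(n) for n in range(101)}

# Memory and search agree on every position, so neither player has an edge.
assert all(table[n] == wins(n) for n in range(101))

# Known theory: positions divisible by 4 are losses for the player to move.
assert [n for n in range(12) if not table[n]] == [0, 4, 8]
```

Chess is the same in principle (it is finite), which is why perfect memory of the full game tree and perfect calculation would indeed be indistinguishable over the board; only the astronomical size of the tree makes the distinction matter in practice.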

Edited by zurew


Posted (edited)

On 21.5.2024 at 0:06 AM, Nemra said:

Use an AI.

I'm using AI to better understand you all. 😁

I just thought about this suggestion (of using AI to summarize a text that is worded clumsily and hard to understand). Will the AI ever respond with "this text is incomprehensible and does not make any sense", or will it always give you an apparently coherent summary that doesn't necessarily reflect the text at all?

Edited by Carl-Richard

5 minutes ago, Carl-Richard said:

or will it always give you an apparently coherent summary which doesn't necessarily reflect the text at all?

I would vote for this one for sure.


Posted (edited)

25 minutes ago, Carl-Richard said:

Then the question is: are you trusting yourself as a flawed human to be able to identify the mistakes?

This goes both ways.

There is no way around trusting yourself. If you didn't trust yourself then you would have no position to argue for and you could not know that humans are any better than AI.

Edited by Leo Gura


Posted (edited)

I don't use AI so much for raw factual data but for its Tier 2 reasoning and helping me see new perspectives. For this function it's hard for it to be "wrong" because it's just giving me more perspective, which opens my mind and helps me think in fresh ways.

So there are qualitatively different uses for AI. If you use AI to help you write a screenplay, then it can't really be wrong. So the trick here is to leverage the AI's strengths, not its weaknesses. Hallucination may be a weakness for law research, but it is a strength for creative work.

They really need to make a slider to adjust its hallucination rate per conversation.

Edited by Leo Gura


Posted (edited)

On 23.5.2024 at 11:42 AM, Leo Gura said:

This goes both ways.

There is no way around trusting yourself. If you didn't trust yourself then you would have no position to argue for and you could not know that humans are any better than AI.

It's just an argument to be careful, and it's especially important to be careful when the probability of deception is shown to be high (69%-88% in some cases) and when you have fewer ways of uncovering the deception compared to humans (fluency markers, socioemotional markers, incentive markers, etc.). That is why you should use AIs like Perplexity; cut out the middleman :) The human sources that Perplexity references could certainly be faulty, but it doesn't help if it additionally misrepresents those sources. That's just more problems.

Edited by Carl-Richard

11 minutes ago, Leo Gura said:

I don't use AI so much for raw factual data but for its Tier 2 reasoning and helping me see new perspectives. For this function it's hard for it to be "wrong" because it's just giving me more perspective, which opens my mind and helps me think in fresh ways.

So there are qualitatively different uses for AI. If you use AI to help you write a screenplay, then it can't really be wrong. So the trick here is to leverage the AI's strengths, not its weaknesses. Hallucination may be a weakness for law research, but it is a strength for creative work.

Fair.



Posted (edited)

58 minutes ago, Leo Gura said:

This goes both ways.

There is no way around trusting yourself. If you didn't trust yourself then you would have no position to argue for and you could not know that humans are any better than AI.

This argument only works if we don't dive deeper into the semantics of what we mean by "trusting yourself".

Of course we need to take for granted a set of things to even begin epistemology, but all of those things are granted in the case of AI as well.

The difference is that there are tests that can be run on a human and on an AI, and those can give a picture of the differences (for example, being wrong about facts).

Edited by zurew


Posted (edited)

8 minutes ago, zurew said:

there are tests that can be run on a human and on AI

Yes, and I even suggested one such test: my NYC street test.

I dare you to find a single human on the internet who can produce as intelligent a political conversation as the one I had and posted with Claude.

Run the test.

Edited by Leo Gura


Posted (edited)

32 minutes ago, Leo Gura said:

I don't use AI so much for raw factual data but for its Tier 2 reasoning and helping me see new perspectives. For this function it's hard for it to be "wrong" because it's just giving me more perspective, which opens my mind and helps me think in fresh ways.

Regarding generating new perspectives - could the AI that you used generate different perspectives while maintaining the same set of facts? So let's say there are 10 facts and you want to explain those 10 facts using 4 different perspectives. Can the AI do that in a way where it includes all 10 facts in each of those 4 perspectives? (So you have set X that stands for the 10 facts; Perspective 1 includes set X, Perspective 2 includes set X, etc. The only thing that differentiates the perspectives is the explanation.)
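The constraint zurew describes can at least be checked mechanically. A rough sketch, where naive substring matching stands in for real entailment checking (all names and example texts here are made up):

```python
# Hypothetical fact-coverage check: given perspective texts and a fixed
# fact set X, verify each perspective mentions every fact in X.

def covers_all_facts(perspective: str, facts: list[str]) -> bool:
    """True if every fact string appears (case-insensitively) in the text."""
    text = perspective.lower()
    return all(fact.lower() in text for fact in facts)

facts = ["inflation rose 3%", "unemployment fell"]
p1 = "From a monetarist view: inflation rose 3% while unemployment fell."
p2 = "A Keynesian reading notes only that unemployment fell."

assert covers_all_facts(p1, facts)
assert not covers_all_facts(p2, facts)  # p2 drops one of the facts
```

In practice one would use a textual-entailment model rather than substring matching, since a perspective can restate a fact in entirely different words, but the shape of the test is the same: each perspective must entail every member of X.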

Edited by zurew

