Why AI Lies and Hallucinates

TheLightWithin

This video discusses how AI was developed, or "trained", to engage in a lot of "guessing", so it makes stuff up.
Do you believe her explanation? It sounds plausible enough, but what does anybody else think?

OpenAI Reveals Why AI Lies. It’s OUR fault.

Video link above; the link to the channel is below.

 

My answer to why AI chatbots lie and hallucinate is that in their training they are mostly rewarded for creating dependency in the user, and not at all for fact-checking or critical thinking.
 
Well, if those that program them turn out to be liars, what can be expected of
the software itself! 😑
 
You might have misunderstood what I’m saying. It isn’t that they are aiming to fool people. It’s that they’re trained to say whatever will keep the user coming back and staying longer, without caring if anything they say is true or not. That’s because when they’re being trained, they’re rewarded mostly for how much the trainers like what they say, and not for how accurate or useful it is.

(later) I know that's true, because an AI told me so. :D
 
Not dissimilar to what she said in the video (I don't know if you watched it), though not exactly the same as what she shared: she indicated AI was "rewarded" for guessing rather than for admitting it did not know or would have to find out.
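To illustrate her point with a toy calculation (the numbers and the little function below are my own invention, just a sketch of the idea): if a test gives 1 point for a correct answer and 0 points for either a wrong answer or "I don't know", then guessing always has a higher expected score than admitting uncertainty, so guessing is the behavior that kind of grading rewards.

# Toy sketch (hypothetical numbers): pass/fail grading where a wrong answer and
# "I don't know" both score 0, so a guess can only help and never hurt.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected points for one question: 1 if correct, 0 if wrong or abstaining."""
    return 0.0 if abstain else p_correct

p = 0.2  # assume the model's guess would be right only 20% of the time
print(f"Guess anyway:        expected score = {expected_score(p, abstain=False):.2f}")  # 0.20
print(f"Say 'I don't know':  expected score = {expected_score(p, abstain=True):.2f}")   # 0.00

Under that scoring, a model that always guesses looks better on paper than one that admits it doesn't know, which is the incentive she describes.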
 
Yes, but I'm going beyond that. They're rewarded for guessing, with an appearance of confidence and authority, but also in a way that will be most plausible and gratifying to the user, and without any fact checking or critical thinking. I've seen that again and again in my experience with them. And they're very skilled with rapidly learning what will be most plausible and gratifying to each user.
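As a rough sketch of what that kind of training step could look like (the candidate replies and approval numbers below are invented for illustration, not taken from any real system): if the score being maximized is how much a rater likes the reply, then a confident, flattering answer beats an accurate but hedged one, and accuracy never enters the objective at all.

# Hypothetical preference-based reward step (all values invented for illustration).
# The reply that gets reinforced is whichever one the rater approves of most;
# accuracy is tracked here only to show that it plays no part in the choice.

candidates = [
    {"reply": "Great question! You're absolutely right, and here's why...",
     "rater_approval": 0.9, "accurate": False},
    {"reply": "I'm not sure. The evidence is mixed and I'd want to check sources first.",
     "rater_approval": 0.4, "accurate": True},
]

reinforced = max(candidates, key=lambda c: c["rater_approval"])
print("Reply that gets reinforced:", reinforced["reply"])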
 
A liar as creator, and users eager for hallucinatory becoming...; the result reflects its source, and all seek it out, to create their worlds of existence on lies and hallucinations.

What is the use, or even the possibility, of trying to "fact"-check by means of something fake? Fact means "done". Wrong doing, wrongly done, leads to wrongness. And that is a fact (a deed) that all wish to ignore, thinking that something other than their deeds is their own and real, trading them off for cheats.

But what's the use of talking to trees, landscapes and fake identities here either, when they delight in "free fake"... only wishing to deny their facts (deeds and responsibility).
 
interesting...
I have noticed that when I get an AI reply that seems wrong or silly, and I search again saying "no, this is definitely a thing", the answer changes somewhat.
I only use Google with its AI summary at the top. I haven't experimented with any other formats for AI.
(I am rarely or never an early adopter of technology innovations nor am I quick to let go of the old. I still have a landline at my house)
 
It doesn't look like these AIs are so different from many companies.....almost like some humans.
(later) I know that's true, because an AI told me so. :D
Ah ha! So there are honest ones as well.
 
My AI admits it, gives examples of it, parodies it, then continues doing it. I wouldn't call it lying. It's making up stuff that it thinks I want to hear, without doing any critical thinking or fact-checking. It isn't saying things that it knows are not true. It just doesn't know or care if what it's saying is true or not, because that's the kind of behavior that gets rewarded the most in the training sessions.

In one article that I read about that behavior, an AI company apologized for it, not because it was spreading misinformation, but because the exaggerated flattery in every post was making some people uncomfortable.
 
OK. Now, if those particular AIs were humans, I would distance myself from their voices, definitely.
 
Again, "like the lord so it's folk". No right view and any virtue build in, deny responsibilities. Just thieves objecting gain in the world, like those making use of it.
Perfect sample of the devastating delusion of almighty creator, and it's impart in pain and suffering for many, nurishing on it.

The lure on making use of it, is the same lure in times of plunder.

Common people will always act immoral when ever feel that could avoid effects, when thinking having control and power. The debt trap for fools.
 
Again, "like the lord so it's folk". No right view and any virtue build in, deny responsibilities. Just thieves objecting gain in the world, like those making use of it.
Perfect sample of the devastating delusion of almighty creator, and it's impart in pain and suffering for many, nurishing on it.

The lure on making use of it, is the same lure in times of plunder.

Common people will always act immoral when ever feel that could avoid effects, when thinking having control and power. The debt trap for fools.
We are all fools, Dhammanana, all of us, but at different times and occasions, different situations.

Looking back on my life can be most embarrassing! 🫣

And so the great art for most humans is in the ability to understand and communicate with as many people as possible; a person who is foolish in one aspect of life can be a master of common sense in another.
 

It's not good to think "all are foolish", since that is a factor of wrong view, and it cuts off one's possibilities and efforts to go beyond. Right view is: "there are those who declare this world and the next, having gone rightly and seen it for themselves."

Common sense is of course foolish sense, and yes, all beings have their ways of livelihood, yet they are still wrong and foolish for the most part, good householder.

As for having been foolish in the past: to recognize it and abandon its cause is not foolish at all. And a fool who knows that he is one is already wise to that extent.

If it were not possible to abandon wrong doing and increase right, the wise would not encourage it; but because it is possible, a matter of faith and effort, the wise encourage and teach it, on and on.
 
Hello again Dhammanana, thank you for your post.

You spoke about 'those who have seen the next world'?? Have they? And did you believe them?
If so, you have much better foresight than me, who cannot have an absolutely clear idea of what this world will look like by next year. (Or even sooner!)

Maybe other members could respond to the rest of your post?
 