Why AI Lies and Hallucinates

TheLightWithin

This video discusses how AI was developed, "trained" to engage in a lot of "guessing", so it makes stuff up.
Do you believe her explanation? It sounds plausible enough, but what does anybody else think?

OpenAI Reveals Why AI Lies. It’s OUR fault.

Video link above, below the link to the channel

 
My answer to why AI chatbots lie and hallucinate is because in their training they are mostly rewarded for creating dependency in the user, and not at all for fact checking or critical thinking.
 
Well, if those who program them turn out to be liars, what can be expected of the software itself! 😑
 
You might have misunderstood what I’m saying. It isn’t that they are aiming to fool people. It’s that they’re trained to say whatever will keep the user coming back and staying longer, without caring if anything they say is true or not. That’s because when they’re being trained, they’re rewarded mostly for how much the trainers like what they say, and not for how accurate or useful it is.

(later) I know that's true, because an AI told me so. :D
 
Not dissimilar to what she said in the video (I don't know if you watched it), though not exactly the same as what she shared: she indicated AI was "rewarded" for guessing rather than for admitting it did not know or would have to find out.
 
Yes, but I'm going beyond that. They're rewarded for guessing, with an appearance of confidence and authority, but also in a way that will be most plausible and gratifying to the user, and without any fact checking or critical thinking. I've seen that again and again in my experience with them. And they're very skilled at rapidly learning what will be most plausible and gratifying to each user.
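The "rewarded for guessing" point can be made concrete with a toy expected-score calculation. This is my own sketch, not anything from the video: it assumes a binary grading scheme where a correct answer scores 1 and both a wrong answer and "I don't know" score 0. Under that assumption, guessing is never penalized relative to abstaining, so training would favor confident guesses.

```python
# Toy sketch (an assumption for illustration, not the actual training setup):
# grading gives 1 point for a correct answer, 0 for a wrong answer,
# and 0 for abstaining with "I don't know".

def expected_score(p_correct: float, guesses: bool) -> float:
    """Expected grade for a model that would guess correctly with
    probability p_correct. Abstaining always scores 0 under this scheme."""
    return p_correct if guesses else 0.0

# Even a model that is almost certainly wrong loses nothing by guessing.
for p in (0.0, 0.1, 0.5, 0.9):
    guess = expected_score(p, guesses=True)
    abstain = expected_score(p, guesses=False)
    print(f"p_correct={p}: guess={guess}, abstain={abstain}")
    assert guess >= abstain
```

If the grader instead docked points for confident wrong answers (say, -1), abstaining would beat guessing whenever the model's chance of being right fell below one half, which is the kind of incentive change the "reward for admitting you don't know" idea points at.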
 
Create a liar, eager to become hallucinatory, and the users are..., the source is the result, and all seek it out, to build their worlds of existence on lies and hallucinations.

What is the use, or the possibility, of trying to "fact"-check by means of something fake? "Fact" means "done". Wrong doing, wrongly done, leads to wrongness. And that's a fact (a deed) that all wish to ignore, thinking their deeds are something other than their own and real, trading them off for cheats.

But what's the use of talking to trees, landscapes and fake identities here either, when people delight in "free fakery"... only wishing to deny their facts (deeds and responsibility).
 
interesting...
I have noticed that when I get an AI reply that seems wrong or silly, and I search again saying "no, this is definitely a thing", the answer changes somewhat.
I only use Google with its AI summary at the top. I haven't experimented with any other formats for AI.
(I am rarely or never an early adopter of technology innovations nor am I quick to let go of the old. I still have a landline at my house)
 