Misinformation alerts from creators of AI chatbots

Longfellow

OpenAI (ChatGPT)

“Despite its capabilities, GPT‑4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable (it ‘hallucinates’ facts and makes reasoning errors).”

- https://openai.com/index/gpt-4-research/

“It can sometimes make simple reasoning errors … or be overly gullible in accepting obvious false statements from a user.”

“GPT‑4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake.”

- https://openai.com/index/gpt-4-research/

Microsoft (Copilot)

“While Copilot aims to respond with reliable sources where necessary, AI can make mistakes. It could potentially generate nonsensical content or fabricate content that might sound reasonable but is factually inaccurate. Even when drawing responses from high-authority web data, responses might misrepresent that content in a way that might not be completely accurate or reliable.”

- Transparency Note for Microsoft Copilot - Microsoft Support

Google (Gemini)

"Gemini will make mistakes. Even though it’s getting better every day, Gemini can provide inaccurate information."

- https://support.google.com/gemini/community-guide/309960705/how-to-use-gemini-like-an-expert

“Even when Gemini Apps show sources or related content, it can still get things wrong.”

- View related sources & double-check responses from Gemini Apps - Computer - Gemini Apps Help
 
It is the nature of the beast. It is NOT (in my mind) artificial intelligence...but an artificial intern: limited, eager to please the boss (you), and convinced that any answer is better than no answer.

You have to tell it to verify, you have to tell it to cite source material, you have to tell it to double-check, you have to tell it you don't like sugar in your coffee or day-old coffee.

Ya need to get better at instructions (prompting). Any AI has a knowledge cutoff date, so you need to tell it to search for newer information.

A one-sentence prompt is a poor prompt that will produce a waterfall of output in seconds, whereas composing a well-written, constrained prompt over ten minutes will make it think and research, and MAY give you a quality response. But it will take time, and you still need to check. Ask it if it understands your request, and ask it to write a better prompt for the results you are looking for.
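The advice above (tell it to verify, cite sources, double-check, and watch its knowledge cutoff) can be sketched as a small prompt-building helper. This is a minimal illustration in plain Python with no API calls; the instruction wording and the function name are my own, not an official template from any vendor.

```python
# A hypothetical helper that wraps a bare one-sentence question in the
# explicit constraints discussed above. The rule text is illustrative.

def constrained_prompt(question: str, cutoff_note: bool = True) -> str:
    """Turn a bare question into a constrained, verification-first prompt."""
    rules = [
        "Verify each factual claim before stating it.",
        "Cite the source material for every claim.",
        "Double-check any calculations or reasoning steps.",
        "If you are unsure, say so instead of guessing.",
    ]
    if cutoff_note:
        # Remind the model that its training data has a cutoff date.
        rules.append("Your training data has a cutoff date; search for "
                     "newer information if the topic may have changed.")
    body = "\n".join(f"- {r}" for r in rules)
    return ("Before answering, confirm that you understand the request.\n"
            "Follow these constraints:\n"
            f"{body}\n\n"
            f"Question: {question}")

print(constrained_prompt("What were the top AI policy changes this year?"))
```

Even with a prompt like this, the vendor statements quoted above still apply: the model can be confidently wrong, so the human check at the end is not optional.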

These guys do quality courses and intensive two-day trainings....
 
Did you know that chatbots are legally obligated to respond to subpoenas for whatever information you share with them?

So be careful what you ask them or what you use them for. If you ask ChatGPT the best way to rob a bank, and you are now a suspect in a bank robbery, ChatGPT has to hand over your conversation.
 
Where did you see this? Did you mean that chatbot companies are legally obligated to respond? All I could find was a YouTube video speculating about it, with no sources.

Subpoenas for Chatbot Conversations
www.youtube.com/watch?v=pi8KsModZf8

"Your interactions with chatbots, including the information you share and the requests you make, could be subject to court orders. Just like with Google, emails, and other digital communications, your interactions with chatbots might undergo scrutiny in legal proceedings."

Note that it says "could be subject," and "might undergo scrutiny," possibly based on these statements:

"Courts have the authority to issue subpoenas or orders mandating companies to furnish certain information."

"Chatbot companies must adhere to laws such as GDPR in Europe, CCPA in California, and HIPAA in the United States ..."
 