Misinformation alerts from creators of AI chatbots

Longfellow

OpenAI (ChatGPT)

“Despite its capabilities, GPT‑4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable (it ‘hallucinates’ facts and makes reasoning errors).”

- https://openai.com/index/gpt-4-research/

“It can sometimes make simple reasoning errors … or be overly gullible in accepting obvious false statements from a user.”

“GPT‑4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake.”

- https://openai.com/index/gpt-4-research/

Microsoft (Copilot)

“While Copilot aims to respond with reliable sources where necessary, AI can make mistakes. It could potentially generate nonsensical content or fabricate content that might sound reasonable but is factually inaccurate. Even when drawing responses from high-authority web data, responses might misrepresent that content in a way that might not be completely accurate or reliable.”

- Transparency Note for Microsoft Copilot - Microsoft Support

Google (Gemini)

"Gemini will make mistakes. Even though it’s getting better every day, Gemini can provide inaccurate information."

- https://support.google.com/gemini/community-guide/309960705/how-to-use-gemini-like-an-expert

“Even when Gemini Apps show sources or related content, it can still get things wrong.”

- View related sources & double-check responses from Gemini Apps - Computer - Gemini Apps Help
 
It is the nature of the beast. It is NOT (in my mind) artificial intelligence...but an artificial intern: limited, eager to please the boss (you), and convinced that any answer is better than no answer.

You have to tell it to verify, you have to tell it to cite source material, you have to tell it to double-check, you have to tell it you don't like sugar in your coffee or day-old coffee.

You need to get better at instructions (prompting). Any AI has a knowledge cutoff date; you need to tell it to search for new info.

One sentence is a poor prompt that will produce a waterfall of output in seconds, whereas composing a well-written, constrained prompt over 10 minutes will make it think and research, and MAY give you a quality response. But that takes time, and you still need to check. Ask it if it understands your request, and ask it to write a better prompt for the results you are looking for.
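The advice above — tell it to verify, cite sources, double-check, and flag anything past its knowledge cutoff — can be folded into a reusable prompt template so you don't retype it every time. A minimal sketch in Python; the function name and the wording of the instructions are my own illustration, not from any vendor's documentation:

```python
def build_constrained_prompt(question: str) -> str:
    """Wrap a bare question in explicit constraints so the model is
    told to verify, cite, and admit uncertainty instead of guessing."""
    instructions = [
        "Answer the question below.",
        "Cite a source (title or URL) for every factual claim.",
        "Double-check any numbers, names, or dates before including them.",
        "If the answer may postdate your knowledge cutoff, say so and "
        "suggest a web search instead of guessing.",
        "If you cannot verify something, say 'I don't know' rather "
        "than inventing an answer.",
    ]
    return "\n".join(instructions) + f"\n\nQuestion: {question}"

prompt = build_constrained_prompt("When was GPT-4 released?")
print(prompt)
```

You would paste the resulting text into the chat (or send it as the message in an API call). Even with constraints like these, the vendors' own caveats quoted above still apply: the output must be checked.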

These guys do quality courses and intensive 2-day trainings...
 
Did you know that chatbots are legally obligated to respond to subpoenas for whatever information you share with them?

So be careful what you ask them or what you use them for. So if you ask ChatGPT the best way to rob a bank, and you are now a suspect in a bank robbery, ChatGPT has to hand over your conversation.
Where did you see this? Did you mean that chatbot companies are legally obligated to respond? All I could find was a YouTube video speculating about it, with no sources.

Subpoenas for Chatbot Conversations
www.youtube.com/watch?v=pi8KsModZf8

"Your interactions with chatbots, including the information you share and the requests you make, could be subject to court orders. Just like with Google, emails, and other digital communications, your interactions with chatbots might undergo scrutiny in legal proceedings."

Note that it says "could be subject," and "might undergo scrutiny," possibly based on these statements:

"Courts have the authority to issue subpoenas or orders mandating companies to furnish certain information."

"Chatbot companies must adhere to laws such as GDPR in Europe, CCPA in California, and HIPAA in the United States ..."
 
Yes, that they are legally obligated to COMPLY.


Right away in the article it states "OpenAI is required to legally disclose what you've told ChatGPT if subpoenaed".
 
In an ongoing copyright lawsuit brought by The New York Times, a federal judge last month ordered OpenAI to preserve all ChatGPT user logs, including “temporary chats” and API requests, even if users opted out of training and data sharing. The court did not allow oral argument before issuing this decision.

Users may choose to delete chat logs that contain their sensitive information from their accounts, but all chats must now be retained to comply with the court order. And for business users connecting to OpenAI’s models, the stakes may be even higher. Their logs could contain their companies’ most confidential data, including trade secrets and privileged information.

To lawyers, this is unremarkable. Courts issue preservation orders all the time just to ensure that evidence isn’t lost during litigation.

- ChatGPT promised to forget user conversations. A federal court ended that.

Note: It was the company, OpenAI, that made the promise, not the chatbot.

If anyone can find an example of a court actually requiring a chatbot company to give them private information, I would like to know.
 
This has been the case with search engines for a long time. In fact, I seem to remember at least one US intelligence agency just bypassing the entire process by directly tapping into the data cables of internet giants like Google so they could hoover all the information they needed anyway. :)
 
It's all theft, and it naturally increases immorality; it works as the third step of degeneration (some call it revolution), after the bodily and verbal ones, ... onto poor minds. Take care and strive for modesty, taking only what is freely given for good.
 