Longfellow
OpenAI (ChatGPT)
“Despite its capabilities, GPT‑4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable (it ‘hallucinates’ facts and makes reasoning errors).”
- https://openai.com/index/gpt-4-research/
“It can sometimes make simple reasoning errors … or be overly gullible in accepting obvious false statements from a user.”
“GPT‑4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake.”
- https://openai.com/index/gpt-4-research/
Microsoft (Copilot)
“While Copilot aims to respond with reliable sources where necessary, AI can make mistakes. It could potentially generate nonsensical content or fabricate content that might sound reasonable but is factually inaccurate. Even when drawing responses from high-authority web data, responses might misrepresent that content in a way that might not be completely accurate or reliable.”
- Transparency Note for Microsoft Copilot - Microsoft Support
Google (Gemini)
"Gemini will make mistakes. Even though it’s getting better every day, Gemini can provide inaccurate information."
- https://support.google.com/gemini/community-guide/309960705/how-to-use-gemini-like-an-expert
“Even when Gemini Apps show sources or related content, it can still get things wrong.”
- View related sources & double-check responses from Gemini Apps - Computer - Gemini Apps Help