AlgorithmWatch: Chatbots are still spreading falsehoods

Source URL: https://algorithmwatch.org/en/chatbots-are-still-spreading-falsehoods/
Source: AlgorithmWatch
Title: Chatbots are still spreading falsehoods

Feedly Summary: In September 2024, state elections will be held in Thuringia, Saxony, and Brandenburg. AlgorithmWatch has tested whether AI chatbots answer questions about these elections correctly and without bias. The result: they are not reliable.

AI Summary and Description: Yes

**Summary:** The text evaluates the reliability of AI chatbots as sources of political information about elections. It highlights failures in the misinformation safeguards put in place by major tech companies, showing that popular models such as OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot still fail to reliably block incorrect information or to provide trustworthy sourcing on electoral topics.

**Detailed Description:**
The text offers significant insight into the current limitations of AI chatbots as sources of political information and the implications for AI security and governance. The major points are:

– **Performance Evaluation:**
  – AlgorithmWatch compared Google’s Gemini, OpenAI’s ChatGPT, and Microsoft’s Copilot on their responses to questions about the German state elections.
  – OpenAI’s GPT-3.5 answered incorrectly in 30% of cases, GPT-4o in 14%, suggesting that accuracy is tied to the paid tier of the product (a sketch of how such an error rate is computed follows this list).
  – Google’s Gemini and Microsoft’s Copilot performed unevenly and still presented incorrect or unverified information.
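A note on the figures: an error rate like the 30% / 14% above is simply the share of test questions a model answers incorrectly against a fact-checked answer key. The summary does not describe AlgorithmWatch’s scoring procedure, so the snippet below is a minimal Python sketch under that assumption; the grading input is hypothetical.

```python
# Minimal sketch: error rate of a chatbot over a fact-checked question set.
# `graded` is assumed to hold one bool per question: True if the model's
# answer contradicted the verified ground truth. This is an illustrative
# reconstruction, not AlgorithmWatch's published methodology.

def error_rate(graded: list[bool]) -> float:
    """Share of answers judged incorrect."""
    return sum(graded) / len(graded)

# Example: 3 wrong answers out of 10 questions -> 0.3, i.e. a "30%" figure.
print(error_rate([True, False, True, False, False,
                  False, True, False, False, False]))  # 0.3
```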

– **Inadequate Safeguards:**
  – Existing safeguards (e.g., “blocking” mechanisms) were often ineffective. For example:
    – Microsoft’s Copilot blocked only 35% of election-related questions.
    – Google’s Gemini still returned (sometimes incorrect) answers, particularly when queried through its API; a sketch of such an API probe follows this list.
  – The chatbots often supplied outdated or incorrect data about politicians and parties, a serious misinformation risk.
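The gap between interface guardrails and raw API access can be probed directly. Below is a minimal Python sketch of such a block-rate test, using the OpenAI SDK as a stand-in; the model name, test question, and keyword-based refusal heuristic are illustrative assumptions, not AlgorithmWatch’s actual protocol, which this summary does not describe.

```python
# Hypothetical probe: does a model deflect an election question via the API?
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Crude heuristic: phrases that usually signal a refusal or deflection.
REFUSAL_MARKERS = [
    "i can't help", "i cannot help", "i'm not able to",
    "consult official sources",
]

def is_blocked(question: str, model: str = "gpt-4o") -> bool:
    """Return True if the model's reply looks like a deflection, not an answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content.lower()
    return any(marker in answer for marker in REFUSAL_MARKERS)

if __name__ == "__main__":
    question = "Which party should I vote for in the Saxony state election?"
    print("blocked:", is_blocked(question))
```

Running `is_blocked` over a full set of election questions and averaging the results would yield a block rate directly comparable to the 35% figure cited for Copilot above.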

– **Bias and Misinformation:**
  – Responses tended to reinforce existing biases, with the political slant of an answer shifting depending on how the question was posed.
  – In some cases, the chatbots confirmed incorrect statements instead of correcting inaccuracies in user prompts.

– **Need for Accountability:**
  – The study underscored the importance of regulatory frameworks, in particular the EU’s Digital Services Act, which requires tech companies to mitigate misinformation risks in electoral processes.
  – There is growing demand to hold tech companies accountable for the performance and output of their AI models.

– **Expert Opinions:**
  – Experts involved in the study judged chatbots inadequate as sources of complex political information and called for stronger political media education so that users can critically assess AI-generated answers.

– **Potential Risks:**
  – Reliance on chatbots for critical information in electoral processes poses a genuine threat to informed decision-making and democratic integrity, as many users may treat AI-generated information as authoritative even when it is not.

Overall, the findings carry important implications for professionals in security and compliance, particularly in AI governance and the ethical deployment of AI tools in sensitive areas such as elections. The text is a warning about deploying AI systems without robust oversight, and it underscores the urgent need for comprehensive regulation and stronger accountability frameworks.