
European research flags AI chatbots for providing false answers, particularly on elections


Amid the surge in popularity of AI chatbots, European non-profits AI Forensics and AlgorithmWatch have warned that the technology gives inaccurate answers to one in every three questions.

Research conducted by the European non-profit organisations found that AI chatbots gave false answers to basic questions, particularly when providing information related to elections.

Inaccuracies galore


The study focused on Microsoft’s Bing AI chatbot, now known as Microsoft Copilot, revealing that it provided inaccurate answers to one in three basic questions about candidates, polls, scandals, and voting in recent election cycles in Germany and Switzerland.

Similar inaccuracies were observed for questions about the 2024 US elections.

Sharing their findings with The Washington Post, researchers stressed that these findings don't assert that Bing's misinformation influenced election outcomes. However, they said it does highlight concerns that AI chatbots could potentially contribute to confusion and misinformation around future elections.

This is especially concerning as tech giants like Microsoft integrate chatbots into a range of products, including internet search, raising questions about their impact on public access to reliable and transparent information.

"As generative AI becomes more widespread, this could affect one of the cornerstones of democracy: the access to reliable and transparent public information," says the researchers, as quoted by The Washington Post.

Is big tech doing anything about this?

Despite efforts by companies like Microsoft, OpenAI, and Google to enhance reliability by allowing chatbots to search the web and cite sources, issues persist.

Even after providing sources, Bing frequently gave answers that deviated from the information in the cited links, according to Salvatore Romano, head of research at AI Forensics.

The study focused on Bing because it was among the first to include sources and is widely integrated into services available in Europe.

Preliminary testing on OpenAI’s GPT-4 revealed similar inaccuracies. Notably, errors in Bing's answers were more common in languages other than English, raising concerns about the performance of US-based AI tools globally.

The researchers discovered factual errors in responses to questions asked in German 37 per cent of the time, compared to a 20 per cent error rate for the same questions in English.

What's the solution?

While Microsoft is working to address these issues ahead of the 2024 US elections, the researchers emphasise the need for users to verify information obtained from chatbots.

"The problem is systemic, and they do not have very good tools to fix it," said Romano.

Frank Shaw, Microsoft’s head of communications, said: "We are continuing to address issues and prepare our tools to perform to our expectations for the 2024 elections."

"As we continue to make progress, we encourage people to use Copilot with their best judgment when viewing results. This includes verifying source materials and checking web links to learn more."

However, the impact of these AI chatbots on democracy, if any, is unclear. These AI chatbots all carry disclaimers, noting that they can make mistakes and encouraging users to double-check the answers.

(With inputs from agencies)

About the Author


Moohita Kaur Garg

Moohita Kaur Garg is a journalist with over four years of experience, currently serving as a Senior Sub-Editor at WION. She writes on a variety of topics, including US and Indian p...