Apple’s AI stumbles: False summary prompts BBC complaint

Published 18 Dec 2024

Apple Intelligence ran into trouble shortly after its UK launch earlier this week, drawing criticism for incorrectly summarizing a BBC headline about a U.S. murder case. The feature falsely claimed that Luigi Mangione, the suspect in the killing of health insurance CEO Brian Thompson, had taken his own life. The BBC has filed a formal complaint, calling attention to the dangers of AI inaccuracies in sensitive matters.

Apple Intelligence ships as part of iOS 18 and brings AI-powered features such as summarizing and organizing notifications. However, its inaccurate summary of the BBC headline triggered immediate criticism. The false claim about Mangione appeared under the BBC News banner, prompting a response from the broadcaster.

“We have contacted Apple to raise this concern and fix the problem,” said a BBC spokesperson. “BBC News is the most trusted news media in the world. It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications.”

Apple has declined to comment on the incident.

AI accuracy under scrutiny

The error has reignited debate about the reliability of AI in delivering accurate information. A detailed analysis by Ars Technica found that such systems often struggle to summarize complex or nuanced topics accurately, producing a well-documented pattern of errors that range from trivial inaccuracies to harmful misinformation.

Some critics blame Big Tech’s rush to release underdeveloped AI systems to the public. Professor Petros Iosifidis, a media policy expert at City University in London, described the error as “embarrassing” and warned of the risk of disinformation. “Yes, potential advantages are there – but the technology is not there yet and there is a real danger of spreading disinformation,” he stated.

Earlier this year, similar issues affected AI systems from major companies like Microsoft and Google, a phenomenon known as “hallucinations.” In one instance, Google’s Gemini was mocked for suggesting “non-toxic glue” as a way to stick cheese to pizza.

User skepticism grows

A recent survey of 2,000 smartphone users points to growing doubts about AI features. Among iPhone users polled, 73% said AI tools added little or no value to their devices. This lukewarm reception suggests a lack of trust in technology that often misses the mark.

Despite the backlash, Apple Intelligence’s notification summaries and writing tools have seen some adoption. Over half of surveyed iPhone users have utilized its notification summarization features, with nearly three-quarters trying its writing tools for tasks like proofreading and generating summaries.

These issues point to persistent difficulties in AI adoption. While AI holds promise for simplifying tasks and improving efficiency, its tendency to generate inaccuracies underscores the need for better oversight and refinement.

As Apple works to address these issues, other tech companies face similar scrutiny over their AI systems’ reliability. This incident serves as a reminder of the risks associated with relying too heavily on untested AI technologies.