
Apple has a significant misinformation problem, as shown by its erroneous AI news notifications.

An artificial intelligence feature on iPhones is generating fake news alerts, raising concerns about the technology’s potential to spread false information.

Last week, a recently introduced Apple feature that uses artificial intelligence to summarize users’ notifications inaccurately summarized BBC News app notifications about the broadcaster’s coverage of the PDC World Darts Championship semifinal, falsely claiming that British darts player Luke Littler had won the championship.


The incident took place one day before Littler’s eventual victory in the tournament final. Hours later, another notification generated by Apple Intelligence, the tech giant’s AI system, falsely claimed that tennis legend Rafael Nadal had come out as gay. The BBC has been trying for over a month to get Apple to address the issue. The British public broadcaster lodged a complaint with the company in December, after Apple’s AI feature produced a fictitious headline implying that Luigi Mangione, the man detained after the murder of UnitedHealthcare CEO Brian Thompson in New York, had shot himself.

Apple was not immediately available for comment when CNBC reached out. On Monday, Apple told the BBC that it is developing a fix that will clarify when Apple Intelligence is responsible for the wording that appears in notifications; at the moment, generated news alerts appear to come straight from the source. In a statement provided to the BBC, the company said that “Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback.” Apple added that users who receive an “unexpected notification summary” are urged to report the issue.

Apple Intelligence’s erroneous summaries of news notifications have affected outlets beyond the BBC. In November, an AI-summarized notification incorrectly reported that Israeli Prime Minister Benjamin Netanyahu had been detained. Ken Schwencke, a senior editor at the investigative journalism outlet ProPublica, pointed out the error on the social media app Bluesky. CNBC has contacted the BBC and The New York Times for comment on Apple’s proposed remedy for the misinformation problem with its AI feature.

AI’s misinformation problem

Apple says its AI-generated notification summaries are a useful tool for combining and rewriting news app notification previews into a single notice that appears on the user’s lock screen. The feature, Apple says, is intended to reduce the deluge of updates many smartphone users are accustomed to and to help them find the important information in their alerts. But it has also produced what AI specialists call “hallucinations”: AI-generated responses that contain inaccurate or deceptive information. “I have a suspicion that Apple won’t be the only company facing difficulties with AI-generated content. Numerous instances of AI services boldly stating falsehoods, referred to as ‘hallucinations,’ have previously been documented,” Ben Wood, chief analyst at the tech-focused market research company CCS Insight, told CNBC.
