Apple faced calls to eliminate a fresh AI function after incorrectly summarizing news articles.


The backlash follows a push notification, generated by Apple Intelligence and delivered to users last week, that falsely portrayed a BBC report on Luigi Mangione, the suspect in the killing of UnitedHealthcare's CEO, as saying he had shot himself.

The BBC said it had contacted Apple about the feature to raise its concerns and have the problem fixed, but it did not confirm whether Apple had responded to the complaint.

On Wednesday, Vincent Berthier, head of the technology and journalism desk at Reporters Without Borders, urged Apple to take responsibility by eliminating this feature.

"AIs are probability machines, and facts can't be decided by a roll of the dice," Berthier said in a statement. "The automated production of false information attributed to a media outlet damages the outlet's credibility and endangers the public's right to reliable information on current affairs."

Reporters Without Borders also voiced serious concern about the risks that emerging AI tools pose to media outlets, saying the incident shows that AI is still too immature to reliably produce information for the public and should not be used for that purpose.

In response to the concerns, the BBC said, "Ensuring our audience can trust any information or journalism published in our name is essential to us."

Apple did not comment on the matter.

Apple unveiled its generative-AI tool in the U.S. in June, touting the feature's ability to condense specific content into a digestible paragraph, bullet points, a table, or a list. On iPhone, iPad, and Mac, users can also have their notifications grouped, producing a list of news items within a single push alert.

Since the feature launched in late October, users have reported similar errors, including a summary of a New York Times story claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested. In fact, the International Criminal Court had issued an arrest warrant for Netanyahu, but users glancing at their home screens saw only the phrase "Netanyahu arrested."

Part of the controversy around the Apple Intelligence incident stems from the lack of control news outlets have. While some publishers use AI to assist in writing articles, that is their choice. Apple Intelligence's summaries, by contrast, are opted into by users rather than publishers, yet they still appear under the publisher's logo, and their inaccuracies put the outlets' credibility at risk.

Apple's AI troubles are only the latest challenge facing news publishers as they adapt to the evolving technology. Since ChatGPT launched roughly two years ago, tech giants have released their own large language models, many of which have been accused of training on copyrighted content, including news reports. Some outlets, such as The New York Times, have sued over the technology's alleged misuse of their content; others, such as Axel Springer, owner of Politico, Business Insider, Bild, and Welt, have signed licensing agreements with the developers.

The business relationship between tech giants and media outlets is under fresh scrutiny after Apple's AI tool generated an incorrect summary of a BBC report, sparking concerns about the reliability of news in the digital age. The incident has underscored the need for media outlets to retain control over their content, even when it is processed by AI systems.
