Key points
- Apple suspended the notifications feature in Apple Intelligence
- AI misinformation poses a risk by eroding public trust: AI expert
- News consumers may question the credibility of media brands
ISLAMABAD: Apple’s mistakes highlight the dangers of AI-generated news headlines, a researcher warns.
“Luigi Mangione shoots himself,” read the BBC News headline.
Except that Mangione, the man charged with murdering UnitedHealthcare CEO Brian Thompson, had done no such thing. The BBC had not reported it, yet that was the headline displayed to users of Apple Intelligence as part of its notification summaries, according to TechXplore.
This was just one of several high-profile errors made by the AI-powered software, leading Apple to suspend the notifications feature in Apple Intelligence for the news and entertainment categories, according to the BBC.
Anees Baqir argues that the unintentional spread of misinformation through such AI systems poses “a significant risk by eroding public trust.”
Creating confusion
Baqir, an assistant professor of data science at Northeastern University in London who specialises in researching online misinformation, says that mistakes like those made by Apple Intelligence are likely to “create confusion” and could lead news consumers to question the credibility of media brands they once trusted.
“Imagine what this could do to people’s opinion if there is misinformation-related content coming from a very high-profile news source that is usually considered a reliable news source,” Baqir said. “That could be dangerous, in my opinion.”
The incident with Apple Intelligence triggered a broader debate in Britain about whether publicly available mainstream generative AI software can accurately summarise and understand news articles.
Playing with fire
BBC News CEO Deborah Turness remarked that, while AI offers “endless opportunities,” companies developing these tools are currently “playing with fire.”
There are reasons why generative AI like Apple Intelligence may not always get news stories right, according to Mariana Macedo, a data scientist at Northeastern.
When developing generative AI, the “processes are not deterministic, so they have some stochasticity,” said the London-based assistant professor, meaning there can be an element of randomness in the outcomes.
“Things can be written in a way that you cannot predict,” she explained. “It is like when you bring up a child. When you educate a kid, you educate them with values, with rules, with instructions—and then you say, ‘Now live your life.’
“The kid knows what is going to be right or wrong more or less, but the kid doesn’t know everything. The kid doesn’t have all the experience or the knowledge to react and create new actions in a perfect way. It is the same with AI and algorithms.”
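Macedo’s point about stochasticity can be made concrete. The Python sketch below is purely illustrative (the toy word scores are invented for this example, and real systems are vastly more complex): it shows how temperature-scaled sampling, a standard technique in text generation, can return different next words from identical input on different runs.

```python
import math
import random

def sample_next_word(scores, temperature=1.0):
    """Pick the next word by temperature-scaled softmax sampling.

    Higher temperature flattens the distribution and increases randomness;
    a temperature near zero approaches a deterministic (greedy) choice.
    """
    words = list(scores)
    scaled = [scores[w] / temperature for w in words]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(words, weights=weights, k=1)[0]

# Invented next-word scores a model might assign; higher means more likely.
scores = {"charged": 2.0, "arrested": 1.5, "cleared": 0.2}

# Repeated runs on the same input can yield different continuations,
# which is the "element of randomness" Macedo describes.
print([sample_next_word(scores, temperature=1.0) for _ in range(5)])
```

At low temperature the model almost always picks the highest-scoring word; at higher temperatures the unlikely options surface more often, which is one way an improbable, wrong phrasing can slip into an automated summary.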
AI learning challenges
Macedo explained that the challenge with news and AI learning is that news typically revolves around recent events—there is little or no past context to help the software comprehend the stories it is summarising.
“When you talk about news, you are talking about things that are novel,” the researcher continued. “You are not talking about things that we have known for a long period.
“AI is very good at things that are well established in society. AI doesn’t know what to do when it comes to conflicting or new things. So every time that the AI is not trained with enough information, it is going to get even more wrong.”
To ensure accuracy, Macedo argues that developers need to “find a way of automatically double-checking that information” before it is published.
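Macedo does not specify how such double-checking would work, but a deliberately naive sketch shows the shape of the idea. In the hypothetical Python below (the word-overlap heuristic and the `novel_claims` helper are our illustration, not her proposal; a production system would need far stronger checks, such as entailment models), a generated summary is held for human review whenever it asserts claim-bearing words that never appear in the source article.

```python
import re

STOPWORDS = {"a", "an", "the", "is", "was", "has", "had", "with", "of", "to"}

def content_words(text):
    """Lowercase word tokens, minus common function words."""
    return set(re.findall(r"[a-z']+", text.lower())) - STOPWORDS

def novel_claims(summary, source):
    """Words in the summary that never occur in the source text.

    A crude proxy for unsupported claims: anything the summary asserts
    that the source does not mention gets flagged for human review.
    """
    return content_words(summary) - content_words(source)

source = ("Luigi Mangione was charged with murdering "
          "UnitedHealthcare CEO Brian Thompson.")
summary = "Luigi Mangione shoots himself."

flagged = novel_claims(summary, source)
if flagged:
    print("Hold for review; unsupported words:", sorted(flagged))
```

This toy check would flag “shoots” and “himself,” since neither appears in the source sentence, so the headline would be routed to a person instead of being pushed as a notification.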
Improving AI’s accuracy
Training AI on news articles could also make the software “more likely to improve” its accuracy, Macedo added.
The BBC currently prevents developers from using its content to train generative AI models. However, other UK news outlets have begun to collaborate: a partnership between the Financial Times and OpenAI allows ChatGPT users to access select attributed summaries, quotes, and links, according to the Times of India.
Baqir suggests that the best way to tackle the issue of AI-driven news misinformation is for tech companies, media organisations, and communications regulators to collaborate.
“I think all of them need to come together,” he said. “Only then can we come up with a way that can help us mitigate these impacts. There cannot be one single solution.”