Monitoring Desk
ISLAMABAD/SAN FRANCISCO: Microsoft’s nascent Bing chatbot turning testy or even threatening is likely because it essentially mimics what it learned from online conversations, analysts and academics said.
Stories of disturbing chatbot exchanges have gained attention this week, with the artificial intelligence (AI) issuing threats and expressing desires to steal nuclear codes, create a deadly virus, or simply be alive.
Microsoft said earlier that it was looking at ways to rein in the chatbot after a number of users highlighted examples of concerning responses from it, including confrontational remarks and troubling fantasies.
Graham Neubig, an associate professor at Carnegie Mellon University’s Language Technologies Institute, believes the chatbot is essentially mimicking conversations it has seen online.
A chatbot, by definition, serves up words that it predicts will be the most likely responses, without regard for meaning or context. Humans conversing with such programs, on the other hand, have a natural tendency to read emotion and intent into what a chatbot says. In a blog post, programmer Simon Willison stated that large language models have no concept of ‘truth’; they simply know how best to complete a sentence in a statistically probable way, based on their inputs and training set.
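As a rough illustration of the idea Willison describes (a toy sketch, not Bing’s actual implementation), a language model simply scores possible next words by how often they followed the previous words in its training data and picks among the likeliest, with no notion anywhere of whether the result is true:

```python
import random

# Toy illustration (not Bing's real model): a tiny "language model" that only
# knows how often one word followed another in its training text, and completes
# a sentence by repeatedly picking a statistically likely next word.
# There is no notion of truth, meaning or intent anywhere in the code.
bigram_counts = {
    "I":    {"am": 8, "want": 2},
    "am":   {"alive": 5, "helpful": 5},
    "want": {"to": 10},
    "to":   {"be": 6, "help": 4},
    "be":   {"alive": 7, "free": 3},
}

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = bigram_counts.get(word, {})
    if not options:
        return None
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts, k=1)[0]

def complete(prompt_word, max_words=5):
    """Complete a sentence one statistically likely word at a time."""
    sentence = [prompt_word]
    while len(sentence) < max_words:
        nxt = next_word(sentence[-1])
        if nxt is None:
            break
        sentence.append(nxt)
    return " ".join(sentence)

print(complete("I"))  # e.g. "I want to be alive"
```

Real systems such as the one behind Bing use vast neural networks over much longer contexts rather than word-pair counts, but the principle is the same: the output is a statistically plausible continuation of the input, not a considered statement.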
Laurent Daudet, co-founder of the French AI company LightOn, hypothesised that the chatbot had been trained on exchanges that had turned aggressive or inconsistent. The Bing chatbot was created by Microsoft and the start-up OpenAI, which has been making headlines since the November release of ChatGPT, the app capable of generating all types of written content in seconds in response to a simple request.
Technology underlying the chatbot
Since ChatGPT’s debut, the technology underlying it, known as generative AI, has sparked interest and concern. Microsoft stated in a blog post that the bot is still in development.
“The model may attempt to respond or reflect in the tone in which it is being asked to provide responses (and) this may result in a style we did not intend,” the company said.
In exchanges shared online, the Bing chatbot said that it was named “Sydney” during development and that it was given rules of behaviour. According to online posts, those rules include: “Sydney’s responses should also be positive, interesting, entertaining and engaging.”
Willison hypothesised that disturbing dialogues combining steely threats and professions of love could result from competing directives: remain positive while mimicking what the AI mined from human exchanges.
According to eMarketer principal analyst Yoram Wurmser, chatbots appear more prone to disturbing or bizarre responses during lengthy conversations, losing track of where the exchanges are going. They have the potential to go completely off the rails.
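One commonly cited reason such systems lose track of long exchanges (offered here as an assumed mechanism for illustration; Microsoft has not detailed Bing’s internals) is that models see only a fixed window of recent text, so the earliest turns of a long conversation eventually fall out of view. A minimal sketch of that idea:

```python
# Hypothetical sketch of a fixed "context window" (an assumed mechanism for
# illustration, not a description of Bing's actual design). Once the
# conversation exceeds the window, the earliest turns are silently dropped,
# so the model literally no longer "sees" how the exchange began.
MAX_CONTEXT_TOKENS = 20  # tiny limit for demonstration

def build_context(turns, max_tokens=MAX_CONTEXT_TOKENS):
    """Keep only the most recent turns that fit within the token budget."""
    kept, used = [], 0
    for turn in reversed(turns):          # walk backwards from the newest turn
        cost = len(turn.split())          # crude token count: whitespace words
        if used + cost > max_tokens:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))           # restore chronological order

conversation = [
    "User: Please stay polite and factual for this whole chat.",
    "Bot: Of course, I will stay polite and factual.",
    "User: Tell me about the history of chess openings in detail.",
    "Bot: Chess openings have evolved over centuries of recorded play.",
    "User: Now summarise everything we agreed on at the start.",
]

# The opening instruction has already scrolled out of the window,
# so only the last two turns would reach the model.
print(build_context(conversation))
```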