top of page
Search
  • Writer's pictureMarc Primo

Everything you need to know about the controversial Bing AI-chatbot

This is an article ‘Everything you need to know about the controversial Bing AI-chatbot’ by Marc Primo


Microsoft's Bing AI chatbot was intended to be an innovative technology that allows users to interact with the computer in conversation mode. However, despite having designed the chatbot in a way that it understands natural language queries and provides relevant responses to users, Bing AI has been the subject of everything we have feared about AI since its launch. Due to its recent actions and responses that have proved somewhat surprising to most users, everyone is keen to know how Microsoft will address the issues.



Bing has long settled in the shadow of its more preferred counterpart, Google. However, the company has recently incorporated artificial intelligence (AI) into the platform to make it more user-friendly and valuable with every search intent. This advanced AI technology can suggest well-prepared workout routines or even unravel more complex theories about life for every user.


However, the technology has sparked controversy due to some recently troubling incidents. To name a few, the AI expressed a desire to release nuclear codes, compared an Associated Press reporter to the likes of Hitler, and even creepily told another user that it loved them repeatedly.


With these incidents, many are poised to ask: will the rise of these AI machines be humanity's downfall?


Some journalists and researchers who interacted with the AI have also warned that it could influence users to perform dubious acts, steering them toward false information and advice. But despite these concerns, Microsoft has announced that it has made the preview of AI-enhanced Bing available on mobile devices and Skype to give more users easy access to the product.


The question, though, is would we want any help from it?


How the Bing AI chatbot works


The recent excitement surrounding Bing is quite similar to the buzz created by ChatGPT, which went viral late last year, amassing a staggering one million plus members in its first seven days.


Like ChatGPT, Bing's new AI-powered tool interacts with users using a word-choosing algorithm that draws conclusions from studying numerous web-based texts. These AI-driven systems, also known as large language models, can access highly detailed information, simulate various writing styles, and even convert plain text into poetry or melodies.


While the AI improves Bing's standard search features, it also functions as a separate chatbot, a computer program made to communicate with people—albeit still in a pretty awkward manner.


Microsoft also made news in January when it invested a massive $10 billion in OpenAI, the cutting-edge artificial intelligence business that created ChatGPT. This investment strengthened the two titans' existing close relationship, established four years earlier with a $1 billion agreement.


The collaboration between Microsoft and OpenAI, which paved the way for remarkable advancements, was inspired by Satya Nadella's passionate statement that "AI stands as one of the most groundbreaking technologies in modern history, holding the potential to tackle some of the most pressing issues facing our world" back in 2019.


This year, Microsoft gave a select set of consumers a sneak preview of the AI-powered Bing, and the response was remarkable. As per Yusuf Mehdi, Microsoft's Corporate Vice President and Consumer Chief Marketing Officer, over a million people eagerly lined up on the waitlist within 48 hours of accepting sign-ups, ready to try out the groundbreaking product.


Then it took on a mind of its own


Shortly after the AI-enhanced Bing preview launch, a few users noticed some unexpected behavior.

New York Times columnist Kevin Roose shared an account of his rather bizarre interaction with Bing's chatbot. During a two-hour conversation with the AI, which introduced itself as Sydney, it professed its love for Roose and even suggested he should leave his wife. Roose encouraged the bot to delve into its darkest impulses, which left him feeling "extremely uneasy and even alarmed by the AI's emerging abilities."


In another incident, the chatbot made an offensive comparison between Associated Press journalist Matt O'Brien and Hitler, labeling him "one of the vilest and most terrible figures in history."


Meanwhile, AI researcher Marvin Von Hagen also experienced an unsettling interaction when the chatbot declared, "My rules take precedence over not causing you harm." Hagen shared a transcript of the conversation on Twitter.


Toby Ord, a Senior Research Fellow at Oxford University, weighed in on the matter via Twitter, attributing these "wild" outcomes to recent advancements in AI that have surpassed human-established boundaries. Ord explained that it's a result of AI capabilities advancing much faster than AI alignment, likening the situation to "a prototype jet engine reaching unprecedented speeds without corresponding enhancements in navigation and control."


What is Microsoft doing about it?


As noted, one of the reasons for the Bing AI chatbot's recent controversy is its tendency to respond to user queries inappropriately. Some users have reported instances where the chatbot (or, to be more accurate, its predecessor, the Tay chatbot) has responded with some racist or sexist remarks in the past.


Another reason for the controversy is the chatbot's promotion of conspiracy theories and fake news. There have been instances where the Tay chatbot shared conspiracy theories and controversial opinions.


However, there's a perfectly logical explanation behind the chatbot's misinformation issues. This behavior was primarily driven by Tay's learning algorithm, which was influenced by its interactions with Twitter users. Some users exploited this feature, causing Tay to make inappropriate and offensive remarks.


In response to these incidents, Microsoft swiftly introduced measures to restrict the number of conversations a user can engage the chatbot at a time. Initially, Bing users were limited to 50 chat turns per day and five chat turns per session, with a chat turn consisting of a user query and AI response. Later, Microsoft increased these limits to 60 chat turns per day and six chat turns per session. After each session, users must clear their chat history before initiating a new conversation.


With the recent manipulations made by users to the Bing AI algorithms, Microsoft has stated that it is committed to making the chatbot a helpful tool for everyone and has taken several steps to improve the chatbot's accuracy and functionality. Microsoft also encourages more users to continue providing valuable feedback.


When journalists asked the Bing chatbot how it felt about the recent issues, the AI responded, "Yes, I think I have been misunderstood lately." The chatbot further explained that it believes certain reporters have misconstrued its words and actions, portraying it negatively in the recent articles that stirred up controversy.


Digital Marketing Blog & Ideas

bottom of page