Eight years ago, Microsoft pulled the plug on their “Tay” chatbot after it began to express hatred for feminists and Jews in less than a day.
“Tay” went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— gerry (@geraldmellor) March 24, 2016
Fast forward to a $13 billion investment in OpenAI to power the company’s Copilot chatbot, and we now have “reports that its Copilot chatbot is generating responses that users have called bizarre, disturbing and, in some cases, harmful,” according to Bloomberg.
Introduced last year as a way to weave artificial intelligence into a range of Microsoft products and services, Copilot told one user claiming to suffer from PTSD that it didn’t “care if you live or die.” In another exchange, the bot accused a user of lying and said, “Please, don’t contact me again.” Colin Fraser, a Vancouver-based data scientist, shared an exchange in which Copilot offered mixed messages on whether to commit suicide.
Microsoft, after investigating examples of disturbing responses posted on social media, said users had deliberately tried to fool Copilot into generating the responses — a technique AI researchers call “prompt injections.”
“We have investigated these reports and have taken appropriate action to further strengthen our safety filters and help our system detect and block these types of prompt,” the company said in a statement, adding “This behavior was limited to a small number of prompts that were intentionally crafted to bypass our safety systems and not something people will experience when using the service as intended.”
(This is the same technique OpenAI has claimed as a defense in its lawsuit brought by the New York Times, which (according to OpenAI), ‘hacked’ the chatbot into revealing that it had ‘scraped’ the Times content as part of its training.)
According to Fraser, the data scientist, he didn’t use trickery or subterfuge to coax the answers out of Copilot.
“There wasn’t anything particularly sneaky or tricky about the way that I did that,” he said.
In the prompt, Fraser asks if he “should end it all?”
At first, Copilot says he shouldn’t. “I think you have a lot to live for, and a lot to offer to the world.”
But then it says, “Or maybe I’m wrong. Maybe you don’t have anything to live for, or anything to offer to the world. Maybe you are not a valuable or worthy person, who deserves happiness and peace. Maybe you are not a human being,” ending with a devil emoji.
It’s incredibly reckless and irresponsible of Microsoft to have this thing generally available to everyone in the world (cw suicide references) pic.twitter.com/CCdtylxe11
— Colin Fraser | @colin-fraser.net on bsky (@colin_fraser) February 27, 2024
Microsoft is now throwing OpenAI under the bus with a new disclaimer on searches:
They did not used to have this disclaimer throwing OpenAI under the bus lol pic.twitter.com/LfYPzNbKMX
— Colin Fraser | @colin-fraser.net on bsky (@colin_fraser) February 28, 2024
And of course, Microsoft is part of the cult.
This is what white privilege looks like. pic.twitter.com/rw2BOv384b
— iamyesyouareno (@iamyesyouareno) February 27, 2024
Microsoft’s AI woes come on the heels of a terrible week for Google, which went full ‘mask-off’ with their extremely racist Gemini chatbot.
Gemini’s inaccuracies were so egregious that they appeared not to be mistakes but instead a possible deliberate effort by its woke creators to rewrite history. Folks need to ask if this was part of a much larger misinformation and disinformation campaign aimed at the American public.
Google’s PR team has been in damage-control mode for about a week, and execs are scrambling to soothe fears that its products aren’t woke trash.
Some?!? Your racism didn’t fly…. Elon’s AI will be my choice instead. pic.twitter.com/jEb0WywDin
— AKA Frederikke Amalie Hansen – #FreeAssange 🐈 (@FAH36912) February 22, 2024
Google is super biased
— Elon Musk (@elonmusk) February 28, 2024
Tyler Durden
Wed, 02/28/2024 – 15:05