The great craze for "chatbots" in the mid-2010s seemed to have passed. But on Friday, August 5, Meta reminded everyone that it was still working on the technology by presenting BlenderBot 3, its new "state-of-the-art chatbot". According to the company, this text-based bot can "converse naturally with people" on "almost any subject", a promise made repeatedly by chatbot creators but never kept.
Still a prototype, BlenderBot 3 is freely accessible (US only for the moment), so that a large number of volunteer testers can help improve it through a conversation rating system. It has therefore been questioned extensively by the media and other curious users since it went online, and the first assessment sounds like a sad refrain: BlenderBot 3 quickly slams Facebook, criticizes Mark Zuckerberg's clothing style, then veers into conspiratorial, even anti-Semitic, remarks. Before launching the tool, Meta warned users that the chatbot "is likely to make false or offensive statements". In its press release, however, it specified that it had put safeguards in place to filter out the worst of them…
Meta's chatbot, Meta's first critic
BlenderBot's goal is long-term. The researchers are not trying to create a functional, marketable tool in the short term; they want to advance the state of the art of chatbots. Concretely, their tool aims to incorporate human conversational qualities (such as personality traits) into its responses. Thanks to a long-term memory, it should be able to adapt to the user as exchanges progress. In their press release, the researchers specify that BlenderBot should improve the conversational abilities of chatbots while "avoiding learning unhelpful or dangerous responses".
The problem, as always, is that the chatbot searches the Internet for information to feed the conversation, and it does not filter enough. When asked about CEO Mark Zuckerberg, it can reply: "He is a competent businessman, but his practices are not always ethical. It's funny that he has all that money but still wears the same clothes!", reports Business Insider. It does not hesitate to recall the myriad scandals that have tarnished Facebook (and partly motivated its rebranding) when asked about its parent company. It even claims that its life has been much better since it deleted Facebook.
If the bot is so negative toward Meta, it is simply because it draws on the most popular search results about Facebook, which recount the company's setbacks. In doing so, it perpetuates a bias, this time to the detriment of its own creator. But these drifts are not limited to amusing jabs, and that is the problem. To a Wall Street Journal reporter, BlenderBot claimed that Donald Trump was still president and "always will be", even with his second term ending in 2024, thereby relaying a conspiracy theory. To top it off, Vice reports that BlenderBot's responses are "generally neither realistic nor good" and that it "frequently changes the subject" abruptly.
History repeats itself
These drifts from amusing to dangerous have an air of déjà vu. In 2016, Microsoft launched the Tay chatbot on Twitter, which was supposed to learn in real time from its conversations with users. A failure: after a few hours, the text bot was spouting conspiracy theories along with racist and sexist remarks. Less than 24 hours later, Microsoft pulled the plug on Tay and apologized profusely for the fiasco.
Meta has nevertheless attempted a similar approach, based on a massive language model with more than 175 billion parameters. This model was trained on giant (mostly publicly accessible) text corpora, with the goal of extracting an understanding of language in mathematical form. For example, one of the datasets created by the researchers contains 20,000 conversations on more than 1,000 different topics.
The problem with these large models is that they reproduce the biases of the data they have been fed, most often with an amplifying effect. And Meta was aware of these limitations: "Since all AI-powered conversational chatbots are known to sometimes imitate and generate dangerous, biased or offensive comments, we conducted large-scale studies, co-hosted workshops and developed new techniques to create protections for BlenderBot 3. Despite this work, BlenderBot can still make rude or offensive comments, so we collect feedback." Clearly, the additional safeguards did not have the desired effect.
Faced with the repeated failures of large language models and a good number of abandoned projects, the industry has fallen back on less ambitious but more effective chatbots. Most customer service bots today follow a predefined decision tree without ever leaving it, even if that means telling the customer that they don't have the answer or directing them to a human operator. The technical challenge then becomes understanding the questions users ask and matching them to the most relevant predefined answers.
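The decision-tree approach described above can be sketched in a few lines. This is a minimal illustration, not code from any real product: the node names, prompts, and fallback wording are all invented for the example.

```python
# Minimal sketch of a decision-tree customer-service bot.
# The bot only ever follows predefined branches; unrecognized
# input routes the customer to a human operator instead of
# letting the bot improvise (the key contrast with BlenderBot).

TREE = {
    "start": {
        "prompt": "Is your question about billing or delivery?",
        "options": {"billing": "billing", "delivery": "delivery"},
    },
    "billing": {
        "prompt": "I can resend your invoice. Anything else?",
        "options": {},
    },
    "delivery": {
        "prompt": "Please check the tracking link in your confirmation email.",
        "options": {},
    },
}

def respond(node: str, user_input: str):
    """Return (next_node, reply); next_node is None when handing off."""
    options = TREE[node]["options"]
    next_node = options.get(user_input.strip().lower())
    if next_node is None:
        # Never generate free text: admit defeat and escalate.
        return None, "I don't have an answer; transferring you to an operator."
    return next_node, TREE[next_node]["prompt"]
```

The safety property comes from the structure itself: every possible reply is written in advance, so the bot can be unhelpful but never offensive.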
A transparent approach
While the success of BlenderBot 3 is more than questionable, Meta at least demonstrates a rare transparency, a quality generally lacking in AI-powered tools. Users can click on the chatbot's responses to see the sources (in more or less detail) behind the information. In addition, the researchers are sharing the code, data, and model used to power the chatbot.
To The Guardian, a Meta spokesperson also clarified that "anyone using BlenderBot must acknowledge that they understand that the discussion is for research and entertainment purposes only, that the bot may make false or offensive statements, and that they agree not to intentionally incite the bot to make offensive statements."
In other words, BlenderBot reminds us that the ideal of chatbots capable of expressing themselves like humans is still a long way off, and that many technical barriers remain to be overcome. But Meta has taken enough precautions in framing it that, this time, the story has not turned into a scandal.