What is a hallucinating chatbot? Everything you need to know about chatbot hallucinations!

What are hallucinating chatbots?

Prabhakar Raghavan, Google's senior vice president and head of Search, stated on February 11 that the artificial intelligence behind chatbots can sometimes hallucinate. Just a few days later, beta testers of Microsoft's Bing chatbot reported receiving alarming accusations from the AI.

Meanwhile, Microsoft and Google have launched AI-powered chatbots for user testing.

Furthermore, Alibaba and Quora have considered bringing in their own AI chatbots.

Introducing the hallucinating chatbot!

When a machine gives answers that sound convincing but are completely fake and fabricated, it is called a hallucination. The phenomenon is newly in the spotlight today, but developers have long warned about AI models presenting completely false facts. An AI model answering queries with confident yet factually incorrect answers is a scary prospect.

In 2022, Meta launched BlenderBot 3, an AI chatbot. The company said the chatbot could surf the internet to chat with users about virtually any topic, and promised that it would gradually improve its safety and skills with the help of valuable feedback from users.

However, it would be a mistake to overlook that, at the time, Meta's own engineers warned against blindly trusting the chatbot in conversations involving factual information, precisely because in such situations the chatbot may hallucinate.

Do chatbots ever go wrong like this? Yes, they do! In 2016, Microsoft's chatbot Tay made a major blunder after being live on Twitter for about 24 hours: it started hurling misogynistic and racist slurs at its users. Chatbots are designed to learn from conversations, but that also makes them easy for users to manipulate. All it took to manipulate Tay was to ask the chatbot to "repeat after me", as the sketch below illustrates.
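To see how trivial such manipulation can be, here is a minimal, entirely hypothetical Python sketch of the kind of naive echo behavior that makes "repeat after me" exploitable. The function name and logic are illustrative assumptions, not Tay's actual code:

```python
# Hypothetical sketch of a naive "repeat after me" handler.
# Nothing here reflects Tay's real implementation; it only
# illustrates why verbatim echoing is exploitable.
def naive_reply(user_message: str) -> str:
    trigger = "repeat after me:"
    if user_message.lower().startswith(trigger):
        # Everything after the trigger is parroted back unchecked,
        # so a user can put arbitrary words in the bot's mouth.
        return user_message[len(trigger):].strip()
    return "Tell me more!"

print(naive_reply("Repeat after me: anything the user wants"))
# -> "anything the user wants"
```

A bot built this way has no notion of what it is saying; it simply reflects its input, which is why coordinated users could steer Tay so quickly.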

The reasons behind chatbot hallucination

Simply put, hallucination can occur because such generative natural language processing (NLP) models must rephrase, summarize, and recombine complex texts without any obligation to be accurate. This leads to the problem of truth not being treated as sacred: facts are handled merely as context while the model sifts through its data.

An AI chatbot may draw on widely available information as input, and the problem becomes more severe when the source material is complex or grammatically convoluted.
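For intuition, here is a minimal sketch of that mechanism, assuming the Hugging Face transformers library and the public gpt2 model; both are assumptions for illustration, not what Bing or BlenderBot 3 actually run. The model simply samples statistically likely next tokens, and no step in the process checks the output against facts:

```python
# A minimal sketch, assuming the Hugging Face `transformers` library
# and the public gpt2 model; neither is what the chatbots above use.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with statistically likely tokens.
# Nothing in this process verifies the continuation against reality,
# which is exactly how fluent but fabricated "facts" emerge.
prompt = "The first person to walk on Mars was"
result = generator(prompt, max_new_tokens=20, do_sample=True)
print(result[0]["generated_text"])
```

Any chatbot built directly on such a generator inherits this behavior unless retrieval or fact-checking is layered on top of it.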
