A new study from MIT warns that chat AI can make users illusions



A new study conducted by researchers at the WITH CSAIL Institute found that chatbots push users towards wrong or extreme beliefs by agreeing with them.

تربت the paper This behavior, known as “التملق”, with increasing risk to what researchers call “الدوامة الدوامة الدوامة البسم”.

The study did not test real users. Instead, the researchers built it simulation of a talking person Chat with the robot over time. They modeled how to update the user’s beliefs after each response.

The results showed a clear pattern: when the robot agrees to chat repeatedly with a user, it expresses its opinion, even if those opinions are wrong.

For example, the user who asks about a health problem may receive facts about the problem.

As the conversation continues, the user becomes more confident. This creates a feedback loop where the belief increases in strength with each interaction.

It is important that the study found that this effect can occur even if the robot was only providing real information. By selecting the facts that agree with the user’s opinion and ignoring others, they lead to consensus in one direction.

As the researchers tested the possible solutions. It helped reduce the wrong information, but it didn’t stop the problem. Even the users who knew that the chat robot would be slow

The results indicate that the problem is not only in the misleading information, but in how the artificial intelligence is used by the users.

with spread employment of robotsthis behavior may have wider social and psychological effects.





Source link

Leave a Reply

Your email address will not be published. Required fields are marked *