How AI chatbots affect people's social and emotional well-being has been revealed

26 March 18:34

OpenAI, the company that developed ChatGPT, partnered with the Massachusetts Institute of Technology (MIT) to conduct a study on how artificial intelligence chatbots affect people’s social and emotional well-being.

The findings were published by the MIT Media Lab, Komersant ukrainskyi reports.

The authors conducted two parallel studies with different approaches. The first was an automated analysis of nearly 40 million ChatGPT interactions. The second involved nearly 1,000 participants who used ChatGPT for four weeks.

The second, controlled study was designed to identify causal relationships: how specific platform features and usage patterns might affect users' psychosocial well-being. The researchers focused on loneliness, social interaction with real people, emotional dependence on the AI chatbot, and problematic use of AI.

What did the second study reveal?

The study showed that, overall, heavier daily use of the chatbot correlated with greater loneliness, dependence, and problematic use, as well as lower socialization. People with stronger tendencies toward emotional attachment and greater trust in AI chatbots tended to experience more loneliness and emotional dependence, respectively.

The researchers tested whether communicating by text, with a neutral chatbot voice, or with an engaging chatbot voice had a greater impact. They also accounted for the type of conversation: open-ended, non-personal, or personal. The results showed that while voice chatbots initially seemed more effective at mitigating loneliness than text chatbots, these benefits diminished with prolonged use, especially when the chatbot spoke in a "neutral" voice.

What did the first study show?

The first study showed that even among the most active ChatGPT users, emotional interaction is rare. Personal conversations were associated with higher levels of loneliness but, at moderate usage, with less emotional dependence and problematic use of ChatGPT. Conversely, non-personal conversations tended to increase emotional dependence, especially with heavy use.

The researchers emphasized that these results are only a "first step." The findings have not yet been peer-reviewed, so they should be interpreted with caution. In addition, the research was based on ChatGPT, and users of other platforms may have different experiences. Also, not all results demonstrate a clear cause-and-effect relationship, so more research is needed.

Maryna Maksenko
Editor