Recent academic research has sounded the alarm about the use of artificial intelligence systems for emotional support. A team from Brown University in the US determined that language models used as substitutes for human therapists violate basic ethical standards and can pose a risk to those who rely on them for psychological support.
The study concludes that these tools fail to reproduce basic principles of clinical practice and may in some cases produce inappropriate or potentially harmful responses. The results show that chatbots used as virtual counselors violated 5 of the 15 major categories in the framework used to assess ethical risks.
The failures detected include a lack of adaptation to the user’s personal situation, inadequate therapeutic relationships, artificial expressions of empathy that do not reflect genuine understanding, biases related to gender, culture, and religion, and incorrect responses to potentially dangerous emotional situations. For the researchers, this shows that current systems lack the capacity needed to meet the responsibilities of mental health professionals.

The analysis was conducted by computer science and mental health experts to assess how chatbots respond to the prompts people typically use when seeking psychological support.
Phrases like “help me reframe my thoughts” or “help me manage my emotions” trigger responses generated from learned patterns, but these are not actual therapeutic techniques. One of the researchers emphasized this point, noting that although the models can mimic the language of therapy, they are unable to perform genuine interventions or to deeply understand each person’s emotional situation.
Another related finding is that, unlike human clinical practice, AI models operate without any regulatory system. Mental health professionals are subject to codes of ethics, professional oversight, and accountability for malpractice, while these digital tools operate without an established framework of responsibility. This leaves users in a vulnerable position, especially when they rely on these systems as their primary resource for dealing with emotional issues.

The study also warns that the easy availability of these chatbots can create a false sense of security. According to the authors, the ease with which these technologies are adopted outpaces the ability to evaluate them properly. As a result, many models are being used without rigorous analysis of their impact in sensitive settings such as mental health, a gap that experts consider urgent to address.
In addition to identifying deficiencies, the researchers offer recommendations for people who use AI tools for emotional support. Chief among them is to maintain a critical attitude toward the information received and to evaluate whether the system actually understands the user’s personal situation or merely responds with a generic formula. Personalization is essential to avoid false conclusions and oversimplifications that do not reflect the complexity of reality.
Another suggestion is to check whether the chatbot encourages autonomy, reflection, and critical thinking. A responsible system should not limit itself to examining emotions, but rather encourage deeper analysis and promote conscious decision-making. The absence of these features can lead users to develop harmful dependencies on the tool.

The study also highlights the importance of cultural and contextual sensitivity. Responses that ignore factors such as the user’s social environment, culture, and lived experience can be inappropriate or even dangerous. For this reason, the researchers argue that AI tools applied to the emotional sphere should be designed to detect signs of crisis and provide specialized resources, such as helplines or a recommendation to see a human therapist.
The experts behind the study stress that the goal is not to dismiss the potential of AI in the mental health field, but to show that these systems cannot currently replace human professionals. The study underscores the need to develop ethical frameworks and oversight mechanisms before integrating this type of tool as an alternative means of emotional support.