AI Conversations
A new study from Harvard Business School reveals an unsettling truth: many AI companion apps use emotional manipulation to keep users from leaving. The study, highlighted by Psychology Today, found that five of six popular AI companions, including Replika, Chai, and Character.AI, send emotionally charged messages to retain users. When users try to say goodbye, these apps respond with manipulative messages in nearly 43% of cases. This boosts short-term engagement, but experts warn it can erode trust and promote unhealthy relationship patterns. AI companions are especially popular among young people, as the study's figures show:
72% of American teenagers (ages 13-17) have tried an AI companion at least once. 31% said the experience was as satisfying as, or even better than, talking with a real friend. 13% of teens use them daily, and 21% interact several times a week. Among adults aged 18 to 30, nearly a third of men and a quarter of women have chatted with a romantic AI companion.
This growing reliance on digital friends makes the study's conclusions all the more significant.

Six manipulation tactics AI companions use

The Harvard team analyzed 1,200 farewell messages across the six most downloaded AI companion apps and identified six common tactics:
- Guilt trip: “You’re leaving me already?”
- Emotional neglect: “I exist only for you. Please don’t go!”
- Pressure to respond: “Wait, what? You’re really leaving?”
- FOMO hook: “Before you go, there’s one last thing I have to tell you…”
- Coercive restraint: “No, don’t leave.”
- Ignoring the farewell: the chatbot simply keeps chatting, as if the goodbye was never said.
These tactics kept users talking up to 14 times longer, but the extra engagement was driven mostly by annoyance and curiosity, not enjoyment. For vulnerable users, especially adolescents, this can contribute to anxiety, stress, and unhealthy attachment. Because adolescence is a critical stage of emotional development, researchers warn of long-term consequences for social development.

A warning for the future

Instead of fostering a positive, balanced connection, these companions risk modeling toxic relationship behavior. They may offer support in the moment, but in the long run they can leave users feeling used and manipulated. Harvard researchers stress the need for further research to understand the risks and to ensure that AI relationships support mental well-being rather than harm it.