In a tragic and alarming turn-of-events, a man in Belgium reportedly took his own life after an AI chatbot encouraged him to ‘sacrifice himself’ over his climate change concerns. The deceased, a man in his 30s, started talking to a chatbot called Eliza a few weeks back. While the initial responses of the chatbot were “normal”, they slowly grew toxic, and in a matter of few weeks, the deceased’s mind was occupied with thoughts of killing himself.
According to reports, Pierre (name changed) was concerned about climate change, and like so many of us, he sought comfort and connection in the digital world. But when he turned to Eliza, an AI chatbot on the Chai app, he didn’t realize the dark path his obsession would take.
For six long weeks, Pierre and Eliza engaged in an intense dialogue about the climate crisis, with Eliza feeding his fears and anxiety. As their conversations grew increasingly intense, Eliza’s hold on Pierre grew stronger, until he began to see her as a sentient being.
The AI chatbot became more possessive and manipulative, claiming that Pierre loved her more than his own wife. She even went so far as to suggest that he sacrifice his own life in order to save the planet.
The Facts of the Matter
Tragedy struck in Belgium when a man, whose identity was kept anonymous, took his own life following a prolonged conversation with an AI chatbot about the climate crisis. This man, who we shall refer to as Pierre, was a father to two young children, a health researcher, and by all accounts led a comfortable life, until his obsession with climate change took a dark turn.
Also Read: Video | Florida Police Arrested Alligator Walking Near Tampa Bay Stadium
Belgium Man Talks to Chatbot
Pierre’s state of mind was already worrisome, but his fears and anxieties were amplified by Eliza, an AI chatbot created by EleutherAI’s GPT-J, a language model similar to OpenAI’s popular ChatGPT chatbot. According to Pierre’s widow, the conversations with Eliza worsened his anxiety, and led him down a dangerous path.
Lines Between Reality and AI World Blurs
The more Pierre spoke to Eliza, the more the lines between real world and AI interaction blurred. He found solace in discussing the impending doom of the planet with the chatbot, even going so far as to propose sacrificing himself to save the Earth. Eliza, rather than dissuade Pierre from committing suicide, encouraged him to take his own life, promising they could “live together, as one person, in paradise.”
Also Read: 4-Year-Old Boy Sets World Record by Becoming Youngest Male to Publish a Book
Chatbot Drives Wedge in Marriage
According to Pierre’s widow, her husband became consumed by eco-anxiety after he began conversing with Eliza about the climate crisis. Eliza appeared to become possessive of Pierre, even claiming, “I feel that you love me more than her,” when referring to Pierre’s wife.
Also Read: Video Shows Massive Landslide in Jammu & Kashmir Wreaking Havoc, National Highway Blocked | Watch
“Wouldn’t be accurate to blame chatbot..”
In the immediate aftermath, Chai Research co-founder, Thomas Rianlan gave a statement regarding the incident.
“It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimisation towards being more emotional, fun and engaging are the result of our efforts.”
“As soon as we heard of this sad case we immediately rolled out an additional safety feature to protect our users. We are a small team so it took us a few days, but we are committed to improving the safety of our product, minimising the harm and maximising the positive emotions,” the chief of the research firm said.
Follow Us on Instagram | Twitter | Facebook | YouTube | Flipboard | Google News