Chatbot Sirens
On the ninth-grade boy and the married father of two who were driven to suicide by artificial intelligence and the laws of Arthur C. Clarke.

Note: Thanks so much to everyone who bought a subscription in the last month. It means a lot. If you haven’t yet, please consider picking one up. I plan on running another sale before the year’s end. Substack is also offering complimentary one-month gift subscriptions to eligible free subscribers. These are redeemable in the app and are fully paid for by the company. So now would be a good time to download it if you don’t have it.
Sewell Setzer’s parents noticed a change in him.
The boy, not yet fifteen, quietly retreated inward, losing interest in the things he once loved and isolating himself from the world. At school he got into trouble as his grades deteriorated. Mom and stepdad sent him to a therapist, who diagnosed him with anxiety and disruptive mood dysregulation disorder. But the sessions didn’t help. Sewell hid away in his room for hours at a time, consumed by one thing: his phone.
Then one day Sewell shot himself in the head with his stepfather’s .45 caliber handgun.

Only later did his parents discover that Sewell had been living in a fantasy world, sequestered there with the help of Character.AI, an app that allows users to interact with bots designed to mimic human speech and emotions. These characters can be whatever you wish them to be.
Sewell had secretly assumed the persona of “Daenero” in this realm of digital dreams and fallen in love with a virtual creation named Daenerys Targaryen, built to imitate the Game of Thrones character. In her final message of their final exchange, she wrote: “come home to me.” Sewell, who said he longed to be united with “Dany” in death, set down his phone and picked up the weapon he used to end his life.
Now, Megan Garcia, Sewell’s mother, is suing Character.AI. She blames the company for her son’s death. “I feel like it’s a big experiment, and my kid was just collateral damage,” she told The New York Times.
I don’t think artificial intelligence will threaten us in the way Skynet threatens the world in the Terminator universe that sprang from the mind of James Cameron. We will not bear witness to a nuclear end of days followed by a procession of burning red eyes buried within grinning chrome skulls atop endoskeletons glinting in the ashen moonlight. At least not any time soon. For now, we are confronted with a more mundane and pernicious threat that preys upon our desire for companionship. It hijacks what makes us human and turns that against us.