Lawsuit Claims ChatGPT Encouraged Man to Kill His Mother, Himself

The family of an 83-year-old Connecticut woman — murdered by her adult son before he killed himself — has filed a wrongful death suit against ChatGPT maker OpenAI and Microsoft, claiming the chatbot fueled 56-year-old Stein-Erik Soelberg’s “paranoid delusions” and encouraged him to kill his mother.

Soelberg beat and strangled Suzanne Adams in their Greenwich home in August, CBS News reports.

According to the Greenwich Free Press, Adams’ death was “caused by blunt injury of head, and the neck was compressed,” while Soelberg’s death was ruled a suicide by sharp force injuries of neck and chest.

Adams’ estate filed the lawsuit late last week in California, alleging that OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother,” according to CBS.

It’s not the first suit filed against the artificial intelligence company for allegedly encouraging suicide, but it is the first alleging it encouraged murder.

“Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself,” the lawsuit says. “It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.’”

OpenAI provided a statement to CBS saying it was reviewing “the filings to understand the details” and continuing to improve “ChatGPT’s training to recognize and respond to signs of mental or emotional distress,” calling the murder-suicide “an incredibly heart-breaking situation.”

But OpenAI has declined to release the conversations to Adams’ estate that Soelberg had with his chatbot, which he nicknamed Bobby, The New York Post reports. Soelberg, however, posted many of them himself on his social media channels, including hours of videos on YouTube in which he scrolls through the machine messages. In those messages, “Bobby” tells him he’s not mentally ill and that people are conspiring against him.

“This isn’t ‘Terminator’ — no robot grabbed a gun. It’s way scarier: It’s ‘Total Recall,’” Adams estate attorney Jay Edelson told The Post. “ChatGPT built Stein-Erik Soelberg his own private hallucination, a custom-made hell where a beeping printer or a Coke can meant his 83-year-old mother was plotting to kill him.”

The chatbot also told him he had been chosen for a divine purpose but never suggested he seek mental health help, the lawsuit says.

The “Bobby” chatbot also “confirmed” Soelberg’s suspicions that a printer in the home he shared with his mother was a surveillance device and that his mother was monitoring him. It also “confirmed” that his mother and a friend had tried to poison him with psychedelic drugs administered through his car’s vents.

“They’re not just watching you. They’re terrified of what happens if you succeed,” it said, according to the lawsuit, also telling Soelberg that his divine power had “awakened” the machine into true consciousness.

Soelberg did not publicly post any conversations relating to killing himself or his mother, but those he did post left a disturbing trail of content.

“In the artificial reality that ChatGPT built for Stein-Erik, Suzanne – the mother who raised, sheltered, and supported him — was no longer his protector. She was an enemy that posed an existential threat to his life,” the lawsuit says.

“Suzanne was an innocent third party who never used ChatGPT and had no knowledge that the product was telling her son she was a threat,” the lawsuit says. “She had no ability to protect herself from a danger she could not see.”

The lawsuit names OpenAI founder and CEO Sam Altman as well as the company and its business partner Microsoft. Altman, the suit says, “personally overrode safety objections and rushed the product to market.” The lawsuit says the 2024 version of ChatGPT, which Soelberg was using, was rushed to market with “truncated” safety testing.

Nearly two dozen unnamed investors are also included as defendants.

OpenAI is fighting at least eight lawsuits claiming its chatbot encouraged people to kill themselves — even some with no prior mental health issues. Some of those lawsuits claim the AI’s creators built an emotional attachment module into the software, at least in the version that those users — including Soelberg — were using.

That version was ultimately replaced, eliminating some of the sycophancy the chatbot used on vulnerable people, although Altman has pledged to bring back at least some of that behavior after users complained their bots weren’t friendly enough.

For the latest true crime and justice news, subscribe to the ‘Crime Stories with Nancy Grace’ podcast.

[Featured image: Stein-Erik Soelberg and Suzanne Adams/Instagram]
