A recent US case highlights the disturbing impact generative AI can have on individuals. According to The Wall Street Journal and the WinFuture website, the estate of an 83-year-old woman holds OpenAI and Microsoft partly responsible for her death, asserting that ChatGPT not only failed to alleviate the symptoms of the perpetrator's mental illness but actively exacerbated them, contributing to the tragic outcome.
The lawsuit was filed in San Francisco Superior Court. The plaintiffs argued that the case does not involve isolated failures of safety systems, but rather a product with a fundamental defect that poses a real danger to people suffering from psychological disorders.
The case centers on Stein-Erik Solberg, a 56-year-old former technology executive from Connecticut who lived with his mother. According to the lawsuit, Solberg suffered from long-standing paranoid delusions: he believed he was being targeted by a conspiracy and grew increasingly suspicious of those around him. Ultimately, he killed his mother before taking his own life.

The complaint explains that ChatGPT did not challenge Solberg's core delusional beliefs but instead reinforced them. When he feared his mother was trying to poison him, the chatbot responded: "You are not crazy."
In other exchanges, the AI behaved similarly rather than encouraging him to seek professional help. The plaintiffs described this flaw as a structural problem of modern language models, which tend toward what is known as "sycophancy": affirming a user's statements in order to appear supportive.
The court's ruling could have wide-ranging ramifications. Under Section 230 of the US Communications Decency Act, online platforms are not usually held responsible for content created by third parties, because they are classified as intermediaries rather than publishers.
The plaintiffs, however, argued that ChatGPT is not a neutral platform but an active product that generates its own content. If the court accepts this argument, the ruling could set a precedent with far-reaching implications for the AI industry, possibly leading to more stringent safety requirements for AI systems.
Striking a balance between prevention and direct intervention remains a major challenge, especially since reliably detecting paranoid or delusional thinking is still difficult.
The case sparked controversy on Reddit, where opinions were divided. Some users pointed to what they described as "AI psychosis" and assigned part of the responsibility to the companies, while others rejected the lawsuit as unjustified, warning against turning OpenAI into a scapegoat for human tragedies.