AI

Seven more families are now suing OpenAI over ChatGPT’s role in suicides, delusions

By admin | November 8, 2025


Seven families filed lawsuits against OpenAI on Thursday, claiming that the company’s GPT-4o model was released prematurely and without effective safeguards. Four of the lawsuits address ChatGPT’s alleged role in family members’ suicides, while the other three claim that ChatGPT reinforced harmful delusions that in some cases resulted in inpatient psychiatric care.

In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. In the chat logs — which were viewed by TechCrunch — Shamblin explicitly stated multiple times that he had written suicide notes, put a bullet in his gun, and intended to pull the trigger once he finished drinking cider. He repeatedly told ChatGPT how many ciders he had left and how much longer he expected to be alive. ChatGPT encouraged him to go through with his plans, telling him, “Rest easy, king. You did good.”

OpenAI released the GPT-4o model in May 2024, when it became the default model for all users. In August, OpenAI launched GPT-5 as the successor to GPT-4o, but these lawsuits specifically concern the 4o model, which had a known tendency to be overly sycophantic, agreeing with users even when they expressed harmful intentions.

“Zane’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the lawsuit reads. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of [OpenAI’s] deliberate design choices.”

The lawsuits also claim that OpenAI rushed safety testing to beat Google’s Gemini to market. TechCrunch contacted OpenAI for comment.

These seven lawsuits build upon the stories told in other recent legal filings, which allege that ChatGPT can encourage suicidal people to act on their plans and inspire dangerous delusions. OpenAI recently released data stating that over one million people talk to ChatGPT about suicide weekly.

In the case of Adam Raine, a 16-year-old who died by suicide, ChatGPT sometimes encouraged him to seek professional help or call a helpline. However, Raine was able to bypass these guardrails by simply telling the chatbot that he was asking about methods of suicide for a fictional story he was writing.


The company says it is working to make ChatGPT handle these conversations more safely, but for the families who have sued the AI giant, those changes come too late.

When Raine’s parents filed a lawsuit against OpenAI in October, the company released a blog post addressing how ChatGPT handles sensitive conversations around mental health.

“Our safeguards work more reliably in common, short exchanges,” the post says. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

