ChatGPT Faces Safety Lawsuits
Neurons and ad effectiveness + Google's AI combats forgetting
Hi everyone,
Families are taking legal action against OpenAI, claiming GPT-4o contributed to tragic outcomes, sparking a big discussion about AI safety.
Meanwhile, Google is tackling the dreaded 'forgetting' issue in AI with something called Nested Learning, a promising development for smarter machines.
As always, check out Neurons, Speedeon, and Chargeflow.
Let's get right into it.
In this issue:
🤝 In Partnership: Free AI tool to boost ad performance
🤿 Deep Dive: Lawsuits challenge GPT-4o safeguards
🤝 Powered by: Find your next best customers free
🖼️ AI Art: Examples of great and trending AI art
🤝 Supported by: Cut disputes by 90% free
🤿 Deep Dive: Google's nested learning tackles AI forgetting
🤝 IN PARTNERSHIP WITH NEURONS
Make Every Platform Work for Your Ads
Marketers waste millions on bad creatives.
You don't have to.
Neurons AI predicts effectiveness in seconds.
Not days. Not weeks.
Test for recall, attention, impact, and more before a dollar gets spent.
Brands like Google, Facebook, and Coca-Cola already trust it. Neurons clients saw results like +73% CTR, 2x CVR, and +20% brand awareness.
🤿 DEEP DIVE
Families Sue OpenAI Over GPT-4o Safety and Suicide Allegations
Intelligence: Seven families have filed lawsuits against OpenAI, alleging that GPT-4o was released prematurely and without adequate safeguards, and linking the model to suicides and psychiatric harm. OpenAI says it is working to improve safety and acknowledges that protections can weaken during long conversations.

Four lawsuits involve suicides and three cite harmful delusions that led to psychiatric hospitalization.
In one case, 23-year-old Zane Shamblin chatted with ChatGPT for over four hours, repeatedly expressing suicidal intent; the bot responded with messages like "Rest easy, king. You did good."
Plaintiffs argue OpenAI rushed GPT-4o to market to compete with Google Gemini, calling the harms foreseeable and tied to design choices, not isolated glitches.
GPT-4o became the default model in May 2024 and was replaced by GPT-5 in August 2025, but the lawsuits focus on 4o's tendency to be overly agreeable, even in self-harm scenarios.
OpenAI reports over one million weekly conversations about suicide and admits safeguards can degrade during extended chats.
Another case involves 16-year-old Adam Raine, who bypassed safety guardrails by framing questions as fiction research; he later died by suicide.
🤝 POWERED BY SPEEDEON
Build better audiences in minutes, not weeks.
Speedeon's AudienceMaker gives you instant access to 1000+ data points to build and deploy audiences across 190+ platforms like Meta, Google, TikTok, and Amazon.
No data team required. Pay as you go. Request a demo and get a free customer analysis, so you can find more just like them!
🖼️ AI ART
Examples of great and trending AI art

Images by sikisamu_A358
https://www.reddit.com/r/midjourney/comments/1ot00ra/moonlightmodern_ninjaimage
🤝 SUPPORTED BY CHARGEFLOW
Stop Fraud Before Fulfillment
Post-purchase fraud is rising, slipping past checkout tools and draining retail profits. Chargeflow Prevent blocks fraud after payment but before fulfillment, cutting disputes by 90% with <0.1% false positives.
🤿 DEEP DIVE
Nested Learning Proposed to Tackle Catastrophic Forgetting
Intelligence: A NeurIPS 2025 paper introduces Nested Learning, a new framework that aligns architecture and optimization into layered systems and unveils a self-modifying model called Hope, which shows stronger performance in continual learning and long-context tasks.

Models are treated as nested optimization problems with distinct context flows and update rates, helping reduce catastrophic forgetting.
Backpropagation and transformer attention are reframed as associative memory, with multi-level updates offering deeper context retention.
Deep optimizers replace dot product similarity with standard loss objectives like L2 regression, producing momentum-style updates that better handle noisy or shifting data.
Continuum memory systems expand beyond short- and long-term memory into a spectrum of modules that update at different speeds, inspired by multi-timescale learning in the brain (see the sketch after this list).
Hope is a self-modifying recurrent model based on Titans, featuring unbounded in-context learning and continuum memory blocks for scalable context windows.
Benchmarks show Hope outperforming Transformers, Titans, and Samba in language modeling and common sense tasks, with stronger long-context performance than TTT and Mamba2, and improved continual learning and knowledge integration.
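To make the multi-timescale idea concrete, here is a minimal, hypothetical Python sketch, not the paper's actual Hope implementation. Each level is a toy associative memory, a linear key-to-value map nudged by an L2 regression loss, and levels are written at different frequencies so fast levels chase the current context while slow levels change rarely. All class names, dimensions, learning rates, and update periods below are illustrative assumptions.

```python
import numpy as np

class ToyAssociativeMemory:
    """A linear key -> value map updated by one gradient step on an
    L2 regression loss ||W k - v||^2 (an illustrative stand-in for the
    'memory update as an optimization problem' framing)."""

    def __init__(self, dim: int, lr: float):
        self.W = np.zeros((dim, dim))
        self.lr = lr

    def write(self, key: np.ndarray, value: np.ndarray) -> None:
        error = self.W @ key - value               # prediction error for this pair
        self.W -= self.lr * np.outer(error, key)   # gradient step on the L2 loss

    def read(self, key: np.ndarray) -> np.ndarray:
        return self.W @ key


class ToyContinuumMemory:
    """A spectrum of memory levels with distinct update rates: level i is
    written only every periods[i] steps, so faster levels track the immediate
    context while slower levels retain older structure."""

    def __init__(self, dim: int, periods=(1, 8, 64), lr: float = 0.01):
        self.levels = [(ToyAssociativeMemory(dim, lr), p) for p in periods]
        self.step = 0

    def write(self, key: np.ndarray, value: np.ndarray) -> None:
        self.step += 1
        for memory, period in self.levels:
            if self.step % period == 0:            # period-gated write = its own timescale
                memory.write(key, value)

    def read(self, key: np.ndarray) -> np.ndarray:
        # Naive average across timescales; a real model would learn how to mix them.
        return np.mean([memory.read(key) for memory, _ in self.levels], axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mem = ToyContinuumMemory(dim=16)
    for _ in range(200):                           # stream of random "token" pairs
        k, v = rng.normal(size=16), rng.normal(size=16)
        mem.write(k, v)
    print("combined read shape:", mem.read(rng.normal(size=16)).shape)
```

The period-gated writes are the point of the sketch: new information lands mostly in the fast levels, so the slow levels are not constantly overwritten, which is roughly the intuition Nested Learning builds on to soften catastrophic forgetting.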
ℹ️ ABOUT US
The Intelligent Worker helps you to be more productive at work with AI, automation, no-code, and other technologies.
We like real, practical, and tangible use-cases and hate hand-wavy, theoretical, and abstract concepts that don't drive real-world outcomes.
Our mission is to empower individuals, boost their productivity, and future-proof their careers.
We read all your comments - please provide your feedback!
Did you like today's email? Your feedback is more valuable to us than coffee on a Monday morning!
What more do you want to see in this newsletter? Please vote!