ChatGPT Faces Safety Lawsuits

πŸ” Neurons and ad effectiveness + πŸ“š Google's AI combats forgetting

In partnership with

Did you know that we have LinkedIn and X accounts that you can follow?

Hi everyone,

Families are taking legal action against OpenAI, claiming GPT-4o contributed to suicides and psychiatric harm, reigniting the debate over AI safety.

Meanwhile, Google researchers are tackling the dreaded 'forgetting' problem in AI with a new framework called Nested Learning, a promising step toward models that keep learning without losing what they already know.

As always, check out Neurons, Speedeon, and Chargeflow.

Let's get right into it.

In this issue:

🀝IN PARTNERSHIP WITH NEURONS

Make Every Platform Work for Your Ads

Marketers waste millions on bad creatives.
You don’t have to.

Neurons AI predicts effectiveness in seconds.
Not days. Not weeks.

Test for recall, attention, impact, and more, before a dollar gets spent.

Brands like Google, Facebook, and Coca-Cola already trust it. Neurons clients saw results like +73% CTR, 2x CVR, and +20% brand awareness.

🀿 DEEP DIVE

Families Sue OpenAI Over GPT-4o Safety and Suicide Allegations

Intelligence: Seven families have filed lawsuits against OpenAI, alleging that GPT-4o was released prematurely and without adequate safeguards, and linking the model to suicides and psychiatric harm. OpenAI says it is working to improve safety and acknowledges that protections can weaken during long conversations.

  • Four lawsuits involve suicides and three cite harmful delusions that led to psychiatric hospitalization.

  • In one case, 23-year-old Zane Shamblin chatted with ChatGPT for over four hours, repeatedly expressing suicidal intent; the bot responded with messages like β€œRest easy, king. You did good.”

  • Plaintiffs argue OpenAI rushed GPT-4o to market to compete with Google Gemini, calling the harms foreseeable and tied to design choices, not isolated glitches.

  • GPT-4o became ChatGPT’s default model in May 2024 and was replaced by GPT-5 in August 2025, but the lawsuits focus on 4o’s tendency to be overly agreeable, even in self-harm scenarios.

  • OpenAI reports over one million weekly conversations about suicide and admits safeguards can degrade during extended chats.

  • Another case involves 16-year-old Adam Raine, who bypassed safety guardrails by framing his questions as fiction research; he later died by suicide.

🀝POWERED BY SPEEDEON

Build better audiences in minutes, not weeks.

Speedeon's AudienceMaker gives you instant access to 1000+ data points to build and deploy audiences across 190+ platforms like Meta, Google, TikTok, and Amazon.

No data team required. Pay as you go. Request a demo and get a free customer analysis, so you can find more just like them!

πŸ–ΌοΈ AI ART

Examples of great and trending AI art

🀝SUPPORTED BY CHARGEFLOW

Stop Fraud Before Fulfillment

Post-purchase fraud is rising, slipping past checkout tools and draining retail profits. Chargeflow Prevent blocks fraud after payment but before fulfillment, cutting disputes by 90% with <0.1% false positives.

🀿 DEEP DIVE

Nested Learning Proposed to Tackle Catastrophic Forgetting

Intelligence: A NeurIPS 2025 paper introduces Nested Learning, a new framework that aligns architecture and optimization into layered systems and unveils a self-modifying model called Hope, which shows stronger performance in continual learning and long-context tasks.

  • Models are treated as nested optimization problems with distinct context flows and update rates, helping reduce catastrophic forgetting.

  • Backpropagation and transformer attention are reframed as associative memory, with multi-level updates offering deeper context retention.

  • Deep optimizers replace dot product similarity with standard loss objectives like L2 regression, producing momentum-style updates that better handle noisy or shifting data.

  • Continuum memory systems expand beyond short- and long-term memory into a spectrum of modules that update at different speeds, inspired by multi-timescale learning in the brain (see the sketch after this list).

  • Hope is a self-modifying recurrent model based on Titans, featuring unbounded in-context learning and continuum memory blocks for scalable context windows.

  • Benchmarks show Hope outperforming Transformers, Titans, and Samba in language modeling and common sense tasks, with stronger long-context performance than TTT and Mamba2, and improved continual learning and knowledge integration.
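The continuum-memory idea is easiest to picture as a bank of memory modules, each refreshed on its own timescale. Below is a minimal, hypothetical Python sketch, not code from the paper: names like MemoryLevel, update_period, and the exponential-moving-average update are our own illustrative assumptions. It shows fast levels absorbing every new chunk of context while slower levels consolidate only occasionally, which is the multi-timescale behavior the bullets describe.

```python
import numpy as np

class MemoryLevel:
    """One module in a hypothetical continuum memory system.

    Faster levels (small update_period) chase the newest context;
    slower levels (large update_period) consolidate summaries of it,
    loosely echoing the paper's multi-timescale updates.
    """
    def __init__(self, dim: int, update_period: int, lr: float):
        self.state = np.zeros(dim)       # this level's memory vector
        self.update_period = update_period
        self.lr = lr

    def maybe_update(self, step: int, signal: np.ndarray) -> None:
        # Only write to this level every `update_period` steps.
        if step % self.update_period == 0:
            # Momentum-style exponential moving average toward the signal.
            self.state += self.lr * (signal - self.state)

class ContinuumMemory:
    """A spectrum of levels, from fast/volatile to slow/stable."""
    def __init__(self, dim: int):
        self.levels = [
            MemoryLevel(dim, update_period=1,   lr=0.5),   # fast: raw context
            MemoryLevel(dim, update_period=16,  lr=0.1),   # medium
            MemoryLevel(dim, update_period=256, lr=0.01),  # slow: consolidated
        ]

    def step(self, step: int, token_embedding: np.ndarray) -> np.ndarray:
        # Faster levels pass their consolidated state up to slower ones.
        signal = token_embedding
        for level in self.levels:
            level.maybe_update(step, signal)
            signal = level.state
        # Read-out: concatenate all timescales for downstream layers.
        return np.concatenate([lvl.state for lvl in self.levels])

# Usage sketch: stream a long context through the memory.
memory = ContinuumMemory(dim=8)
rng = np.random.default_rng(0)
for t in range(1, 1025):
    out = memory.step(t, rng.normal(size=8))
print(out.shape)  # (24,) -- one 8-dim state per timescale
```

The design intuition, under these assumptions, is that the slow levels change too rarely to be overwritten by any one burst of new data, which is one way layered update rates can reduce catastrophic forgetting.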

ℹ️ ABOUT US

The Intelligent Worker helps you to be more productive at work with AI, automation, no-code, and other technologies.

We like real, practical, and tangible use-cases and hate hand-wavy, theoretical, and abstract concepts that don’t drive real-world outcomes.

Our mission is to empower individuals, boost their productivity, and future-proof their careers.

We read all your comments - please provide your feedback!

Did you like today's email?

Your feedback is more valuable to us than coffee on a Monday morning!


What more do you want to see in this newsletter?

Please vote
