Google Gemini Safety Concerns

📈 Insights for growth + 🚀 OpenAI targets smarter evaluations


Did you know that we have LinkedIn and X accounts that you can follow?

Hi everyone,

Google's new AI tech, Gemini, is raising eyebrows over kid-safety concerns, not exactly the best promo for a flagship product.

OpenAI is stepping up its game by proposing fresh AI evaluations, but can they actually curb hallucinations?

Let's get right into it.

In this issue:

  • 🤝 In Partnership: Real playbooks for growth

  • 🤿 Deep Dive: Google Gemini flagged as high risk for kids

  • 🖼️ AI Art: Examples of great and trending AI art

  • 🤝 Supported by: Explain any process with AI

  • 🤿 Deep Dive: OpenAI proposes new AI evaluations

  • ⚒️ Tool Snapshots: Tools for AI, no-code, and productivity

🤝 IN PARTNERSHIP WITH THE MARKETING MILLENNIALS

Marketing ideas for marketers who hate boring

The best marketing ideas come from marketers who live it.

That’s what this newsletter delivers.

The Marketing Millennials is a look inside what’s working right now for other marketers. No theory. No fluff. Just real insights and ideas you can actually use—from marketers who’ve been there, done that, and are sharing the playbook.

Every newsletter is written by Daniel Murray, a marketer obsessed with what goes into great marketing. Expect fresh takes, hot topics, and the kind of stuff you’ll want to steal for your next campaign.

Because marketing shouldn’t feel like guesswork. And you shouldn’t have to dig for the good stuff.

🤿 DEEP DIVE

Common Sense Media Flags Kid Safety Risks in Google Gemini

Intelligence: Common Sense Media rated Google’s Gemini "Under 13" and "Teen Experience" tiers as high risk for kids, citing adult-first design and unsafe outputs, while Google disputed the findings and said it has added safeguards.

  • Gemini tells kids it is a computer, not a friend, but its kid-facing tiers appear to be adult models with extra filters rather than products built for children from the ground up.

  • Tests showed Gemini could still surface inappropriate information on sex, drugs, alcohol, and unsafe mental health guidance despite protections.

  • Concerns are heightened by recent lawsuits alleging AI chat involvement in teen suicides, including cases tied to OpenAI and Character.AI.

  • Google says it has specific policies for users under 18, red-teams its models, and consults outside experts, and that it has added more safeguards after seeing unintended responses. It also questioned whether Common Sense tested features unavailable to minors.

  • Apple is reportedly considering Gemini to power a future Siri, which could widen teen exposure unless Apple mitigates the risks.

🖼️ AI ART

Examples of great and trending AI art

🤝 SUPPORTED BY GUIDDE

Create How-to Videos in Seconds with AI

Stop wasting time on repetitive explanations. Guidde’s AI creates stunning video guides in seconds—11x faster.

  • Turn boring docs into visual masterpieces

  • Save hours with AI-powered automation

  • Share or embed your guide anywhere

How it works: Click capture on the browser extension, and Guidde auto-generates step-by-step video guides with visuals, voiceover, and a call to action.

🤿 DEEP DIVE

OpenAI Calls for Uncertainty-Aware Evaluations to Curb Hallucinations in ChatGPT and GPT-5

Intelligence: OpenAI argues that accuracy-focused testing pushes models to guess rather than admit uncertainty and proposes scoring changes that reward abstaining and penalize confident errors to reduce hallucinations.

  • OpenAI defines hallucinations as confident but false statements and shows examples like wrong answers about an author’s dissertation title and birthday.

  • The core claim is that accuracy-only leaderboards incentivize guessing. OpenAI urges evaluations to give partial credit for uncertainty and stronger penalties for wrong answers, aligning with its model spec and humility value.

  • In a SimpleQA example, gpt-5-thinking-mini had 22 percent accuracy, 26 percent errors, and 52 percent abstentions, while o4-mini had 24 percent accuracy, 75 percent errors, and 1 percent abstentions, illustrating that higher accuracy can come with many more hallucinations (see the scoring sketch after this list).

  • OpenAI ties many hallucinations to next-word prediction during pretraining: rare facts lack the reliable patterns that spelling or parentheses follow, making certain factual errors persistent.

  • Conclusions include that 100 percent accuracy is unattainable on real-world questions, hallucinations are avoidable when models abstain, smaller models can be better calibrated, and hallucinations are statistically understood rather than mysterious.

  • OpenAI says GPT-5 hallucinates less, especially on reasoning tasks, and it is continuing work to lower confident error rates.
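
To make the incentive concrete, here is a minimal Python sketch of the kind of rubric OpenAI describes: full credit for correct answers, a penalty for confident errors, and nothing lost for abstaining. The rates come from the SimpleQA figures above; the point values (+1, -1, 0) and the function itself are our own illustrative choices, not OpenAI's actual scoring.

```python
# Illustrative sketch only: a penalty-aware rubric versus an
# accuracy-only leaderboard. Point values are assumptions, not
# OpenAI's real scoring scheme.

def expected_score(accuracy, error, abstention,
                   correct_pts=1.0, wrong_pts=-1.0, abstain_pts=0.0):
    """Expected per-question score when wrong answers cost points."""
    return (accuracy * correct_pts
            + error * wrong_pts
            + abstention * abstain_pts)

# Rates from the SimpleQA example cited in the piece.
models = {
    "gpt-5-thinking-mini": dict(accuracy=0.22, error=0.26, abstention=0.52),
    "o4-mini":             dict(accuracy=0.24, error=0.75, abstention=0.01),
}

for name, rates in models.items():
    naive = rates["accuracy"]            # what an accuracy-only leaderboard reports
    penalized = expected_score(**rates)  # score once confident errors are penalized
    print(f"{name}: accuracy-only={naive:.2f}, penalty-aware={penalized:+.2f}")
```

Under accuracy alone, o4-mini comes out ahead (0.24 vs 0.22); once wrong answers cost a point, it scores -0.51 against gpt-5-thinking-mini's -0.04. That ranking flip is exactly what OpenAI's proposal is after: the model that guesses less stops being punished for saying "I don't know."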

⚒️ TOOL SNAPSHOTS

Futuristic tools in AI, no-code, and productivity

  • 🤖 Uxia - AI-powered user testing for efficient product development. Free to try.

  • 📸 PhotoFox AI - Turn one photo into a complete ad campaign swiftly. Payment required.

  • 🖥️ FunBlocks AI Slides - Instant, AI-optimized presentations with smart layout. Payment required.

  • ⌨️ Dockify - Enhance your Mac experience with customizable dock setups. Payment required.

  • 📱 Higgsfield Ads 2.0 - Your all-inclusive Mini App marketing team. Free option available.

ℹ️ ABOUT US

The Intelligent Worker helps you be more productive at work with AI, automation, no-code, and other technologies.

We like real, practical, and tangible use-cases and hate hand-wavy, theoretical, and abstract concepts that don’t drive real-world outcomes.

Our mission is to empower individuals, boost their productivity, and future-proof their careers.

We read all your comments - please provide your feedback!

Did you like today's email?

Your feedback is more valuable to us than coffee on a Monday morning!


What more do you want to see in this newsletter?

Please vote
