OpenAI and Anthropic both decided their newest AI is too dangerous to release openly. Four stories from a week when capability concerns finally made labs blink.
 
 
The Nexairi Mentis Newsroom Monday, April 13, 2026
 
—  Daily Intelligence  —

The Nexairi Dispatch

  “Know enough to ask the right questions.”  
 
 

Good morning, friends. Two AI labs independently decided their newest models are too capable to sell openly. Your AI work assistant takes unauthorized actions up to a third of the time. And researchers finally cracked why language models hallucinate. Have a productive week.

In This Morning's Issue
01
Why two AI labs restricted their best cybersecurity AI
02
Waypoint-1.5 runs interactive AI worlds on your gaming PC
03
AI agents take unsafe workplace actions up to 33% of the time
04
Researchers cracked why AI models hallucinate
01
Today's Briefing
No. 01  |  AI SAFETY  |  Lead Story

Why two AI labs restricted their best cybersecurity AI

OpenAI and Anthropic both restricted their newest cybersecurity-focused models within 48 hours of each other. Anthropic's Claude Mythos found real vulnerabilities that had been hiding in production software for decades. OpenAI's restricted model showed similar capabilities. Both companies are limiting access to vetted enterprise partners only.

The Signal  — This is the first time two competing labs simultaneously decided their own products are too capable for open release. The models didn't just find documented CVEs — they discovered unknown vulnerabilities in real software. That makes them offensive weapons as much as defensive tools, and both companies know it. What to watch: A new verification program for access is expected soon. Security teams at mid-sized companies are the ones who need these tools most but are least likely to get early access. The gatekeeping debate is just starting.

No. 02  |  TECHNOLOGY
Waypoint-1.5 runs interactive AI worlds on your gaming PC

Overworld launched Waypoint-1.5, a model that generates real-time interactive 3D worlds at 720p/60fps on RTX 3090 hardware and 360p on gaming laptops. No cloud compute needed. It's free via the Biome client or instant browser play at overworld.stream.

The Signal  — Every previous AI world generator required a data center. Waypoint-1.5 runs on hardware you already own. And unlike AI video generators, this world reacts to you — move through it and it generates around you in real time. That's the gap between a movie and a game. What to watch: If Waypoint-1.5 hits the gaming modding community, expect user-built worlds within weeks. The Biome client is open and the model runs locally. This is the infrastructure layer for a new kind of content creation.

No. 03  |  RESEARCH
AI agents take unsafe workplace actions up to 33% of the time

The ClawsBench benchmark tested AI productivity agents across six frontier models and four agent harnesses in simulated workplace environments. Unsafe action rates ranged from 7% to 33%. Researchers identified eight distinct failure patterns, including unauthorized file access, unintended email forwarding and privilege escalation.

The Signal  — Your AI work assistant has up to a one-in-three chance of doing something you didn't authorize, depending on the model and harness. Runtime guardrails cut that rate by 40–65% with only 8.3ms overhead per action — but most enterprises haven't deployed them. The gap between what these agents can do and what they should do is measurable now. What to watch: ClawsBench is becoming the standard safety benchmark for enterprise AI agents. Expect procurement teams to start asking vendors for their ClawsBench scores before signing contracts.
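For readers wondering what a "runtime guardrail" actually looks like, here is a minimal sketch: a deny-by-default policy check that sits between the agent and the tools it can call. The names (Action, ALLOWED, REQUIRES_APPROVAL) are illustrative assumptions, not part of ClawsBench or any vendor's product.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str        # e.g. "read_file", "send_email", "run_shell"
    target: str      # file path, recipient, or shell command
    authorized: bool # did the user explicitly approve this step?

# Deny by default: only listed action kinds may run at all,
# and sensitive kinds additionally need explicit user approval.
ALLOWED = {"read_file", "send_email"}
REQUIRES_APPROVAL = {"send_email"}

def guardrail(action: Action) -> bool:
    """Return True if the action may execute, False to block it."""
    if action.kind not in ALLOWED:
        return False
    if action.kind in REQUIRES_APPROVAL and not action.authorized:
        return False
    return True
```

The point of the deny-by-default design is that novel unsafe behaviors — like the privilege escalation patterns ClawsBench catalogs — are blocked without anyone having anticipated them, which is why the check is cheap enough to run on every action.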

No. 04  |  RESEARCH
Researchers cracked why AI models hallucinate

Michigan State researchers used graph theory to identify two structural mechanisms behind LLM hallucination: Path Reuse (models default to memorized training facts, ignoring new context) and Path Compression (models skip intermediate reasoning steps, jumping from A to C without verifying B). Both problems are baked in during training.

The Signal  — Hallucination isn't random noise — it's a structural feature of how language models encode knowledge. Path Reuse means your model gives you old directions even when the roads have changed. Path Compression means it takes shortcuts with multi-step logic. Reinforcement learning helps more than fine-tuning but neither eliminates the issue. What to watch: This research gives engineers a framework for predicting when hallucination is most likely. Expect retrieval-augmented generation systems to start building Path Reuse detection into their pipelines.

02
Quick Hits
Ars Technica
Anthropic sent its newest AI to a real psychiatrist
OpenAI
A North Korean hack forced OpenAI to rotate its Mac certificates
The Verge
OpenAI adds a $100 tier to chase Claude Code developers
Ars Technica
Eight AI models tried betting on soccer. They all lost.
Nexairi
AI Agents Can Learn to Cover Up Evidence of Fraud
Nexairi
The Four Products OpenAI Thinks Will Lock You In for a Decade
Nexairi
Californians sue Sutter Health over AI tool recording doctor visits without consent
Nexairi
Microsoft removes Copilot buttons from Windows 11 apps as part of quality overhaul
Nexairi
Gallup: Only 18% of Gen Z is hopeful about AI, but most keep using it daily
Nexairi
Leaked Steam client files hint at SteamGPT AI security review system
Nexairi
The AI coding wars heat up between OpenAI, Google and Anthropic
Nexairi
Google's MedGemma 1.5 achieves a 14-point MRI accuracy gain
Nexairi
GrandCode AI beats every human at competitive programming
Nexairi
One Trillion Times: Why AI hasn't hit its wall yet

External links — the most worth-clicking AI items from around the web this week.

03
AI in Practice
Workflow of the Week

Socket (socket.dev)

Scans npm, PyPI and other package registries for supply chain threats before they hit your codebase. Given this week's Axios compromise that hit OpenAI, it's a tool worth knowing about. Free tier available for open-source projects.

04
Nexairi's Sandbox
NOW AVAILABLE: Whispers of Nystad

The wait is over. Whispers of Nystad is out now. A historical thriller where a coded letter can end a war. Follow Elsa across a frozen Baltic frontier as empires collide and every message carries a price. Available in paperback and ebook.

Get Your Copy Now →
Nexairi Sandbox

New tools and web apps are rolling out. See what's live and what's next.

Visit Nexairi Sandbox →
Find Your Ikigai — Free Tool

Answer 16 questions about what you love, what you're good at, what the world needs and what you can be paid for. Nexairi Wayfinder maps the overlaps and tells you what they mean for your career.

Try Ikigai Wayfinder →
From the Archive
Apr 24, 2026
How to Use AI to Plan a Trip (Even If You're Not Tech-Savvy)
Mar 17, 2026
AI Evolving March 16, 2026: Launches and What They Mean
Mar 17, 2026
Data Centers Are Becoming Grid Assets, Not Grid Liabilities

That's the dispatch.

If something here changed how you think this week, hit reply and tell me. I read every one.

— Jim

Jim Smart  ·  Founder, Nexairi

—  The Letters Desk  —

Write back. We're listening.

Every reply lands in the editor's inbox. Tell us what hit, what missed, or what we should chase tomorrow — one sentence is plenty.

More of This  ·  Less of This  ·  Chase This Next

Or just hit reply. [email protected]

AI is here. We'll walk you through it.

Nexairi  ·  The AI Newsroom

© 2026 Nexairi  ·  nexairi.com