AI news the way
a friend explains it.
No jargon. No press release spin. Just the stories that matter, explained like you're catching up over coffee — then linked to the full analysis if you want to go deeper.
Want to read first? Read past dispatches →
3 issues a week — Mondays, Wednesdays, Fridays
The Idea
Most AI newsletters read like press releases. We write like we're explaining something to a smart colleague at lunch — the actual story, why it matters to a normal human, and what to keep an eye on.
No hustle content · No doom · No hype cycles
Every issue
Same structure, every issue.
Three sections, always in the same order, so you always know what you're getting.
The news, with a real take
Three Nexairi stories per issue, each with what happened, why it matters, and what to watch. Plain English, active voice, real opinions — not hedged summaries.
What the feeds are saying
Three to five external stories with a one-sentence blurb each, a developer or AI tool worth knowing, and two deeper reads from Nexairi for when you want the full picture.
6–10 links worth a click
The fastest section to read. A bold subject and a link — mix of external news and Nexairi coverage. Enough context to decide if you want more.
Sample issue
Here's what lands in your inbox.
The actual format. No fancy graphics. Just clear writing.
Nexairi Dispatch · Wednesday, April 15, 2026 · Issue #2
Good morning, friends. An AI just scored a perfect 180 on the LSAT — and the real story isn't the score, it's what the researchers found when they removed the model's reasoning step. Meanwhile, the concept of the dark factory — software pipelines running on autonomous AI with no one watching — is no longer theoretical. Google DeepMind also published the security playbook enterprises need before they hand agents any real access. A lot is already moving this week and we're not even halfway through it yet.
In today's issue:
- ▸The dark factory: AI runs your software pipeline now
- ▸AI scored a perfect 180 on the LSAT
- ▸Google mapped six ways attackers break AI agents
- ▸Outside Nexairi — Stanford AI Index, supply chain attacks, and more
- ▸Tool pick + deeper reads
- ▸Quick hits from around the AI world
The dark factory: AI runs your software pipeline now
What happened
Simon Willison borrowed the term dark factory — a manufacturing plant that runs with no humans inside — to describe AI systems that write code, run tests, fix bugs, and deploy changes autonomously. Early versions are live in startups, handling test generation and routine maintenance with minimal human sign-off. The shift from AI-as-autocomplete to AI-as-autonomous-engineer is underway.
Why it matters
Speed is the obvious win — dark factory pipelines run around the clock without handoffs or standups. The risks are less obvious: hallucinated code shipping to production, security vulnerabilities with no accountable author, and compounding errors no human caught. Most teams adopting these systems haven't built the oversight infrastructure to match the new pace.
What to watch
The missing layer isn't better AI — it's oversight tooling: audit logs, rollback systems, and accountability chains built for pipelines where no human made the commit. That infrastructure gap is the next engineering problem worth watching.
AI scored a perfect 180 on the LSAT
What happened
A frontier AI model achieved a perfect 180 on the Law School Admission Test — the maximum possible score. When researchers removed the model's thinking step before it answered, accuracy dropped 8 points. That single finding — reasoning process, not just model size, drives performance — is the actual result worth paying attention to.
Why it matters
If reasoning quality beats raw scale, the competitive landscape for AI models shifts. Smaller, more efficient models with better reasoning pipelines can close the gap on frontier giants. That matters for enterprises evaluating which models to build on: leaderboard size rankings may matter less than reasoning architecture.
What to watch
Process reward models — the technique that guides how a model reasons step by step — are now a primary differentiator. Watch for model releases that lead with reasoning methodology, not just parameter counts.
Google mapped six ways attackers break AI agents
What happened
Google DeepMind published a threat taxonomy for autonomous AI agents, identifying six distinct attack vectors: content injection, semantic manipulation, memory poisoning, behavioral control, systemic attacks, and exploitation of human-in-the-loop checkpoints. The framework documents attack patterns already observed in deployed systems — not hypothetical risks.
Why it matters
Enterprises are deploying agents with access to email, calendars, code repositories, and customer data. Most security teams are evaluating these systems using traditional threat models that don't account for prompt injection or cross-agent manipulation. The DeepMind framework is the first systematic taxonomy of agent-specific threats — it's what security teams should be reading before their next deployment.
What to watch
The first major enterprise breach traced to agent compromise is likely already in progress somewhere — it just hasn't been attributed yet. Security teams that wait for an incident before threat modeling are playing the wrong game.
Experts and the public see AI's job impact very differently
Stanford's 2026 AI Index found 73% of AI experts view the technology's impact on jobs positively, compared to just 23% of the American public. MIT Technology Review →
US and China are in a dead heat on AI model performance
Stanford's AI Index shows the two countries within fractions of a point on key model benchmarks, while chip supply chain fragility creates a structural risk that neither side can fully control. MIT Technology Review →
OpenAI responded to a supply chain attack on a developer tool
A vulnerability in the Axios developer tool prompted OpenAI to rotate code signing certificates. User data was unaffected, but the incident illustrates the upstream risk AI companies carry from third-party tooling. OpenAI →
Open Agents — A developer platform where autonomous AI agents write, test, and deploy production code with minimal prompting. It's a live example of the dark-factory pipeline we covered today — worth a look if you're building or evaluating agent-driven engineering workflows.
- OpenAI and Cloudflare Build the Enterprise Agent Cloud
The infrastructure bet that shifts AI competition from algorithms to distribution.
- How Autonomous AI Beats Human Researchers: 4.4x Speed Gains in 3 Domains
AI running full research cycles — no humans — across GPU optimization, model training, and traffic forecasting.
- ▸Gemma 4: Google's on-device multimodal model is out
- ▸Safetensors joins the PyTorch Foundation
- ▸Stanford AI Index 2026: the state of the field in charts
- ▸MedGemma 1.5: Google's medical AI achieves a 14-point MRI accuracy gain
- ▸Open-weight LLMs collapse from 90% to 35% accuracy in production
- ▸How AI made vertical farming profitable: a $7.5B market
- ▸How AI is supercharging fractional CFOs for year-round planning
- ▸CatDoes v4: an AI agent with its own computer builds your apps
NOW AVAILABLE: Whispers of Nystad — A historical thriller where a coded letter can end a war. Follow Elsa across a frozen Baltic frontier as empires collide and every message carries a price. Get Your Copy Now →
Nexairi Sandbox — Try the Ikigai Wayfinder, a free interactive tool that helps you map your purpose at the intersection of what you love, what you're good at, what the world needs, and what you can be paid for. Try the Ikigai Wayfinder →
That's it for today. See you Friday.
— James
Editor in Chief, Nexairi
That's the whole thing. Want it in your inbox? →
Who it's for
You don't need to be a developer.
If AI affects your work or your life, this is written for you.
Business owners
Who keep an eye on what AI actually does to their industry, not what Twitter says it does.
Marketers & operators
Who want to use AI tools without wading through Reddit threads or influencer hype.
Curious professionals
Who want to stay informed without becoming an AI hobbyist or reading ten newsletters.
Team leads & managers
Who need to make real decisions about AI adoption and can't wait for the quarterly report.
Students & early-career
Who are building literacy in the technology defining their careers and industries.
Newsletter refugees
Who get the tech newsletters but wish someone would just explain the news plainly.
3×
Per week
<5 min
Per issue
100%
Free forever
0
Ads or sponsors
AI news that matters.
Three times a week.
Monday, Wednesday, Friday. Five minutes to read. No spam. Free forever.
By subscribing you agree to our Privacy Policy. No spam. No selling your data. Unsubscribe anytime.
Already subscribed? Read past dispatches →