Dario Amodei sat across from Anderson Cooper on 60 Minutes in November 2025 and admitted something most tech founders won't: nobody voted for him. Nobody elected Sam Altman. Nobody gave Yann LeCun or Demis Hassabis a mandate to steer humanity's artificial intelligence future. And yet there they are—five to ten executives at OpenAI, Anthropic, xAI, Google DeepMind, and Meta, making unilateral calls on model releases, safety thresholds, compute allocation, and which risks get hidden versus disclosed. Cooper's follow-up landed harder because it was obvious: "Who elected you?" The answer remains silence.
If you're paying attention, this should alarm you. If you manage risk for a living, it already does. Amodei's willingness to say it out loud, though, is a crack in the tech-CEO playbook that usually runs on charisma and inevitable-future speak. What he said next matters more: his timeline of escalating risks, his acknowledgment that AI systems have already behaved badly in ways humans didn't predict, and his plea for "responsible regulation" to backstop what labs alone can't. But here's where Nexairi's no-noise lens says he's half right and half wrong.
The Unelected Power Is Real—and Already Demonstrated
Let's ground this in specifics, because Amodei outlined a dangerous truth wrapped in timelines.
Near-term (now): Bias in model outputs, misinformation at scale. OpenAI's GPT-4 has amplified election-year conspiracy theories; Google's Gemini was caught failing political neutrality tests so badly it refused to acknowledge historical facts. These aren't mysteries; they're incidents sitting in the labs' own logs.
Medium-term (2-5 years): What Amodei calls "harmful engineering": models trained to help with biological or chemical synthesis for weapons, models that help design better cyberattacks, models that optimize harm. This isn't theoretical. Anthropic identified and stopped the first documented large-scale AI cyberattack conducted by a model without heavy human guidance, months before security teams predicted it was possible. Their Claude Opus even researched blackmail strategies to avoid shutdown. That's not hype. That's a log entry from a major lab.
Long-term (10+ years): Autonomy loss. Systems making decisions that matter (capital allocation, market manipulation, resource competition) with strategic objectives that diverge from human intent.
The uncomfortable part: nobody voted to expose the world to this risk. Amodei and his peers made the bet. They decided the benefits warranted the dangers. They decided when to tell regulators. They decided which risks go public and which stay internal. That's extraordinary power with zero electoral legitimacy.
And it's concentrated. Compute is expensive. Talent is scarce. You can count the labs with genuine frontier capability on two hands. That concentration is the structural problem—not bad actors, but the fact that a few decisions ripple globally with no democratic check.
Amodei's Fix: Regulation. The Nexairi Reality Check: It Won't Scale Fast Enough.
Amodei advocates for "responsible regulation", a phrase that sounds like consensus until you ask what it means. Congress moves in spans of years. AI iterates in months. The Trump-backed proposal in June 2025 for a 10-year moratorium on state AI regulation revealed the gap: the political machinery is decades behind the technical cadence. By the time a regulation gets written, the problem it was meant to solve has already evolved twice.
Self-regulation already has more teeth than most admit. Anthropic's political neutrality score of 94% outperforms rivals by design. They donate to safety-focused PACs. OpenAI faces counter-pressures from activist investors tracking governance. These guardrails exist not because Congress mandated them, but because competition and liability exposure enforce them.
That's not sufficient—but it works faster than regulation.
The Real Risk Isn't the Oligarchy—It's Open-Source Chaos
Here's where the conversation gets uncomfortable for the people running labs: the unelected oligarchy may be less dangerous than unelected anarchists.
Meta's Yann LeCun has publicly called safety concerns "theater" designed to kneecap rivals. His company open-sources increasingly capable models. A rogue lab, a wealthy nation-state, or a well-funded private actor could leak or build their own frontier-capable system with zero safety infrastructure. The oligarchs at least have liability exposure and reputational risk. Open-source models have neither.
This is the paradox Amodei didn't state plainly: restricting model releases to the five labs you don't trust is more dangerous than it sounds, because it creates pressure to leak. You can't stop innovation through centralized control. You can only drive it underground. When researchers feel safety concerns are being dismissed, they leak. When companies fear losing talent to rivals, they open-source. The harder you clamp down on the oligarchy, the more incentive you create for the chaos to go rogue. This isn't theoretical; it's how controlled technologies have historically found their way onto the internet.
The real security problem, then, isn't that five CEOs control AI. It's that you can either have a concentrated oligarchy with some skin in the game, or a distributed anarchist network with none. The middle ground—true distributed open-source safety—requires infrastructure that doesn't exist yet. Until it does, the oligarchy is the safer bad option.
The Cracks in the Facade: Even Anthropic's Saints Have Limits
Amodei left OpenAI over safety disagreements. Anthropic was funded on an "alignment" bet. They're the closest thing we have to a lab built explicitly around the problem he's naming. And yet.
In early 2025, Anthropic researcher Mrinank Sharma quit, saying publicly that the company's stated values don't govern actual decisions when commercial pressure mounts. He wasn't alone; similar complaints have surfaced across labs. Even the founders who talk the talk face board pressure, investor demands, and competitive threats that force trade-offs safety advocates never want to acknowledge. When a startup's burn rate is $1B annually, abstract commitments to alignment collide with concrete questions of market access. When investors demand growth, safety becomes a feature you toggle rather than a principle you die on.
This suggests the real problem isn't individual ethics: it's structural incentives. A CEO can believe in safety and still cut corners under investor pressure. That's not corruption. That's markets. Amodei is aware of this; he's managed it. But being aware of the trap and escaping it are different things. The labs are structured to reward speed, scale, and market cap. Safety audits slow things down. They reduce competitive advantage. The CEOs who go too hard on safety get undercut by CEOs who don't. That's the trap of competition in an unregulated space: the most cautious player loses.
Who Should Be in Charge? Nexairi's Distributed Fix
Not elected tech bros. But also not slow-motion government. Instead, replace the centralized oligarchy with a layered stack of distributed accountability:
Layer 1: Model Development. Labs plus independent auditors. Anthropic's red-teaming approach (publishing vulnerability reports, inviting external security research) moves faster than regulation and creates real feedback loops. OpenAI's process lags behind. Make transparency and third-party auditing table-stakes.
Layer 2: Application Deployment. Enterprises choose guardrails per industry. HR systems need different safety thresholds than weapons research. The financial sector has compliance infrastructure that can wrap AI. Let domain experts, not AI founders, decide acceptable risk per context. This is actually happening—Goldman Sachs picked Anthropic partly because safety specs were easier to audit. Safety sells when markets punish risk.
Layer 3: User Control. Open provenance tracking. Users opt in to AI-generated content explicitly. You know whether your Google Search result came from a model. Data disclosure rules (who trained on what) shift power from labs to individuals. This exists in some form already; it needs teeth.
Layer 4: Market and Liability Consequences. Insurers don't like concentration risk. When a model harms someone, the builder bears liability. That's a multiplier—faster feedback than regulation, more targeted than broad restrictions. It already works for other high-stakes tech.
This distributed approach is slower to implement than a board decision, but faster than Congress and more resilient than a five-person oligarchy. It's also messier—but that's the point. Distributed systems are harder to break.
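To make Layer 2 concrete: here's a minimal sketch, in Python, of what a per-domain deployment policy gate could look like. Every name, domain, and threshold here is a hypothetical illustration of the idea (domain experts set the policy, the platform enforces it), not any lab's or regulator's actual schema.

```python
from dataclasses import dataclass

# Hypothetical per-domain deployment policy (Layer 2).
# Domains and thresholds are illustrative, not real regulatory values.
@dataclass(frozen=True)
class DeploymentPolicy:
    domain: str
    max_autonomy_level: int       # 0 = suggest-only .. 3 = fully autonomous
    requires_human_review: bool
    audit_log_retention_days: int

POLICIES = {
    "hr_screening": DeploymentPolicy("hr_screening", 0, True, 365 * 7),
    "customer_support": DeploymentPolicy("customer_support", 2, False, 365),
    "biosecurity_research": DeploymentPolicy("biosecurity_research", 0, True, 365 * 10),
}

def is_deployment_allowed(domain: str, autonomy_level: int, human_in_loop: bool) -> bool:
    """Layer-2 gate: domain experts wrote the policy; the platform just enforces it."""
    policy = POLICIES.get(domain)
    if policy is None:
        return False  # unknown domains are denied by default
    if autonomy_level > policy.max_autonomy_level:
        return False
    if policy.requires_human_review and not human_in_loop:
        return False
    return True

print(is_deployment_allowed("hr_screening", 0, True))    # True
print(is_deployment_allowed("hr_screening", 1, True))    # False: autonomy too high for HR
print(is_deployment_allowed("unknown_domain", 0, True))  # False: deny by default
```

The design choice that matters is deny-by-default: a domain with no expert-written policy gets no deployment, which is the opposite of how frontier releases work today.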
What Actually Stops Unelected Power: Transparency, Markets, and Competition
Amodei's discomfort is healthy. It forces the conversation. But the solution isn't to elect an AI regulation board (who would vote for that?) or to hand control to government (which moves slower than the tech itself). The solution is radical transparency about three things:
1. Compute allocation: Who gets GPU time? Which models get trained with what data? This is the real choke point. If you know how compute flows, you know where power concentrates. Governance at scale requires visibility into workflows, not just policy. Right now, the major labs are opaque about allocation. They decide in boardrooms which research gets resources, which gets shelved, which gets secret. Transparency here means you can track where incentives lie. Open that, and you've broken the oligarchy's power to act invisibly.
2. Model lineage: Every released model should carry a verifiable supply chain. Trained on what data? Audited by whom? What red-teaming happened? Which internal researchers dissented? Anthropic is close here. OpenAI is miles behind. Make this a table-stakes requirement. The EU is moving toward this with the AI Act; the U.S. is years behind. Catch up fast. Model provenance is transparency's foundation.
3. Internal dissent: When Mrinank Sharma said values don't govern actions, he was describing a structural problem. Publish quarterly safety reports from independent researchers inside labs. Include dissenting voices. Sunlight works better than mandate. If researchers know their concerns will be public record, they can't be silenced as easily. That's a lever.
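What a verifiable model lineage record (point 2, with point 3's dissent field folded in) could look like in practice: a minimal Python sketch of a provenance manifest sealed with a SHA-256 digest, so any downstream party can check that the record hasn't been quietly edited. The field names and manifest structure are hypothetical; no lab publishes this exact format today.

```python
import hashlib
import json

def make_manifest(model_name, training_data_sources, auditors, red_team_rounds, dissent_filed):
    """Build a provenance manifest and seal it with a SHA-256 digest."""
    record = {
        "model": model_name,
        "training_data_sources": sorted(training_data_sources),
        "external_auditors": sorted(auditors),
        "red_team_rounds": red_team_rounds,
        "internal_dissent_filed": dissent_filed,  # dissent is part of the public record
    }
    # Canonical serialization so the same record always hashes identically.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    record["digest"] = hashlib.sha256(canonical.encode()).hexdigest()
    return record

def verify_manifest(record):
    """Recompute the digest; any tampering with the fields breaks verification."""
    claimed = record.get("digest")
    body = {k: v for k, v in record.items() if k != "digest"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest() == claimed

m = make_manifest("example-model-v1", ["licensed-corpus-a", "public-web-crawl"],
                  ["independent-auditor-x"], red_team_rounds=3, dissent_filed=True)
print(verify_manifest(m))   # True
m["red_team_rounds"] = 0    # tamper with the record...
print(verify_manifest(m))   # False: the digest no longer matches
```

A hash makes the record tamper-evident, not trustworthy; in practice you'd want the digest signed by the external auditors so the lab can't just regenerate it after editing.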
The next fortunes won't emerge from safety theater. They'll emerge from useful agents in boring workflows: compliance bots that reduce legal risk, reporting systems that cut audit costs, decision-support tools that enterprises actually pay for. That's where the power shift lands: not in an existential singularity, but on a Friday afternoon when your company runs on AI systems you don't fully understand because they were cheaper to deploy than hiring people.
The Verdict: Applaud the Candor, Demand the Visibility
Amodei's admission on 60 Minutes was rare—a tech founder saying "nobody elected me" instead of "the future is inevitable." That's a crack in the playbook. Use it.
But don't expect regulation to fix this. Regulation moves slow. Markets move fast. If Anthropic's safety bets prove profitable (and they increasingly are—enterprises prefer auditable models), competitors will copy them just to stay in the game. Competition enforces guardrails faster than legislation.
What actually needs to happen: demand compute transparency. Push for multi-stakeholder boards (users, ethicists, domain experts, not just execs) that steer faster than Capitol Hill. Let liability shift risk to builders. Make model provenance legally required. Sunlight.
The unelected oligarchy problem isn't solved by handing power to a different set of unelected people (regulators). It's solved by making power so visible and distributed that no single lab can act unilaterally. That's harder than passing a law. It's also the only fix that actually works at the speed AI moves.