The Third Citizen


The Two-Faced Machine: A Citizen’s Guide to the State’s AI Schizophrenia

The Third Citizen
Apr 21, 2026
∙ Paid
[Image: Inside Anthropic's First Developer Day, Where AI Agents Took Center Stage | WIRED]

While the NSA quietly embraces a powerful AI named Mythos, the Pentagon warns that its creator is a systemic risk. This isn’t just bureaucracy; it’s a symptom of strategic schizophrenia. This deep dive dismantles the comforting narratives and delivers a practical framework for citizens to audit this dangerous incoherence before it’s too late.

The Symphony of Bureaucracy

Let’s begin by playing devil’s advocate, embracing the most comforting and conventional explanation for the paradox of Mythos. The story goes like this: the American national security apparatus is a complex, sophisticated ecosystem with specialized roles. It’s not only normal but desirable for its different arms to hold different perspectives. We want the NSA, the tip of the spear in the cyber domain, to leverage the most advanced tools available. Its use of an AI like Mythos isn’t reckless; it’s a necessary adaptation to a hostile digital environment.

On the other hand, we want the Pentagon, the institution responsible for the vast and fragile military supply chain, to be profoundly risk-averse. Of course their analysts flag a powerful, private AI developer like Anthropic as a potential risk. That’s their job. It’s a sign of prudence, of institutional checks and balances. This isn’t a contradiction; it is a healthy, functioning system. It’s a symphony of bureaucracy, where each instrument plays its part to perfection, creating a coherent national security strategy. The spies spy, the planners plan, and the citizen can sleep soundly, confident that the experts are in control.


The Cracks in the Official Story

This narrative is clean, logical, and deeply reassuring. It is also a complete fiction. The moment you scratch beneath the surface, the story of a well-oiled machine falls apart, revealing a sputtering engine running on contradictory fuel sources. Treating the same entity—a revolutionary AI model—as both a trusted asset and a systemic liability is not a sign of strategic depth. It is a sign of a system that has no core philosophy about the very nature of power in the 21st century.

It reveals a state so compartmentalized that its left hand has no clue its right hand is sharpening a knife it has been told is poisoned. A government that cannot form a coherent opinion on a foundational technology cannot be trusted to wield it. This isn’t a feature; it’s a catastrophic bug in the operating system of the modern state. And you, the citizen, are the one who will be left to deal with the system crash.
