The Technocrat’s Blindness: How Elite Impunity Is Shaping the AI Alignment Crisis
Why the resignation of Larry Summers from OpenAI exposes the dangerous gap between technical competence and moral character in Silicon Valley.
The Collapse of the Stabilizer
There is a seduction in the idea of the “Technocrat King.” We want to believe that there are people in high places—men like Larry Summers—who operate above the fray of petty morality, guided only by data, economics, and the cold logic of stability. When OpenAI imploded in 2023, Summers was installed not just as a board member, but as a symbol. He represented the establishment’s steady hand on the steering wheel of a runaway future. But the recent release of his correspondence with Jeffrey Epstein reveals that the hand on the wheel was shaking with the tremors of a deeply compromised past.
I want you to look past the salacious details of the emails. Focus instead on what they represent: a culture of impunity where access is the only currency that matters. Summers maintained these ties long after Epstein’s criminality was public knowledge. Why? Because in the stratosphere of the elite, moral judgment is often suspended in favor of mutual benefit. This is the class we have entrusted with the governance of Artificial Intelligence. We are asking men who could not distinguish a predator from a peer to teach machines how to distinguish right from wrong.
Thesis: The Separation of Competence and Character
The prevailing dogma in Silicon Valley and Washington alike is that professional competence can be surgically separated from personal character. This is the Thesis of the modern meritocracy. We tell ourselves that a man can be a brilliant economist, a visionary university president, or a wise board member, regardless of his private associations. We treat ethics as a distinct module, a plugin that can be added later, rather than the operating system itself.
This separation is a dangerous fiction. As the cyberneticist Norbert Wiener warned us decades ago, we cannot expect machines to handle the moral weight we refuse to carry ourselves. When we elevate figures like Summers based solely on their “effectiveness,” we signal that results justify the means, and that power absolves the holder of scrutiny.
The more we get out of the world the less we leave, and in the long run we shall have to pay our debts at a time that may be very inconvenient for our own survival.
– Norbert Wiener, The Human Use of Human Beings
We are currently paying that debt. The “inconvenient time” Wiener predicted is now. We are building systems that will amplify human intent a thousandfold, yet the intent at the top is clouded by a refusal to acknowledge basic moral boundaries.
Antithesis: The Fallacy of ‘Private Lives’
The Antithesis to this view—often shouted down in boardrooms—is that there is no such thing as a private flaw in a public steward. The defense offered for Summers, and for many who walked through Epstein’s doors, is that their professional judgment remained unclouded. But this ignores the psychological reality of the “Inner Ring,” a concept C.S. Lewis described as the desire to be inside the circle of power, regardless of the cost.
The profound danger is not just that these men made mistakes, but that they created a world where those mistakes were socially acceptable. When you normalize the grotesque in your personal life, you lose the sensitivity required to govern existential risks. If you cannot see the harm in associating with a trafficker of children because he is “interesting” or “well-connected,” how can you possibly see the subtle, creeping harms of algorithmic bias or a runaway AI model?
Synthesis: The Alignment Problem is a Human Problem
The Synthesis we must reach is this: The AI alignment problem is not a technical hurdle; it is a mirror of our own moral disintegration. You cannot code “goodness” into a machine if the people overseeing the code have a fluid definition of the term. The resignation of Larry Summers is a necessary purge, but it is also a diagnostic flare. It lights up the reality that our institutions are run by people who value cleverness over wisdom.
It is only because the mind is not a machine that it can go wrong... A machine cannot lie, but it can be made to transmit lies.
– Simone Weil, Lectures on Philosophy
We are transmitting our lies into the silicon. The lie that power grants immunity. The lie that intellect excuses vice. The lie that we can build a heavenly future using the blueprints of a corrupt past. True alignment requires leaders who are not just smart, but whole—leaders whose private integrity matches their public rhetoric.
Application: Demanding a New Standard
So, what do we do? We stop accepting the “Adult in the Room” narrative at face value. When an organization like OpenAI appoints a board member, we must look beyond the résumé. We must demand a forensic accounting of their character. This sounds puritanical to modern ears, but the stakes demand it.
We need to cultivate a “Third Citizen” mindset that refuses to be dazzled by credentials. Scrutinize the networks. Ask who validates whom. If a leader is shielded by a web of favors and silence, they are unfit to guard the gates of the future. We must champion a new metric for leadership in tech: not just IQ, but MQ—Moral Quotient.
Key Takeaways
The Illusion of Separation: We can no longer afford to treat professional competence and moral character as separate entities, especially in high-stakes fields like AI governance.
The Alignment Reflection: Artificial Intelligence will reflect the values of its creators; if those creators are compromised by elite impunity, the AI will be too.
The Cost of Silence: The Summers resignation proves that past associations are not dormant; they are active liabilities that threaten institutional stability.
The New Mandate: We must demand transparency not just in algorithms, but in the human networks that control them.