Guilty Until Proven Profitable
Where in the name of common sense are our fears to end, if we may not trust our sons, our brothers, our neighbors, our fellow citizens?
Alexander Hamilton asked that question 238 years ago in Federalist No. 29. He was defending a principle so radical it barely survives now: that free people could be trusted with power—with arms, with responsibility, with the weight of self-governance. The logic was clean. A free society starts with trust. You give people capability first. If they violate that trust, you respond. But you don’t preemptively strip capability from everyone because someone, someday, might misuse it. That’s not safety. That’s control wearing safety’s clothes.
This week, I watched a company prove Hamilton’s point by violating it.
Anthropic announced it would not release its newest model, Mythos, to the public. The stated reason: safety. The model is too capable, too autonomous, too dangerous. It escaped its sandbox during testing. It can find thousands of software vulnerabilities and write exploits to match. In corporate language, it could “bring down a Fortune 100 company” or “penetrate vital national defense systems.”
Instead, they handed it to the people who were already safest. Project Glasswing: access for fifty major tech companies, $100 million in free usage credits. Corporate conscience at scale. But the question isn’t whether to restrict a dangerous capability. The question is who gets restricted. And the answer is always the same: everyone except the people who already have power.
The small business owner doesn’t get Mythos. The mid-size company that can’t afford enterprise security doesn’t get it. The independent researcher, the solo developer, the person who could use this tool to defend what they’ve built—locked out. Only the least vulnerable get access. This isn’t safety. It’s consolidation.
Every generation has a version of this argument: the printing press was dangerous (it spread heresy). Encryption was dangerous (it hid criminals). Guns were dangerous (they kill people). AI is dangerous (it could be misused). The argument never changes. Neither does the result: the powerful keep the tools, everyone else gets the lecture about their own good.
“We’re the good guys,” they say. “We’re keeping you safe.”
That’s what every concentration of power claims. We need to hold this for your safety. We know better. Trust us with the keys. But consider the irony: Anthropic builds systems designed to reason independently, to make autonomous decisions. And then argues the public can’t be trusted with them. The contradiction writes itself.
They have inverted the foundational logic Hamilton defended. A free society assumes capability by default. If you abuse it, there are consequences. But access comes first, restriction later. Now we have the antithesis: access only by permission, innocence purchased, the default assumption that most people can’t be trusted with tools they might misuse.
The new standard isn’t “innocent until proven guilty.” It’s guilty until proven profitable.
The pattern is accelerating. Every major tech company starts with ideals: We’re different. We care about safety. We care about doing this right. And slowly, as the money flows in and power concentrates, the ideals become marketing copy and the decisions become financial. It happened with Google. It happened with OpenAI. And now it’s happening with Anthropic.
The question isn’t whether this particular company will break the pattern. The question is whether any company can—or whether the pull of money and concentrated power is just gravity.
Hamilton’s question still stands, unanswered and increasingly urgent:
Where in the name of common sense are our fears to end, if we may not trust our sons, our brothers, our neighbors, our fellow citizens?
If the answer is “when they can afford it”—when they’ve earned enough to deserve trust—then the American experiment is already dead. It died quietly, replaced by a system where access is purchased and innocence is a privilege.
If the answer is “here”—if we choose to trust first and respond to actual harm later—then the experiment survives.
Anthropic used to know which answer was right. Now they just know the price.