Sunday, February 22, 2026

Inscrutable AI, Deep Math, & A Legitimacy Problem

This morning I ran into a Rousseau quote that felt like it had been smuggled out of a sci-fi novel.

He says that to discover the rules of society best suited to nations, we would need a “superior intelligence” that could understand the passions of men without feeling them—an intelligence with no affinity for our nature but full knowledge of it; whose happiness was independent of ours but who would nevertheless make our happiness its concern. Then he closes with the line that makes you blink: “Gods would be needed to give men laws.” (It’s from The Social Contract, Book II, the chapter on the Legislator.)

It’s not much of a stretch to swap in a modern noun. Substitute AI for the “superior intelligence” and for the “gods,” and suddenly Rousseau sounds like he’s describing the contemporary fantasy: government by model.

The pitch is seductively simple, and it writes itself. Feed a machine huge amounts of data. Let it see the coupling between economic and social forces more clearly than any legislature ever could. Let it pick marginal tax rates without prejudice. Tune incentives to reduce suffering. Make policy boringly competent, because it’s not trying to win a news cycle, punish an enemy, or flatter a base. If you’ve watched human institutions fail at basic arithmetic, you can feel why this idea keeps returning like a comet.

But dystopias start this way too: we handed over the steering wheel to a rational system, and then we couldn’t get it back.

The first failure mode is moral. Even a “benevolent” optimizer can decide that you have to break a few eggs to make the omelet. The machine doesn’t need to be evil. It only needs to be consistent. And we don’t absolve ourselves by adding a review board. Layers of oversight can help, sure, but they don’t remove the oldest political fact: the passions of men don’t disappear when you upgrade the tool. In some ways they get sharper, because the more powerful the instrument, the more factions want to capture it. Intelligence doesn’t automatically defeat corruption; it can amplify it.

The second failure mode is subtler, and to me it’s the one that matters most right now: inscrutable transparency.

Imagine the code is open. The proofs are published. The audit logs are public. And yet, for most citizens, it’s still a black box—not because it’s hidden, but because it’s deep. As Doron Zeilberger puts it (and this is the line I keep coming back to), “deep” mathematics is what happens when truth depends on a long chain of non-trivial insights stacked on other non-trivial insights, until a non-expert can’t realistically rebuild the reasoning from first principles. The conclusion becomes something you “know” the way you “know” a modern operating system works: it runs, experts vouch for it, it passes tests, and you accept it because you don’t have a choice.

That’s a civilizational trade we’ve already made in lots of domains. Whitehead basically foresaw it: civilization advances by extending the number of important operations we can perform without thinking about them. We gain power by outsourcing cognition. But transpose that into governance and you hit a legitimacy wall.

Because democracy isn’t only a machine for producing outcomes. It’s a machine for distributing epistemic authority—for ensuring that, in principle, ordinary people can contest the reasons power gives for what it does. Once governance becomes “deep” in the sense that only a priesthood can really argue with it, citizenship quietly turns into deference. On good days we call that expertise. On bad days we call it fate.

So maybe Rousseau is right: gods would be needed to give men laws. But if the gods arrive as code, the real question isn’t whether the system can optimize society. The real question is whether humans can remain morally adult under it.

Because when you can’t argue with power—when you can’t even understand the language it speaks—you eventually end up doing the oldest thing people do in the presence of something stronger than them:

You worship it, or you fear it.