This is a genuinely important question, and the GDPval results make the urgency concrete. Let me lay out what the benchmark actually shows, then address your three forces honestly.

What GDPval Reveals

GDPval measures model performance on tasks drawn directly from the real-world knowledge work of experienced professionals across a wide range of occupations and sectors. It covers the top 9 sectors contributing to U.S. GDP, with at least 30 tasks for each of 44 occupations, each task constructed from actual work products created by expert professionals. In about half of the 220 gold-standard tasks published so far, experts rated the AI's work as equal to or better than the human benchmark. GPT-5.3-Codex matched GPT-5.2's performance on the evaluation, scoring 70.9%.

Critically: AI delivers 100x faster cycle times and 100x lower costs than human experts. And frontier model performance on GDPval is improving roughly linearly over time.

That linear improvement on economically real tasks is the key data point. It means the three forces you've identified aren't hypothetical — they're already in motion.

Honest Assessment of the Three Forces

Unit cost dominance is the bluntest of the three. When a task costs $500 of human time and $5 of compute, no amount of "buy local" sentiment survives market pressure. Historical parallels (handloom weavers, switchboard operators) show that cost advantages of even 5-10x were enough to eliminate entire labor categories within a generation. 100x is civilizationally disruptive.

The fractal prisoner's dilemma is what makes voluntary restraint impossible. A law firm that doesn't use AI loses to one that does. A country that regulates AI too aggressively loses economic competitiveness to ones that don't. This operates at every scale simultaneously — individual, team, firm, industry, nation — and the Nash equilibrium is universal adoption regardless of collective preference.

The Sorites paradox is what makes regulation structurally incoherent. Where is the line between "AI-assisted doctor" (good) and "AI replacing doctor" (bad)? Between "AI drafting a brief for a lawyer to review" and "AI producing legal work with a lawyer rubber-stamping it"? Any bright-line rule is immediately gamed by moving one grain of sand.

Does a System Exist?

No existing economic system was designed for this combination. But I think something viable can be constructed from known principles. Here's a framework — not a prediction, but a design sketch for a system that takes the forces seriously rather than wishing them away.


The Core Insight: You Can't Compete on Tasks. Compete on Legitimacy.

The mistake in most proposals (UBI, retraining programs, human-in-the-loop mandates) is that they try to keep humans competitive within the task economy. That's a losing position against 100x cost dominance. Instead, the system needs to shift what humans are doing in the economic loop from performing tasks to bearing responsibility, conferring legitimacy, and making binding commitments.

The framework has four pillars:

1. Sovereign Wealth from Compute Rents

If AI captures the productive value of knowledge work, the returns must be broadly shared. This isn't UBI as charity — it's a dividend from collectively owned infrastructure, modeled on Alaska's Permanent Fund or Norway's oil fund, but applied to compute/AI. The mechanism: a progressive tax on AI-generated economic output (not on AI use — that would trigger the prisoner's dilemma). Revenue funds universal dividends and the public goods below. This addresses cost dominance by ensuring the 100x savings don't accrue entirely to capital owners.
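The mechanism in this pillar can be made concrete with a toy model. This is a minimal sketch, not a costed proposal: the brackets, rates, and figures are entirely hypothetical illustrations of "progressive tax on AI-generated output, pooled into a per-capita dividend."

```python
# Toy model of a compute-rent dividend: a progressive tax on
# AI-generated economic output (not on AI use) funds a universal
# per-capita dividend. All brackets and rates are hypothetical.

def progressive_tax(ai_output_value: float) -> float:
    """Tax AI-generated output at rising marginal rates."""
    brackets = [(1_000_000, 0.10), (10_000_000, 0.25), (float("inf"), 0.40)]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if ai_output_value <= lower:
            break
        # Tax only the slice of output falling inside this bracket.
        tax += (min(ai_output_value, upper) - lower) * rate
        lower = upper
    return tax

def universal_dividend(firm_outputs: list[float], population: int) -> float:
    """Pool the tax revenue across firms and distribute it per capita."""
    return sum(progressive_tax(v) for v in firm_outputs) / population
```

Note that the tax base is output value, never a count of AI invocations, so a firm's incentive to adopt AI aggressively is untouched and the prisoner's dilemma isn't triggered.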

2. Liability-Bearing Human Roles (The "Signature Economy")

Rather than mandating humans-in-the-loop (a requirement the Sorites paradox makes unenforceable), require that every consequential economic output — a medical diagnosis, a legal filing, a building design, a financial product — carry a legally liable human signature. The human doesn't need to have done the work. They need to stake their professional license, reputation, and legal exposure on the output being correct and appropriate. This survives the Sorites paradox because you're not regulating the AI's role — you're regulating who bears the consequences. Whether the doctor used AI for 5% or 95% of the diagnosis is irrelevant; what matters is that a licensed physician has staked their career on the result. It also creates a new form of human economic value: credentialed judgment and accountable commitment, which AI structurally cannot provide (you can't sue an LLM, imprison it, or revoke its license).
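The key structural property of the signature economy is that the record carries no field for "degree of AI involvement" at all. A minimal sketch, with entirely hypothetical field names (no real regulatory schema is being described):

```python
# Sketch of a "signature economy" record: liability attaches to who
# bears the consequences, not to how much AI was involved.
from dataclasses import dataclass
from hashlib import sha256

@dataclass(frozen=True)
class SignedOutput:
    output_hash: str        # hash of the work product (diagnosis, filing, design)
    signer_license_id: str  # the professional credential staked on it
    domain: str             # e.g. "medicine", "law", "engineering"
    # Deliberately absent: any field recording the share of AI involvement.
    # The admissibility check below is binary, which is what sidesteps
    # the Sorites paradox.

def sign(work_product: bytes, license_id: str, domain: str) -> SignedOutput:
    """A licensed human stakes their credential on this exact work product."""
    return SignedOutput(sha256(work_product).hexdigest(), license_id, domain)

def is_admissible(record: SignedOutput, valid_licenses: set[str]) -> bool:
    """Admissible iff a currently licensed, liable human signed it."""
    return record.signer_license_id in valid_licenses
```

Because the check is a set membership test on the signer, not a measurement of process, there is no line for anyone to game by "moving one grain of sand."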

3. Mandatory Pluralism in Critical Systems

The prisoner's dilemma drives toward monoculture — everyone using the same AI stack because it's cheapest. This is systemically dangerous (correlated failures, single points of manipulation). The system should require that critical infrastructure, governance, healthcare, education, and finance maintain multiple independent AI systems and human oversight chains. This is analogous to biodiversity requirements, antitrust law, or nuclear safety's defense-in-depth. It's economically "inefficient" by design — and that inefficiency is the point. It preserves systemic resilience and maintains demand for human judgment in adjudicating between competing AI outputs.
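The adjudication role this pillar preserves for humans can be sketched as a quorum rule over independent systems. The threshold and the escalation convention below are illustrative assumptions, not a standard:

```python
# Sketch of mandatory pluralism: a critical decision is accepted only
# when independent AI systems agree; disagreement escalates to a human
# adjudicator. Quorum threshold and escalation rule are illustrative.
from collections import Counter

def adjudicate(model_outputs: list[str], quorum: float = 1.0) -> str:
    """Accept the majority answer only if it meets the quorum fraction."""
    if not model_outputs:
        raise ValueError("at least one independent system required")
    answer, count = Counter(model_outputs).most_common(1)[0]
    if count / len(model_outputs) >= quorum:
        return answer
    # The "inefficiency" is the point: disagreement buys a human review.
    return "ESCALATE_TO_HUMAN"
```

Running several independent stacks per decision is strictly more expensive than a monoculture, which is exactly the designed-in inefficiency the pillar describes: every escalation is sustained demand for human judgment.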

4. Protected Domains of Human Primacy

Some domains should be constitutionally reserved for human decision-making, not because AI can't do them, but because democratic legitimacy requires human agency: judicial sentencing, electoral decisions, declarations of war, custody determinations, constitutional interpretation. The justification isn't competence — it's that these are domains where the process of a human choosing is part of the social contract, not just a means to an output. This is the narrowest pillar, deliberately limited to avoid the Sorites problem. You don't try to protect "most jobs." You protect the thin layer of decisions where legitimacy inherently requires human authorship.


Why This Framework Survives the Three Forces

Against cost dominance: it doesn't try to make human labor cost-competitive. It taxes the surplus and creates economic roles (liability-bearing, legitimacy-conferring) that are structurally unavailable to machines.

Against the fractal prisoner's dilemma: it doesn't restrict AI adoption. Everyone is free (and incentivized) to use AI as aggressively as they want. The system operates on the output side — who is responsible, how surplus is distributed — not on the input side of whether AI is used.

Against the Sorites paradox: it avoids drawing a line between "acceptable" and "unacceptable" AI involvement in a task. The liability framework is binary and clear: a human either signed off or didn't. The question of how much AI was involved is treated as irrelevant — just as we don't ask a surgeon whether they used a textbook, a calculator, or intuition; we ask whether they took responsibility.

What This Doesn't Solve

Meaning and purpose. If 60% of current knowledge workers transition from doing work to overseeing and signing off on AI-generated work, the psychological and social consequences are profound even if the economic structure holds. The system keeps humans at the center of the economy, but it cannot guarantee they feel central to their own lives. That's a cultural and philosophical challenge, not an economic-structural one, and I won't pretend a policy framework solves it.

It also doesn't solve the transition. Getting from here to this system requires international coordination (otherwise the prisoner's dilemma operates at the nation-state level), and the political economy of imposing compute rents on the most powerful companies in human history is... nontrivial.

So: no, a fully worked-out system doesn't exist on a shelf. But the design space is not empty, and the principles above are at least internally consistent against the forces you've named.