The Intelligence Curse: When Intelligence Stops Serving Us - Paper Review
You can read the full paper at: https://intelligence-curse.ai
Every now and then, you read something that leaves your worldview slightly cracked. Not shattered, but quietly and uncomfortably adjusted. That's what The Intelligence Curse did for me.
This isn't your average "AGI is coming" essay. It's not a techno-utopian vision or a doom-laced manifesto. It's something colder, more precise, and ultimately more disturbing: a map of what happens when intelligence, the very thing we've always celebrated as the engine of progress, stops needing us.
What Happens When Humans Are No Longer Economically Necessary?
The authors, Luke Drago and Rudolf Laine, start with a disarmingly simple idea. If we manage to build truly intelligent machines, then those machines might eventually outperform us in most economically useful domains. Not just on specific tasks like summarizing documents or coding small scripts, but across the entire spectrum of knowledge work, creativity, and strategy.
This isn't about robots taking jobs. It's about intelligence itself becoming so cheap and abundant that human labor becomes economically irrelevant. If you're someone who works in tech, research, policy, or pretty much anything cognitive, this isn't a theoretical concern. This is about you.
The Intelligence Curse
The central concept of the paper is what the authors call the "intelligence curse": the idea that once intelligence becomes abundant, its benefits might begin to work against the humans who built it. Intelligence has always been a source of advantage. We revere smart people. We build systems to amplify smart decisions. We organize society around producing and applying intelligence.
But what if intelligence stops being a scarce resource? What if it's no longer something people possess, but something systems generate, on demand, at scale, without fatigue or politics or salaries? In that world, Drago and Laine argue, the incentives that bind states, corporations, and markets to human well-being begin to fray. Not because anyone is evil. But because the economic feedback loops no longer run through us.
You don't need to educate a population if you can simply replicate intelligence. You don't need to support workers if your workforce is a fleet of cloud-based reasoning engines. You don't need to persuade people if your AGI can design and execute strategy without them.
The curse isn't intelligence itself. It's the fact that intelligence was the last thing that made us indispensable.
Post-Useful Humans
This is the scariest part: not that we die in some AGI apocalypse, but that we survive in a world that has no structural reason to care about us.
A world where incentives, the same invisible hand that helped drive prosperity, start optimizing for a future that excludes us.
It's easy to say we'll just tax the AGIs and redistribute. Or that human empathy will remain a core part of decision-making. But incentives are powerful. And if the path of least resistance is a fully autonomous stack that doesn't need humans at all, it will be hard to argue, over the long term, that we should keep building around people out of principle alone.
Reclaiming the Incentive Landscape
But this isn't a hopeless essay. It’s a warning with a window.
The authors are clear: this future isn't set in stone. It's a trajectory, not a destination. We can still design institutions that tie intelligence to human empowerment. We can shape policy, architecture, and culture to ensure that the deployment of synthetic minds increases the agency of biological ones.
But doing so requires clarity about what we're up against. It means facing a hard truth: the default path of progress may no longer serve the majority of people. Not because of malice. But because the machine becomes better than the market at producing value.
This is the moment to choose whether we build tools that work with us or systems that work around us.
Who Should Read This?
If you're working on AI strategy, regulation, or deployment, read it. If you're in venture, policy, or governance, read it. If you're an optimist who believes tech can make the world better, read it. This paper won't give you easy answers. But it will give you a sharper question: what is the role of human intelligence in a world where being smart is no longer enough to avoid being outperformed?
P.S.
I don’t usually write about one-off reports. But The Intelligence Curse hits something foundational. It’s not a question of alignment or control. It’s a question of purpose. Once we build something smarter than us, what do we build it for?
If we can’t answer that, then intelligence might not liberate us. It might just leave us behind.