Moravec's Paradox: Why AI’s Biggest “Mystery” Might Just Be a Bad Metaphor
How a Flawed Comparison Between Human and Machine Intelligence Reveals More About Our Biases Than AI Itself
We have machines that can score in the 90th percentile on the Uniform Bar Exam, write functional Python code in seconds, and diagnose complex diseases with accuracy rivaling board-certified specialists. Yet, if you ask a multi-million-dollar robot to walk up a flight of stairs, open a door, and gracefully fold a fitted sheet, it will likely trip, fail to grasp the doorknob, and crumple the laundry into a chaotic ball.
In the AI community, this phenomenon is famously known as Moravec’s Paradox.
Coined in the 1980s by robotics researcher Hans Moravec, the observation goes like this: High-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. As Moravec himself put it: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
Moravec’s Paradox isn’t actually a paradox at all—or at least, not in the way it's commonly framed. It only appears contradictory if we misunderstand how human intelligence developed.
The History of the Conscious Mind
To understand why this isn’t a paradox, we have to look at why early computer scientists were so surprised by it.
In the 1950s and 60s, the pioneers of AI looked at human intelligence through the lens of academia. What is “hard” for a human? Calculus, playing grandmaster-level chess, and solving complex logic puzzles. What is “easy” for a human? Recognizing a face, catching a baseball, or walking across a rocky beach without falling over.
Because math and logic require intense, focused, conscious effort, we assume they are computationally difficult. Because walking and seeing happen subconsciously and effortlessly, we assume they are computationally trivial.
But our conscious experience is a terrible metric for computational complexity.
Think about catching a baseball. To accomplish this, your brain must continuously track the trajectory of an object moving through three-dimensional space, compensate for wind resistance, instantly adjust the micro-tensions in hundreds of muscles across your legs, torso, and arms to maintain balance, and coordinate hand-eye timing down to the millisecond.
You don’t feel yourself doing this math. Evolution hides the machinery behind a clean user interface. But the computational power required to execute that physical action dwarfs the computational power required to play a game of chess. Chess is a finite grid of 64 squares with rigidly defined rules. The physical world is an open-ended sprawl of edge cases, friction, and gravity.
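To make the hidden math concrete, here's a toy sketch of my own (idealized textbook physics, deliberately ignoring drag, wind, and spin): even the simplest version of "where will the ball land?" is a calculation your brain effectively redoes many times per second as fresh visual data arrives.

```python
G = 9.81  # gravitational acceleration, m/s^2

def predict_landing_x(x, y, vx, vy):
    """Given the ball's current position (x, y) and velocity (vx, vy),
    solve y + vy*t - 0.5*G*t**2 = 0 for the positive root and return
    the horizontal landing point. Ignores drag, wind, and spin."""
    t = (vy + (vy**2 + 2 * G * y) ** 0.5) / G  # time until y hits 0
    return x + vx * t

# A fielder never solves this once. Vision keeps revising the estimate
# of (x, y, vx, vy), so the "answer" is recomputed continuously.
print(round(predict_landing_x(x=0.0, y=1.5, vx=12.0, vy=8.0), 2))
```

And this is the *easy* part: the real problem also includes estimating those inputs from two shaky camera feeds (your eyes) while running.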
The Enterprise Architecture Analogy
If you want to understand this in business terms, let’s look at enterprise architecture.
Imagine a massive legacy corporation—let’s call them Biology Corp. For 500 million years, Biology Corp’s R&D department has been obsessively building, testing, and optimizing a backend logistics engine. This engine handles vision, balance, locomotion, and threat detection. It has been battle-tested in the most ruthless competitive environment imaginable (the food chain). If a single line of code in this backend engine failed, the unit died, and that code was removed from the gene pool. Hundreds of millions of years of ruthless, iterative optimization.
Then, about 100,000 years ago—which is practically last week in evolutionary time—Biology Corp decided to add a new feature. They built a thin, lightweight dashboard on top of the backend engine. They called this dashboard “Abstract Thought.” Eventually, they added a few widgets to the dashboard called “Language,” “Mathematics,” and “Logic.”
Now, imagine an arrogant Silicon Valley startup (Computer Science) comes along and decides to clone Biology Corp.
The startup looks at the product and says, “Well, the most impressive part of Biology Corp is that new Math and Logic dashboard. That must be the hardest thing to build.” So, they spend a weekend coding up a dashboard. It works brilliantly! They build an AI that can play chess and do calculus.
Then, the startup says, “Okay, the dashboard is done. Now let’s just whip up that backend logistics engine that handles walking and seeing. That stuff operates in the background anyway, how hard can it be?”
They fail completely. And because they failed, they throw up their hands and declare it a “paradox.”
It’s not a paradox. It’s simply a failure to respect technical debt and historical R&D timelines. The “easy” stuff is backed by half a billion years of evolutionary R&D. The “hard” stuff is just a recent app running on top of it.
Why This Matters for the Future of AI and Robotics
Understanding that Moravec’s observation is a logical reality—not a paradox—changes how you view the modern tech landscape. It explains exactly why the market looks the way it does today.
Here are my core takeaways for anyone trying to navigate the intersection of AI and physical robotics:
1. Language Models are “Cheap” Because Language is Artificial
We are currently living through the boom of Large Language Models (LLMs) like Gemini and ChatGPT. Why did AI conquer language before it conquered the ability to wash dishes? Because language is an artificial construct invented by humans. It is rule-based, abstract, and operates in a clean, digital environment. Text generation is solving the 100,000-year-old problem, not the 500-million-year-old one. It’s the dashboard.
2. Physical Robotics is Solving the 500-Million-Year Problem
When you watch a Boston Dynamics robot do a backflip, you aren’t just watching a cool engineering trick. You are watching computer science attempt to brute-force a half-billion years of biological evolution. The physical world is noisy, chaotic, and completely unforgiving. Friction changes depending on humidity. Lighting changes shadows. Objects have unpredictable weights. Robotics is fundamentally a much harder computer science problem than generative AI.
3. We Are Approaching the “Embodied AI” Era
For a long time, AI and robotics were handled as separate disciplines. AI researchers worked on the “brain” (logic, games, language), and roboticists worked on the “body” (actuators, servos, balance). The most exciting frontier today is Embodied AI—putting these massive, capable neural networks into physical machines and letting them learn from the physical world the same way biological creatures do: through trial, error, and sensorimotor feedback.
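As a cartoon of that sensorimotor loop (entirely my own toy example, not any real embodied-AI system): an agent that doesn't know the true gain of its own motor can still converge on it by comparing what it commanded with what actually happened—trial, error, and correction.

```python
import random

rng = random.Random(42)
true_gain = 0.6  # the body's real motor response (unknown to the agent)
estimate = 1.0   # the agent's initial, wrong model of its own body

for trial in range(50):
    command = 1.0 / estimate                           # try to move exactly 1.0
    actual = command * true_gain + rng.gauss(0, 0.01)  # noisy physical world
    observed_gain = actual / command                   # what the body seemed to do
    estimate += 0.2 * (observed_gain - estimate)       # nudge the self-model

print(round(estimate, 2))  # converges close to the true gain of 0.6
```

Scale that single scalar up to thousands of joint angles, contact forces, and camera pixels, and you have the embodied-AI problem in miniature.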
4. Stop Judging AI by Human Standards
The biggest mistake operators make is assuming human intelligence is a single, linear spectrum. We assume that if an AI is smart enough to pass the medical boards, it must be smart enough to navigate a hallway. But artificial intelligence isn’t human intelligence. It is an entirely alien architecture. Expecting an AI to be good at physical tasks just because it is good at math is like expecting a calculator to be good at driving a car.
The Bottom Line
There is nothing illogical about the fact that machines find math easy and walking hard. Math is a highly structured, low-bandwidth, recently invented cognitive framework. Walking through a crowded room requires real-time, high-bandwidth processing of complex physics, dynamic variables, and millions of sensory inputs—something nature spent hundreds of millions of years perfecting.
Moravec’s observation was brilliant, but his framing gave us a mental model that confused a generation of technologists. By dropping the word “paradox,” we stop treating the difficulty of robotics as a mysterious quirk, and start treating it with the deep, architectural respect it deserves.
We’ve successfully built the dashboard. Now, we just have to figure out how to build the engine.


