The Telescope Test: Why AI Struggles to See Backwards
When AI Fails at Being Wrong on Purpose: What It Tells Us About Real Imagination
A few nights ago, I stumbled across a stubborn thought: why can’t AI easily imagine someone using a telescope the wrong way? It sounds trivial. Maybe even silly. But try it. Open your favorite generative AI model. Ask it to create an image of a person looking through the wrong end of a telescope.
Most likely, it will quietly fail. You’ll get a polished, well-lit image of someone using the telescope correctly. Eye to the eyepiece, posture textbook perfect. The machine will default to what it has seen before, without hesitation.
Because that’s what today’s GenAI systems are: extraordinary engines for reconstructing the statistically probable. They don’t invent "wrong" easily. They remix the familiar. This matters far more than it first appears.
At their core, generative AI models are built to regress toward the mean. They optimize for what has precedent but not for what breaks precedent. They create new combinations of old pieces. But they struggle to produce real inversions, where the basic purpose of an object or an action flips.
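To make that concrete, here is a minimal toy sketch of next-token decoding. The probabilities are invented for illustration and don't come from any real model; the point is only that both greedy decoding and ordinary sampling keep landing on the familiar continuation, while the inversion barely registers.

```python
import random

# Hypothetical next-token distribution a model might assign after
# "She looked through the ..." (probabilities are made up for illustration).
next_token_probs = {
    "eyepiece": 0.62,
    "telescope": 0.25,
    "window": 0.10,
    "wrong end": 0.03,   # the inversion is in the tail
}

def greedy(probs):
    """Pick the single most probable continuation."""
    return max(probs, key=probs.get)

def sample(probs, temperature=1.0):
    """Sample a continuation; higher temperature flattens the distribution."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

print(greedy(next_token_probs))        # always "eyepiece"
print(sample(next_token_probs))        # almost always a familiar option
print(sample(next_token_probs, 2.0))   # even flattened, "wrong end" stays rare
```

Even with the temperature turned up, the rare, "wrong" continuation is sampled only occasionally, and never because the system decided the inversion was worth exploring.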
When a child flips a telescope around and squints into the wrong end, they’re not making a random mistake. They’re exploring the object’s boundaries. They’re asking a question: what happens if I reverse this? That kind of playful violation, the deliberate pushing against how things are supposed to work, is not just creativity. It’s how meaning is born.
Current AI models don't naturally do this.
They predict. They complete. They conform.
We often say AI "hallucinates," meaning it produces errors. But the more interesting observation is that AI rarely produces the right kind of errors. It doesn’t break things usefully. It doesn’t misuse tools in ways that reveal new affordances.
It hallucinates facts (wrong dates, wrong names) but not conceptual misuses. It dreams inside a tightly drawn boundary. It imagines cities in the sky, new species of birds, strange hybrid fruits, but always starting from recombinations of things it already understands. Real creativity isn’t just remix. It’s revolt. It’s pointing the telescope backward and deciding that even the act of misusing it is worth paying attention to.
The deeper issue is that the more we optimize AI systems for fluency, clarity, and correctness, the more we strip away this natural friction. In trying to polish communication, we accidentally erase invention. The telescope, in AI’s hands, will always be pointed forward: clean, correct, optimized for narrative efficiency.
But invention often lives in the places where things feel wrong.
In the stupid mistakes.
In the awkward inversions.
In the uncomfortable contradictions that don't resolve neatly.
If we want truly creative machines and not just eloquent echo chambers, we have to ask harder questions. How do we teach systems not just to predict the next likely word, but to step outside the training distribution altogether? How do we build models that don't just hallucinate inside the known world, but deliberately break its walls?
Maybe that’s not just a matter of better optimization. Maybe it requires a different philosophy of learning altogether: one that values exploration over efficiency, boundary-breaking over polishing. Until then, the telescope will stay pointed the way it always has. And true backwardness, true inversion, true creative error will remain, beautifully and stubbornly, human.