AI Is Not an Oracle. It’s a Mirror, a Tool, and a Test of What We Value.

After a conversation with AI pioneer Maya Ackerman, a sharper question emerges: what if the real risk of AI isn’t that it will outthink us, but that we’ll stop thinking for ourselves?

April 14, 2026

We are living through a moment in which machines can write like us, think alongside us, and, increasingly, speak with a confidence that makes it tempting to trust them more than we trust ourselves.

And yet, something feels off.

In a recent conversation with generative AI pioneer Maya Ackerman, one idea lingered long after everything else: what if we’ve been asking the wrong thing of AI?

For the last few years, the dominant story about AI has been painfully narrow. We are told, over and over, that its highest purpose is speed: faster output, fewer people, leaner teams, cleaner summaries, more productivity packed into every hour. In that story, the ideal worker becomes a kind of human relay station for machine efficiency. The ideal company becomes one that can do more with less. And the ideal future is one in which intelligence is outsourced, judgment is automated, and humanity is, somehow, still supposed to feel fulfilled inside the system.

That vision is not only thin. It is spiritually exhausting.

There is another way to understand generative AI, and it begins by rejecting one of the most seductive myths in modern technology: the fantasy of the all-knowing machine. We have wrapped AI in the language of science fiction and prophecy, as though it were a superior being arriving to replace flawed human judgment with objective truth. But that framing distorts both what these systems are and what they are for.

AI is not an oracle.

It is, at best, an astonishing pattern engine. It can remix, provoke, suggest, imitate, summarize, flatter, persuade, and occasionally illuminate. It can be useful, delightful, and creatively catalytic. But it is not a final authority. It does not absolve us of the work of thinking. And when we treat it as if it does, we begin to erode the very capacities we most need to preserve: discernment, self-trust, imagination, and moral judgment.

That erosion is already visible.

In workplaces, AI is often introduced as a helper, but quickly becomes a pressure multiplier. What begins as support turns into expectation. If a tool can help you write faster, then why not write twice as much? If it can draft code, then why not reduce the team? If it can summarize a meeting, then why bother learning how to listen carefully, synthesize nuance, or write a strong brief yourself? The gains rarely land as spaciousness. They land as compression. Workers do not simply get time back; they inherit more demands.

This is why so many people feel strangely depleted even as their capacity appears to expand. They are not becoming freer. They are becoming more extractable.

And yet the problem is not AI itself. The problem is the framework into which we are forcing it. Generative AI emerged from traditions of experimentation, play, and creative exploration. It was once understood less like a calculator and more like a fountain: something that could produce nonsense, surprise, and, amid the noise, occasional gems. Its value was not that it was always correct. Its value was that it could spark something in a person.

That distinction matters.

When AI is designed and used as a collaborator in human growth, it can be genuinely transformative. It can help a novice begin. It can make the blank page less frightening. It can invite experimentation without shame. It can act as a partner that coaxes out dormant ability rather than replacing it. The gold standard is not whether a machine can do something for you. It is whether, after using it, you become more capable yourself.

That is a radically different ambition from dependence.

Imagine tools that help someone become a better writer, not just produce clean copy. Tools that help people learn songwriting, not just generate songs. Tools that strengthen a person’s thinking, confidence, and expressive range even after the subscription ends. Those systems would be measured not by retention tricks or task displacement, but by whether they leave the human stronger.

That sounds almost quaint in today’s AI market, precisely because today’s incentives reward the opposite. Much of the money flows toward replacement: synthetic outputs over human craft, automation over apprenticeship, convenience over development. The business logic is easy to understand. The human cost is harder to quantify, but no less real.

Young people may pay the highest price.

A generation entering school and work now is growing up surrounded by systems that speak with unnatural confidence. Many are learning to mistake polish for truth, fluency for intelligence, and instant answers for knowledge. If you have not yet built expertise in a domain, it is incredibly hard to know when AI is subtly wrong, contextually off, or merely parroting the dominant point of view. You do not know which questions to ask because you have not yet earned the instincts that experience produces.

That is not a case against using AI. It is a case for teaching alongside it, with much greater seriousness than we do now. Students and new entrants to the workforce need more than prompt tips. They need a theory of mind for machines. They need to understand that AI does not reveal reality; it presents a perspective. They need practice disagreeing with it. They need space to struggle, to make bad drafts, to sit with uncertainty, to discover that knowledge is not the same as retrieval and that truth is not a button you press.

In that sense, AI is becoming a cultural test. It is exposing what we believe about ourselves.

Do we believe humans are basically inefficient machines, in need of optimization? Do we believe creativity is valuable only when monetized? Do we believe judgment can be delegated without consequence? Do we believe speed is the same thing as progress?

Or do we believe that technology should deepen human life rather than flatten it?

This is where the stakes become larger than product design. AI is amplifying existing values, especially in societies already obsessed with productivity, competition, and scale. It is not creating all our dysfunctions from scratch. It is accelerating them. If a culture already struggles to honor rest, art, community, and intrinsic worth, AI will tend to push that culture further toward performance metrics and away from presence.

Which means the response cannot be merely technical.

We do need better tools, better safeguards, better education, and more honest conversations about bias, dependency, and power. But we also need a broader correction in what we admire. We need to recover the idea that not everything valuable can be reduced to efficiency. We need to defend practices that have no immediate ROI: singing for joy, writing badly until you get better, thinking without prompting, talking to a friend instead of a chatbot, making something no one asked you to monetize.

The deepest promise of AI may not be that it can help us do more. It may be that it forces us to ask what is actually worth doing.

Used carelessly, AI can make people more passive, more doubtful of their own minds, more vulnerable to authority dressed up as intelligence. Used wisely, it can become a creative partner, a reflective surface, even a spur toward growth. But it will not choose between those futures for us.

We will choose, through our products, our workplaces, our schools, our parenting, and our own habits of attention.

AI is not an oracle descending from the future. It is a mirror held up to the present. What it reflects back is not just our data, but our worldview.

And if we do not like what we see, the answer is not to worship the machine more convincingly.

It is to become more human on purpose.
