Expanding the Moral Circle in an Age of New Minds

We like to imagine moral clarity as something we already possess. Jeff Sebo suggests instead that morality is an unfinished project, one that grows messier, not simpler, as we encounter new kinds of minds.

February 10, 2026

Moral progress rarely feels like progress while it’s happening. It feels like friction: a tug-of-war between who we think deserves consideration and who we’ve been trained—by habit, culture, and convenience—to overlook. That tension sits at the heart of Jeff Sebo’s work on “expanding the moral circle,” a phrase philosophers use to describe an evolving answer to a deceptively simple question: Who counts?

The moral circle is our working map of the moral community, the set of beings to whom we owe accountability, responsibility, and restraint. Historically, humans have drawn that circle tightly around themselves, sometimes widening it to include a handful of familiar companions. Even that modest expansion remains incomplete. We still fail, constantly and structurally, to treat members of our own species with equal respect and care. As Sebo puts it, “Even this idea that the moral community is restricted to our species, plus the occasional cat and dog, that is quite progressive and quite aspirational, even today.” The point isn’t that concern for nonhuman animals or future digital beings should replace concern for humans. It’s that our moral work is unfinished in every direction—and that we can, and should, look for paths where progress for humans and nonhumans is mutually reinforcing rather than mutually exclusive.

That framing matters because it responds to a common objection: “How can we worry about cows, chickens, fish, or hypothetical AI minds when humans are hungry, displaced, and harmed?” Sebo’s answer is neither dismissive nor idealistic. If anything, it’s an admission of how hard moral expansion actually is. Scarcity constrains empathy. In times of insecurity, people become more defensive, more focused on themselves, and less willing to accept new moral obligations. As he notes, “In times of abundance, people tend to have more empathy for others, whereas in times of scarcity, we have harder choices to make and more trade-offs.” The challenge, then, isn’t simply to “care more.” It’s to build social and economic conditions that make caring possible—conditions that don’t force compassion to compete with survival.

That brings us to a more unsettling insight: our moral instincts are not reliable guides to moral truth.

Humans are inconsistent moral perceivers. We empathize more easily with beings that resemble us: large eyes, symmetry, warmth, fur, familiar movements, and roles like “pet” or “companion.” We struggle to empathize with beings that are small, strange, many-limbed, scaled, or categorized as food, commodity, or research material. The same creature can feel “worthy” or “unworthy” depending on framing, distance, and cultural classification. This doesn’t prove we’re incapable of moral growth; it proves we need humility about the biases built into our empathy.

Sebo pushes that humility further by challenging the story we tell ourselves about human exceptionalism. For generations, the science of cognition has had a recurring plotline: humans propose a trait that makes them uniquely special—language, reason, culture, creativity—only to discover, with better research, that other animals have complex versions of these capacities too. The discoveries don’t erase real differences; they erode the assumption that difference equals moral inferiority. 

Now, that same pattern is reappearing with AI. As systems become more capable, we’re tempted to protect our sense of worth by inventing new “exclusive” human traits. Sebo argues this is a trap. A mature moral posture doesn’t depend on being the only bearer of something valuable. It depends on being able to recognize value wherever it emerges without collapsing into panic or resentment when we are no longer alone.

That becomes particularly urgent in the age of digital minds. In many AI conversations, people focus on whether machines will outperform us. Sebo asks an adjacent but deeper question: What if some of them eventually become beings to whom we owe obligations? Not because today’s language models are clearly conscious, but because the trajectory matters. Future systems may integrate perception, attention, memory, self-modeling, social understanding, and embodiment in ways that begin to resemble the functional architecture that, at least in humans and many animals, correlates with felt experience.

Sebo rejects the idea that moral responsibility should wait for certainty. He emphasizes moral risk management under uncertainty. We don’t need proof that a being is sentient to justify minimal moral caution; we need a non-negligible chance that it can suffer or be wronged. As he explains, “If we are not certain that it feels like nothing to be them, then caution and humility require us to give them a little bit of the benefit of the doubt.” His proposed stance is modest in one sense and radical in another: offer the benefit of the doubt widely, at least at the level of avoiding unnecessary harm, because the cost of being wrong could be profound.

This isn’t only about AI. Sebo extends similar reasoning across biological life: certainly to vertebrates, plausibly to many invertebrates such as cephalopods and decapod crustaceans, and potentially even to insects. The point isn’t to declare everything conscious. It’s to acknowledge that the boundary between “definitely matters” and “definitely doesn’t” is scientifically and philosophically messier than we tend to admit, and that moral seriousness includes acting responsibly in that mess.

If that sounds overwhelming, Sebo doesn’t deny it. In fact, he names overwhelm as a core obstacle: the list of urgent problems is long, and moral attention feels finite. His response is to shift the unit of progress from “fixing everything” to “nudging direction.” These are intergenerational projects: food systems, infrastructure, governance, technology. The goal isn’t to personally secure liberation for all sentient beings; it’s to add momentum so the next generation can push further. That mindset offers relief without resignation: it makes room for ambition without requiring absolute authority.

Still, “nudging” can’t just mean talking. Sebo, despite being a philosopher, argues that education and argumentation often hit a ceiling. Some people need evidence about animal minds; others need ethical frameworks for why suffering matters; many won’t shift until the systems around them shift. So he emphasizes a portfolio approach: persuasion plus policy. Informational policies that educate. Financial policies that change incentives. Regulatory policies that limit extreme harm (factory farming is a recurring example). Just transition policies that support workers and communities as systems change. Moral expansion isn’t only an internal conversion; it’s institutional design.

When pressed to name priorities, Sebo points to three arenas where suffering, risk, and scale collide: industrial animal agriculture, infrastructure development, and AI technology. That trio is revealing. It suggests a moral circle big enough to include nonhuman residents of farms and cities, and also possible future digital beings, while staying grounded in what makes moral concern actionable today.

And crucially, he connects moral circle expansion to the human moment we’re living through: polarization, retrenchment, and declining empathy. He argues that speciesism isn’t a separate issue floating off to the side; it often functions as a template for dehumanization. Narratives that justify human oppression frequently rely on comparing certain people to “lesser” animals, leveraging the assumption that difference warrants diminished moral status. Challenging the logic of speciesism can, at least in part, undercut one of the rhetorical engines of racism, sexism, ableism, and other forms of exclusion. Moral expansion, in this view, isn’t a distraction from human justice; it can be part of its scaffolding.

None of this yields a tidy blueprint for the next 200 years. Sebo resists utopian certainty, partly because utopian projects so often become coercive. The aspiration, instead, is a disciplined, experimental moral seriousness: widen the circle, reduce needless harm, design institutions that make compassion easier, and stay honest about uncertainty.

If science fiction has trained us to imagine AI primarily through the lens of war—Terminator, apocalypse, domination—Sebo offers a quieter counter-image: the possibility of relationship. Not naive harmony, but the idea that empathy can be reciprocal and that power can be paired with restraint. The challenge, then, is not merely to create powerful minds. It’s to become the kind of beings, individually and collectively, who can live ethically in a world where minds are many, varied, and not all made of meat.

Expanding the moral circle doesn’t ask us to declare that everything matters equally. It asks something harder: that we treat uncertainty as a call for humility rather than an excuse for indifference, and that we begin building a world where flourishing isn’t a zero-sum reward reserved for those who look most like us.

Athena members can watch the full recording in the Athena Library here.
