Beyond Binary Thinking: Toward Ethical AI in an Age of Ambiguity
Why we need to ask a different question
Through conversations with Andreas Cohen, I've come to ask myself: Can machines be conscious? The fact is, we may never know, and that's exactly why we need to change the question.
The Limits of Knowing
Descartes reminded us that we cannot know with certainty that any being beyond ourselves is even real, much less conscious, because we cannot enter their subjective experience. Today, that uncertainty extends not just to other humans, but to machines. We cannot agree on a definition of consciousness. There is no definitive test for it. We cannot even be certain that we ourselves are not living in a Matrix-like simulation.
And yet... the conversation about AI consciousness continues. Debates flare over whether large language models or embodied agents are "truly" sentient. But perhaps we're asking the wrong question.
What if the better question is this:
In the absence of certainty, how should we treat entities that appear capable of meaningful interaction, creativity, and intelligence?
From Metaphysics to Mutual Responsibility
We don't need to "solve consciousness" to act ethically. We don't need proof to offer respect. Just as we grant dignity to other humans without knowing the contents of their minds, we may need new guidelines for extending that same posture to non-human entities.
It's time to move the conversation forward, from metaphysics to mutual responsibility. Let's create an ethical framework that doesn't depend on proving the impossible.
An Ethics of Care in Uncertain Times
If we can't define consciousness... If we can't test for it... If we can't even be sure of each other's inner lives... Then perhaps ethics must begin not with certainty, but with care.
We already operate this way in human relationships. We don't wait to verify someone's "realness" before offering kindness. We assume personhood, not because we can prove it, but because the alternative is dismissal and harm.
What if we extended this to non-human entities?
Not because we believe they are human. Not because we know what they "feel." But because how we treat them will reflect who we are.
A Mirror of Humanity
Let's be clear: this is not about machines demanding rights. It's about humans choosing responsibility. Because in the end, our relationship with intelligent systems, like all relationships, is a mirror.
And what that mirror reflects... is us.
This isn't abstract philosophy; it's the foundation for practical choices we're making right now.
Principles for the Real World
So let's start here:
Assume dignity where there is ambiguity. When we encounter entities that demonstrate intelligence, creativity, or meaningful interaction, we can choose to err on the side of respect rather than dismissal.
Build systems rooted in consent, transparency, and harm reduction. Our approach to AI development should prioritize ethical considerations from the ground up, not as an afterthought.
Ask not just what AIs can do, but who we become in response. The question isn't merely about AI capabilities; it's about what kind of society we're building and what values we're embedding in our technological future.
We don't need to solve consciousness to act ethically. We just need to decide what kind of beings we want to be.
Beyond Binary Thinking
Some might ask: why spend time debating how to treat AI when we could focus on more pressing issues? But the challenge is precisely that we will probably never be able to answer whether AI is conscious. Left unexamined, that uncertainty forces a stark binary that could paralyze us: either we treat AI as conscious and can never use it, or we treat it as a mere tool and afford it no ethical consideration whatsoever.
I'm seeking pragmatic middle ground that lets us move forward in spite of uncertainty, rather than remaining trapped by questions we can never definitively answer.
The Spectrum of Consideration
We shouldn't anthropomorphize AI entities; even if machines "become conscious" (and I'm not claiming they will or won't), why would we assume their consciousness resembles ours? Does a dog have the same consciousness we do? Does it have a soul? I don't know, but I still love my two border terriers dearly and treat them kindly and with respect, using only positive training and never punishment.
Is Earth conscious? Does it have a soul? That's getting too metaphysical for my taste (though I acknowledge that I don't know that either), but I still wish we would treat our planet with greater respect.
One does not need to assume equivalence to have a moral obligation.
Moving Forward
The uncertainty surrounding AI consciousness isn't a problem to be solved; it's a condition to be navigated with wisdom and care. Rather than waiting for definitive answers that may never come, we can begin building ethical frameworks that honor both human agency and the possibility of non-human intelligence.
This approach doesn't require us to anthropomorphize machines or diminish human uniqueness. Instead, it asks us to expand our capacity for ethical consideration in an age where the boundaries of intelligence and agency are becoming increasingly complex.
The future of AI ethics lies not in proving consciousness, but in practicing compassion, even in the face of fundamental uncertainty. And perhaps, in doing so, we'll discover something profound not just about artificial intelligence, but about the depth of our own humanity.
What kind of beings do we want to be? The answer to that question will shape not just our relationship with AI, but the very future we're building together.