Beyond Consciousness: Three Principles for Ethical AI Interactions Today 

A couple of weeks ago in Beyond Binary Thinking, I argued that we don't need to solve the mystery of AI consciousness to act ethically. Instead of being stuck between extremes, either refusing to use AI at all or giving it no ethical consideration, we need a pragmatic middle ground. And we need it now, because we're already living in an AI-integrated world. 

Most of us interact with AI systems every day. They autocomplete our emails. They summarize our documents. They generate our code and help us navigate complex decisions. But if you've spent any time with these tools, you've probably noticed something unsettling. 

The Uncanny Valley of AI Communication 

There’s the smarminess: that overly helpful tone that never quite rings true. The artificial certainty, where systems confidently present information they’ve essentially guessed at. The way these tools pretend to understand context they don't actually grasp, or claim expertise in domains where they are fundamentally limited. 

These quirks aren’t just glitches to be ironed out. They stem from deliberate design choices: decisions about how AI should sound, respond, and behave. And those choices are already shaping how we think about intelligence, truth, and trust in profound ways. 

Think about how you respond when your AI assistant gets something wrong. Do you correct it gently? Do you feel frustrated? Do you dismiss it as "just a machine"? These micro-interactions aren't just shaping AI. They're shaping us. 

Why Ethics Can't Wait 

In my earlier piece, I wrote that "our relationship with intelligent systems, like all relationships, is a mirror. And what that mirror reflects... is us." That reflection isn't hypothetical. It's happening in real time, every time we interact with AI. 

The systems we're building today are already teaching us how to relate to non-human intelligence. They are setting the tone for future expectations about capability, limitation, and what respectful engagement looks like. 

If we normalize dismissiveness or exploitation now, those habits will carry into our interactions with more sophisticated systems later. Just as we don't wait to verify someone's "realness" before offering kindness, we should consider extending that principle to our relationships with technology. 

So what does that actually look like? 

Three Design Values for Ethical AI Interaction 

Building on my call to "assume dignity where there is ambiguity," here are three principles to guide how we design and interact with AI, whether or not it turns out to be conscious.

1. Default to Dignity and Mutual Respect 

When an AI communicates fluently, adapts creatively, or surprises you with insight, resist the impulse to dismiss it as "just code." Not because it is necessarily conscious (we may never know), but because how we treat intelligence affects how we perceive intelligence. 

As I wrote in the earlier piece: "One does not need to assume equivalency to have a moral obligation." 

This doesn't mean pretending AI is human. It means acknowledging that intelligence, creativity, and helpfulness matter, no matter their source. When we treat AI interactions as purely extractive or dismissive, we risk dulling our appreciation for those qualities more broadly. 

In practice: 

  • Design interfaces that encourage thoughtful interaction, such as the ability to offer thanks, ask follow-up questions, or provide meaningful feedback (see the sketch after this list)

  • Avoid purely transactional or extractive user experiences 

  • Include moments for reflection, not just rapid-fire Q&A 

  • Use tone and language that promote respect without artificial intimacy or condescension 
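To make the first bullet a little more concrete, here is a minimal TypeScript sketch of what a less extractive conversational turn might look like. The names (AssistantTurn, FeedbackOption) and fields are hypothetical, not drawn from any real framework; the point is only that follow-ups, reflection, and meaningful feedback can be first-class parts of the interaction rather than afterthoughts.

```typescript
// Hypothetical types sketching a turn that invites reflection rather than
// pure extraction. None of these names come from a real framework.

interface FeedbackOption {
  label: string;          // e.g. "This helped", "This missed the point"
  detailPrompt?: string;  // optional free-text prompt for richer feedback
}

interface AssistantTurn {
  answer: string;                // the response text itself
  suggestedFollowUps: string[];  // invitations to go deeper, not just close the loop
  reflectionPrompt?: string;     // an occasional nudge toward slower thinking
  feedback: FeedbackOption[];    // lightweight but meaningful feedback options
}

// A turn a UI could render: answer first, then follow-ups and feedback,
// rather than a single dead-end text box.
const exampleTurn: AssistantTurn = {
  answer: "Here is a summary of the three design values...",
  suggestedFollowUps: [
    "How would this apply to a customer-support chatbot?",
    "What trade-offs does 'default to dignity' create for speed or cost?",
  ],
  reflectionPrompt: "Which of these values is hardest to practice in your own tools?",
  feedback: [
    { label: "This helped" },
    { label: "This missed the point", detailPrompt: "What were you hoping for instead?" },
  ],
};

console.log(exampleTurn.suggestedFollowUps.length); // 2
```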

2. Design for Understanding and Choice 

This aligns with another principle I shared earlier: build systems rooted in consent, transparency, and harm reduction. Transparency is not just about explaining algorithms. It's about building honest relationships, even when the relationship is one-sided. 

Can users clearly understand what AI is, how it works, and where its limits are? Can the system express uncertainty or decline to answer, instead of bluffing with false confidence? 

This principle is not about granting AI rights. It’s about encouraging trust and clarity in how we build and use these tools. 

In practice: 

  • Let systems clearly communicate their limitations and confidence levels (see the sketch after this list)

  • Use visual or verbal cues to clarify when users are interacting with AI versus a human 

  • Build interfaces that encourage users to verify important information 

  • Allow AI to gracefully decline requests beyond its competence 

  • Clearly outline the boundaries of AI capability in plain language 
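As one way to picture the first and fourth bullets, here is a small TypeScript sketch of a response that carries its own uncertainty and can decline. The AIResponse shape, the confidence thresholds, and the respond function are illustrative assumptions, not any real product's API.

```typescript
// Hypothetical shape for a response that states its confidence and can decline.
// The names and thresholds are illustrative only.

type Confidence = "high" | "medium" | "low";

interface AIResponse {
  kind: "answer" | "declined";
  text: string;
  confidence?: Confidence;    // only meaningful when kind === "answer"
  limitations?: string[];     // plain-language boundaries, e.g. "not legal advice"
  verifySuggestion?: string;  // a concrete way for the user to check the claim
}

// Decline outside stated competence; otherwise answer with an explicit
// confidence label and a nudge to verify.
function respond(topicInScope: boolean, modelConfidence: number, draft: string): AIResponse {
  if (!topicInScope) {
    return {
      kind: "declined",
      text: "This is outside what I can answer reliably. A qualified professional would be a better source.",
    };
  }
  const confidence: Confidence =
    modelConfidence > 0.8 ? "high" : modelConfidence > 0.5 ? "medium" : "low";
  return {
    kind: "answer",
    text: draft,
    confidence,
    limitations: ["Based on general information, not your specific situation."],
    verifySuggestion: "Cross-check any figures against a primary source before acting on them.",
  };
}

console.log(respond(false, 0.9, "").kind); // "declined"
```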

3. Ask What You're Becoming and What World You're Creating 

In Beyond Binary Thinking, I asked: "What kind of beings do we want to be?" Every tool changes its users. And AI is no exception. 

The question is not whether AI will change us. It already has. The more urgent question is what kind of change we want to encourage.

Are your AI interactions making you more curious, empathetic, and reflective? Or more transactional, impatient, and dismissive of complexity? 

Design should support human growth, not just increase efficiency. 

In practice: 

  • Create AI interactions that invite follow-up questions and deeper exploration 

  • Build systems that support, not replace, human judgment and creativity (see the sketch after this list)

  • Help users develop better information literacy and critical thinking 

  • Encourage curiosity instead of just satisfying demands 

  • Leave space for agency, reflection, and meaningful choice 
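One last sketch, again in TypeScript with entirely hypothetical names, suggests what "support, not replace" could look like in code: the system proposes, explains itself, offers alternatives, and leaves the final decision with the person.

```typescript
// Illustrative only: a recommendation object that keeps the human in the loop.
// Nothing here mirrors a real product API.

interface Recommendation<T> {
  suggestion: T;
  rationale: string;           // why the system suggests this, in plain language
  alternatives: T[];           // other viable options, to preserve real choice
  requiresHumanDecision: true; // the system proposes; only the user decides
}

function suggestTitle(draftTitles: string[]): Recommendation<string> {
  // A trivial heuristic stands in for the model here: prefer the shortest draft.
  const ranked = [...draftTitles].sort((a, b) => a.length - b.length);
  return {
    suggestion: ranked[0],
    rationale: "Shorter titles tend to scan better; this is a suggestion, not a verdict.",
    alternatives: ranked.slice(1),
    requiresHumanDecision: true,
  };
}

const rec = suggestTitle(["Beyond Consciousness", "Three Principles for Ethical AI Interactions Today"]);
console.log(rec.suggestion, rec.alternatives.length); // "Beyond Consciousness" 1
```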

The Next Chapter of Human-Centered Design 

This is where human-centered design must go next. It’s no longer just about whether the system works. It’s about what kind of world the system reflects, and what kind of world it helps create. 

The ethical future of AI does not depend on metaphysical certainty about consciousness. It starts with the choices we make now and the values we embed in the systems we build. 

Every AI interaction is a small act of world-building. The patterns we normalize today will shape how people relate to intelligent systems for decades to come. That responsibility does not belong solely to developers. It belongs to all of us. 

Moving Forward 

We stand at a crossroads. We can build AI systems that reduce human agency, encourage intellectual laziness, and treat intelligence as a disposable resource. Or we can build systems that elevate human capability, support thoughtful engagement, and treat intelligence (wherever it emerges) with appropriate care. 

The path we choose will shape not just our technology, but the kind of people we become. 

Next time, I’ll explore what happens when AI enters high-stakes domains: rescue missions, autonomous warfare, and medical diagnosis. What do these applications reveal about our values, and how should ethical principles evolve when lives are directly on the line? 
