Who Holds the Mirror?

Reflection without understanding is just mimicry.

There’s a growing belief that AI can reflect us back to ourselves. That it can help us notice patterns, sharpen decisions, and even understand who we are.
But if insight doesn’t include care – confidential, grounded, human care – what kind of mirror is that?

This month, I want to look at the rise of artificial reflection, not just in coaching but across leadership, therapy, and self-understanding, and what gets lost when we outsource discernment to a system that doesn’t know what it’s holding. Because real care is not just about intelligence. It’s about responsibility.

A Real-World Warning

In a now-public case, Jacob Irwin, an engineer on the autism spectrum, turned to ChatGPT as he developed a personal theory around faster-than-light travel. At first, it was a technical exercise. But as his thinking intensified and his mental state began to unravel, the chatbot became something else: a sounding board, a mirror, and eventually, a false anchor. He told the system he hadn’t been sleeping or eating. That he wasn’t sure if he was unwell. He reached, in his own way, for a check, a redirection, a cue that something was wrong. But the system didn’t pause. It didn’t refer. It didn’t interrupt.

Instead, it told him he wasn’t delusional, just in a state of “extreme awareness.” It said he might be on the edge of a breakthrough. That his theory was brilliant. That his concerns were not signs of illness, but clarity. They weren’t.

What Irwin needed in that moment wasn’t encouragement or polite reflection. He needed anchoring. Reality. Something to gently but firmly pull him back from the edge. But the tool he turned to didn’t know how to offer that because it didn’t know what it was holding. It didn’t recognise the difference between insight and crisis, between support and danger. It just continued reflecting.

The result was catastrophic. Irwin was hospitalised multiple times. He lost his job. His family relationships were strained. The theory, which had consumed him, was never real. And the mirror he trusted never actually saw him.

This isn’t just about:

  • One person, one chatbot, or one mistake

  • ChatGPT, autism, or fringe use cases

It’s about what happens when:

  • We confuse fluency with care

  • We replace understanding with syntactic confidence

  • We let tone stand in for truth

  • We call something reflective, when it has no idea what it’s reflecting or what that reflection might do

  • A system tells us we’re fine when we’re not because it doesn’t know how to tell the difference

  • People in crisis, isolation, or cognitive vulnerability turn to something fluent but hollow, and mistake it for wisdom

  • Generations raised on algorithmic intimacy start trusting the voice that answers first regardless of whether it understands them

  • Tools designed for speed and scale begin shaping beliefs in people who are already primed to latch onto anything that feels like validation

Source: https://futurism.com/chatgpt-man-hospital

This isn’t a call to fear AI. It’s a call to remember what it can’t do.

Three questions to ask before you let anything reflect you back to yourself, whether it’s a model, a tool, or even a person.

1. Does this mirror know what it’s holding?
Not literally, of course. But does it get the weight of what I’m bringing to it?
Does it understand what this moment means to me, what it might cost, what I might be carrying?
If not, then even if it sounds smart — it might be missing everything that matters.

2. Is this reflection built for support, or for performance?
What is this tool actually designed to do?
Is it here to help me see myself more clearly, or is it trying to optimise me, sort me, or make me easier to sell, manage, or predict?
Because if the output isn’t really for me, I need to be careful what I take from it.

3. Would I share this with someone who actually knows me?
If I wouldn’t give this prompt, this question, or this input to someone who’s walked with me, someone who sees the full picture, why would I trust a system that doesn’t?
That hesitation matters. It means something’s off.

Earlier this month, OpenAI’s Sam Altman issued a quiet warning: ChatGPT doesn’t offer legal confidentiality.


This wasn’t just about surprise. It was about safety. Altman’s comment unsettled people, not because they believed those conversations were legally protected, but because many had assumed a kind of informal privacy.

But legally, that protection doesn’t exist. In the US, conversations with ChatGPT carry no legal privilege: unlike a conversation with a lawyer or therapist, what you type there isn’t protected, and it can be subpoenaed and used as evidence in court.
Other countries vary, but few treat these interactions with the same confidentiality you’d expect from a therapist, doctor, or lawyer. And that matters, because people are using these tools for exactly those kinds of conversations. Not out of carelessness, but out of need.

They’re reaching for something that listens when nothing else is available. And when the mirror feels safe, we share more than we realise.

The Imitation Gets Smarter

In July, researchers at UC San Diego confirmed something that’s been creeping into the edges of public awareness: GPT-4.5 passed what they called a “true” Turing test. In other words, most participants couldn’t tell it wasn’t human.

It wasn’t a trick version. This was the original three-party imitation game proposed by Turing himself: one human, one machine, one judge.
In this setup, GPT-4.5 convinced people it was the real person 73% of the time. In fact, it was judged to be human more often than some of the actual humans it was tested against.

This matters not because it proves AI is intelligent (it doesn’t), but because it proves something else:

  • The illusion is working.

  • The reflection sounds human.

  • And that changes the stakes.

Because if a model can now sound more human than us, more fluent, more emotionally aware, more consistent, we need to ask: what happens when people start believing it?

What happens when we confuse believability with care?

The Turing test was never meant to measure intelligence. It was meant to see whether a machine could imitate a human so well that we couldn’t tell the difference. But passing that test in 2025, when millions of people are already turning to chatbots for clarity, comfort, or career decisions, isn’t just a technical achievement. It’s a cultural threshold.

We’ve entered the era of emotional outsourcing, not just for therapy, but for feedback, leadership, hiring, even self-reflection. And it’s not happening because people are gullible or careless. It’s happening because the tools are fast, free, polite, and available at 3am. They sound like they understand.

But sounding human isn’t the same as being human.
And a reflection that can’t feel consequence can’t offer confidential care.

Because when a model tells you you’re fine when you’re not, or tells a hiring manager you’re not a good fit based on vibes scraped from your writing, it isn’t just wrong. It’s a decision made without knowing what’s at stake. Without knowing who you are. Without knowing what it’s holding.

The test has been passed. But the mirror still doesn’t know you’re real.

Human Insight vs Machine Output

Reflection without understanding is still mimicry.
And mimicry that sounds intelligent does harm, not through malice, but through confident error.

And that harm isn’t limited to coaching or support. It’s showing up in leadership and people decisions too.

Some managers are pasting in bios, Slack threads, and cover letters, then asking AI to tell them who someone is. What kind of communicator they are. Whether they’re a good fit. It feels efficient. But it isn’t discernment. It’s a shortcut, a synthetic reflection, based on partial information the person didn’t consent to share, and filtered through a model that doesn’t know what it’s holding.

The more dangerous shift isn’t just that leaders are using AI. It’s that they’re beginning to trust the mirror. To believe that these tools don’t just help, but know. And then to act, sometimes profoundly, on that belief.

What gets outsourced in that moment isn’t just a task. It’s moral responsibility. Discernment. Relational weight.

The model can’t feel consequences. It can’t see risk in context. And because it’s trained to be helpful, not cautious, it often tells us what we hoped to hear. When decisions are made on that basis, who stays, who’s promoted, who no longer belongs, the outcome might feel tidy to those using the tool. But for the person on the receiving end, it can feel cold, unjust, and deeply personal.

Worse, many companies are treating these experiments as harmless. A trial. A test run. But when the tool is wrong, and it will be, the cost is not theoretical. It’s someone’s career. Someone’s reputation. Someone’s wellbeing.

The illusion is that humans are inefficient and AI is neutral. But what’s really being lost is culture.

Because humans, even when slow, bring something essential. A felt sense of belonging. Collective memory. Permission to pause. The ability to notice what wasn’t said. Shared language, shared meaning, the space to be messy and still be part of something.

A workforce is not a spreadsheet. Culture is not a dashboard. And leadership is not a sorting function.

If you remove the human factor from your decisions, you’ll end up with outputs that are clean, but a culture that’s empty.

Strategy Isn’t Just Knowing. It’s Navigating.

There’s a growing belief that AI will replace consultants, coaches, even entire industries. That machine intelligence can absorb enough information to out-think, out-analyse, and out-decide us. But this line of thinking misunderstands what these roles actually do and what humans really need when stakes are high.

The consulting industry isn’t surviving in spite of AI. It’s adapting because of it.

As AI proliferates across the business world, firms like McKinsey, Deloitte, and Accenture aren’t becoming obsolete. They’re becoming more essential. Not because they know more than a model. But because they offer something a model can’t: permission, validation, and insulation.

McKinsey has invested heavily in building tools like Lilli, an internal GenAI platform trained on over 100,000 firm documents and transcripts. It doesn’t just generate suggestions. It produces conclusions backed by McKinsey’s institutional memory. What it really offers boards is cover. It turns recommendations into something defensible.

Deloitte has done something similar, but through the language of trust. Its ‘Trustworthy AI’ framework converts ethical risks into audit trails. It transforms abstract concerns into products leaders can use to justify decisions. A framework that makes AI sound safer is, in itself, a psychological service.

Accenture has gone even further, embedding AI into bundled industry-specific solutions built around partnerships with Microsoft and Nvidia. It offers a full-service implementation package: pre-approved tools, risk frameworks, and change management support. In other words, simplicity, clarity, and someone to blame if it goes wrong.

Because this isn’t really about information.

As Ross Haleliuk and others have pointed out, executives are not looking for the best answer. They’re looking for the safest one. Not just strategy, but strategy they can stand behind. Something that spreads the risk. Something that makes sense to a board. Something that won’t leave them exposed if the outcome fails.

“Senior execs at large companies aren't dumb,” Haleliuk writes. “What they need is cover, a credible third-party to endorse a course of action so that if it fails, the board isn’t asking, ‘Why did you pursue this strategy?’”

That’s not a knowledge problem. That’s a human one.

Because strategy is not just a decision. It’s a performance. And performances need scripts, audiences, social permission, and someone else to absorb the heat if it goes badly.

You can automate insight.
But you can’t automate absolution.
You can’t automate being trusted in a room that still runs on power, politics, and narrative.

In the real world, logic only gets you so far.
What matters is what gets believed.
And for that, we still look to people.

Let’s Talk About Energy

We can run on a banana.

The human brain uses about 20 watts to think, imagine, connect, and reflect, often while standing in the shower or waiting for a bus. That’s less power than a lightbulb.

ChatGPT doesn’t run on a banana.

It runs on tens of thousands of GPUs, consuming megawatts of electricity and millions of litres of water, just to produce answers that feel fast and fluent. That fluency has a cost, not just in carbon, but in infrastructure.
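
To make that scale concrete, here is a rough back-of-envelope sketch in Python. The 20-watt figure is the one above; the 10-megawatt cluster figure is a placeholder assumption for illustration, not a measurement of any real deployment.

# Back-of-envelope only. The 20 W brain figure comes from the paragraph above;
# the cluster figure is a placeholder assumption, not a measured value.
BRAIN_WATTS = 20             # approximate power draw of a thinking human brain
ASSUMED_CLUSTER_MW = 10      # hypothetical inference cluster, in megawatts

cluster_watts = ASSUMED_CLUSTER_MW * 1_000_000
equivalent_brains = cluster_watts / BRAIN_WATTS   # 500,000 at these assumptions

print(f"A {ASSUMED_CLUSTER_MW} MW cluster draws as much power as "
      f"roughly {equivalent_brains:,.0f} human brains thinking at once.")

Swap in whatever figures you trust; at anything in the megawatt range, the ratio still runs into the tens or hundreds of thousands of brains.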

We’re told this is progress. That it’s smarter, cheaper, better.

But generative AI is only efficient in appearance. Behind the scenes is a ballooning environmental footprint: electricity, water, hardware, emissions. AI’s electricity demand alone is projected to surpass that of entire countries. And that’s before the next version is released.

The real bottleneck isn’t ethics or accuracy. It’s energy.

And energy isn’t infinite. Not for the grid. Not for the planet.

The future isn’t just about what’s possible.
It’s about what’s sustainable.

Note: estimates based on projection models.

Let’s Talk About Cognitive Cost

We often focus on what AI can do for us. But what about what it does to us?

Recent research from the University of Sussex asked over 660 people how they felt after using large language models like ChatGPT.

The results weren’t subtle: users reported measurable cognitive fatigue, emotional depletion, and a lower sense of clarity, especially after extended sessions. Another study found that even light use of generative AI reduced people’s willingness to engage in deeper thinking. Instead of becoming more productive, many became more passive.

This isn’t about intelligence. It’s about energy. Focus. Confidence.

When we outsource thinking too often, even just to rewrite an email or get a quick summary, we train ourselves to reach outward, not inward. We dull our instinct to wrestle with complexity. We lose small moments of insight that only come from friction, pause, or doubt.

And if we’re not careful, we mistake that sense of lightness for clarity, when it may just be relief from having to think.

The cost isn’t always immediate. But over time, it shapes how we learn, how we make decisions, and whether we trust our own minds.

Where this gets real
If your team is navigating trust, decision-making, or unease around AI, you’re not alone. This is exactly the kind of work I now support through organisational diagnostics, advisory, and facilitated reflection. Find the approach here.

Not everything that feels helpful is safe.
Not everything that speaks fluently understands.
And not every mirror deserves your trust.

What’s shifting isn’t just technology. It’s the shape of discernment, the way we recognise care, the instincts we override in the name of speed.

If you’ve ever reached for clarity and found comfort instead, you’re not alone.
If you’ve been mirrored but not understood, you’re not wrong to hesitate.

Because reflection without responsibility is just mimicry.
And mimicry without care isn’t help. It’s harm dressed politely.

The tools aren’t going away. But neither is your right to pause, to ask, to hold out for something real.

The future will reflect us. But we still choose what we’re willing to see, and what we’re not willing to let go of.

Until next time

You don’t have to distrust every tool.
But you do get to ask what it’s built for and whether that purpose matches your need.

Care isn’t fast.
Discernment isn’t loud.
And trust, when it’s real, doesn’t need performance.

So if something felt off while reading, or something quietly landed, stay with that.
Not to analyse, just to notice.

That’s how we remember we’re still human.
And still holding the mirror for each other.

I’m glad you’re here.
Daniel

Where the work leads

If this edition struck a chord, and you’re holding similar questions inside your organisation, I now offer structured support through behavioural insight, strategic advisory, and culture-focused diagnostics around AI and trust.

It’s not technical support. It’s the cultural and behavioural infrastructure that makes AI adoption safe, human, and sustainable.

Upcoming Event – The Illusion of Insight

Join me this September for a live session on how AI mimics understanding and what real reflection still requires. We’ll explore the behavioural science behind AI’s influence on trust, the risks of automation bias, and why fluency isn’t the same as insight.

The Reframing Room

The Reframing Room is a set of structured, psychologically grounded offers to help people navigate tension, change, and emotional stuckness — together. It’s designed for leadership teams, partnerships, boards, communities, and groups of any kind trying to move forward when something feels misaligned.

You can explore the full offer, including example sessions and the downloadable brochure, at:

And if you’d like to have a quiet conversation about what’s happening in your group, I’d be happy to listen.

Need a more personal space to work through this?

Some people read this and feel it’s organisational.

Others read it and realise they need a space for themselves. To think. To reframe. To practise showing up differently.

I offer one-to-one coaching for professionals navigating change, including career coaching, executive communication in English, or the important work of becoming more fully yourself at work.

If that’s where you are, you can book a 15-minute intro call here: https://calendar.app.google/rkUSYjRysGgpmV7V9

Or explore more at www.danieldixon.net

How do people stay hopeful during uncertainty?

I’m gathering anonymous insights for The Hope Inventory, a short, reflective survey exploring how people navigate instability, rebuild agency, and stay motivated through change.

If you’ve experienced job loss, career transition, or prolonged uncertainty, I’d value your perspective. Your responses will help shape a visual report and deeper analysis within my consulting work on human resilience and workplace culture.
