What AI is Actually Changing (It’s Not What You Think)

Jan 29, 2026

I’m going to start with two admissions. Firstly, I can’t remember my partner’s mobile number. Not perhaps the most drastic admission - but I grew up with landlines, knowing the necessary numbers off the top of my head. Now, I can’t make it stick the same way. This isn’t about getting older - it’s about something we’re all doing without noticing.

Secondly - and related, as you’ll see - my stance on AI has shifted. A year ago, I was sceptical. Now? Differently so. Not because AI has suddenly become perfect or harmless, but because I've realised the conversation we're having about it is missing something crucial.

We're so focused on what AI can do that we're not paying enough attention to what it's doing to us. And I don't mean this in the apocalyptic, Terminator sense. I mean it in a quieter, perhaps more insidious way: the erosion of cognitive capacities we might not miss until they're gone.

The Phone Number Problem

My now-dodgy memory for phone numbers isn't just about me getting older or lazier. It's about outsourcing memory to technology. Research by Andy Clark and others has shown that when we adopt a new technology, our brain structures change, often within about a year. We stop using the neural pathways we no longer need, and they atrophy. It's not unlike a muscle that stops being exercised.

The phone number thing feels trivial until you need to use someone else's phone in an emergency and realise you can't actually contact anyone. Then it becomes rather less trivial.

The Scaffolding We Lean On

Clark talks about our use of tools as "scaffolding": external structures we use to support thinking, which increasingly replace processes we used to carry out internally. Calculators replaced slide rules and mental arithmetic. GPS is replacing our sense of direction and our ability to read maps. Smartphones replaced our need to remember... well, almost anything.

Each time we adopt one of these tools, we make a trade. We gain convenience and capability in some areas whilst losing capacity in others. The question isn't whether this trade is happening - it objectively is. The question is: are we conscious of which trades we're making?

With AI, particularly large language models, we're now outsourcing not just memory and calculation, but potentially something more fundamental: the process of thinking itself.

What We're Trading Away

When you ask ChatGPT or Claude to write something from scratch, you're not just getting help with a task - you're skipping several cognitive processes that might actually matter:

The struggle of articulation: That frustrating period when you're trying to work out exactly what you want to say forces you to clarify your own thinking. When AI generates text instantly, you bypass that clarification process entirely.

The discipline of structure: Working out how to organise an argument or narrative requires you to understand the relationships between ideas. AI can give you a structure, but do you really understand why that structure works?

The craft of revision: The difference between a first draft and a final version isn't just polish - it's the thinking you do whilst recognising what doesn't work and why. If AI generates something "pretty good" on the first attempt, do you have the discernment to know what "excellent" looks like?

I've noticed this in my own work. I can generate parts of proposals or reports through Claude and think, "Yeah, that's... that's pretty good." But then there's this nagging voice: am I settling? Have I lost some of the discernment I used to have about what constitutes genuinely good writing versus merely adequate writing?

The Word Recall Question

There's something more immediate happening too that I’ve noticed: word recall seems harder. Is it age? Possibly. But there's a suspicion that relying on AI to articulate things means we're not exercising that particular cognitive muscle as much.

It's subtle. You're writing an email response, and formulating a nuanced reply with proper context and tone feels harder than it should. So you ask AI to draft it. It works. You tweak it a bit. Job done. But did you just make it slightly harder to write that email yourself next time?

The research on neuroplasticity suggests yes. When we stop using certain neural pathways regularly, they become less efficient. The more we outsource articulation to AI, the less practised we become at articulating things ourselves.

This matters particularly for tasks that require nuance, context, and tone - exactly the things that make professional communication effective. You can get AI to draft an email, but the version you might have written after struggling through it yourself could have been more authentically you, more attuned to the specific relationship and situation. And you would have had a deeper, more authentic, somatic sense of what you were saying. It would stick with you. (There’s a movement in some universities to ban laptops for note-taking in lectures. The evidence suggests that handwriting notes helps integrate the material, improving the ability to apply and recall it.)

The Simplicity Trap

There's another issue with current AI systems: they tend toward simplified answers. Eighteen months ago, if you asked an image generator to show you a doctor, you'd get a man in every image. That's been corrected (mostly), but the underlying problem remains: AI systems are trained on existing patterns, which means they reinforce existing biases and simplifications.

In text generation, this manifests as AI giving you definitive answers when nuanced ones would be more appropriate. Ask it how to fix an organisational problem, and you'll likely get a clear, structured response that makes the solution seem straightforward. But complex problems - particularly in human systems - rarely have straightforward solutions.

If you're using AI extensively, you risk training yourself to expect simple answers. The world becomes more binary: problems have solutions, questions have answers, complexity can be resolved with a clear action plan. Except that's not how organisations, people, or most interesting problems actually work.

The Difference Between Getting And Knowing

There's a related trap I fell into with academic papers. I've always found them hard going - the language is drier, more formal, more specialised than I'm used to. So for a while, I started dropping them into Claude and asking: "Give me the key points and things I should think about, in plain English, as though you were talking to a smart friend."

It worked brilliantly. I got clear, accessible summaries that were far easier to read than the originals.

But here's what I noticed: they didn't stick. I'd read the summary, nod along, and three weeks later have only the vaguest sense of what the paper had said. The information had passed through without lodging anywhere and without changing how I thought - which was the main point of the exercise.

So I've changed how I use it. Now it's less "summarise this for me" and more "have a conversation with me about this - help me position it against what I already know, challenge where it conflicts with my assumptions, and work through the nuances with me."

The difference isn't just about retention. It's the difference between being given an answer and actually knowing something. The first is faster. The second is mine.

The Co-Evolution Question

Human beings and our technologies co-evolve. We're not separate from our tools; we're shaped by them even as we shape them. Which means you can't fully understand AI's implications from the outside - by reading about it, or watching others use it, or deciding in advance what you think. The only way to grasp what it's actually doing to cognition is to use it yourself, pay attention, and work it out as you go.

That's partly why I'm sharing my own experiments here. Not because I've figured it out, but because figuring it out requires engagement. The question isn't whether AI will change us - it already is. The questions are:

What cognitive capacities are we willing to trade away? Memory? Calculation? Fair enough - we made those trades decades ago. But what about articulation? Reasoning? Discernment? Creativity? Where do we draw the line?

What new capacities do we need to develop? If we're outsourcing certain types of thinking, what should we be getting better at instead? Perhaps it's the ability to prompt effectively, to direct AI toward useful outputs. Perhaps it's developing stronger evaluative skills - the ability to recognise good thinking from mediocre thinking. Perhaps it's something we haven't even identified yet.

How do we maintain critical distance? When a tool becomes indispensable, we stop questioning it. How do we use AI extensively whilst maintaining healthy scepticism about its outputs and awareness of what we're losing?

The Thought Partner Approach

I've settled (for now) on thinking of AI as a thought partner - but one that requires active management. This means being deliberate about which parts of the thinking process to outsource and which to keep.

For my newsletter, I'll use Claude to map out structure and create a rough content schedule. But I won't let it write the first draft. I write that myself, then use AI to identify what's unclear or where I'm using too much jargon. Then I rewrite it - I don't let the AI rewrite it for me.

For emails where I need to get across complex context and nuance, I might ask for a draft. But I'll ask it to start by asking me questions it needs answered to write a good response. Then I take that draft and make it properly mine. (As an aside - the exercise of having it ask me questions often helps my own clarity about the purpose and content of the email.)
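If you want to experiment with that "ask me questions first" pattern outside a chat window, here's a minimal sketch in Python using the Anthropic SDK. The model name, the email scenario, and the prompt wording are all placeholders of mine rather than a prescription - the point is simply that the first call requests questions, not a draft.

```python
# A minimal sketch of the "ask me questions before you draft" pattern,
# using the Anthropic Python SDK. Model name and scenario are placeholders.
from anthropic import Anthropic

client = Anthropic()  # expects ANTHROPIC_API_KEY in the environment

brief = "I need to reply to a client who is unhappy about a delayed project milestone."

# Step 1: don't ask for a draft. Ask what the model needs to know first.
questions = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder - use whichever model you have access to
    max_tokens=400,
    messages=[{
        "role": "user",
        "content": (
            f"{brief}\n\n"
            "Before you draft anything, list the questions you would need "
            "me to answer in order to write a genuinely good reply."
        ),
    }],
)
print(questions.content[0].text)

# Step 2 (only after answering those questions yourself) would be to request
# a draft - and then rewrite it in your own voice rather than sending it as-is.
```

Answering the questions is where the clarity comes from; the draft is almost a by-product.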

The principle is: use AI to accelerate parts of the process, but don't skip the parts that build your own capabilities or are your unique strength.

The Skills We Should Develop

If we're going to use AI extensively (and let's face it, most of us are), we need to deliberately develop complementary skills:

Discernment: The ability to evaluate quality, to know when something is truly good versus merely adequate. AI can generate competent work quickly, but can you tell the difference between competent and excellent? Between accurate and slightly off?

Prompting: How to direct AI effectively, give it the right context, ask the right questions. This is becoming a skill in its own right, though perhaps a temporary one until interfaces improve.

Integration: Taking AI-generated content and making it genuinely yours - not just tweaking a few words, but ensuring it reflects your actual thinking and voice.

Meta-cognition: Awareness of your own thinking process. Which parts do you actually need to do yourself to maintain the skill? Which parts can be safely outsourced? Which parts do you need to develop?

These aren't skills that develop accidentally. They require conscious effort and practice.

The Output Trap

A colleague recently shared a post about a professor catching students using AI to write essays. The tone was slightly triumphant - gotcha.

But something about it nagged at me. The professor's frustration assumes the essay is what proves thinking happened. He's asking students to produce a thing, not do a process - and then being surprised when they optimise for the thing.

I see this pattern elsewhere. I work with leaders on polarity maps - a tool for navigating tensions that can't be "solved," only managed. Early on, I noticed people treating the map itself as the deliverable. Get the quadrants filled in, job done.

I've learned to emphasise that the map isn't the point. The thinking you do while creating it is the point. The map is just a residue of that thinking - useful as a reference, but not where the value lives.

This is where AI gets interesting rather than threatening. An LLM can produce a perfectly adequate polarity map in seconds. But it can also sit alongside you while you wrestle with the tensions yourself - pushing your thinking, challenging your assumptions, asking awkward questions. One gives you an output. The other develops your capacity.

The question for education isn't "how do we stop students using AI?" It's "have we been assessing the right thing all along?" If we've been treating outputs as proxies for thinking, maybe AI is just revealing that the proxy was always a bit broken.

I've recently been working with a client on exactly this - developing prompts for their leaders to use with the company's approved AI. The shift was from "help me write this email" or "make me more productive" to "help me think through this issue using different frames" or "challenge my assumptions about this decision before I commit." Same tool, completely different cognitive relationship with it.

The Constraints We Might Need

Perhaps we need to start thinking about constraints. Not prohibitions exactly, but deliberate boundaries around AI use that help maintain important capabilities.

For my newsletter, my constraint is: AI doesn't write the first draft. That's my thinking process, and I need to maintain it.

For complex client work, my constraint might be: AI can help me structure my thinking, but I need to do the hard cognitive work of making sense of the problem myself first.

For others, constraints might look different. A student might commit to always writing their own first draft before asking AI for feedback. A manager might use AI for initial email drafts but insist on rewriting the crucial bits themselves.

The point isn't that there's one right approach. The point is being conscious about the trade-offs and making deliberate choices rather than drifting into complete dependence.

The Unintended Consequences We Can't Predict

Here's the uncomfortable truth: we don't know what we're trading away until it's gone. The full impact of any technology only becomes clear years or decades after adoption.

Nobody anticipated that smartphones would fundamentally change how we experience boredom, or that social media would reshape political discourse, or that GPS would affect our spatial reasoning. These were unintended consequences that only became apparent over time.

AI will have its own set of unintended consequences. Some we might guess at - the erosion of certain cognitive skills, the flattening of thinking into simplified patterns, the loss of comfort with uncertainty and complexity. But others will surprise us entirely.

The question is whether we're watching for them. Are we monitoring not just what AI enables but what it might be costing us? Are we paying attention to which cognitive muscles are atrophying? Are we noticing when we settle for "good enough" outputs because we've lost the ability to recognise what "excellent" looks like?

Moving Forward Thoughtfully

I'm not arguing for rejecting AI. I use it. I'll continue using it. But I'm trying to use it thoughtfully, with awareness of what I'm trading.

This means:

Regular self-assessment: Periodically checking - am I still able to do this myself? Has my word recall actually got worse? Can I still write a complex email without AI assistance?

Deliberate practice: Consciously exercising cognitive skills that I want to maintain, even when AI could do them faster.

Healthy scepticism: Questioning AI outputs, not accepting them uncritically even when they seem good.

Watching for atrophy: Noticing when tasks that used to be straightforward start feeling harder, and considering whether AI use might be contributing.

Accepting complexity: Resisting the temptation to settle for simplified answers when nuanced ones are more appropriate.

The conversation around AI tends toward extremes - either enthusiastic adoption or fearful rejection. But the more interesting space is the middle ground: thoughtful, aware use that acknowledges both benefits and costs.

We're going to use AI. The technology is too useful to ignore. But that doesn't mean we have to use it unconsciously. We can make deliberate choices about which cognitive trade-offs are worth making and which aren't.

The key is keeping watch on ourselves - noticing what we're gaining and what we're losing, and being willing to adjust course when the balance tips wrong. Because the unintended consequences of AI won't announce themselves. They'll arrive quietly, one small trade-off at a time, until we look up one day and realise we've outsourced more of our thinking than we meant to.

The question is: will we notice before the capacity to notice has atrophied too?


Image Source: Canva
