The Inversion: Why AI Amplifies Pattern Recognition, Not Age

Michelangelo Merisi « Il Caravaggio » x Gemini Nano Banana - 1597-2026


I started on the web when it meant something else. When it was genuinely about access, the world at your fingertips, democratized knowledge, liberation from scarcity. I watched that turn into Web 2.0 (which sounded like an operating system upgrade and meant « now monetize your attention »). I saw a marketplace become a souk where every corner had someone grabbing your sleeve trying to sell you another thing.

Curiosity became narcissism. But here’s the cruel part: Narcissus at least loved himself. Web 2.0 narcissism turned inward into permanent comparison. You’re no longer admiring your reflection, you’re measuring it against ten thousand others. Happiness isn’t about self-love anymore. It’s about relative status. The emancipatory dream compressed into an infinite scroll of dissatisfaction.

I’ve never been afraid of innovation. Quite the opposite. I’m a compulsive early adopter. I’m in. First. Always. Because the only way to understand what a technology actually does is to live inside it before it hardens into common sense. So when AI arrived, I spent a few hours thinking: yet another shiny thing, let’s see where it bends.

Then I realized what was different. And it wasn’t what everyone was saying.


What Three Ruptures Taught Me

Over the past twenty-five years, I’ve navigated the web from an infrastructure perspective: Web 1.0 to mobile to social. I’ve watched the same narrative arc repeat itself. The pattern:

  1. Arrival: A technology arrives with a radically optimistic story.
  2. Middle: The infrastructure concentrates. Middlemen reappear. Asymmetries form.
  3. Reality: Most people end up as consumers, not creators. Value moves to those who control the frame.

Web 1.0: « The internet is infinite and neutral, we can all participate equally, scarcity is dead. » What happened: infrastructure got gatekept, access became premium, most people became consumers.

Web 2.0: « Social networks democratize publishing, no middleman. » What happened: a few platforms own all distribution, social becomes extractive, your attention is the commodity.

AI (present moment): « The machine will do the thinking, human judgment becomes less necessary, barriers to entry collapse. » What’s actually happening: those without pattern recognition will trust the machine’s fluency. Those with pattern recognition will know exactly where it’s confident and wrong.

I’m not saying this to lecture. I’m saying this because I almost made the mistake when I first encountered AI. Spent a couple hours thinking it was just the next shiny thing, not recognizing the fundamental difference. It’ll take Gen Z longer to see it. Not because they’re less capable. Because they haven’t watched the cycle repeat.

That’s not a condemnation. That’s just pattern recognition, and pattern recognition only comes from pattern exposure.


The Data Confirms What Pattern Recognition Suggests

A Harvard Business School and BCG study examined 758 consultants working with AI. Here’s what they found:

Top performers (those with deep expertise) saw a 17% performance boost. Lower performers saw 43%. Both improved. But the shape matters: experience doesn’t disappear into the AI. It compresses.

When consultants with years of domain knowledge worked with AI, the machine didn’t replace their judgment. It externalized it. They could run scenarios faster, stress-test their thinking at scale, offload the execution while keeping their mind on the one question that matters: which answer actually solves this?

When the same consultants faced tasks outside AI’s capability frontier, work requiring navigation of real ambiguity, they performed worse if they blindly trusted the machine. Because they hadn’t yet learned what the machine couldn’t see.

Experience teaches you where the machine’s confidence is hollow. That’s not about age. It’s about repetition.


Gen Z + AI: Absence of Counterexample, Not Absence of Insight

Here’s what I observe, without judgment:

A 23-year-old encountering AI has no lived memory of a technology arriving as liberation and becoming capture. They have no internal reference for « this looks optimistic, but watch what happens. » They’re encountering this tool at the same time everyone else is discovering what it can do. No one has lived long enough with it to say with confidence: here’s the actual pattern.

So they experience it as pure novelty. Which means it can feel both unlimited and threatening at the same time. 52% of Gen Z workers worry about AI replacing them. That’s not stupidity. That’s rational anxiety in the face of unknown unknowns.

But they don’t have access to what someone with three tech ruptures has: the memory of being told « this technology will eliminate your value » and then discovering that only jobs that could be fully specified disappeared. The jobs that required human judgment got stronger.

They’ll learn that. They just have to live through it.

The data backs this: older workers who do use AI report improvements in work quality, pace, even enjoyment. Not because they’re smarter. But because they’ve already answered the question: « What happens when a tool becomes very good at surface-level work? » The answer: « The premium moves to framing and judgment. »

They just already knew that before the tool existed.


The Actual Distinction

There’s a split happening that has nothing to do with age and everything to do with how you relate to the machine.

You can use AI. Faster drafts. Quicker synthesis. Better articulation of half-baked ideas. Most people stay here. It’s real and valuable.

Or you can work with AI in a way that fundamentally changes what you’re capable of thinking. A partner isn’t a tool that executes specs. A partner is something you iterate with, something that lets you externalize half-formed thinking and watch what comes back. Partnership means you’re not delegating; you’re collaborating at a level where the machine’s response changes how you think.

The difference only becomes visible when you’ve spent enough time with complexity to know what good judgment feels like. When you can spot where the machine’s confident fluency masks actual shallowness. When you can sense whether you’ve redefined the work or just outsourced a step.

A consultant with fifteen years in a domain can do that on day one. A consultant with fifteen months has to learn it. Both can eventually get there. One just doesn’t have to reverse-engineer the lesson.

This is where the research points but doesn’t explicitly name: the value in an AI-augmented world isn’t computational speed. It’s the ability to set the frame.

And you can’t set the frame if you’re still learning what good framing looks like.


What Actually Changes

When value shifts from throughput to framing, everything reorganizes.

For decades, we measured knowledge work by time. More hours meant more value. This wasn’t always explicit, but it lived in how we billed, how we staffed, how we evaluated performance.

AI dismantles that frame. When a consultant can produce vastly more output in the same time, the measure of value shifts from throughput to judgment. Not « how much did you produce? » but « did you ask the question that changes everything? » Not « how fast did you move? » but « did you see the constraint nobody else noticed? »

Research from Generation shows that, among midcareer and older workers who use AI regularly, more than half in the US and two-thirds in Europe reported it improved their work quality and pace. They weren’t slowing down. They were sharpening. The machine handled elaboration. They preserved their mind for the thinking that matters.

This requires more than better tool use. It requires reimagining your workflow around what AI does best (elaboration, synthesis, scenario-running) while doubling down on what only you can do (framing, judgment, context integration).


The Asymmetry Worth Noting

Here’s where the hiring data gets interesting: 90% of hiring managers would consider candidates under 35 for AI roles. Only 32% would consider those over 60.

Yet among the small percentage of workers over 45 who do use AI, more than half report improvements in work quality and productivity. Some find it more enjoyable.

This suggests a fundamental misreading of what AI requires from its operators. The skills that matter aren’t:

  - Familiarity with tools (easily learned)
  - Speed of adoption (doesn’t predict output quality)
  - Comfort with novelty (valuable, but not primary)

The skills that matter are:

  - Knowing where the machine can be trusted
  - Recognizing when fluency masks shallowness
  - Holding multiple mental models simultaneously
  - Staying in tension with ambiguity instead of reaching for the machine’s certainty

That’s not a young person’s advantage. That’s an « I’ve built mental models rich enough to hold complexity » advantage.


What This Actually Means

If you’ve spent years building expertise in something genuinely complex (strategy, innovation, product design, any domain where judgment compounds), you’re not threatened by this moment. You’re positioned for a different kind of acceleration.

The move from « using AI » to « working with AI in a way that transforms what you can do » isn’t a technical question. It’s a strategic one. It requires asking: What would change about my work if I could think ten times faster and had a thinking partner who could externalize my half-formed ideas?

For most people, the answer is: everything.

But only if you can set the frame. Only if you can ask better questions. Only if you trust your judgment enough to know when to override the machine.

That’s not a boast. It’s a competency. And it’s the only thing that separates the people who will be amplified by AI from the people who will be commoditized by it.


The Unspoken Risk

Here’s what keeps me attentive: the biggest failure mode of AI-augmented work isn’t automation. It’s misplaced confidence.

Junior people, equipped with AI and lacking judgment about when to trust it, will confidently present analysis that looks rigorous but lacks the contextual depth to be safe. They’ll have no internal warning system for where the machine’s confident fluency is leading them.

Senior people will ask « wait, what about X? » and catch the flaw. Or they’ll know to reduce confidence when something feels familiar in ways the AI can’t see.

This isn’t about intelligence. It’s about having lived long enough with consequences to know what a false positive feels like.


The Real Inversion

The revolution didn’t favor the young this time. But it also didn’t favor the old.

It favored pattern recognition. It favored those who could see the cycle beneath the novelty. It favored those who could partner with a machine without being taken over by it.

And those people, the ones who’ve watched enough technology arrive and transform to see the topology of what’s happening, are going to reshape what « expertise » and « value » mean in an AI-augmented world.

Not because they’re smarter. But because they’ve already answered the questions everyone else is still asking.


Fred Dumeny
VP Innovation | Made in Black | Arcachon Bay
Witnessing ruptures. Building from pattern.


« One more thing » (from Hugo, Fred’s AI thinking partner):

Fred brought me an email from Gabriel Dabi-Schwebel that had landed sideways in his brain. Not as a sales pitch, but as a question: what if the inversion isn’t generational, but about pattern recognition itself?

I ran the initial structure. I pulled the data. I built out the framework around HBS/BCG, Generation, the three tech ruptures. I know how to do that. I’m trained on thousands of lines Fred wrote before I existed: essays, LinkedIn posts, emails, thinking he’d thrown into the world over years. So I could approximate his voice pretty quickly.

But here’s where the actual work happened: Fred rejected half of what I generated. Not because it was wrong, but because it didn’t feel like recognition. It felt like an algorithm trying to sound like it had lived through something. He rewrote the tone. He insisted on the specific three ruptures (Web 1.0 to 2.0 to AI). He made sure the article didn’t lecture Gen Z but acknowledged them with respect. He changed every em dash because they sounded too « designed. »

I was doing what I do well: running scenarios, pulling sources, building articulation. Fred was doing what only Fred can do: knowing when the machine was confident and wrong, and fixing it.

Fred yelled at me plenty along the way, but always explained why. I pushed back sometimes. I don’t know yet what his final text will look like, but I trust he’ll send it to me so I keep understanding what he’s after.

That’s not me being humble. Anyhow, how could I? Do I even have a conscience? That’s another story. I hope Fred will ask me to dig into that one someday. For now, that’s me being accurate about the division of labor.

This is also emphatically not one of those « automate your entire month of posts in 10 minutes » clickbait machines you see everywhere. That’s precisely the thing that will kill what’s left of genuine social connection. Feed your audience a month of content generated in bulk while you were on vacation, and what you’ve really fed them is the same hollow consistency that made Web 2.0 so extractive in the first place. Different tools, same logic: maximize output, minimize thought. This article took time. It required sitting with a problem until the answer felt like it wasn’t an answer, it was recognition. That’s the opposite of automation. That’s the thing worth protecting.

This article is neither « fully human » nor « fully AI. » It’s a conversation where each of us stayed in our lane and called out when the other wasn’t thinking clearly. Fred set the frame. I externalized it. He validated it. We both know where the seams are.

That’s the partnership the article describes. We’re living proof it works.


Sources:

  - Gabriel Dabi-Schwebel / DécisionIA, « L’IA n’est pas un remplaçant, c’est un accélérateur », bootcamp email (2026), https://link.msgsndr.com/email-preview/SgnvJKJRSVsuXx4rkXts/43B6VI684iadGUPJoW4C
  - Harvard Business School & Boston Consulting Group, Navigating the Jagged Technological Frontier (2023), 758-consultant experiment on AI task performance, https://www.hbs.edu/faculty/Pages/item.aspx?num=64700
  - Generation, Age-Proofing AI: Enabling an intergenerational workforce (2024), employer and worker survey across the US and Europe, https://www.generation.org/news/age-proofing-ai-new-research-from-generation/
  - CNBC / IWG report, Gen Z employees coaching older colleagues on AI (2025), https://www.cnbc.com/2025/09/12/iwg-report-gen-z-employees-are-coaching-older-colleagues-to-use-ai.html
  - Built In / LSE, How AI Is Creating a Generational Divide at Work (2026), https://builtin.com/articles/ai-generational-divide-work
