
The Dangers Of Accepting What You See Online

Once upon a time, I enjoyed using social media.

Sounds weird to say nowadays, I know. Yet, when I first started using platforms like LinkedIn nearly 15 years ago, it was both a different time and place.

Platforms felt more conversational and less driven by clickbait. The algorithms weren’t so optimised for mass outrage.

I learned a ton as a mid-twenty-something trying to navigate the odd corporate world in London.

I had a ritual of saving posts to read later and call upon as a sort of personal learning system.

I adopted the same approach in the early days on Instagram.

Specifically trying to take my body from the grasshopper lightweight it was, to some form of decent-sized guy who didn’t look like he could slide through the cracks of doors.

Again, I’d absorb whatever good stuff I could.

Back in 2012, fitness influencers weren’t really a thing, so I didn’t have to cut through any noise.

Today, the story couldn’t be farther from this.

I only use LinkedIn these days, and even that is becoming a struggle because I’m met daily with disinformation, misinformation, smart people saying dumb things, and a host of selfies desperately chasing attention.

What concerns me most these days is so many people’s inability to ‘read beyond the headlines’.

I’ve seen this most often over the last few years with the sharing of an absurd number of research papers and reports on AI. Look, I understand the attention game and why people indulge in the shock factor to garner attention.

The problem is that these people and posts are spreading an epidemic of often-incorrect statements, and viewers aren’t being nearly skeptical enough.

They say that AI is killing our critical thinking and analytical judgment, yet we seem to be doing that fine ourselves by not questioning what we see.

A great quote I keep in my notes folder reminds me of the need to both doubt and ask questions: “Don’t believe everything you think or see”.

So, my question is, when did we stop looking beyond the veil? And why don’t we ask questions anymore or do our own research?

I’m not expecting an answer, I’m just throwing it out there.

A few case studies

We see case studies on this almost daily.

There are probably hundreds at this stage, yet there are two recent ones which I’m sure most of you have seen. It’s going to feel like I’m picking on MIT here, but that’s not the intention.

It’s about how the data produced is being used by third parties and how that shapes the global narrative.

The latest MIT study, which claims 95% of organisations are getting zero return on Gen AI projects, has been doing the rounds over the last week. Now, while this makes a great clickbait headline and social post, we need to look deeper.

If more people went to the “Research and methodology” sections of reports, they would be surprised.

→ Ignoring this makes smart people say dumb things.

For this particular example, Ethan Mollick posted a great note on how this paper was researched. I’ve provided that section for you to check out below:

I’ll let you draw your own conclusions, yet I don’t feel 52 interviews over a 6-month timeline is a large enough sample to be treated, as many people posting across social are doing, as “the global view”.

The devil is in the details, as they say.

Our second example, again from MIT (sorry, I do love you really), comes from back in June and gave rise to more clickbait and social discourse.

This has nothing to do with the research or the researchers themselves. They set out to see how using AI tools affected an individual’s ability to write essays. They conducted this with only 54 people and on this one task.

The main finding was that AI can help in the short term, but if you always use and rely on it, you’ll diminish your ability to write an essay without it.

Cool, sounds like common sense to me.

But that research gave rise to crazy headlines like this:

See how quickly that turned?

Did anyone reading these headlines, whether in news apps or social posts, look beyond them? From what I see, no.

This is where the problem exists.

Even the lead researcher on this report called out the same thing and set the facts straight in their own post.

It’s not necessarily new, yet algorithm-based platforms are loving the attention it creates.

Like I said, this problem is not isolated to one report, it’s everywhere across social media and as such, society at large.

Posts with clickbait headlines and mass engagement are spreading messages that are often misleading and, in some situations, harmful.

So, what can you do?

Become a skeptical hippo

Ok, what I’m not saying here is to become some kind of conspiracy theorist.

Instead, I want you to engage that powerful operating system that sits in each of our skulls. When you see headlines like we’ve covered or clickbaity posts, try the following:

1. Go to the primary source

  • Locate the actual report or paper (not just a blog post or tweet about it).
  • Even if you don’t read every detail, scan:
    • Abstract (what the study did and found).
    • Methods (how many people, what was tested, what tools were used).
    • Limitations (almost always at the end).

2. Ask 3 key questions

When you see a claim, pause and ask yourself:

  • Who was studied? (demographics, sample size, context).
  • What exactly was measured? (recall, ownership, not general intelligence).
  • How broad are the claims? (exploratory finding vs universal truth).

If the headline claims more than the study actually measured, that’s a red flag.

3. Notice the language

  • Headlines often use absolutes (“ChatGPT destroys learning!”).
  • Scientific reports usually use tentative language (“suggests,” “indicates,” “preliminary”). Spotting this mismatch helps you resist being pulled into the hype.

4. Slow down your consumption

  • Disinformation spreads because social media rewards speed + emotion.
  • Slow thinking (literally taking a minute to check the source or read the abstract) interrupts that cycle and gives you space to process critically.
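To make the language check concrete, here’s a toy sketch of spotting the absolute-vs-tentative mismatch. The word lists and the `hype_mismatch` function are illustrative inventions for this post, not a real fact-checking tool:

```python
# Toy illustration of the "notice the language" check: flag a mismatch when
# a headline speaks in absolutes while the study itself hedges.
# These word lists are a made-up starting point, not a vetted lexicon.
ABSOLUTE_WORDS = {"destroys", "proves", "kills", "always", "never", "everyone"}
TENTATIVE_WORDS = {"suggests", "indicates", "preliminary", "may", "might", "exploratory"}

def hype_mismatch(headline: str, abstract: str) -> bool:
    """Return True when the headline uses absolutes but the abstract hedges."""
    headline_words = set(headline.lower().replace("!", "").split())
    abstract_words = set(abstract.lower().split())
    return bool(headline_words & ABSOLUTE_WORDS) and bool(abstract_words & TENTATIVE_WORDS)
```

A headline like “ChatGPT destroys learning!” paired with an abstract saying “our preliminary findings suggest a short-term effect” trips the flag; a plainly worded headline passes clean.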

TL;DR: Read beyond the headlines, ask the questions and embrace those skeptical hippo eyes.


📝 Final thoughts

While this might partly sound like a human raging against the machine, I hope my sentiments of ‘do your own research’ and ‘be more skeptical to reach your own conclusions’ come through.

I don’t believe we need to worry about AI killing our critical thinking and analytical judgment when we’re doing that fine all by ourselves.


Before you go… 👋

If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.

You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.


How To Stop AI From Hijacking Your Thinking

While I’m not entirely on the doomsday train of “AI will destroy all human thinking”, I can’t ignore the level of stupidity that some humans exhibit when working with AI.

I shouldn’t be surprised, really, as the path of least resistance is paved with instant gratification, which is a dopamine daydream for the digitally addicted.

Still…

What happens to human thinking when so many outsource it to an artificial construct?

I’m saying this as much to myself as I am to you.

This is turning into a strange “dear diary” entry, but stick with me.

This is the end…or is it?

We both see the polarising views plastered across social feeds.

Logic seems to be lost in most conversations.

Most posts are either “AI will destroy your brain” or “outsource all your thinking to AI”. I don’t know about you, but I’m not cool with either of those options.

It doesn’t help when the majority blindly believe every headline that’s emotionally tweaked to grab attention. Taking time to look beneath the surface usually paints a different picture. Yes, I’m looking at you, MIT study.

Moving away from all the noise, only one question is worth asking right now:

Are you thinking with AI, or is AI shaping your thinking?

Maybe it’s doing both, and maybe you’re aware of that.

Saying all that, here are a few points I think are worth exploring.

What happens to ‘human thinking’ if we over-rely on AI?

This is a real grey area for me.

I’ve seen countless examples where too much AI support leads to less flexing of human skills (most notably common sense and deep thinking), and I’ve seen examples where human skills have improved.

In my own practice, my critical thinking skills have improved with weekly AI use over the last two years. It’s what I class as an unexpected but welcome benefit.

This doesn’t happen for all, though.

It depends on the person and their intent, of course.

Research and experiences seem to affirm my thoughts that the default will be to over-rely.

I mean, why wouldn’t you?

This is why any AI skills program you’re building must focus on behaviours and mindset, not just ‘using a tool.’

You can only make smart decisions if you know when, why, and how to work with AI.

One unignorable insight I’ve uncovered from collecting research over the last few years, and psychoanalysing that together with AI, is the importance of confidence in your capabilities to enable you to think critically with AI.

A diagram illustrating how confidence shapes critical thinking with AI, featuring four quadrants: Independent Thinker, Balanced Evaluator, Avoidant User, and Blind Follower, framed by high and low AI literacy and trust levels.
Where are you playing?

This is the battleground of most social spaces today.

High-performing organisations and teams will be those that think critically with AI, not outsource their thinking to it.

Being a “Balanced Evaluator” is the gold standard. So, we could say that thinking about thinking is the new premium skill (more on that later).

The combination of high AI literacy (skills, understanding how AI works, limitations) with high trust (knowing the right tool for the job and a willingness to use it effectively) is not straightforward.

That’s where you come in as the local L&D squad.

To be here, you must critically engage with AI by asking when, how, and why to trust its output. This requires questioning, verifying, and a dose of scepticism that too many fail to apply, only to sorely regret it when things backfire.

Also, don’t interpret “AI Trust” as blind faith. This is built through experimenting and learning how the best tools work.

What does meaningful learning with AI look like?

I (probably like some of you) have beef with traditional testing in educational systems.

It’s a memory game, rather than “Do you know how to think about and break down x problem to find the right answer?” We celebrate memory, not thinking (bizarre world).

My beef aside, research shows partnering intelligently with AI could change this.

This article, a collaboration between The Atlantic and Google, which focuses on “How AI is playing a central role in reshaping how we learn through Metacognition”, gives me hope.

The TL;DR (too long; didn’t read) of the article is that using AI tools can enhance metacognition, aka thinking about thinking, at a deeper level.

The idea is, as Ben Kornell, managing partner of the Common Sense Growth Fund, puts it, “In a world where AI can generate content at the push of a button, the real value lies in understanding how to direct that process, how to critically evaluate the output, and how to refine one’s own thinking based on those interactions.”

In other words, AI could shift us to prize ‘thinking’ over ‘building alone.’

And that’s going to be an important thing in a land of ‘do it for me.’

Side note: I covered my view on the future of learning ditching recall and focusing on human reasoning, in a previous post. You’ll find a bunch of examples showing this in action there.

To learn, you must do

The Atlantic article shared two learning-focused experiments by Google.

In the first, pharmacy students interacted with an AI-powered simulation of a distressed patient demanding answers about their medication.

  • The simulation is designed to help students hone communication skills for challenging patient interactions.
  • The key is not the simulation itself, but the metacognitive reflection that follows.
  • Students are encouraged to analyse their approach: what worked, what could have been done differently, and how their communication style affected the patient’s response.

The second example asks students to create a chatbot.

Coincidentally, I used the same exercise in one of my “AI for Business Bootcamps” last year.

It’s never been easier for the everyday human to create AI-powered tools with no-code platforms.

Yet, you and I both know that easy doesn’t mean simple.

I’m sure you’ve seen the mountain of dumb headlines with someone saying we don’t need marketers/sales/learning designers because we can do it all in ‘x’ tool.

Ha ha ha ha is what I say to them.

Clicking a button that says ‘create’ with one sentence doesn’t mean anything.

To demonstrate this to my students, we spent 3 hours in an “AI Assistant Hackathon.” This involved the design, build, and delivery of a working assistant.

What they didn’t know is that I wasn’t expecting them to build a product that worked.

Not well, anyway.

I spent the first 20 minutes explaining that creating a ‘good’ assistant has nothing to do with what tool you build it in and everything to do with how you design it, ya know, the A-Z user experience.

Social media will try to convince you that all it takes is 10 minutes to build a high-performing chatbot.

While that’s true from a tech perspective, the product and its performance will suck.

You need to think deeply about it

When the students completed the hackathon, one thing became clear.

It’s neither simple nor easy to create a high-quality product, and you’re certainly not going to do it in minutes.

But, like I said, the activity’s goal was not to actually build an assistant, but rather, to understand how to think deeply about ‘what it takes’ to build a meaningful product.

I’m talking about:

  • Understanding the problem you’re solving
  • Why it matters to the user
  • Why the solution needs to be AI-powered
  • How the product will work (this covers the user experience and interface)

Most students didn’t complete the assistant/chatbot build, and that’s perfect.

It’s perfect because they learned, through real practice, that it takes time and a lot of deep thinking to build a meaningful product.

“It’s not about whether AI helped write an essay, but about how students directed the AI, how they explained their thought process, and how they refined their approach based on AI feedback. These metacognitive skills are becoming the new metrics of learning.”

Shantanu Sinha, Vice President and General Manager of Google for Education

AI is only as good as the human using it

Perhaps the greatest ‘mistake’ made in all this AI excitement is forgetting the key ingredient for real success.

And that’s you and me, friend.

Like any tool, it only works in the hands of a competent and informed user.

I learned this fairly young when a power drill was thrust into my hands for a DIY mission. Always read the instructions, folks (another story for another time).

Anyway, all my research and real-life experience with building AI skills have shown me one clear lesson.

You need human skills to unlock AI’s capabilities.

You won’t go far without a strong sense (and clarity) of thinking, and the analytical judgment to review outputs.

Embrace your Human Chain of Thought

Yes, I made up this phrase (sort of).

Let me give you some context…

Early iterations of Large Language Models (LLMs) from all the big AI names you know today weren’t great at thinking through problems or explaining how they got to an answer.

That ability to break a problem down and display the intermediate reasoning is known as the Chain of Thought technique.

This was comically exposed with any maths problem you’d throw at these early-stage LLMs.

They would struggle with even the most basic requests.

It’s a little different today, as we have reasoning models. These have been trained specifically to show how they solve your problems and present that information in a step-by-step fashion.

We now expect all the big conversational AI tools to do this, so why don’t we value the same in humans?

Those who nurture this will have greater command of their career.

So don’t ignore your Human Chain of Thought.

Focusing your energy on the ability to explain your reasoning is far more useful in a world littered with tech products that can recall info on command.

Tools to enhance, not erode your thinking with AI

A couple of useful tools and frameworks to get you firing those neurons from the most powerful tool at your disposal (fyi, it’s your brain).

1/ Good prompting is just clear thinking

Full disclosure: There’s no such thing as a perfect prompt.

They’re often messy, don’t always work the same way every time, and need continuous iteration.

Saying that, you can do a lot (and I mean a lot!) to set yourself up for success.

Here’s a (sorta) framework I use to help me think critically before, during and after working with AI.

Flowchart displaying a framework for critical thinking with AI, highlighting steps including assess, pre-prompt, output analysis, challenge, role reverse, and prompt.

Step 1: Assess

Can AI even help with your task? (It’s not magic, so yes, you need to ask that)

Step 2: Before the prompt

  • What does the LLM need to know to successfully support you?
  • What does ‘good’ look like?
  • Do you have examples?

And, most importantly, don’t prompt and ghost.

Step 3: Analyse the output

  • Does this sound correct?
  • Is it factual?
  • What’s missing?

Step 4: Challenge & question

I’m not talking about a police investigation here. 

Just ask:

  • Based on my desired outcome, have we missed anything?
  • From what you know about me, is there anything else I should know about ‘x’? (works best with ChatGPT custom instructions and memory)
  • What could be a contrarian take on this?

Step 5: Flip the script

Now we turn the tables by asking ChatGPT to ask you questions:

Using the data/provided context or content (delete as needed), you will ask me clarifying questions to help shape my understanding of the material.

They should be critical and encourage me to think deeply about the topics and outcomes we’ve covered so far. Let’s start with one question at a time, and build on this.

This is a powerful way to develop your critical skills and how you collaborate with AI.
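If it helps to see the flow end to end, here’s a rough sketch of the five steps as a checklist builder. Every function name and question string here is illustrative scaffolding I’ve made up for this post, not a tool or library:

```python
# Illustrative sketch of the assess -> pre-prompt -> analyse -> challenge -> flip loop.
# All names and question strings are invented for demonstration.
PRE_PROMPT = [
    "What does the LLM need to know to support me?",
    "What does 'good' look like?",
    "Do I have examples to share?",
]
OUTPUT_ANALYSIS = [
    "Does this sound correct?",
    "Is it factual?",
    "What's missing?",
]
CHALLENGE = [
    "Based on my desired outcome, have we missed anything?",
    "What could be a contrarian take on this?",
]

def session_plan(task: str, ai_can_help: bool) -> list[str]:
    """Build the ordered checklist for one AI working session.

    Step 1 (assess) gates everything else: if AI genuinely can't
    help with the task, the plan is simply to do it yourself.
    """
    if not ai_can_help:
        return [f"Do '{task}' yourself; AI won't add value here."]
    plan = [f"Task: {task}"]
    plan += [f"[Before prompting] {q}" for q in PRE_PROMPT]
    plan += [f"[Analyse output] {q}" for q in OUTPUT_ANALYSIS]
    plan += [f"[Challenge] {q}" for q in CHALLENGE]
    plan.append("[Flip the script] Ask the AI to question you, one question at a time.")
    return plan
```

The point of the gate at the top is the same as Step 1 above: the checklist only exists once you’ve decided AI can actually help.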

P.S. Get more non-obvious insights and guidance on AI prompting in my short course designed specifically for busy people like you.

2/ Unpack the problem

Before you start building that next ‘thing’, check out this little framework, which has helped me to do my best work over the last decade.

3/ Partner with AI, don’t use it like a one-click delivery service

If I had a dollar for every time I said this, I’d be a billionaire by next year.

Often, it’s the small and simple actions that can bring the most valued results.

That’s not to say it’s easy to do.

In this video, I share how you can use AI to improve your critical thinking as a thought partner.

Final thoughts

There’s much more to say about this, friend.

But we’ll pause here for now.

Thinking is cool, and thinking about thinking is even cooler.

Let your brain dwell on that for a bit. AI can be an extension of your thinking, but never let it shape it.

Keep being smart, curious and inquisitive as I know you are.




5 Insights That’ll Challenge Everything You Think You Know About AI In L&D

When I think about the future of learning with AI, I don’t imagine it as more content and courses.

A rewiring of what we do and how we do it is here.

While most teams are stuck at the point of innovations from 2 years back, you can be ahead of this. Perhaps this is the new reality of learning in the flow.

Conversational, not transactional.

I can’t help but get excited about conversational-driven experiences that not only develop new skills but also reveal more about ourselves and how we think.

Yet… I still see a lot of talk and not so much action, sprinkled with a lot of misinformation and little actual understanding of Gen AI’s power and limitations. That creates a problem if the L&D industry wishes to thrive in the new world of work with AI.

Here are 5 insights I’ve picked up from my research, analysis and partnering with lots of L&D and HR teams over the past few years on driving AI adoption.

A graphic illustrating 'The Customer Experience' featuring logos of various AI tools: Gemini, Copilot, deepseek, ChatGPT, and Claude, arranged in a circular flow, emphasizing that LLMs are the new operating system.

1/ Your customers are building their own solutions

The biggest problem, in my eyes right now, is the fact that our audience (I’m not calling them learners!) is able to design their own personalised and adaptive learning experiences with LLMs.

Whether they’re good or not is another question.

I’ve been talking to clients about this for the past year. They mostly nod and reply ‘that’s interesting’, but it’s more than interesting. It’s a threat coming right down the barrel at light speed to many L&D teams.

Why do they need you, if they have the gods of LLMs?

(I shared a post on this a few months ago: How will you respond to the changes in your customers’ experiences?)

We must ask ourselves: if our customers have access to on-demand intelligence and personalised learning experiences, how do we fit into that?

You cannot fight the adaptiveness and personalisation that generative AI enables.

That would be a foolish endeavour. Instead, you have to evolve, as workforces will demand the level of experience they currently enjoy in their personal lives.

We’ve been here a few times before.

As an industry, we’ve lost many battles to Google search, all of social media and YouTube.

We all want the sleek experiences from our personal use, end of story.

So, this presents a crossroads for us.

Either we keep trying to force people to places and spaces they don’t want to be, or design to meet them where they’re hanging out. The choice is yours.

If we look at AI as the dominant emerging operating system, you have your answer to the above.

Cartoon depicting a dog sitting calmly in a burning room with the caption 'THIS IS FINE', illustrating the misconception that AI adoption is merely a training project.

2/ It’s a transformation, not a training project

I find so many teams fail to see this.

AI in L&D is not your latest training project, it’s one of the biggest transformation projects we’ll ever face.

And…one we need to consider if we exist at the end of it in our current form (so controversial, I know 😂).

If you’re reading these words and all your team/company is doing is the minimal “Let’s train everyone how to write prompts”, then check out this post where I walk through the 4 levels of transformation happening across L&D with AI right now.

TL;DR: Stop treating AI adoption like a training project

3/ You can do a lot more with AI today than you think

You know I share a lot of innovations and tech demos.

What some don’t realise is that almost everything I share is available to use right this minute.

That’s why I’m always surprised when I get weekly messages like “I had no idea this was even possible today” or “OMG! I thought this would be years away”. Remember when I said teams are stuck at innovations from 2 years ago? This is what I mean.

The most recent example of this is when I shared a video of me working with an interactive avatar from HeyGen.

It’s basically a face and voice on top of an LLM. No static avatars or scripts to read from.

My inbox went mental once more, and it proved to me again that there is so much untapped potential with current technology.

So many miss all of these innovations because they’re too busy chasing all the shiny things!

You can find even more innovative ways to use AI to rewire L&D on my YouTube channel and on the dedicated AI for L&D section of the STT website.

Speaking of chasing shiny things…

4/ You’re trying to sprint before you can walk with AI

Agent this, agent that is literally all I see on LinkedIn these days.

Granted, it’s a total echo chamber of people mostly shouting that back at each other, but by God, it’s giving me a headache.

The hype, mostly driven by AI companies, is becoming laughable.

Don’t get me wrong, AI agents will be very useful and there’ll be some great applications in L&D, yet you’d think a sort of world peace is about to emerge by the way the ‘influencers’ talk on social.

I work with sooo many teams and companies that hardly know how to use a basic AI assistant to even 50% of its potential. Adding agents into that mix is a recipe for both confusion and mistakes.

A humorous meme featuring two characters discussing the term 'AI agent,' with one expressing skepticism about its meaning.

A lot of people need to slow down

Pause… take a breath and find your centre (or whatever meditation teachers say).

Almost 90% of what you see paraded online is not a true agent solution. Not in the technical context, anyway.

Much like marketing teams decided to use the word “AI” everywhere post-2022, they’re doing the same thing by labelling everything an Agent.

Unfortunately, this has created a fractured understanding of what an agent is, and the definitions are always changing.

So, I decided to put something together to cut through that BS ↓

5/ Using AI for conventional “learning” tasks is not groundbreaking

Perhaps this is a controversial one.

I don’t believe using AI tools to do more of the conventional content and course experiences in L&D is an impressive ‘use case’. Whenever someone says to me, “I used ChatGPT to create my next course in half the time,” I chuckle in my head: ‘That’s cute.’

You can use it to produce more courses and content, but where does that get you?

The same place occupied by the 99% who could become obsolete.

I keep using the word “rewire” so much in my work because that’s what we need – a complete rewiring of what we do and how we do it. Be brave enough to say, “Does this need to be a course?”

With intelligence on-demand in everyone’s hands, our default operating system is becoming AI, and it’s not using courses to help people learn.

I know I keep saying it, but you don’t want to use AI to accelerate outdated ideas and practices. Instead, we should focus on rewiring what we do.

This quest of rewiring what L&D can do with AI has led me to think of a world beyond the course or event as the default delivery for learning moments.

I share one of those experiments in this one ↓

Final thoughts

There you go, friend.

Now you see what I see, and my hope is that you’ll use this to improve your work with AI, especially if you’ve been given the ill-fated L&D mission of “making people use AI”.

→ If you’ve found this helpful, please consider sharing it wherever you hang out online, tag me in and share your thoughts.




An Unconventional Way To Learn New Skills with AI

I often see the spread of AI across L&D like a game of 4D chess.

It looks complicated, but it is simpler than you imagine and requires a depth of thinking beyond the status quo.

I know I keep saying it, but you don’t want to use AI to accelerate outdated ideas and practices. Instead, we should focus on rewiring what we do.

This quest of rewiring what L&D can do with AI has led me to think of a world beyond the course or event as the default delivery for learning moments.

I’m going to share one of those experiments with you today.

Reverse-engineering with AI

As I grow older, and allegedly wiser, I can’t help but keep asking: why did I do that thing?

That thing can be anything.

A purchase, a quick decision, a random Tuesday afternoon choice on which delicious tea to drink.

It’s probably why I ended up in L&D.

I always want to understand what makes people do what they do. Plus, the biggest goal of any L&D function worth its salt is to understand and influence behaviour.

Every time you’re asked to deliver “training”, it’s an ask to change behaviour.

Anyone who’s worked with me will know I ask questions, a lot of them.

I could be classed as either a psychologist or an FBI interrogator, based on the context of my deep line of questioning.

I’m not scared to turn this on myself either.

I know that sounds like some weird scene from The Matrix, but stay with me. I felt like this could be a great opportunity to do a little experiment with AI to engage in a bit of an unconventional learning strategy.

I also like to think of it as a “looking beyond a course” solution, because everything doesn’t need to be a course, ya know.

The email made me do it

Like you, I get a lot of messages about products and courses.

I skim a few and delete a lot.

There was one I’d been going back and forth on for about 6–8 months. It was about copywriting, which is an essential skill for all humans, imo. It’s even more important in my line of work, where I’m getting people’s attention and turning the complex into something simple to understand.

I’d seen lots of emails about the course.

I read the reviews and thought a lot about it. But still didn’t find my way to the “buy” button.

That changed in one afternoon.

One email about the course, which offered a limited-time discount, hit me at the right time, and I threw my money at it. A few hours later, I wondered what about ‘that email’ made me take the final step.

I’d seen numerous emails about the course over the past 6 – 8 months, but I did nothing.

That got me hooked…and slightly obsessed.

I think we can learn a lot from decisions like this. Not just about ourselves, but the techniques that others use to influence behaviour. This email could do both.

So, I decided to reverse engineer the email with the help of AI, as my coach.

This is an experimental learning experience, and you can see the results in the video below.

What you can take from this to transform your learning experiences with AI

What I’m sharing here is not conventional, not by workplace learning standards.

Most workforces view a “learning experience” as a form of content delivery, either by being in a room with others or consuming a static digital product.

That might have worked pre-2022, but we’re firmly in the realm of conversational experiences.

This opens up a new level of self-exploratory learning (not sure that’s a thing, but whatever) that doesn’t require a classroom or a course.

Here’s a few thoughts on why this approach is beneficial to both you and the people you serve:

1/ Focus on the ‘Why’

I strongly believe we don’t ask “Why” enough.

We used to do it all the time when we were young to help us understand the world we were growing up in. Somewhere along the way, we lost the confidence to do that.

I think both school and the workplace make us scared to ask simple little questions like “why”, or to just say “I don’t know”.

You’ll see in the video how I use AI to claw out the focus on ‘why’.

It reminded me of the useful decision-making framework called the “5 Whys”, which was created by Sakichi Toyoda at Toyota. It was so effective, it became part of Toyota’s much-loved “Lean Philosophy”.

The goal is to find the root cause of a failure, challenge or behaviour.

It’s so simple you might be tempted to discount it, but you’ll be surprised by its results. All you do is take the problem or challenge you’re obsessing over and ask ‘why’ 5 times.

Here’s an example from Kanbanize.com
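To make the loop concrete, here’s a minimal Python sketch of the 5 Whys as a chain of cause lookups. The `causes` dictionary is a made-up example based on my email story; in real use, each “why” is answered by a person (or an AI coach), not a lookup table.

```python
# A minimal sketch of the "5 Whys" technique as a simple loop.
# The cause_of mapping is an invented example; in practice each "why"
# is answered by a human or an AI coach, not a dictionary.

def five_whys(problem, cause_of, depth=5):
    """Walk the chain of causes, asking 'why' up to `depth` times."""
    chain = [problem]
    current = problem
    for _ in range(depth):
        cause = cause_of.get(current)
        if cause is None:          # no deeper cause known: root reached
            break
        chain.append(cause)
        current = cause
    return chain                   # the last item is the root cause

# Hypothetical example: why did I buy the course from that one email?
causes = {
    "I bought the course from the email": "The discount was time-limited",
    "The discount was time-limited": "Scarcity made delaying feel costly",
    "Scarcity made delaying feel costly": "I'd already wanted the course for months",
    "I'd already wanted the course for months": "It matched a skill gap I care about",
}

chain = five_whys("I bought the course from the email", causes)
root_cause = chain[-1]
```

The point isn’t the code, it’s the discipline: refuse to stop at the first answer and keep digging until the “why” runs out.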

In a way, I feel like AI can re-ignite some of our childlike wonder through conversational experiences.

As a child of the 90s in a pre-Google era, most of my fascination and curiosity were satisfied through the children’s book series of “How Things Work”.

Now, I use AI and moments with humans to navigate that same curiosity.

2/ Leveraging AI as a mind coach to uncover unknown perspectives

That title sounds vastly more sinister than what I’m trying to get at.

Sometimes my engagements with AI can feel like talking with a parrot; at other times, I feel like I’ve met a digital Buddha with profound insights.

When it comes to reverse-engineering ideas, processes or anything, I find this style of conversation more rewarding than watching a presenter slowly murder my attention span with PowerPoint.

One of my favourite lines of convo with my local LLM of choice is “What am I missing here?”

I’ve had some wild revelations from that question alone.

3/ Skill Exploration

No one knows everything, nor do they have all the skills in the world.

Life isn’t some Marvel film where an angry guy travels the world collecting skills for his evil (or not-so-evil) plans. That doesn’t mean you shouldn’t be curious about other skills and how they’re used.

This is the exact thing that led to my analysis and reverse-engineering experience shown in the video.

I want to understand why I took an action, but I also want to learn how to craft such a skill myself in copywriting. As we become ever more focused on skills, I can only imagine this being a meaningful way to support people in developing the right skills.

Final thoughts

When I think about the future of learning with AI, I don’t imagine it as more content and courses.

A rewiring of what we do and how we do it is here.

While most teams are stuck on the innovations of two years ago, we can be ahead of this. Perhaps this is the new reality of learning in the flow.

Conversational, not transactional.

I can’t help but get excited about conversation-driven experiences that not only develop new skills but also reveal more about ourselves and how we think.

→ If you’ve found this helpful, please consider sharing it wherever you hang out online, tag me in and share your thoughts.


Before you go… 👋

If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.

You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.


Everything You Need To Know About AI Agents For L&D (2026)

Agent this, agent that is literally all I see on LinkedIn these days.

Granted, it’s a total echo chamber of people mostly shouting that back at each other, but by God, it’s giving me a headache.

The hype, mostly driven by AI companies, is becoming laughable.

Don’t get me wrong, AI agents will be very useful and there’ll be some great applications in L&D, yet you’d think world peace was about to break out, the way the ‘influencers’ talk on social media.

So, this is my PSA (public service announcement) to you to say: “Don’t get worried about all the talk.”

I know it feels like you’re missing out on some great party, but you’re really not.

Don’t believe the AI Agent hype in L&D

Almost 90% of what you see paraded online is not a true agent solution. Not in the technical context, anyway.

Much like marketing teams decided to use the word “AI” everywhere post-2022, they’re doing the same thing by labelling everything an Agent.

Unfortunately, this has created a fractured understanding of what an agent is, and the definitions are always changing.

Not only this, but many are trying to run before they can walk.

I work with sooo many teams and companies that hardly know how to use a basic AI assistant to even 50% of its potential. Adding agents into that mix is a recipe for both confusion and mistakes.

A lot of people need to slow down

Pause… take a breath and find your centre (or whatever meditation teachers say).

Without this moment of pause, it’s incredibly hard to truly know what’s going to help you and what you can safely ignore.

And too many of us aren’t aware of all the options we already have today.

Agents are cool, but the current noise is lying to you about a lot of things.

So, let’s bring some clarity to all of this ↓


Assistants vs Agents: What’s the difference?

Two terms you might hear techies mention with AI products are ‘AI assistants’ and ‘AI agents’.

Here’s the difference in clear, simple terms.

Let’s start with what we know – AI assistants like ChatGPT.

These are tools that help us with tasks through conversation. They can write, analyse, explain, and give suggestions based on what we ask.

AI agents take this a step further.

Instead of just helping through conversation, agents can actually complete tasks on their own. They follow instructions, use different tools, and make basic decisions to get things done.

The key difference is simple:

  • AI assistants help you with tasks
  • AI agents complete tasks for you

Both are valuable, but they serve different purposes. An assistant works with you through conversation, while an agent works independently based on your instructions.
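The split above can be sketched in a few lines of Python. Everything here is illustrative: the scripted model and the calendar tool are made-up stand-ins for a real LLM API and real integrations.

```python
# A toy sketch of the assistant vs agent distinction.
# The "model" is a scripted stub standing in for a real LLM,
# and the tool names are invented for illustration.

def assistant(request, model):
    """Assistant: one conversational turn - advice, no action taken."""
    return model(request, history=None)["text"]

def agent(goal, model, tools, max_steps=5):
    """Agent: loops, picking and running tools until the goal is met."""
    history = []
    for _ in range(max_steps):
        decision = model(goal, history)       # decide next step from progress so far
        if decision["action"] == "done":
            break
        result = tools[decision["tool"]](decision["args"])  # act, don't just advise
        history.append(result)
    return history

def scripted_model(request, history):
    """Stand-in for an LLM: answers in assistant mode, plans in agent mode."""
    if history is None:                        # assistant mode: just respond
        return {"action": "respond", "text": f"Advice about: {request}"}
    if not history:                            # agent mode, first step: use a tool
        return {"action": "use_tool", "tool": "calendar", "args": "Tue 10:00"}
    return {"action": "done"}                  # tool already used, stop

tools = {"calendar": lambda args: f"Booked slot {args}"}

advice = assistant("book a meeting", scripted_model)   # text only, nothing happens
actions = agent("book a meeting", scripted_model, tools)  # the booking gets made
```

Same request, two very different outcomes: the assistant hands you words, the agent hands you a completed task.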

Use this info to impress the boss at your next meeting.

I’m not going to leave you with just this, though.

As I’m a tech nerd, I’ve filmed a quick video (see below) to show how agents work with examples from Google and Salesforce – enjoy.

What can AI agents do?

A lot, but maybe not as much as the local tech bros are promising.

Imagine having a personal assistant who not only follows your instructions but also takes the initiative to resolve problems independently.

AI agents are like that, except they exist in the digital world.

At their core, they’re designed to observe their environment, make decisions, and take actions using the tools available to them.

Unlike traditional software (or a plain LLM) that waits for you to give it a command, AI agents can think ahead, figure out what needs to be done, and act.

Sometimes without needing constant human input.

Think of them as a self-driving car.

Instead of waiting for a person to steer, brake, or accelerate, the car analyses traffic, makes decisions, and moves safely toward its destination.

AI agents work similarly but in a digital space, whether it’s automating workflows, analysing data, or even assisting with creative tasks.

The magic of AI agents lies in their autonomy and problem-solving abilities.

Even if you don’t give them step-by-step instructions, they can work out the best way forward to achieve a set goal.

They do this by following set rules and past experiences to decide the best way to complete a task. This makes them incredibly useful for businesses, customer support, research, and even personal productivity.

→ Get an example of this type of AI agent solution with this scenario I built to support common onboarding challenges between HR and Tech teams.

The many faces of AI agents

There was a time when an AI agent meant one thing.

Now, we’ve hit peak confusion thanks to marketing teams the world over.

Each one wants to tell you they’re “agentic”, and each wants you to use their AI agent. But…is it really an AI agent? And if it is, is it the right one for you?

Let’s unpack the types of AI agents, or what social media wants to tell you are AI agents in the market today:

Now, the reality of what you see online is 95% in the automation and AI workflow buckets.

I know every 22-year-old with a YouTube channel wants to tell you otherwise, but “true” AI agent solutions, right now, are rare. Even rarer are agents doing valuable work within organisations.

And when I say ‘agents’, I mean actual ‘agents’, not workflows.

I’m not being harsh. I think AI workflows and automations are very useful, just don’t call them “Agents”.

Before we move on, let’s talk about the Model Context Protocol, aka MCP, from the first image.

Unless you’re a backend developer or some super nerd (like yours truly), you might never engage with MCP. Nonetheless, let’s take this as a learning moment to once again impress at your next team meeting.

Model Context Protocol Explained

To understand MCP, we need to understand the limitations of Large Language Models (LLMs) on their own, along with the challenges developers face when trying to make them useful.

Maybe this will make you feel a bit of empathy for your local tech team.

LLMs are good at tasks like writing text, answering questions based on their training data, or generating code snippets.

However, they can’t do anything meaningful in the real world on their own, such as sending an email, interacting with a calendar, or performing a specific task on your behalf.

So, we need to connect them to different tools and services.

We can do this through APIs. However, this relies on each service making an API available to connect to, and every one of those connections needs ongoing maintenance. One API with an LLM is easy; connecting multiple tools to LLMs through separate APIs is difficult.

Now, MCP helps solve this problem by acting as a universal translator to simplify these connections.

Think of it as a layer between the LLM and all the different tools and services it might need to interact with. Instead of the LLM having to learn and manage every single service (through an API), MCP translates the different “languages” of all those services into a unified language for the LLM.
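Here’s a rough Python sketch of that translator idea (this is not the real MCP SDK): each service gets wrapped once behind one shared interface, so the model side only ever speaks one “language”. All the class, method, and tool names here are invented for illustration.

```python
# A conceptual sketch of the MCP "universal translator" idea - NOT the
# actual MCP SDK. Each vendor API has its own shape; the ToolServer wraps
# them once so the LLM side only needs {tool name + arguments}.

class EmailService:
    def send_message(self, to, body):       # one vendor's API shape
        return f"email to {to}: {body}"

class CalendarService:
    def create_event(self, when):           # a different vendor's shape
        return f"event at {when}"

class ToolServer:
    """Plays the MCP-server role: exposes services as uniform named tools."""
    def __init__(self):
        self.tools = {}

    def register(self, name, fn):
        self.tools[name] = fn

    def call(self, name, **kwargs):         # the one interface the model uses
        return self.tools[name](**kwargs)

server = ToolServer()
server.register("send_email", EmailService().send_message)
server.register("add_event", CalendarService().create_event)

# The LLM side no longer cares how each API differs under the hood.
receipt = server.call("send_email", to="sam@example.com", body="hi")
```

That’s the win in miniature: add a new service and you write one wrapper, instead of teaching every model about every API.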

Now, either you got that, or I confused the s**t out of you.

If the latter, check out this vid, which should resolve that.

To Agent or not to Agent, that is the question

Every tool has its time and place.

I say that too often. Much like LLMs, and AI in general, Agents aren’t the answer to everything. Knowing when (and when not) to call upon the powers of an AI agent is a skill in itself.

My best advice is actually stolen from an engineer at Anthropic (creator of Claude).

Barry Zhang (Applied AI team at Anthropic) gave what I class as a legendary answer to the growing trend of people applying agents to every problem, even when simpler systems would suffice.

“Don’t go after a fly with a Bazooka”

Barry Zhang (Applied AI team at Anthropic)

Magnificent!

I see this so much these days with a lot of tech.

So many tasks can be done in a few minutes by a human, but we’ll spend hours trying to get AI to do it. Surely that’s counterproductive to the goal?

Barry also shared this useful slide from one of his live talks (if you’re reading this, Barry, I’m not stalking you – promise!).

And to echo what Quentin Villard shared on LinkedIn, here’s a quick framework to figure out the best tool for the job:

  • If a task requires interacting with external services or your digital environment, and it isn’t already set up as a workflow or agent, do it yourself. Use a degree of common sense here: if the task is simple, or you enjoy it, use that supercomputer in your head, aka the brain.
  • Choose an AI workflow for repeatable, rule-based tasks where you want predictable automation.
  • Choose an AI agent for tasks where you have a goal and want the AI to dynamically figure out the steps, acting as a flexible assistant.
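As a rough sketch (my own paraphrase of the three bullets, not Quentin’s exact framework), the rule of thumb could look like a tiny decision helper:

```python
# A hedged sketch of the tool-choice rule of thumb above. The three
# questions are paraphrased from the bullet list; the categories are
# deliberately simplified.

def pick_tool(needs_external_access, is_repeatable_rule_based, has_open_ended_goal):
    """Return which approach fits a task, per the three bullets above."""
    if not needs_external_access:
        return "do it yourself"        # simple or enjoyable: use your brain
    if is_repeatable_rule_based:
        return "AI workflow"           # predictable, rule-based automation
    if has_open_ended_goal:
        return "AI agent"              # let it dynamically figure out the steps
    return "do it yourself"            # when in doubt, don't reach for the bazooka

# Example: a recurring, rule-based task that touches external services.
choice = pick_tool(
    needs_external_access=True,
    is_repeatable_rule_based=True,
    has_open_ended_goal=False,
)
```

Notice the ordering: the workflow check comes before the agent check, because a predictable task should never be handed to the least predictable tool.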

Final thoughts

Of course, there’s much more to say about agents.

But for 95% of the humble humans in this world, this is what you need to know.

This space will continue to grow faster than my cups of tea can brew, but that doesn’t mean you need to be flying at the same speed.

Deep and meaningful understanding requires a moment or two to breathe.

Agents are here, they’re useful, and it will only become easier to access them in shared marketplaces.

As a bonus, here are a few more resources to shape your knowledge:

Go forth, human.

→ If you’ve found this helpful, please consider sharing it wherever you hang out online, tag me in and share your thoughts.


Before you go… 👋

If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.

You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.