It’s odd because “adoption” has many definitions, dependent on the context and environment. The common pitfall is to measure adoption as ‘use of AI tools’ alone.
As we know from previous technologies, usage alone doesn’t mean meaningful adoption.
Defining what adoption looks like in your organisation is not a task for the L&D team alone.
Yet we have an opportunity to contribute to the long-term, meaningful adoption of AI across workforces as part of a wider, community-led collaboration.
Let’s go beyond the veil of bullshit we see online.
Access to an AI tool alone means nothing, and putting on one-hour lunch-and-learns to “make people learn AI” is a comical upskilling strategy.
If you’re a long-time reader, you’ve heard me become a broken record when I talk about what it takes to nurture meaningful, long-term change.
We have much to consider in the context, culture and constraints of each environment. No two workplaces are the same, which is why the cookie-cutter “adoption frameworks” make me laugh.
They’re a good source of inspiration, but you shouldn’t follow them like a strict set of instructions.
That said, what do we need to consider beyond tools?
Read on…
People, Systems and Tools
As you’ve probably guessed, launching new technology and tools alone rarely leads to meaningful adoption.
There’s a bigger ecosystem at play.
We have to consider:
1/ People
Where are people at today and how do we meet them?
Everyone will have a different understanding, maturity and receptiveness to something new and unknown. In AI’s case, we have a mix of emotions from “will this take my job” to “I want it to do all this stuff I hate doing”.
The most difficult part of a change process is people because we’re all so unpredictable.
2/ Systems
Quite simply, how we work today.
What are the tried, tested and trusted systems, conscious and unconscious, that we have in place? This covers both how we execute tasks and how we think about executing those tasks (deep, I know).
We each follow different types of systems in our day to day.
Understanding what these are and how AI will impact those is key in this change.
3/ Tools
The part you’re likely most familiar with.
Here, we should consider the tools in use today alongside new ones being deployed, and how to bridge the gap in both understanding and knowing when and where to deploy them.
Too many forget the ‘when and where’ part at their own peril.
Where you can add value
Source: BCG
For us to recognise where we can provide support and drive value, we must note what’s changing.
I think this framework from BCG can help us recognise the moments where performance support is most needed in an AI transformation.
They propose it for navigating AI transformation at scale; through an L&D lens, I see it as a starting point for mapping out how best to support workforces.
It’s built on two key dimensions:
1️⃣ AI Maturity
It progresses from tool-based adoption by individuals, to workflow transformation, to full, agent-led orchestration. Most organisations, and even teams within them, operate across multiple stages at once rather than moving along a linear path.
2️⃣ Workforce Impact
This spans how tasks are executed, to what skills are needed, to how teams are structured, to how organisational culture must evolve to support new ways of working.
While this covers the wider transformation AI brings across businesses, it acts as a roadmap for L&D.
A roadmap is often what we need, because it’s not uncommon for senior leaders to treat “training” (as they call it) as a boomerang to be thrown at will whenever they decide people need to know stuff.
The framework above provides a view of where the friction, pain points and problems exist in the cycle of change. That’s where we should focus.
Map it out
I mentioned earlier not to blindly follow frameworks, and the same advice applies here.
This view from BCG is a useful foundation for each of us to think about “where can we add value”, but it will look different for each environment.
So, I’d recommend you map out what your organisational journey looks like today.
Explore the 3 pillars of tasks, talent and teams across your business, and how and where AI is starting to (and might yet) impact them. It’s here you will uncover the friction and pain points where we can be of most service.
Some of that will be through tooling, no doubt.
Yet, I feel pretty safe in saying you’ll be spending a good deal of your time navigating changes within people and systems.
Final thoughts
There’s much to say, of course, but only so much attention span I can ask you to give.
I’m thinking of expanding some of this thought into a long-form video, if that sounds like something you’d like to see, let me know.
In the meantime, some additional resources to explore on this include:
Somehow, it’s been a year since I hit publish on that one.
Isn’t it funny how time works? I remember so clearly spending months researching and putting all the pieces together to look deeper into the real impact of AI on skills so far, and now, here I am talking about it like some sort of ancient text.
My reminiscing aside…
The message of that piece was to think deeply about the over-reliance we can easily slip into with AI, and how easy it is to convince ourselves we’re learning how to do something when, in reality, AI is doing it for us.
A year later, I only see more activity, which has amplified both the over-reliance and the illusion.
That’s not to say nobody is rejecting total delegation to AI, or finding the balance between artificial and human intelligence.
We’ll talk about some of those later.
Consequences
It’s such a serious sounding word, isn’t it?
Like something your parents would say to you.
Our choices can lead to consequences in many forms; that’s the risk we all take. Not to keep sounding like some old stoic, but life is essentially all about risk.
Back in October last year, when I spoke about AI over-reliance and the illusion of expertise, I only covered in small detail what the consequences of those choices could mean.
A year later, it’s clear to me what that consequence is: skill erosion.
The Great Erosion of Skills
Do you remember just after the pandemic, when every headline was something like “The Great ‘x’ of blah blah?”
I’m happy to make a contribution to that movement 😂.
Jokes aside, you might be noticing some people’s skill sets eroding through lack of use, while others aren’t even learning the skills in the first place. This is being driven by the change in the tasks we now perform.
As AI gets better and better at completing a wide variety of tasks, it means we (as humans) do less in certain areas.
That is not always a bad thing.
Cognitive offloading of some tasks can amplify our ability to perform better in the workplace. A good example of this is GPS. Before we had GPS in our lives to guide us to destinations, we’d spend hours poring over gigantic maps with tiny text, trying to figure out the best route.
Now, at the touch of a button, we’re guided without having to activate one brain cell.
There’s another side to this coin, though.
Humans, for the most part, want to take the path of least resistance and favour instant gratification over the challenge (I’m no different here).
The problem is that real learning and thus improved performance are about navigating the challenges. It’s really hard to learn how, what and why if you don’t experience the struggle.
AI doesn’t take this away on its own; how we use AI does.
In our quest for “more time”, “creative freedom”, “improved efficiency” and every other statement that tech CEOs blurt out about AI, we’ve become obsessed with the automation of everything.
This creates the consequences I’m talking about.
What we lose and what we gain
I always remember an old colleague saying, “You can have it all, you just can’t have it at the same time”.
While it was in relation to something else, I can’t help but think it fits well in this conversation.
I’ve found life to be a series of trade-offs.
If you say yes to one thing, you’re saying no to something else. It sounds like easy math (and it is), but it’s by no means a simple equation.
I’m not the first to consider the impact of AI in this way.
Gartner predicts that by 2028, 40% of employees will be trained and coached by AI when entering new roles, up from less than 5% today.
While this shift promises faster onboarding and adaptive, scalable learning, it also means fewer chances for employees to learn from experienced peers. Junior staff, who once relied on mentorship and hands-on experience, will learn primarily from AI tools, while senior staff increasingly depend on AI to complete complex work.
This shift accelerates the loss of foundational skills and weakens expert mentorship and relationship development across the organisation.
Source: Gartner
We have skills eroding through lack of practice and application and, it seems, foundational skills never being built at all as future generations enter the workforce.
Harold Jarche put it nicely when he said, “One key factor in understanding how we learn and develop skills is that experience cannot be automated”.
So, what can be done?
Are we doomed to roam the world skill-less and watch AI-powered tools suck the life out of the world itself? Of course not, there is a way, my fellow human hacker.
Strategies and tactics to prevent skill erosion
So, instead of moaning about the great wave of skill erosion, I’d rather focus on doing something about it.
The good news is there’s a lot we can all do.
If you haven’t already, you can find a ton of my guidance in these articles:
Saves me repeating myself like a broken record here.
Plus, the folks at Gartner offer some basic but useful actions for the workforce:
Watch for AI mistakes and rising error costs, and keep manual checks in place where AI is used most.
Retain your senior staff and encourage peer learning to slow skill loss.
Focus on roles at risk and review your talent strategies regularly to keep key skills strong.
Pair AI with human oversight and maintain manual checks as a backup for AI.
Encourage employees to continue exercising core skills (e.g., analysis, coding, problem-solving) even when AI tools are available — through simulations, rotations and shadowing.
Use AI simulations and adaptive training, but make sure people still learn from each other.
My question to you: What would you add?
Final thoughts
There’s much more to ponder on this.
Like with everything in this space, whether it happens or not is down to your individual choices and intentions. So, if you want to craft a career for the long haul, make smarter choices when it comes to your skills.
Before you go… 👋
If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.
You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.
Sounds weird to say nowadays, I know. Yet, when I first started using platforms like LinkedIn nearly 15 years ago, it was both a different time and place.
Platforms felt more conversational and less driven by clickbait. The algorithms weren’t so optimised for mass outrage.
I used to learn tons as a mid-twenty-something trying to navigate the odd corporate world in London.
I had a ritual of saving posts to read later and calling on them as a sort of personal learning system.
I adopted the same approach in the early days on Instagram.
Specifically, I was trying to take my body from the grasshopper lightweight it was to some form of decent-sized guy who didn’t look like he could slide through the cracks of doors.
Again, I’d absorb whatever good stuff I could.
Back in 2012, fitness influencers weren’t really a thing, so I didn’t have to cut through any noise.
Today, the story couldn’t be more different.
I only use LinkedIn these days, and even that is becoming a struggle because I’m met daily with disinformation, misinformation, smart people saying dumb things and a host of selfies desperately being used in the search for attention.
What concerns me most these days is so many people’s inability to ‘read beyond the headlines’.
I’ve seen this most often over the last few years in the sharing of an absurd number of research papers and reports on AI. Look, I understand the attention game and why people indulge in the shock factor to garner attention.
The problem is that these people and posts are proliferating an epidemic of often-incorrect claims, and viewers aren’t being nearly skeptical enough.
They say that AI is killing our critical thinking and analytical judgment, yet we seem to be doing that fine ourselves by not questioning what we see.
A great quote I keep in my notes folder reminds me of the need to both doubt and ask questions: “Don’t believe everything you think or see”.
So, my question is, when did we stop looking beyond the veil? And why don’t we ask questions anymore or do our own research?
I’m not expecting an answer, I’m just throwing it out there.
A few case studies
We see case studies on this almost daily.
There are probably hundreds at this stage, yet there are two recent ones which I’m sure most of you have seen. It’s going to feel like I’m picking on MIT here, but that’s not the intention.
My issue is with how the data produced is being used by third parties, and how that shapes the global narrative.
The latest MIT study, which claims 95% of organisations are getting zero return on Gen AI projects, has been doing the rounds in the last week. Now, while this makes a great clickbait headline and social post, we need to look deeper.
If more people went to the “Research and methodology” sections of reports, they would be surprised.
→ Ignoring this makes smart people say dumb things.
I’ll let you draw your own conclusions, yet I don’t feel 52 interviews over a six-month timeline is a large enough sample to be presented, as many people posting across social have done, as “the global view”.
The devil is in the details, as they say.
Our second example, again from MIT (sorry, I do love you really), comes from back in June and gave rise to more clickbait and social discourse.
This has nothing to do with the research or the researchers themselves. They set out to see how using AI tools affected an individual’s ability to write essays. They conducted this with only 54 people and on this one task.
The main finding was that AI can help in the short term, but if you always use and rely on it, you’ll diminish your ability to write an essay without it.
Cool, sounds like common sense to me.
But that research gave rise to crazy headlines like this:
See how quickly that turned?
Did anyone reading these headlines, whether in news apps or social posts, look beyond them? From what I see, no.
It’s not necessarily new, yet algorithm-based platforms are loving the attention it creates.
Like I said, this problem is not isolated to one report, it’s everywhere across social media and as such, society at large.
Posts with clickbait headlines and mass engagement are proliferating messages that are often misleading and, in some situations, harmful.
So, what can you do?
Become a skeptical hippo
Ok, what I’m not saying here is to become some kind of conspiracy theorist.
Instead, I want you to engage that powerful operating system that sits in each of our skulls. When you see headlines like we’ve covered or clickbaity posts, try the following:
1. Go to the primary source
Locate the actual report or paper (not just a blog post or tweet about it).
Even if you don’t read every detail, scan:
Abstract (what the study did and found).
Methods (how many people, what was tested, what tools were used).
Limitations (almost always at the end).
2. Ask 3 key questions
When you see a claim, pause and ask yourself:
Who was studied? (demographics, sample size, context).
What exactly was measured? (recall, ownership, not general intelligence).
How broad are the claims? (exploratory finding vs universal truth).
If the headline claims more than the study actually measured, that’s a red flag.
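To make this concrete, run the second MIT example above through those questions: Who was studied? 54 people, on one task (essay writing). What was measured? Recall and a sense of ownership over the writing, not general intelligence. How broad are the claims? An exploratory finding, yet the headlines sold it as a universal truth about our brains.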
3. Notice the language
Headlines often use absolutes (“ChatGPT destroys learning!”).
Scientific reports usually use tentative language (“suggests,” “indicates,” “preliminary”). Spotting this mismatch helps you resist being pulled into the hype.
4. Slow down your consumption
Disinformation spreads because social media rewards speed + emotion.
Slow thinking (literally taking a minute to check the source or read the abstract) interrupts that cycle and gives you space to process critically.
TL;DR: Read beyond the headlines, ask the questions and embrace those skeptical hippo eyes.
📝 Final thoughts
While this might partly sound like a human raging against the machine, I hope my sentiments of ‘do your own research’ and ‘be more skeptical to reach your own conclusions’ come through.
We don’t need to worry about AI killing our critical thinking and analytical judgment when we’re doing that just fine all by ourselves.
Before you go… 👋
If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.
You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.
While I’m not entirely on the doomsday train of “AI will destroy all human thinking” on its own, I can’t ignore the level of stupidity that some humans exhibit when working with AI.
I shouldn’t be surprised, really, as the path of least resistance is paved with instant gratification, which is a dopamine daydream for the digitally addicted.
Still…
What happens to human thinking when so many outsource it to an artificial construct?
I’m saying this as much to myself as I am to you.
This is turning into a strange “dear diary” entry, but stick with me.
We both see the polarising views plastered across social feeds.
Logic seems to be lost in most conversations.
Most posts are either “AI will destroy your brain” or “outsource all your thinking to AI”. I don’t know about you, but I’m not cool with either of those options.
It doesn’t help when the majority blindly believe every headline that’s emotionally tweaked to grab attention. Taking time to look beneath the surface usually paints a different picture. Yes, I’m looking at you, MIT study.
Moving away from all the noise, only one question is worth asking right now:
Are you thinking with AI, or is AI shaping your thinking?
Maybe it’s doing both, and maybe you’re aware of that.
With all that said, here are a few points I think are worth exploring.
What happens to ‘human thinking’ if we over-rely on AI?
This is a real grey area for me.
I’ve seen countless examples where too much AI support leads to less flexing of human skills (most notably common sense and deep thinking), and I’ve seen examples where human skills have improved.
In my own practice, my critical thinking skills have improved with weekly AI use over the last two years. It’s what I class as an unexpected but welcome benefit.
This doesn’t happen for all, though.
It depends on the person and their intent, of course.
Research and experiences seem to affirm my thoughts that the default will be to over-rely.
I mean, why wouldn’t you?
This is why any AI skills program you’re building must focus on behaviours and mindset, not just ‘using a tool.’
You can only make smart decisions if you know when, why, and how to work with AI.
One unignorable insight I’ve uncovered from collecting research over the last few years, and psychoanalysing it together with AI, is the importance of confidence in your own capabilities in enabling you to think critically with AI.
Where are you playing?
This is the battleground of most social spaces today.
High-performing organisations and teams will be those that think critically with AI, not outsource their thinking to it.
Being a “Balanced Evaluator” is the gold standard. So, we could say that thinking about thinking is the new premium skill (more on that later).
The combination of high AI literacy (skills, understanding how AI works, limitations) with high trust (knowing the right tool for the job and a willingness to use it effectively) is not straightforward.
That’s where you come in as the local L&D squad.
To be here, you must critically engage with AI by asking when, how, and why to trust its output. This requires questioning, verifying, and a dose of skepticism that too many skip, only to sorely regret it when things backfire.
Also, don’t interpret “AI Trust” as blind faith. This is built through experimenting and learning how the best tools work.
What does meaningful learning with AI look like?
I (probably like some of you) have beef with traditional testing in educational systems.
It’s a memory game, rather than “Do you know how to think about and break down x problem to find the right answer?” We celebrate memory, not thinking (bizarre world).
My beef aside, research shows partnering intelligently with AI could change this.
The TL;DR (too long; didn’t read) of the article (an Atlantic piece we’ll get to below) is that using AI tools can enhance metacognition, aka thinking about thinking, at a deeper level.
The idea is, as Ben Kornell, managing partner of the Common Sense Growth Fund, puts it, “In a world where AI can generate content at the push of a button, the real value lies in understanding how to direct that process, how to critically evaluate the output, and how to refine one’s own thinking based on those interactions.”
In other words, AI could shift us to prize ‘thinking’ over ‘building alone.’
And that’s going to be an important thing in a land of ‘do it for me.’
The Atlantic article shared two learning-focused experiments by Google.
In the first, pharmacy students interacted with an AI-powered simulation of a distressed patient demanding answers about their medication.
The simulation is designed to help students hone communication skills for challenging patient interactions.
The key is not the simulation itself, but the metacognitive reflection that follows.
Students are encouraged to analyse their approach: what worked, what could have been done differently, and how their communication style affected the patient’s response.
The second example asks students to create a chatbot.
Coincidentally, I used the same exercise in one of my “AI for Business Bootcamps” last year.
It’s never been easier for the everyday human to create AI-powered tools with no-code platforms.
Yet, you and I both know that easy doesn’t mean simple.
I’m sure you’ve seen the mountain of dumb headlines with someone saying we don’t need marketers/sales/learning designers because we can do it all in ‘x’ tool.
Ha ha ha ha is what I say to them.
Clicking a button that says ‘create’ with one sentence doesn’t mean anything.
To demonstrate this to my students, we spent 3 hours in an “AI Assistant Hackathon.” This involved the design, build, and delivery of a working assistant.
What they didn’t know is that I wasn’t expecting them to build a product that worked.
Not well, anyway.
I spent the first 20 minutes explaining that creating a ‘good’ assistant has nothing to do with what tool you build it in and everything to do with how you design it, ya know, the A-Z user experience.
Social media will try to convince you that all it takes is 10 minutes to build a high-performing chatbot.
While that’s true from a tech perspective, the product and its performance will suck.
You need to think deeply about it
When the students completed the hackathon, one thing became clear.
It’s not as simple or easy to create a high-quality product, and you’re certainly not going to do it in minutes.
But, like I said, the activity’s goal was not to actually build an assistant, but rather, to understand how to think deeply about ‘what it takes’ to build a meaningful product.
I’m talking about:
Understanding the problem you’re solving
Why it matters to the user
Why the solution needs to be AI-powered
How the product will work (this covers the user experience and interface)
Most students didn’t complete the assistant/chatbot build, and that’s perfect.
It’s perfect because they learned, through real practice, that it takes time and a lot of deep thinking to build a meaningful product.
“It’s not about whether AI helped write an essay, but about how students directed the AI, how they explained their thought process, and how they refined their approach based on AI feedback. These metacognitive skills are becoming the new metrics of learning.”
Shantanu Sinha, Vice President and General Manager of Google for Education
AI is only as good as the human using it
Perhaps the greatest ‘mistake’ made in all this AI excitement is forgetting the key ingredient for real success.
And that’s you and me, friend.
Like any tool, it only works in the hands of a competent and informed user.
I learned this fairly young when a power drill was thrust into my hands for a DIY mission. Always read the instructions, folks (another story for another time).
Anyway, all my research and real-life experience with building AI skills have shown me one clear lesson.
You need human skills to unlock AI’s capabilities.
You won’t go far without a strong sense (and clarity) of thinking, and the analytical judgment to review outputs.
Embrace your Human Chain of Thought
Yes, I made up this phrase (sort of).
Let me give you some context…
Early iterations of Large Language Models (LLMs) from all the big AI names you know today weren’t great at thinking through problems or explaining how they got to an answer.
This was comically exposed with any maths problem you’d throw at these early-stage LLMs.
They would struggle with even the most basic requests.
It’s a little different today, as we have reasoning models. These have been trained to specifically showcase how they solve your problems and present that information in a step-by-step fashion.
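To illustrate (a made-up example): ask a reasoning model “What’s 17 × 24?” and, rather than guessing, it will walk through something like “17 × 20 = 340, 17 × 4 = 68, 340 + 68 = 408” before giving its answer.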
We now expect all the big conversational AI tools to do this, so why don’t we value the same in humans?
Those who nurture this will have greater command of their career.
So don’t ignore your Human Chain of Thought.
Focusing your energy on the ability to explain your reasoning is far more useful in a world littered with tech products that can recall info on command.
Tools to enhance, not erode your thinking with AI
A couple of useful tools and frameworks to get you firing those neurons from the most powerful tool at your disposal (fyi, it’s your brain).
1/ Good prompting is just clear thinking
Full disclosure: There’s no such thing as a perfect prompt.
They’re often messy, don’t always work the same way every time, and need continuous iteration.
Saying that, you can do a lot (and I mean a lot!) to set yourself up for success.
Here’s a (sorta) framework I use to help me think critically before, during and after working with AI.
Step 1: Assess
Can AI even help with your task? (It’s not magic, so yes, you need to ask that)
Step 2: Before the prompt
What does the LLM need to know to successfully support you?
What does ‘good’ look like?
Do you have examples?
And, most importantly, don’t prompt and ghost.
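To make that concrete, here’s a made-up example of the kind of context-setting I mean:
“You’re helping me design a 30-minute workshop on reading AI research critically. Good looks like: practical, jargon-free, built around one real headline. Here’s an outline from a previous session I liked: [paste example]. Before you draft anything, ask me what’s missing.”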
Step 3: Analyse the output
Does this sound correct?
Is it factual?
What’s missing?
Step 4: Challenge & question
I’m not talking about a police investigation here.
Just ask:
Based on my desired outcome, have we missed anything?
From what you know about me, is there anything else I should know about ‘x’? (works best with ChatGPT custom instructions and memory)
What could be a contrarian take on this?
Step 5: Flip the script
Now we turn the tables by asking ChatGPT to ask you questions:
Using the data/provided context or content (delete as needed), you will ask me clarifying questions to help shape my understanding of the material.
They should be critical and encourage me to think deeply about the topics and outcomes we’ve covered so far. Let’s start with one question at a time, and build on this.
This is a powerful way to develop your critical skills and how you collaborate with AI.
P.S. Get more non-obvious insights and guidance on AI prompting in my short course designed specifically for busy people like you.
2/ Unpack the problem
Before you start building that next ‘thing’, check out this little framework, which has helped me to do my best work over the last decade.
3/ Partner with AI, don’t use it like a one-click delivery service
If I had a dollar for every time I said this, I’d be a billionaire by next year.
Often, it’s the small and simple actions that bring the most valuable results.
That’s not to say it’s easy to do.
In this video, I share how you can use AI to improve your critical thinking as a thought partner.
Final thoughts
There’s much more to say about this, friend.
But we’ll pause here for now.
Thinking is cool, and thinking about thinking is even cooler.
Let your brain dwell on that for a bit. AI can be an extension of your thinking, but never let it shape it.
Keep being smart, curious and inquisitive as I know you are.
Before you go… 👋
If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.
You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.
A rewiring of what we do and how we do it is here.
While most teams are stuck at the point of innovations from 2 years back, you can be ahead of this. Perhaps this is the new reality of learning in the flow.
Conversational, not transactional.
I can’t help but get excited about conversational-driven experiences that not only develop new skills but also reveal more about ourselves and how we think.
Yet…I still see a lot of talk and not so much action, sprinkled with a lot of misinformation and little actual understanding of Gen AI’s power and limitations. That creates a problem if the L&D industry wishes to thrive in the new world of work with AI.
Here are 5 insights I’ve picked up from my research, analysis and partnering with lots of L&D and HR teams over the past few years on driving AI adoption.
1/ Your customers are building their own solutions
The biggest problem, in my eyes right now, is the fact that our audience (I’m not calling them learners!) is able to design their own personalised and adaptive learning experiences with LLMs.
Whether they’re good or not is another question.
I’ve been talking to clients about this for the past year. They mostly nod and reply ‘that’s interesting’, but it’s more than interesting. It’s a threat coming right down the barrel at light speed to many L&D teams.
Why do they need you, if they have the gods of LLMs?
We must ask ourselves: if our customers have access to intelligence on demand and personalised learning experiences, how do we fit into that?
You cannot fight the adaptiveness and personalisation that generative AI enables.
That would be a foolish endeavour. Instead, you have to evolve, because workforces will demand the level of experience they currently enjoy in their personal lives.
We’ve been here a few times before.
As an industry, we’ve lost many battles to Google search, all of social media and YouTube.
We all want the sleek experiences from our personal use, end of story.
So, this presents a crossroads for us.
Either we keep trying to force people into places and spaces they don’t want to be, or we design to meet them where they’re already hanging out. The choice is yours.
If we look at AI as the dominant emerging operating system, you have your answer to the above.
2/ It’s a transformation, not a training project
I find so many teams fail to see this.
AI in L&D is not your latest training project; it’s one of the biggest transformation projects we’ll ever face.
And…one where we need to consider whether we’ll exist at the end of it in our current form (so controversial, I know 😂).
If you’re reading these words and all your team/company is doing is the minimal “Let’s train everyone how to write prompts”, then check out this post where I walk through the 4 levels of transformation happening across L&D with AI right now.
TL;DR: Stop treating AI adoption like a training project
3/ You can do a lot more with AI today than you think
You know I share a lot of innovations and tech demos.
What some don’t realise is that almost everything I share is available to use right this minute.
That’s why I’m always surprised when I get weekly messages like “I had no idea this was even possible today” or “OMG! I thought this would be years away”. Remember when I said teams are stuck at innovations from 2 years ago? This is what I mean.
4/ You’re trying to sprint before you can walk with AI
Agent this, agent that is literally all I see on LinkedIn these days.
Granted, it’s a total echo chamber of people mostly shouting that back at each other, but by God, it’s giving me a headache.
The hype, mostly driven by AI companies, is becoming laughable.
Don’t get me wrong, AI agents will be very useful and there’ll be some great applications in L&D, yet you’d think a sort of world peace is about to emerge by the way the ‘influencers’ talk on social.
I work with sooo many teams and companies that hardly know how to use a basic AI assistant to even 50% of its potential. Adding agents into that mix is a recipe for both confusion and mistakes.
A lot of people need to slow down…
Pause… take a breath and find your centre (or whatever meditation teachers say).
Almost 90% of what you see paraded online is not a true agent solution. Not in the technical context, anyway.
Much like marketing teams decided to use the word “AI” everywhere post-2022, they’re doing the same thing by labelling everything an Agent.
Unfortunately, this has created a fractured understanding of what an agent is, and the definitions are always changing.
So, I decided to put something together to cut through that BS ↓
5/ Using AI for conventional “learning” tasks is not groundbreaking
Perhaps this is a controversial one.
I don’t believe using AI tools to do more of the conventional content and course experiences in L&D is an impressive ‘use case’. Whenever someone says to me, “I used ChatGPT to create my next course in half the time,” I chuckle in my head: ‘That’s cute’.
You can use it to produce more courses and content, but where does that get you?
The same place the 99% who could become obsolete already occupy.
I keep using the word “rewire” so much in my work because that’s what we need – a complete rewiring of what we do and how we do it. Be brave enough to say, “Does this need to be a course?”
With intelligence on-demand in everyone’s hands, our default operating system is becoming AI, and it’s not using courses to help people learn.
I know I keep saying it, but you don’t want to use AI to accelerate outdated ideas and practices. Instead, we should focus on rewiring what we do.
This quest of rewiring what L&D can do with AI has led me to think of a world beyond the course or event as the default delivery for learning moments.
Now you see what I see, and my hope is that you’ll use this to improve your work with AI, especially if you’ve been given the ill-fated L&D mission of “making people use AI”.
→ If you’ve found this helpful, please consider sharing it wherever you hang out online, tag me in and share your thoughts.
Before you go… 👋
If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.
You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.