While I’m keen to see how that develops, I know a lot of us are in workplace L&D teams.
In my opinion, it’s less about learning at work and more about performance.
I’m sure there’ll be a few eye rolls at that sentence, but most organisations are looking to L&D to build capability and performance, not help people learn.
So, how can new tech support this?
Josh Bersin coined the phrase “learning in the flow of work” back in 2018, yet we never really had the tech to realise that vision. In 2025, it’s a different story.
Today, we’re exploring how AI is reshaping performance support at work with multimodal tools that can see and speak.
Since generative AI tools crashed into our lives, everyone from tech bros to social media gurus has declared that it’s going to reshape education and learning.
A few weeks back, in a previous edition on what happens when AI is trained on learning science, we looked at AI models built specifically for education and how they're changing the support students receive.
But what about the workplace?
Performance is bigger than learning at work
Workplace learning is a multi-billion-dollar industry, and everyone wants a slice of that pie.
Unlike traditional education, a lot of workplace learning is focused on performance. It’s getting people the right support exactly when they need it.
I’m not talking about learning deep concepts or broad skills for life. That’s a different game entirely. Performance support is about immediate problem-solving.
It’s resources, not courses.
The reality is that we face a stream of small to mid-level performance blockers every day.
Stuff like:
How do I use VLOOKUP in Excel?
How can I convert this PowerPoint into Google Slides?
Yes, basic things, I’m aware.
These micro-challenges fill our workdays, but the answers are often buried deep in Google searches or trapped in some long-forgotten company SharePoint.
Now, this might not be what you class as 'learning', but these blockers stop people from performing. And if we agree that workplace learning is really about enhancing performance, then they're big problems.
Multimodal AI Tools are reshaping our experience
For those unfamiliar with the term, "multimodal" just means an AI can work with more than one type of input and output – text, images, audio, and so on.
So you can have:
A text input and output
A text input and visual output
An audio input and text output
We’re no longer limited to just asking AI how to do X.
It can now see, hear, and respond in real time.
Voice and vision capabilities mean we can talk to AI tools, show them what we’re working on, and get on-demand help. Instead of scrolling through pages of search results, we can (actually) solve problems in the flow of work.
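If you'd rather script it than click through a UI, here's a rough sketch of what a multimodal request can look like with Google's generative AI Python SDK. The model name, file name and question below are placeholders, and you'd need your own (free) API key from Google AI Studio:

```python
# A minimal sketch of multimodal performance support: text + an image of your work.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder; free keys are available in AI Studio

model = genai.GenerativeModel("gemini-2.0-flash")  # placeholder; any multimodal Gemini model
screenshot = Image.open("spreadsheet_screenshot.png")  # hypothetical screenshot of your work

# Text input + image input, text output: ask about what's on your screen
response = model.generate_content([
    "I'm stuck on this spreadsheet. Which formula would look up a price by product code?",
    screenshot,
])
print(response.text)
```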
How you can try this at zero cost
I'm not going to assume everyone has a paid licence for top-of-the-range AI tools.
Instead, I’ll show you how this type of performance support can work with a zero-cost tool courtesy of those folks at Google.
In Google AI Studio, you can test multimodal features for free. You can talk to the AI, show it your screen, and work together to solve problems in real-time.
Select your AI model (Gemini 2.0), choose your output format (audio, text, etc.), and start collaborating.
And…of course, here’s a step-by-step video on how to do that ↓
Note: AI isn’t perfect. You, the human, still need to apply critical thinking and validate results.
AI as a Support Tool, Not a Replacement
This is where on-demand performance support is heading.
We’re not talking about replacing human expertise, but rather evolving traditional job aids, FAQs, and knowledge bases into dynamic, AI-powered conversational support systems.
We can:
Help employees understand new concepts.
Troubleshoot technical issues in real time.
Set up hardware and software.
Not everything needs a full-blown training course.
Sometimes, we just need an answer now.
Plus, solutions like this can only help us focus more on the human stuff that matters. I mean, do you really want to keep buying and running Excel courses for teams in 2025? I’ll leave you to ponder that.
Final Thoughts
This is still in its infancy.
ChatGPT, Google Gemini, and other AI tools are already capable of vision and real-time interaction. More will follow.
The key question is: how will organisations (and you) use this?
Perhaps it's time, once again, to rethink how we provide support with today's technology. It's not a case of either/or. We have an opportunity to shape how this plays out.
This is just the beginning.
→ If you've found this helpful, please consider sharing it wherever you hang out online, tag me in, and share your thoughts.
Before you go… 👋
If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.
You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.
Two terms you might hear techies mention with AI products are ‘AI assistants’ and ‘AI agents’.
Here’s the difference in clear, simple terms.
Let’s start with what we know – AI assistants like ChatGPT.
These are tools that help us with tasks through conversation. They can write, analyse, explain, and give suggestions based on what we ask.
AI agents take this a step further.
Instead of just helping through conversation, agents can actually complete tasks on their own. They follow instructions, use different tools, and make basic decisions to get things done.
The key difference is simple:
AI assistants help you with tasks
AI agents complete tasks for you
Both are valuable, but they serve different purposes. An assistant works with you through conversation, while an agent works independently based on your instructions.
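To make the distinction concrete, here's a toy sketch in Python. Nothing here calls a real AI model – the stubbed functions are purely illustrative of the two patterns: one answers, the other acts.

```python
# Toy illustration only — no real AI calls; the "model" is a stub.

def llm(prompt: str) -> str:
    """Stand-in for a language model call."""
    return f"(model response to: {prompt})"

# Assistant: helps you through conversation — you ask, it answers, you do the work.
def assistant(question: str) -> str:
    return llm(question)

# Agent: acts for you — it turns your instruction into steps and runs them with tools.
def agent(instruction: str) -> list[str]:
    tools = {
        "draft_summary": lambda task: f"drafted summary for '{task}'",
        "send_email":    lambda task: f"emailed summary for '{task}'",
    }
    steps = ["draft_summary", "send_email"]  # a real agent would plan these itself
    return [tools[step](instruction) for step in steps]

print(assistant("How should I summarise these meeting notes?"))              # advice back to you
print(agent("summarise this week's meeting notes and email the team"))       # the work gets done
```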
Use this info to impress the boss at your next meeting.
I’m not going to leave you with just this, though.
As I’m a tech nerd, I’ve filmed a quick video (see below) to show how agents work with examples from Google and Salesforce – enjoy.
What can AI agents do?
Imagine you have a personal assistant who doesn’t just follow your instructions, but takes the initiative to resolve problems independently.
AI agents are like that, except they exist in the digital world.
At their core, AI agents are smart programs designed to observe their environment, make decisions, and take actions using the tools available to them.
Unlike traditional software (or a standard LLM chat) that waits for you to give it a command, AI agents can think ahead, figure out what needs to be done, and act – sometimes without needing constant human input.
Think of them as a self-driving car.
Instead of waiting for a person to steer, brake, or accelerate, the car analyses traffic, makes decisions, and moves safely toward its destination.
AI agents work similarly but in a digital space, whether it’s automating workflows, analysing data, or even assisting with creative tasks.
The magic of AI agents lies in their autonomy and problem-solving abilities. Even if you don't give them step-by-step instructions, they can work out the best way forward to achieve a set goal.
They do this by drawing on set rules and past interactions to decide the best way to complete a task.
This makes them incredibly useful for businesses, customer support, research, and even personal productivity.
Take one scenario: an agent helps a user plan, find, book, and check in for a flight.
The agent has access to all the necessary tools and reasoning power to complete this on behalf of the human. You can see me build something similar for HR onboarding in this demo.
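Under the hood, that flow is essentially a loop: the agent looks at the current state, picks the next tool, acts, and repeats until the goal is met. Here's a deliberately simplified sketch with stubbed tools – the flight details are made up, and a real agent would use a model to decide each step:

```python
# Toy sketch of the plan -> find -> book -> check in flow. Everything is stubbed.

def plan_trip(state):    return {**state, "plan": "LHR -> JFK, 12 March"}
def find_flights(state): return {**state, "option": "Flight BA117, £420"}
def book_flight(state):  return {**state, "booking": "confirmed"}
def check_in(state):     return {**state, "boarding_pass": "issued"}

TOOLS = [plan_trip, find_flights, book_flight, check_in]

def flight_agent(goal: str) -> dict:
    state = {"goal": goal}
    for tool in TOOLS:            # observe the current state, act with the next tool
        state = tool(state)
        print(f"{tool.__name__}: done")
    return state                  # the human only sees the finished result

print(flight_agent("Get me to New York for the conference on 12 March"))
```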
While AI agents are still evolving, they're already transforming how we interact with technology. For now, just think of them as the digital teammates working behind the scenes to get things done!
Examples of AI agents in action
AI agents are becoming part of our daily lives; whether you're aware of it or not is another question.
They perform tasks that range from the mundane to the complex.
Two notable examples, both easily accessible to everyone, are OpenAI's "Operator" and Anthropic's "Claude" with its "computer use" feature.
OpenAI’s Operator
Operator is an AI agent developed by OpenAI that can autonomously navigate the web to perform tasks on your behalf.
I get that sounds both odd and spooky.
It interacts with websites much like you and I would by clicking, typing, and scrolling to accomplish various objectives.
Operator can fill out forms, book travel arrangements, or even create memes by remotely interacting with a web browser (a big use case for me). This allows it to handle tasks such as purchasing groceries or filing expense reports, streamlining processes that typically require manual input.
Just think: never having to go searching for bananas on your local grocery app again. What a time to be alive.
Computer Use with Claude
Anthropic’s AI model, Claude, has introduced a feature known as “computer use”.
Bit of a boring name, but you gotta start somewhere.
As you’ve (probably) guessed by the name, this enables Claude to operate a computer just like we would.
Again, it offers much of the same functionality as Operator: filling out forms, ordering food, or managing emails autonomously. It already has raving fans, with the likes of Asana, Canva, and DoorDash exploring ways to integrate this feature into their workflows.
Maybe the end of the trusty mouse and keyboard is closer than we think.
In Sum
Your AI Agent Cheatsheet 📝
Assistants help you with tasks; agents do tasks for you
Agents work independently based on your instructions
The magic of AI agents lies in their autonomy and problem-solving abilities
They’ll be incredibly useful for businesses, customer support, research, and even personal productivity.
Agents represent the next level of meaningful use for generative AI technology.
They serve a specific purpose in the ecosystem of AI-powered tools at our disposal. As always, if you’ve found this helpful, please consider sharing it wherever you hang out online.
If we’re being honest, it’s been hard not to for the last few years. One day, we were fighting off a super virus, and now we’re gushing over generative AI tools.
Crazy how fast things change.
The past 12 months have given me plenty of time to work with various teams and companies on AI skills programmes. It’s taught me a very important lesson: despite the current pace of AI tool adoption, there is a lack of investment in the mindsets, behaviours, and meaningful skills needed to leverage them effectively.
It's generic to say that AI, particularly generative AI (the two are not the same, FYI), has opened up a transformational shift in how we work, learn, and interact with the world.
Yes, I’m playing Captain Obvious, but stay with me…
With any major technological shift, achieving a successful ROI doesn’t happen overnight.
The journey from what I class as a curious “hobbyist” to a confident “adopter” is a gradual one, and I cannot overstate how much patience you need to develop here.
Social media doom-scrolling makes it easy to feel pressured to learn everything about AI instantly.
Everyone and their dog is an AI expert today, and apparently, they can make you master AI in 7 days. Be wary of these people. They will stunt your chances of long-term success.
Building a deep understanding of such a transformative technology requires time and effort.
And to be quite frank, no one has mastered it yet. They probably never will, as it’s always evolving.
You already know my views on this.
Meaningful AI adoption is about more than just knowing how the tools work. It’s about cultivating a mindset and building behaviours that allow us to integrate AI meaningfully and responsibly into what we do.
The 3 Stages of AI Literacy: Hobbyists, Experimenters, and Adopters
There are so many bloody maturity models out there right now.
While mine is not as fancy as a consulting firm's, I believe it's simple to use.
My work these last few years has shown most people are navigating through three broad stages of AI skills maturity: hobbyists, experimenters, and adopters.
Let’s unpack these ↓
Hobbyists
Hobbyists are those who dabble in AI, experimenting with tools like ChatGPT in their personal time but haven’t yet applied it systematically in their work.
They’re curious, but they haven’t reached a level of skill where AI significantly impacts their productivity. Mostly they create cat pictures and get AI to write crap social media posts stuffed full of emojis.
Experimenters
Experimenters have begun incorporating AI into their daily tasks, testing out its capabilities, and exploring use cases in real-world contexts. They’re still in the learning phase, figuring out what works, what doesn’t, and how AI fits into their broader workflow.
I like this level the most. To experiment, fail and learn is a beautiful thing. The majority of people who play here will do very well.
Adopters
Adopters have fully embraced AI, using it effectively and strategically in their context to enhance work.
They’ve developed a level of comfort and expertise that allows them to apply AI in ways that generate meaningful, long-term value. A caution here: I’ve found some who’ve gone too far down the rabbit hole have become blinded to AI’s limits. Try to avoid that.
Be balanced, in all things.
Aim for ‘Good enough’
Moving from one stage to the next is a slow process. Often frustratingly slow in a world where we expect immediate results.
That’s totally fine. It’s a necessary progression.
Without taking the time to fully understand the nuances of AI and how it can be harnessed, you risk missing out on the true potential of the technology.
A thread that weaves through each of these stages is experimentation and exploration. You will bounce between each stage as new advancements emerge. Right now, that’s like every other week.
It is entirely possible to be an adopter at the start of the month and find yourself back at hobbyist by the end of it if you don't keep up a practice of experimentation and exploration.
Always get clear on the ‘what, why and how’.
Classic advice for a reason.
Be intentional with AI skill building
This will sound counterintuitive, and yes, CEO of X company, I know you want the 'AI Effect' today.
But with AI literacy, being more intentional can reap rewards for years – perhaps even decades.
I’ve seen this in some of my work with clients.
Senior executives have crazy expectations for workers to become ‘AI Experts’. They don’t even know what that means – I don’t even know what that means!
If we’re talking about tools like ChatGPT, becoming an expert on that with its almost daily updates is like chasing after your 5-year-old when they see an ice cream truck fly by.
Solid fundamentals will help, no doubt.
But fundamentals don’t = fully capable expert.
AI is not static.
Learning the fundamentals and taking time to put them into practice is key. Yes, I know that's hard in a world where you're rarely given more than a week to show 'ROI'.
By encouraging a more deliberate approach, you can craft the mindsets, new behaviours, and the technical and human skills needed to navigate AI transformations at large.
I know I’m preaching to the choir here.
(Note: Being more deliberate with crafting AI skills does not mean building bloated three-month-plus learning experiences. No one wants or needs this!)
In sum: You need a bit of patience, time and structure but lots of experimentation. Again, counter-intuitive, I’m aware, but with a technology so transformational, we have to find ways for these elements to co-exist.
80% of AI projects fail because of this
Another report I'm reading (in what is, I must say, an era of ungodly amounts of reporting on a single topic) focuses on the root causes of failure for AI projects.
If I’m being fair, the findings of these failures apply to L&D projects too.
Anyway, one of the biggest factors in failure was projects not being given the time they need to succeed. You see, executives are drinking the Kool-Aid.
They think that what needs at least a year to succeed can be done in a week.
The writing is on the wall for most projects before they start.
You have no doubt suffered this exact problem with countless L&D projects.
Think of all the projects that have died because:
Expectations were unchecked
No clear problem to solve was defined
The resources needed to succeed weren't provided
You were given one week when you needed a year
One word to define this – misalignment.
AI literacy is about building a long-term capability, not a short-term fix.
This creates a workforce that is not just technically competent, but equipped with the critical thinking, creativity, and adaptability needed to succeed in an AI-driven future.
Final thoughts
As a good BCG article once told me, “Treat Gen AI upskilling as a marathon, not a sprint”.
Yes, you need to move fast to help people unlock the potential of new technology. But, you also have to be smart. People won’t just get it after some 30-minute online course.
They will need more hand-holding than you think, and you need to inject a dose of realism into the ‘time to become proficient’ with your AI tools of choice. Marathons are a mixture of both fast and slower-paced elements.
Again, think constant experimentation and exploration. This is not a static game.
The investment in Gen AI fundamentals at most companies is criminally low.
Don't fall into the trap of tools before educating on the basics. I've seen this backfire too many times.
As the wise Uncle Ben said, "With great power comes great responsibility" – and too many are forgetting the final part of that famous quote.
As I said in a recent newsletter:
With all-time high levels of use across millions of Gen AI tools and all-time low levels of AI literacy, we could be heading for a skills car crash of our own design.
Too many forget that AI is only as good as the human using it.
It’s, perhaps, the greatest ‘mistake’ made in all this AI excitement.
Here are five things I suggest you do:
Teach AI fundamentals: what AI and Gen AI are (and what they are not), how LLMs work, etc.
Behaviours + mindset: How to think critically and validate outputs. Understand AI hallucinations. Know when and when not to rely on AI tools
Practical use cases: Not cat pics, real work impact. You could combine this with ‘tools’ for experimentation.
Picking the right tools: Not every AI tool is created equal, so know the opportunities and limitations of yours
Upgrade human skills: You won’t go far without a strong sense (and clarity) of thinking and analytical judgment.
The key to all of this is time, patience and intention to build the right skills.
Sometimes that will be fast, others it will be slow.
[Bonus: Think about introducing some really simple and easy to follow guidelines for AI use at work. Don’t overcomplicate it with jargon! – think best practices, or as much of a best practice as you can give on this rollercoaster]
In sum: Don’t make the mistake of rushing the process of crafting meaningful AI skills and behaviours.
Overpriced tight-fitting clothes to impress the opposite sex = your 20s
Not being judged for eating an entire chocolate log = Christmas
Using generative AI tools = ?
While I hope you agree with the rest, the last one is debatable.
Depending on your relationship with AI, your view on ‘when’ to use its delightful powers can be vastly skewed.
The 'AI cultists', as I like to call them, will proclaim we should use AI for everything, while the 'doomsayers' will warn you not to touch it lest you lose your humanity.
Of course, the truth of the matter is not so clear-cut.
There’s an interconnected web of assessments and decisions to be made. The good thing is this is all human-powered. The world has been so focused on ‘how’ to use new tools, that we’ve paid little attention to why and when.
Let’s change that.
📌 Key insights
AI is a tool, not a saviour
Boring and basic tasks are where AI shines best
Balance your understanding and application for maximum benefits
I appreciate LinkedIn CEO Ryan Roslansky’s concept of assessing ‘tasks, not jobs’ in the context of generative AI at work.
This idea originates from Ryan’s Redefining Work article, where he explores how AI will accelerate workforce learning and amplify the importance of skills.
Ryan suggests moving away from viewing jobs as titles, and instead, seeing them as a collection of tasks. These tasks will inevitably evolve alongside AI and other technological advancements. He recommends breaking your job down into its primary daily tasks.
You can bucket those tasks in this format:
Tasks AI can fully take on for you, like summarising meeting notes or email chains.
Tasks where AI can help improve your work and efficiency, like helping to write code or content.
Tasks that require your unique skills – your people skills – like creativity and collaboration.
This sets the stage for how I currently recommend working with AI.
Where AI helps best
You might see glamorous examples of generative AI tools on social media.
In reality, the majority of benefits come from tackling boring and basic tasks. I’m talking about writing better emails, summarising reports, and brainstorming ideas.
It’s smart to delegate simple, mundane, yet time-consuming tasks to AI.
This creates space for more human-centred work.
I don’t understand why some people seem determined to have AI handle the human elements. What a boring life that would be! I want AI to handle the laundry via a workflow so I can focus on building cool stuff – not the other way around.
Source: Asana AI at Work Report
Source: Gallup
A bunch of smart folks have done lots of research on this.
The above visuals come from Gallup and Asana, but I want to talk a little bit about a joint research project from Boston Consulting Group and Harvard.
These two powerhouses wanted to cut through the hype to see if AI tools like ChatGPT can improve productivity and performance. They worked with 758 BCG consultants (about 7% of their individual contributor-level staff) and split them into three groups:
One without AI access
One with GPT-4
Another with GPT-4 plus some training on prompt engineering
These consultants tackled 18 real-world consulting tasks to see how AI would affect their work.
The results? Pretty impressive, I’ve got to say.
The consultants using AI managed to complete 12.2% more tasks and knocked them out 25.1% faster. But here’s what really caught my attention – the quality of their work shot up by more than 40%!
It’s one thing to do something at speed, but another to do it at such high quality too.
That’s the trap I see happening in every industry right now. Too many prioritise speed over quality. You can have both if you craft the right skills to collaborate with AI.
There was a catch though (when is there not!).
When consultants tried to use AI for tasks it wasn’t built for, their performance dropped by 19%.
I don’t see this as a negative. It’s very helpful to know where the limitations are. You cannot have a balanced approach without this. Another particularly interesting outcome was how the consultants ended up using AI.
Some folks took a hybrid approach, blending AI with their expertise, while others went all-in and relied heavily on AI.
Both styles seemed to work, but context was key.
Those classed as novice employees saw the biggest performance gains, while the boost was smaller for experienced workers. Even so, the latter group still saw a modest lift of around 15% on most tasks.
TBH, I’d take that on most days.
You can’t buy time
Time is a fickle thing.
It’s our most precious and non-renewable resource.
If you’ve been to any of my keynotes in the past year, you will have heard me touch upon this. Perhaps it’s the broadened awareness of my mortality.
It’s probably got something to do with being very close to 40 years old, which my 23-year-old self didn’t expect to happen.
My impending mid-life crisis aside, time is something you should care about deeply.
You can always make more money, but you can’t buy more time.
The biggest promise and opportunity with AI tools is being able to reclaim that precious resource.
I’m not fussed about making 6-figures or building teams with AI only. I’m much more invested in getting time back to spend with those close to me and doing more of the human stuff I love at work.
We’re starting to see what people are doing with some of these time gains.
In Oliver Wyman’s AI for business research, they estimate Gen AI could save 300 billion work hours globally each year. I think that would be a wonderful outcome (as long as it doesn’t involve me doing more washing!).
Source: Boston Consulting Group (BCG)
Where AI Is Not Your Friend
I know this might break some hearts, but…AI is not your saviour.
Life is a mix of opportunities and pitfalls.
Research from BCG and Harvard offers an important lesson: generative AI works exceptionally well when used for tasks it can handle. However, beyond that, it’s the wild west.
As always, context is key in decision-making, and tools are constantly improving. This is where I like to appeal to everyone’s common sense. Yet, as I’m often reminded, common sense, it seems, isn’t so common these days.
It’s impossible for me to cover every task across every industry you might encounter.
Instead, here's a general framework to help you determine when to use generative AI. The summary is simple: AI works well on tasks with pre-defined guidelines and less severe consequences if there's a f**k up. It should not be relied upon in what I class as 'mission critical' matters, aka the human stuff.
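If it helps to see that rule of thumb written down, here's a rough sketch in code. The two inputs are subjective judgement calls, not measurements, and the labels are mine rather than a formal framework:

```python
# Rough rule of thumb, not a formal model: lean on AI where guidelines are clear and
# the cost of a mistake is low; keep humans firmly on anything mission critical.

def where_ai_fits(has_clear_guidelines: bool, consequence_of_error: str) -> str:
    if consequence_of_error == "severe":
        return "Mission critical: human-led, minimal or no AI"
    if has_clear_guidelines:
        return "Good fit: delegate or draft with AI, then review"
    return "Proceed with caution: AI can assist, but you own the thinking"

print(where_ai_fits(True, "low"))       # e.g. summarising meeting notes
print(where_ai_fits(False, "severe"))   # e.g. a sensitive people decision
```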
Over-reliance on AI is already a significant threat to education, work, and life.
In schools, new research has shown generative AI harms students’ learning because they over-rely on these tools, quickly losing key human skills. More alarmingly, we’ve seen the rise of AI companions as therapists and friends among 18–24-year-olds (especially men), replacing vital human connections.
This is why I always emphasise helping people develop the mindset and behaviours to use AI intelligently. Note: I define 'using AI intelligently' as understanding the why, what, how, and when of AI applications versus tasks.
Adoption can easily slip into addiction.
Choose wisely, human
How to identify tasks AI can help with
This is the thing we all need help with.
Where can and can’t AI help me?
There’s no clear-cut answer to this. I’d love to give you some fancy 2×2 framework but I don’t believe that will serve you well. Each scenario is context-specific, and generative AI tech is evolving so fast.
I tend to think about my tasks in a macro and micro view.
Each of your tasks can easily be broken down into sub-tasks (the micro view). We've talked before about continuing to invest in your thinking in this era of AI, and this is exactly that: it requires deep thought and reverse engineering your ideal outcomes.
As an example, I use a simple little table for this. It's not fancy, but it does the job.
We have two macro tasks:
Presenting insights and actions on the L&D function's performance to senior leaders
Launching a new internal course
For our first task, my outcome is to deliver a presentation to senior leadership on L&D performance.
So, I break down (in my mind) the micro tasks needed to reach that outcome, then assign each of them to a column. Note: the first column can be automation without AI.
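To make that concrete, here's a rough sketch of the first macro task broken into hypothetical micro tasks, each assigned to one of the three columns. The task names are illustrative, not prescriptive:

```python
# Hypothetical breakdown of the first macro task; labels mirror the columns described above:
# "automate" (no AI needed), "ai_assist" (low/mid-level AI help), "human" (mission critical).
ld_performance_presentation = {
    "pull engagement data from the LMS":        "automate",
    "summarise survey comments into themes":    "ai_assist",
    "draft first-pass slide copy":              "ai_assist",
    "decide which insights matter to leaders":  "human",
    "present and handle questions in the room": "human",
}

for micro_task, column in ld_performance_presentation.items():
    print(f"{column:>9} | {micro_task}")
```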
I don’t use this for every task, only those that I believe, with my current experience of Gen AI, could be an opportunity to work smarter.
What’s key is the AI components are always low to mid-level, and the mission-critical parts are always done by me (the human).
Final thoughts
Knowing how to use AI tools is useful.
But understanding why and when to call upon their power is an advantage.
As we've covered, there is no one right way to assess this. The simplest part (imo) is to get clear on what the uniquely human tasks in your work are. Mark these as 'mission critical', where you use zero or very minimal AI assistance.
Your low- and mid-level tasks should become clearer from there.
I say this sooo often, but it’s a damn good quote and continues to be relevant in this space:
With great power comes great responsibility
– Uncle Ben (Spider-Man's uncle)
Think wisely about when to wield that power.
The world has tons of reports/research on performance with Gen AI tools.
It has even more industry use cases by the day.
We know that, when used intelligently, these tools can enhance performance (see this, this, and this). Plus, it’s clear that we see the biggest short-term ROI in the boring and basic tasks.
I’ve found in L&D (and, to be fair, many industries) that we get distracted. We focus so often on the ‘shiny thing’ that we continually miss the point.
If AI ‘does it for you’, what happens to your skills?
Although I like the power, potential, and continued promise of AI tools, I’m troubled by the unexpected consequences of the manic pursuit of ‘AI at all costs.’
Especially, AI’s impact on skills.
I sense that we already over-rely on certain tools, and in doing so, we both create illusions of capabilities and fail to invest in moments of intentional learning.
Granted, a lot of this comes down to the intent and ability of human users.
But with all-time high levels of use across millions of Gen AI tools, and all-time low levels of AI literacy, we could be heading for a skills car crash of our own design.
Let’s unpack that.
📌 Key Insights:
Smart Gen AI use can expand skill capabilities for a limited time
We aren't improving skills in most cases; mastery requires more than AI alone
The majority of people will over-rely on AI tools and become ‘de-skilled’
AI tools can help us improve critical thinking processes
We must be more intentional in how we approach skill-building in an age of ‘do this task for me’
The Controversial Idea: Skills will be destroyed if we let AI do everything
So many people are scared that AI will take their job.
They think they’ll lose because AI tools can do the tasks better.
But what if you lose, not because AI does your job better, but because you over-relied on the temporary power it grants? You’d be the master of your own demise.
It’s easy to think, “That will never happen to me.”
Maybe it won’t.
But I’d ask you to consider your use of AI tools today. My assumption is that most people use them in a ‘do this thing for me’ approach, rather than a “show me how to do this.”
Here exists a problem we aren’t paying enough attention to.
An AI-first approach will damage the capabilities and potential of skills (if you allow it).
It's somewhat of an observation for now, but it's a dark reality I'd like to avoid.
My thinking behind this comes both from real-world experience with consulting clients on Gen AI skills programs, and what I’ve seen in more advanced research this year.
An excellent piece of research from Boston Consulting Group has been one of my favourites on this topic. Through an experiment involving 480 of their consultants, it shows that Gen AI can increase productivity and expand capabilities.
That’s the headline, of course.
AI’s impact on skills: What we know today
The problem with most research and reports is that most people don't read beyond the headline.
Hence why we have so many cult-like statements about Gen AI’s endless power. It is powerful, in the right hands. But any power comes at a cost.
For those willing to go deeper, we find both a bundle of exciting opportunities and critical challenges.
Here’s some we haven’t discussed ↓
1/ Gen AI grants short-term superpowers
No surprise here, I think.
Gen AI tools grant easy access to skills we don't possess. They can amplify the level of our current skills too. The BCG team calls this the 'exoskeleton' effect.
Explained in their own words:
“We should consider generative AI as an exoskeleton: a tool that empowers workers to perform better and do more than either the human or Gen AI can on their own.”
Being a nerd, I compare this to something like Iron Man.
For those not familiar with the never-ending Marvel films, Tony Stark is a character who has no superpowers (but is a highly intelligent human). To play in the realm of superheroes, he creates his own suit of armour that gives him access to incredible capabilities he doesn’t have as a human.
The caveat is that he needs the suit to do those things.
Essentially, using an AI tool is like being given a superpower you can use only for 20 minutes. It exponentially increases your abilities, but without it, you go back to your normal state. And everyone has the same access to this power.
BCG found the same in this research.
We could call this a somewhat false confidence dilemma.
This presents a few challenges to navigate:
How do we combat the illusion of expertise?
What happens when you don’t have access to AI?
How do we stop addiction to the ‘easy option’?
Spoiler: I don’t have all the answers.
However, this temporary boost in abilities often leads to another problem – the illusion of expertise.
2/ We have to fight the illusion of expertise
This is a big challenge for us.
Getting people to look beyond AI’s illusion of expertise.
You know what I’m talking about.
Now that everyone has access to creation tools, they all think they're learning designers who can create their own amazing products. We both know how that's going to turn out.
As an example from my own work: I can build a decent website with AI, which does the heavy coding for me. But I can't do it without AI, not unless I actually learn how.
Yes, I built x, but I’m not a software engineer.
There’s a big difference, and sadly, I see people falling into this trap already.
Now, with all this new tech, I don’t need to know the ‘why’ or ‘how’ behind something being built by AI.
But what does this mean for my skills?
A big part of skill acquisition focuses on the ‘why’ and ‘what’, in my opinion. I don’t need to know every little detail, but it helps to have a basic understanding.
I see a few unintended consequences if we don’t clearly define what is a ‘short-term expansion enabled by tech’ and what is ‘true skill acquisition’:
We over-rely on AI tools, and this over-reliance erodes critical thinking skills, a key element in real-world problem-solving
We lose context, a sense of understanding
Our human reasoning skills will erode in the face of “But AI can tell me”
Each of us will fall into different groups based on our motivations.
I’m not saying we’re all collectively going to become skill-less zombies addicted to a digital crack of Gen AI tools, but it will be a reality for some.
What happens to human skills if we over-rely on AI?
This is a real grey area for me.
I’ve seen countless examples where too much AI support leads to less flexing of human skills (most notably common sense), and I’ve seen examples where human skills have improved.
In my own practice, my critical thinking skills have improved with weekly AI use these last two years. It’s what I class as an unexpected but welcome benefit.
This doesn’t happen for all, though.
It depends on the person, of course.
BCG’s findings seem to affirm my thoughts that the default will be to over-rely. I mean, why wouldn’t you? This is why any AI skills program you’re building must focus on behaviours and mindset, not just ‘using a tool.’
You can only make smart decisions if you know when, why, and how to work with AI.
But we can learn with AI too
I (probably like some of you) have beef with traditional testing in educational systems.
It’s a memory game, rather than “Do you know how to think about and break down x problem to find the right answer?” Annoying! We celebrate memory, not thinking (bizarre world).
My beef aside, research shows partnering intelligently with AI could change this.
The TL;DR (too long; didn’t read) of the article is that using AI tools can enhance metacognition, aka thinking about thinking, at a deeper level.
The idea is, as Ben Kornell, managing partner of the Common Sense Growth Fund, puts it, “In a world where AI can generate content at the push of a button, the real value lies in understanding how to direct that process, how to critically evaluate the output, and how to refine one’s own thinking based on those interactions.”
In other words, AI could shift us to prize ‘thinking’ over ‘building alone.’
And that’s going to be an important thing in a land of ‘do it for me.’
To truly do so, you must know
Google’s experiments included two learning-focused examples.
In the first example, pharmacy students interacted with an AI-powered simulation of a distressed patient demanding answers about their medication.
The simulation is designed to help students hone communication skills for challenging patient interactions.
The key is not the simulation itself, but the metacognitive reflection that follows.
Students are encouraged to analyse their approach: what worked, what could have been done differently, and how their communication style affected the patient’s response.
The second example asks students to create their own chatbot.
Strangely, I used the same exercise in my recent “AI For Business Bootcamp” with 12 students.
Obviously, great minds think alike 😉.
It’s never been easier for the everyday human to create AI-powered tools with no-code platforms.
Yet you and I both know that easy doesn't mean simple. I'm sure you've seen the mountain of dumb headlines with someone saying we don't need marketers/sales/learning designers because we can do it all in 'x' tool.
Ha ha ha ha is what I say to them.
Clicking a button that says ‘create’ with one sentence doesn’t mean anything.
To demonstrate this to my students, we spent 3 hours in an “AI Assistant Hackathon.” This involved the design, build, and delivery of a working assistant.
What they didn’t know is I wasn’t expecting them to build a product that worked.
Not well, anyway.
I spent the first 20 minutes explaining that creating a ‘good’ assistant has nothing to do with what tool you build it in and everything to do with how you design it.
Social media will try to convince you that all it takes is 10 minutes to build a chatbot.
While that’s true from a tech perspective, the product, and its performance, will suck.
Just because you can doesn't mean you will (not without effort!)
When the students completed the hackathon, one thing became clear.
It's not simple or easy to create a high-quality product, and you're certainly not going to do it in minutes.
But, like I said, the activity’s goal was not to actually build an assistant, but rather, to understand how to think deeply about ‘what it takes’ to build a meaningful product.
I’m talking about:
Understanding the problem you’re solving
Why it matters to the user
Why the solution needs to be AI-powered
How the product will work (this covers the user experience and interface)
Most students didn’t complete the assistant/chatbot build, and that’s perfect.
It’s perfect because they learned, through real practice, that it takes time and a lot of deep thinking to build a meaningful product.
“It’s not about whether AI helped write an essay, but about how students directed the AI, how they explained their thought process, and how they refined their approach based on AI feedback. These metacognitive skills are becoming the new metrics of learning.”
Shantanu Sinha, Vice President and General Manager of Google for Education
AI is only as good as the human using it
The section title says it all.
Perhaps the greatest ‘mistake’ made in all this AI excitement is forgetting the key ingredient for real success.
And that’s you and me, friend.
Like any tool, it only works in the hands of a competent and informed user.
I learned this fairly young when a power drill was thrust into my hands for a DIY mission. Always read the instructions, folks (another story for another time).
Anyway, all my research and real-life experience with building AI skills has shown me one clear lesson.
You need human skills to unlock AI’s capabilities.
You won’t go far without a strong sense (and clarity) of thinking, and the analytical judgment to review outputs.
Going back to the BCG report, a few things to note that support this:
1/ Companies are confusing AI 'augmenting' with 'skill building'
As we touched on earlier, AI gives you temporary superpowers.
Together (you and AI) you can do wonderful things. Divided, not so much (unless you have the prerequisite knowledge to do the task).
We can already see both companies and workers confusing their abilities to (actually) perform a task.
AI gives both a false sense of skills, and terror at the lack of them.
2/ Most people can’t evaluate AI outputs
Again, any of us can code with AI.
But that doesn’t mean we know what’s going on or how to check if it’s correct.
This is the trap anyone can fall into. Knowing how to validate AI outputs is critical. We need to pay more attention to this. You know, thinking about thinking, and all that.
3/ Without context, you’re doomed
Content without context is worthless.
That’s a general rule. Exceptions apply at times. Nonetheless, you need the context of when and when not to use AI tools to get results.
As we know, it’s not a silver bullet.
The solution to this is getting a better understanding of Gen AI fundamentals.
Another BCG report, in collaboration with Harvard, discovered that success in work tasks with AI came down to knowing when the right time is to call on those superpowers.
How to help humans use AI for REAL learning
Ok, we can see a potential problem if left unchecked.
Here are a few ideas, tools and actions to do something about it:
1/ Cover AI fundamentals
Too often ignored with people going straight to tools.
Yet, knowing how and why a technology works means you become the chess player, and not a chess piece that’s moved by every new model and tool.
The world has lots of resources to help you with this.
2/ Don’t confuse ‘do it for me’ with ‘learning to do’
While AI can enable individuals to complete tasks they wouldn’t be able to do independently, this doesn’t automatically translate to skill acquisition.
Help people recognise the difference.
To truly learn anything, you need a combination of:
3/ AI is only as good as the human using it
You might have heard me say "AI is only as good as the human using it" like a broken record.
Like any tool, it only works in the hands of a competent and informed user.
I learned this fairly young when a power drill was thrust into my hands for a DIY mission. Always read the instructions, folks (another story for another time).
Anyway, all my research and real-life experience with building AI skills have shown me one clear lesson.
You need human skills to unlock AI’s full potential.
4/ Encourage critical thinking before and after using AI
Despite what social media gurus say, we all very much need to use our brains when working with AI.
If you want to do useful stuff, that is.
I've shared before a system you can use to achieve this in all your AI interactions. You'll stand out from the digital zombies with this.
5/ Prompt an Engineer’s Mindset
BCG refers to this as the 'engineer's mindset', as it originates mostly from engineering roles (both physical and digital).
I call it the ‘Builder’s mindset’, and I think this is a cheat code for life.
I would say I've only been as successful as I have because of it. I learned it during my teenage years of coding in SQL and Java. It's built around the principles of understanding the what, why, and how of building anything.
Back in the day, I used it to build SQL-based reporting applications.
I didn’t even think about building the app before I knew more about the consumer.
Simple things like:
Who are they?
What problems are they having?
Why are those problems happening?
What would this look like if it were easier for them?
Over the years, I’ve adapted this into all my work, especially writing.
As of today, before I begin any work, I ask:
Why am I building this?
What problem is it solving?
Does it pass the 'So what?' test?
How will I build it?
I can only solve a problem or create a meaningful post/product/newsletter/video if I know the above.
Like a builder, you piece together an end goal.
Once you've worked this out, the next part is easy → reverse engineer the process.
As this is such an important point, I need more than the written word to explain this.
So, here’s a short video where I explain how to use this framework: