The LMS is dying (apparently), Roger, the latest automated AI workflow, is going to replace my content engine and make me trillions of dollars while I sleep (allegedly), and Ninja claw, Tigerclaw or Dragon claw will do life for me so I can just wander around asking “why am I here?” (how fabulous).
Jokes aside (I do love them, though), I’ve never seen such a typhoon of pressure on the everyday human.
So, I think we all need to…BREATHE.
Better? Great.
What we can focus on is: What actually matters to us?
I believe that part of the answer to that is our skills. I get it’s hard to build the “right” skills to work with AI. That’s why I focus on skill sprints of 3-6 months these days because so much is expiring, evolving and emerging all at once.
In this one, we’re exploring 3 high-value AI skills you need in 2026.
What if your AI skills from last year are already outdated?
All the feelings
I know that’s a scary question, but damn does it move fast.
One week I’m happy making my AI assistants smarter with internal knowledge files, and now I’m running around extending their capabilities with MCPs, plugins and all sorts of sorcery.
My point being, sitting still for too long on the technical side of AI tools is becoming somewhat dangerous. I know too many people chasing after tools, but I’m more interested in cross-platform skills that enable high value from the majority of tools.
I have 3 today which I believe will serve you well across the never-ending spectrum of AI applications.
What are AI skills?
An engineer will have a different view on this vs an L&D pro.
I break AI skills into two components:
One component is the technical: “How do I use ‘x’ tool and its features?”
The other is the behavioural: how you think and work alongside the tool.
AI ‘fluency’ and ‘literacy’ are better ways for us to frame “AI skills”
These frameworks acknowledge that using a generative AI tool, most popular being an LLM, is about more than the technical skill of using the tool alone. Personally, I like the term ‘AI fluency’ because the technology moves so fast. Which means both the tech skills and behaviours you need to work with it are forever updating.
For the purpose of today’s conversation, we’ll focus specifically on the technical skills side of this because I have written a lot lately on the behavioural side.
Know what’s expiring, evolving and emerging (The 3 E’s)
I do a quarterly skills review.
It used to be every half year, but AI changed that plan.
Your skills are an ever-flowing organism (best analogy I could think of in this moment). That means they never stand still, and without attention, they’ll degrade.
If your goal is to build a talent stack (a combo of your skills, behaviours and attitudes) that’ll keep you employed for the long term, you’d be wise to perform regular maintenance on yours too.
I use a simple framework I picked up from a Gartner report over a decade ago.
All you do is look at your current skills and ask:
What skills are expiring and no longer serve me and/or the world today?
What skills do I need to evolve to meet the demands of today?
What are the emerging skills I can get ahead of?
While I say simple, that doesn’t always equal easy.
You could even spin this as a bit of a life analogy too. We all need to know what to leave behind, what to double down on and what to keep a lookout for.
I’ve tagged each of the 3 AI skills I recommend you invest in with one of these categories to give you a sense of where it sits in your current skill cycle.
3 high-value AI skills you need in 2026
1️⃣ Prompting [Evolving]
No, it’s not dead, but it has evolved.
AI skills have such a short shelf life, and prompting was hailed as the most important skill of the century. While I didn’t agree then (or now), it is the main way we interact with LLMs.
At its core, prompting is just inputting a query.
A lot of frameworks from 2023-2025 are no longer needed because models have become much more capable with memory and custom instructions. Prompting is also weird because no one template works, and two people with the exact same prompt can get wildly different outputs.
You still need to prompt, just less skilfully than before because of our next two skills.
2️⃣ Context Engineering/Layering [Emerging]

Context ‘engineering’ and context ‘layering’ sound the same, but I’m framing them in two ways.
One for builders/engineers and the other for non-technical users.
For builders, context engineering is the science of high-signal curation. They’re not stuffing AI with every possible file. The goal is to build dynamic systems that “just-in-time” retrieve the exact tool, specific knowledge chunk, or agent-to-agent communication needed for the next step.
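To make the builder side concrete, here’s a toy sketch of “just-in-time” retrieval. All names and data below are hypothetical, and I’ve used plain keyword overlap to keep the idea visible; real systems typically use embedding similarity over a vector store:

```python
import re

# Toy sketch of "just-in-time" context retrieval: instead of stuffing every
# document into the prompt, score each knowledge chunk against the user's
# query and inject only the best match. (Hypothetical data; real systems
# usually use embeddings and a vector store rather than keyword overlap.)

KNOWLEDGE_CHUNKS = [
    "Refund policy: customers can request a refund within 30 days of purchase.",
    "Shipping: standard delivery takes 3-5 business days within the EU.",
    "Security: all customer data is encrypted at rest and in transit.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve_chunk(query: str, chunks: list[str]) -> str:
    """Return the chunk sharing the most words with the query."""
    query_words = tokens(query)
    return max(chunks, key=lambda chunk: len(query_words & tokens(chunk)))

def build_prompt(query: str) -> str:
    """Assemble a prompt containing only the context needed for this step."""
    context = retrieve_chunk(query, KNOWLEDGE_CHUNKS)
    return f"Context: {context}\n\nQuestion: {query}"

print(build_prompt("How long do I have to ask for a refund?"))
```

The point isn’t the scoring method, it’s the shape: the system decides *at request time* which slice of knowledge the model sees, instead of front-loading everything.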
This is not too dissimilar from what end users like me and you can now do.
As we start using more AI agents, knowing how to provide the right source, in the right format at the right time is critical. We already see these opportunities with memory, custom instructions, connecting our favourite apps and uploading multiple file sources into our chats.
Context engineering for the everyday human is about building the infrastructure around the AI, so it has everything it needs to make better decisions.
An art and science in itself.
We could say that prompting + context layering + skills/instructions = the 3 layers of a strong AI response.
FYI, I have a guide and a video series on context layering/engineering coming in a few weeks.
3️⃣ Creating Skills for AI [emerging]
The best example of this is Claude Code and Cowork.
You can create a skills package which teaches the LLM how to perform a task. It’s a combination of context, instructions and guidelines that can repeatedly do the task with little oversight.
To make the best use of this, you’ll need to get comfortable with creating Markdown files. In these, you’ll unpack not only how to complete a task but why, plus all the micro elements that make up the big choices. Then you maintain that skill, just like you do with your own.
We’ll see this capability with more LLMs across this year. The great thing is you can take your .md files to any tool, so maybe we’ll all be building .md files now.
The real skill here is knowing how to break down a process for an LLM to understand with a combination of instructions, examples and evaluations.
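To give you a feel for it, a skill file might look something like the sketch below. Everything here is a hypothetical example; the exact file layout and frontmatter fields vary by tool, so check your tool’s docs before copying it:

```markdown
---
name: weekly-newsletter-draft
description: Drafts the weekly L&D newsletter in my voice from a topic outline.
---

# Weekly newsletter draft

## Why this task matters
Readers expect a consistent voice: conversational, practical, no corporate jargon.

## Steps
1. Read the topic outline I provide.
2. Draft an intro hook of 2-3 short sentences.
3. Break the body into sections with one idea each.
4. Close with a single call to action.

## Guidelines
- Keep sentences under 20 words where possible.
- Use "you" and "we", never "users" or "learners".

## Example
Input: "3 AI skills for 2026"
Good opening: "Your AI skills from last year might already be outdated. Let's fix that."

## Evaluation
Before finishing, check the draft against the guidelines above and flag
any section that reads like corporate copy.
```

Notice it carries the how, the why and an evaluation step, which is exactly the combination of instructions, examples and checks that makes a skill repeatable with little oversight.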
Final thoughts
Ok, folks.
That’s my thinking out loud for the day.
Obvs, AI moves so fast that these are relevant skills right now, but maybe not forever.
Before you go… 👋
If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.
You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.
Whenever someone asks me, “How do I learn AI?” I have to pause because it feels like it’s a double-loaded question.
I understand why people want to ask me that, especially in my line of work and visibility in showcasing different use cases of artificial intelligence specifically in the realm of learning and development.
Despite this, it’s still really hard for me to give a concrete and clear answer.
It’s difficult because there are so many courses, resources, and experiences that are available to you. For the most part, they all sound the same, look the same, and tell us that they’re gonna give us the same outcome.
I’m always of the belief that there isn’t one tool, one method, or one strategy to rule them all.
And… to top it all off, the term “learn AI” is pretty ambiguous.
What I assume they mean is to understand how a generative tool, like an LLM, works and how they can work with it to get their desired results.
What is the context of the individual and organisation?
What are the constraints they may be experiencing?
What is their current starting point?
Without understanding all of these key inputs, it’s very hard for anyone to give you a clear answer to the question I’m constantly posed.
Now I know you’re probably reading this and thinking “oh God please stop writing so much and just tell me what I should do.”
So that is exactly what we are going to do today.
While I’m not gonna give you a clear 100% “this is the course to rule them all”, what I will do is share a selection of choices.
What is AI Fluency?
I’ll be using this term a lot, so let’s unpack it.
A simple way to define AI fluency is the ability to understand, use, and effectively collaborate with AI tools in your day-to-day life and work.
AI fluency is not about turning people into software programmers or highly technical experts. Instead, it simply means equipping people with the confidence and capability to think and work alongside AI.
In practical, easy-to-understand terms, being “AI fluent” means you know how to do the following:
Know when to use it: Understanding what AI is actually good at, recognizing its limitations, and deciding when a task needs a human touch instead.
Communicate clearly: Knowing how to give AI tools the right instructions (prompts), context, and constraints so they give you helpful and accurate results.
Apply human judgment: Critically evaluating the information AI gives you rather than taking it at face value. This means being able to spot mistakes, biases, or missing context.
Use it responsibly: Understanding the basic ethical guidelines of AI, such as protecting sensitive or private data, and taking ultimate responsibility for whatever the AI helps you create.
I like this framing because it’s about treating AI as a capable “thought partner” to make your life easier, while always relying on your own human judgment to steer the ship.
Now we’ve clarified that, onwards we go.
How I assessed these zero-cost AI courses
Okay, here’s the method to my madness with this assessment.
I’ve looked across the market at all of the courses which are tagged with ‘AI fluency’.
There are a lot of free resources on YouTube with varying degrees of quality to consider, but I’ve not included these here because that’s really based on your own taste.
This led me to focus my testing on the leading AI and tech companies building the tools we know. The idea is since they’re building these tools, they might just be a good place to learn about their capabilities.
With that focus, I spent the last six weeks completing courses from the following companies:
Google
DeepLearning.ai
Anthropic
Microsoft
That gives us four courses in total.
You’ll notice one big omission here from OpenAI.
I did visit OpenAI’s Academy to try and find some resources and courses for the general population. But I found the Academy very difficult to navigate, and I didn’t really find anything beyond the ability to sign up to webinars and some rather outdated-looking help articles.
That’s not to say that I dislike OpenAI at all.
I would be more than happy to add them into this once I see an improved offer and can feel confident in recommending that to you.
My recommendations for each course will include:
What does the course cover?
Who would it benefit the most?
What will you tangibly walk away with that you can use the next day?
Now, a disclaimer for you…
The right choice for you is contextual, and depends on the 3 questions I posed in the opening section. I’ll give you my recommendations on which courses are truly worth your time in the sea of hundreds currently available.
Yet, your choice on which of the 4 is most useful to you is…well, down to you.
You can use these options for your company, too
Long-time readers will know I’m all about the value!
Since I’m not part of the AI bros club (let’s be honest, my hair is far too fabulous and my fashion sense too cool for that), where you get literally nothing beyond “This changes everything” comments, I’m making these recommendations not only for your development, but for that of the teams you support too.
A lot of organisations are asking you to lead or play a big support role in adoption of AI.
The good news is once you’re confident with the content, you can then use it to your advantage to enable your org to be just as smart with AI (if you want to, of course).
The 10 second answer
The 4 AI fluency courses I’d recommend for L&D pros
Let’s get this show on the road.
This will be the only playbook you need, and I’ll do my best to keep it updated as the years roll on, or until my battery dies (hopefully not anytime soon).
AI For Everyone by DeepLearning.AI
Funny name for this one because I don’t believe it’s for everyone.
Of all the courses I share today, this one is the most advanced-ish, so I’d reserve this for those who are farther down the yellow brick road of AI awareness.
Who it’s for:
This was one of the first AI courses I came across in mid-2023, although it covers much more than generative AI. It’s taught by Andrew Ng, co-founder of Coursera and founder of Google Brain (a deep learning research team later merged into the better-known Google DeepMind), and an actual computer scientist.
So yes, this guy legit knows what he’s talking about.
While I found this experience great for my inner nerd, it’s certainly only for those who want a broader understanding of everything AI, not just LLMs and everything under the generative umbrella.
Maybe one best placed for my fellow learning tech folks.
What it does well:
The course gives you everything you’d need to know about AI without being an engineer.
Talking head videos and resources are clearly explained, and easy to download for your future viewing pleasure.
Where it falls short:
Now, this only falls short depending on your context.
If you’re not an engineer or don’t care about moving deeper into an AI-specialised role, it will probably overwhelm you. It’s stuffed with lots of info, and most of it isn’t relevant if you just use a standard LLM every day.
I mean, it has no hands-on use of AI tools, but I find it hard to see where this could be woven in among all the Matrix-level explanations.
Verdict:
❌ Skip if you’re among the 95% of average AI users.
✅ Dive in if you’re an engineer, curious nerd or want to make a career pivot to AI as a broader category.
AI Essentials by Google
While they came late to the useful LLM party, it can’t be denied they’re certainly a frontrunner with their abundance of tools and Godzilla-sized ecosystem.
Who it’s for:
This is pitched at the pure beginner. If you or your teams look at an LLM and have no clue what’s going on, this is the place to start.
Google’s team provide an overview of the basics that’ll give you a better standing in those workplace conversations.
What it does well:
Like I said, it’s framed perfectly for beginners, and in the traditional online course delivery we all know, but maybe don’t love. There’s plenty of focus on low-level tasks where you can collaborate with AI, and a surprisingly good overview of prompting too.
Where it falls short:
Like most choices on this list, the videos are super corny.
Even more so with these Google ones. I feel like the real humans could be avatars because the delivery is so scripted and robotic, which breaks any form of immersion.
If you know the basics and are confident with generic LLM activities, you won’t get much from this.
Note: If you want to go to the next level, Google offers a paid professional certificate which bolts on a “How to build with AI” category, which is missing from all of the above. Free-to-access tools like Google AI Studio are criminally underrated because they’re pitched as “developer” tools but are very easy to use once you’re set up with some basic knowledge. This course provides that next step in your AI fluency journey.
Verdict:
❌ Skip if you’ve been using LLMs for a few years and know the basics.
✅ Dive in if you’re in beginner territory and need to build the foundations of understanding and working with LLMs.
AI Fluency: Framework and Foundations by Anthropic
I know I said “this isn’t a competition with one winner”…but…
This one is my most highly recommended for everyone.
I like this one because it’s focused on “how to work and think with Gen AI and LLMs as a category”, rather than, “here’s how to use our tool”. It does a good job on prompting you (pun intended) to think about how you engage with AI and its outputs.
This is more valuable for the long game with AI. The tools will change, but investing in how you think and work with AI won’t.
Who it’s for:
It’s for anyone ready to move beyond tools and templates and start thinking with AI.
So, basically, everyone reading this.
As Anthropic puts it: “Simply having access to AI doesn’t make you competent and fluent with AI.”
What it does well:
The course gives you a clear fluency framework you can pick up and implement with a few tweaks. The exercises (when you find them) are practical, immersive, and teach you to think about how you’re working with AI, not just what you’re asking it.
There’s a strong breakdown of techniques that improve both AI outputs and your own thinking in the process, and I liked the encouragement to work alongside an LLM to solve the problems the course unpacks.
Where it falls short:
The UX needs work.
Exercises are buried below the fold with no direction to scroll down. I was surprised an instance of Claude wasn’t embedded for these. Why not include live demo exercises in the videos to actually model thinking with AI?
There’s also a section covering transformers, compute, and tokens. Nice to know, but for most people this isn’t the advantage the course frames it as.
Verdict:
✅ Take it.
It teaches you how to work with AI, not just how to use a tool. That alone sets it apart from 90% of AI courses out there right now.
I can’t think of someone who wouldn’t benefit from it.
Last up: Microsoft. I know a lot of companies are using Microsoft infrastructure, so it was a no-brainer to include this one.
The ultimate assessment of ‘is this right for you and your teams?’ is whether you’re using Copilot as your go-to company LLM.
Who it’s for:
The easy answer is if your organisation’s LLM of choice is Copilot and you want to make the most of it across all the popular MS apps.
What it does well:
A comprehensive walkthrough of Copilot that’s perfect for any MS powered organisation. If you’re using Microsoft tools, you can’t go wrong here.
You also get a good overview of AI Fluency (according to Microsoft) and ideas on how to amplify its principles across a team, department and org level.
Where it falls short:
It’s painfully long, and stuffed with too much unnecessary content.
When you look at it, it’s just poor design. There’s some good stuff here, but it’s buried under mountains of content. It felt like the brief was ‘how much can we stuff in?’, and that’s the biggest problem with this one. If you’re a Copilot user, it’s made for you, but you’ll be wading through a lot of ‘nice to know’ rather than ‘this is actually gonna help me do better work’ content.
I don’t like the gamified angle with experience points either.
That might be just me, yet I felt like I was in some 90s retro game at points, and I didn’t want to be there.
Verdict:
❌ Skip if you’re not using Copilot
✅ Dive in if you’re using Microsoft across your organisation with Copilot but select the parts that make most sense for your work, as you don’t need everything on offer here
1. Take Google’s AI Essentials to build your foundations with generative AI.
2. Take Anthropic’s AI Fluency: Framework and Foundations to learn how to think and work with AI.
3. Create a free Google account to access Google AI Studio, where you can test prompts, build apps and learn about agents in one place.
That’s it.
Google’s AI Essentials gives you everything you need to know about generative AI, and Anthropic’s fluency course builds on this by teaching you how to work and talk to AI without degrading your power of thought. Google’s AI Studio is the cherry on top as it lets you practice everything in one space with free access to top models.
That’s my zero-cost learning plan for you 😉.
Final thoughts
There you have it, friend.
These are my picks of the most impactful zero-cost AI courses worth checking out. Like I said, there’s no overall winner, just what works best based on your context, experience level, and goals.
I hope this helps you build your skills and enable those around you with AI too.
Let me know your thoughts on these courses, and any I missed that you think I should check out.
See you next time, human.
L&D teams trying to own, and/or being made responsible for, total company AI adoption is a fool’s errand.
We both know this is stupid, yet I see too many instances of companies trying to “train” their way into successful AI adoption.
Of course, L&D teams alone aren’t going to make any organisation achieve meaningful AI adoption (however you measure that). Yet, we do have a part to play, and recognising where we can best support is critical.
So, let’s explore How L&D Can Support Meaningful AI Adoption.
It’s odd because ‘adoption’ has many definitions, depending on the context and environment. The common pitfall is to measure adoption as ‘use of AI tools’ alone.
As we know, with previous technology, usage alone doesn’t mean meaningful adoption.
Setting what adoption looks like in your organisation is not a task for the L&D team.
Yet, we have an opportunity to contribute to long-term and meaningful adoption of AI across workforces as part of a wider collaboration in a community.
Let’s talk about that…
It takes more than access
Let’s go beyond the veil of BS we see online.
Access to an AI tool alone means nothing, and putting on one-hour lunch and learns to “make people learn AI” is a comical up-skilling strategy.
If you’re a long-time reader, you’ve heard me become a broken record when I talk about what it takes to nurture meaningful and long-term change. We have much to consider with context, culture and constraints in every environment. No two workplaces are the same, that’s why the cookie-cutter “adoption frameworks” make me laugh.
They’re a good point of inspiration, but you shouldn’t follow them like a strict set of instructions.
Saying that, what is it we need to consider beyond tools?
People, Systems and Tools
As you’ve probably guessed, launching new technology and tools alone rarely leads to meaningful adoption.
There’s a bigger ecosystem at play.
We have to consider:
1/ People
Where are people at today, and how do we meet them?
Everyone will have a different understanding, maturity and receptiveness to something new and unknown. In AI’s case, we have a mix of emotions from “will this take my job” to “I want it to do all this stuff I hate doing”.
The most difficult part of a change process is people, because we’re all so unpredictable.
2/ Systems
Quite simply, how we work today.
What are the tried, tested and trusted conscious and unconscious systems we have in place? This covers both how we execute tasks and how we think about executing those tasks (deep, I know).
We each follow different types of systems in our day to day.
Understanding what these are and how AI will impact those is key to this change.
3/ Tools
The part you’re most likely more familiar with.
Here, we should consider the tools in use today alongside new ones being deployed, and how to bridge the gap in both understanding and knowing when and where to deploy them.
Too many forget the ‘when and where’ part at their own peril.
For us to recognise where we can provide support and drive value, we must recognise what’s changing.
I think this framework from BCG can help recognise the moments where performance support is most needed with AI transformation.
They propose it for navigating AI transformation at scale, and through an L&D lens, I see this as a conversation point of what to map against when focusing on how best to support workforces.
It’s built on two key dimensions:
1️⃣ AI Maturity
It progresses from tool-based adoption by individuals to workflow transformation, to full, agent-led orchestration. Most organisations, and even teams within them, operate across multiple stages at once, not in a linear path.
2️⃣ Workforce Impact
This spans how tasks are executed, to what skills are needed, to how teams are structured, to how organisational culture must evolve to support new ways of working.
While this covers the wider transformation AI brings across businesses, it acts as a roadmap for L&D.
A roadmap is often what we need because it’s not uncommon for senior leaders to treat “training” (as they call it) as a boomerang that’s thrown at will when they decide people need to know stuff.
The framework above provides a view of where the friction, pain points and problems exist in the cycle of change. That’s where we should focus.
Map it out
I mentioned before not to blindly follow frameworks, and that advice is the same here.
This view from BCG is a useful foundation for each of us to think about “where can we add value”, but it will look different for each environment.
So, I’d recommend you map out what your organisational journey looks like today.
Explore the 3 pillars of tasks, talent and teams across your business and how/where AI is starting to and might impact these. It’s here that you will uncover the friction and pain points where we can be of most service.
Some of that will be through tooling, no doubt. Yet, I feel pretty safe in saying you’ll be spending a good deal of your time navigating changes within people and systems.
Examples of AI adoption and transformation in action
Ok, I want to share some useful case studies and “what good could look like”, but that’s hard to quantify. Especially in L&D, where the measurement of ‘adoption’ differs from team to team.
The default I’m seeing, just like with all other learning tech, is to measure engagement.
But we both know that logging into a tool doesn’t equal being capable and driving value with those tools.
So, I’m going to share two examples with you that I find interesting.
I’m not saying they are the ‘way’. They’re good reads to see what large organisations are doing, learn a bit and decide if any of that is useful for you.
Notion’s AI Transformation Model
Why is a note taking and productivity app providing AI transformation services?
No idea.
Yet, I found their open-sourced AI transformation model released this month interesting. Not only for the fact that it looks remarkably similar to one I built for L&D over a year ago (are you watching me, Notion?), but it felt more like a common sense approach.
Notion’s model respects that an organisation will not move at the same pace together.
It’s impossible when you think about it.
Each department has different talents, goals and systems. The pace of innovation with AI means that the level you are today might not be the same level next week.
Their resources include:
An example AI-native assessment chart for each department
Notion’s customer adoption playbook
An AI maturity levelling framework
Customer case studies (a bit vapid, tbh)
Looks like they’ll be releasing more resources soon.
I’ll add those to this guide when they drop.
Zapier’s AI Fluency Rubric
I’m gonna be honest…I have no idea WTF a “rubric” is.
Zapier started releasing this in 2025, and V2 just dropped. I didn’t much like the last edition. While this isn’t really an adoption model, it does help us understand how organisations are assessing employee’s AI skills and use.
As Zapier were the first to market publicly with this type of solution, naturally many areas of the market started to use it.
So while I’m not saying if it’s good or bad (that’s for you to decide), we can learn from it.
The AI for L&D Transformation Model: How to reshape what we do and become AI-native
If you read this far, you’re about to get the good stuff.
I’ve transformed my 2025 article on “How L&D can transform with AI” into a new playbook packed with additional commentary and new resources.
It gives you the 4 levels of transformation I’ve seen teams go through (and continue to) these past 4-ish years now. It’s also layered with where I think the industry “could go” based on current technology.
In the playbook I break down:
The 4 levels of AI transformation in L&D
What teams demonstrate at each level
A curation of my best resources to improve your capability with AI
Yes, it is another report about AI, and about AI in L&D.
I know there have been lots over the years already, and I’m not here to try and replicate those because that would be boring for both of us.
I’m no KPMG, PWC, BCG or any other acronym consultancy with an army of consultants and flowing cash. What I am armed with is nearly 20 years in learning tech as both a practitioner and consultant, a large audience of L&D professionals who share their stories with me, and a curious mind that wants to craft all that together.
My mission with this report is to help our industry understand the habits, behaviours and choices that L&D teams are making with AI.
Specifically we’ll unpack:
The tried and trusted AI tools teams are using today
Why they choose to use these tools
How they’re using these tools to meaningfully drive value
Through all of this we’ll understand a little more about how AI is being used as a co-worker across global L&D teams.
TL;DR
A quick snapshot of everything you need to know:
85% of L&D professionals use AI daily
ChatGPT is the most popular tool among L&D teams in 2026
80% of AI tools used are paid or enterprise licenses, but practitioners fill the other 20% with personal AI tools, aka the “shadow stack”
NotebookLM is part of many practitioners’ “Shadow AI Stacks”
AI provides 5 big value drivers for most L&D teams
There are 3 levels of AI value in L&D: Efficiency, Quality and Strategy
Practitioners are ditching the content factory mentality in favour of thought partnering with AI
What L&D teams are actually doing with AI today
Ok, here’s what we’re going to do.
We have 3 main questions to explore:
Which AI tools do L&D pros use most often in their work?
Why do they find these tools useful vs others?
What value are these tools delivering for teams?
We’ll unpack each in more detail, of course, with some standout comments, use cases and surprising insights.
Then I’ll finish off our time together by exploring how all of this impacts our habits, behaviours and choices as we integrate AI as a co-worker.
1/ These are the AI tools L&D pros are using
So, who is actually using what behind both the walls of corporations and in their personal ecosystem? Perhaps few surprises here, but let’s see how we go.
For the sake of simplicity, I’ve broken the tools down into these 4 categories:
Large Language Models (LLMs)
Research & Knowledge Management
Content Creation
Specialised Tools & Integrations
👑 Who is the King/Queen of LLMs for L&D?
Ok, no surprises… it’s ChatGPT.
I mean, were you expecting anything else?
This OG tool from OpenAI took the top spot with both its enterprise and free plans for teams and practitioners. Alongside this, we had the usual competitors in Copilot from Microsoft, Gemini from Google and Claude from Anthropic.
Over 80% of responses specified using enterprise or paid versions of these tools, so read into that what you will.
🧐 Research & Knowledge Management
While LLMs are cool, they’re also jack-of-all-trades, all-in-one solutions, which isn’t bad but can sometimes mean they don’t perform as well for your niche use cases.
This came through a lot in the data.
When it comes to research, analysis and crafting a place to store all of that for evergreen access, two tools kept coming up.
They were NotebookLM and Perplexity.
Again, no surprise given they’re built specifically for these use cases, and as long-time readers of this newsletter will know, I must talk about NotebookLM every other week.
📝 Content Creation
While I loathe to focus on this use, I can’t deny it is still the number one use case for the industry in its current state.
I have nothing against that as we stand, because context is everything, and until the system is rebuilt, you can’t blame teams for trying to do more with less. That’s a much wider discussion to have outside of today.
So, outside of the LLMs we’ve covered, the tools that kept being mentioned on the content front were:
Synthesia
ElevenLabs
HeyGen
Canva
Seems like you’re all loving those AI avatars and voiceovers, and who can blame you.
They can be powerful in the right hands.
📼 Specialised Tools & Integrations
This is the area for tools that didn’t quite fit into one category or could span them all.
There was a real mix here, so I’m not going to list everything.
What I can share is that on the image generation front, it seems a lot of you are loving Midjourney, and I can see a lot of use of the bolted-on AI-powered features in Articulate Rise, Adobe Creative Cloud and, of course, the mighty Microsoft 365.
Curious Insight: Shadow AI Stacks
While most respondents have access to a suite of AI tools at work, they’re not huge fans of them.
Many respondents reported poor performance due to instances of approved company AI tools not being on the level of the widely available paid models that many use personally. It might be no surprise that Microsoft Copilot took most of this hate.
It seems many have access to Microsoft's flagship AI product, yet they'd rather not use it.
This has created a lot of friction, and I'm sure it's not exclusive to L&D. What I found in the data is that teams will use their mandated company AI tools for very little, and instead engage with external tools as they provide much better quality.
These external tools can be classed as “shadow stacks”, aka tools being used in secret to complete work.
Look, I'm not the police, so it's not for me to tell you what to do.
It's just fascinating that some people are willing to take risks with company data with these tools in the pursuit of doing stuff faster. So, if you're doing this, it seems you're not alone.
2/ Why L&D Pros choose these AI tools
I’m sure you can imagine that the most obvious answer here will be: “Because these are my company-approved tools”, which is mostly spot on.
You know, I hope common sense screams: don't go leaking data to AI tools that aren't approved by your company. Beyond this main factor, we see a few more variables that affect both our purchase of tools and their use.
Ease of Use & Accessibility
Many of you use these tools because they are easy to use, accessible, and often (but not always) the only tools approved or available on work devices due to company IT/security restrictions.
Speed, Efficiency, and Time Savings
These chosen tools are highly valued for their ability to generate content, ideas, and complete tasks faster, leading to quicker work and significant time savings in summarising, analysing, and content creation.
Quality of Output and Niche Functionality
Many of you mentioned the preference for tools that provide high-quality, precise, and relevant outputs.
Specific tools were highlighted for their distinct strengths, such as Claude for high-quality writing, NotebookLM for deep content analysis and knowledge base creation, Midjourney for consistent image generation, and ElevenLabs for natural-sounding voiceovers.
Easy Integration
Integration with existing ecosystems (like MS365 or Google Workspace) and the ability to maintain context across conversations (e.g., ChatGPT’s continuous context or custom GPTs connected to company data/SharePoint) make them more effective and relevant to personal/organisational needs.
Yes, that’s a no brainer, but still good to see in writing.
Trusted and Reliable
Trust is a complicated word in the workplace AI game.
A lot of you chose tools because they are company-sanctioned, allowing for the safe use of confidential data, or because you trust the accuracy and reliability of the sources they pull from.
3/ The value these AI tools really deliver
This is the killer question, and in my eyes, the more important one than “How much money did this make us?”
Without value, we have little, if anything, to show for all these investments. Safe to say this is the part of the data I spent most of my time scrolling through.
⏰ Reclaiming time
The most valued benefit mentioned is saving significant time by speeding up content creation (first drafts, outlines, storyboards, copy), administration, analysis, and summarising large volumes of content, freeing up time for higher-value activities.
That makes sense. After all, time is our most precious non-renewable resource; just don’t sacrifice quality for speed!
🤔 Developing ideas and structuring thoughts
AI tools serve many of you as a valuable ‘thought partner’, ‘sounding board’, and ‘sparring partner’ for brainstorming, generating new ideas, challenging assumptions, validating concepts, and looking at topics from different angles.
📈 Improving quality
The important one, if I may say so.
Many of you highly value your trusted tools for increasing the quality of work through better writing/copy, editing, adapting content for specific audiences, restructuring, and simplifying complex topics into understandable snippets.
🔎 Better Research & Analysis
Everyone goes on about being data savvy in L&D, but it ain’t easy.
AI excels here, and it seems many of you agreed. I had so many comments on the quality of support with research, data analysis, synthesising information, extracting key themes, and summarising content from multiple sources.
⚒️ Crafting New Skills
So many examples here, including creating websites, learning to code (HTML, Python, APIs), building Q&A bots, developing specialised agents, and being coached through difficult conversations with AI.
More on this, but you don’t need them right now. I’ll be sharing more as the weeks and months go on, so fret not.
“These tools save me significant time.
They help me quickly summarise content from multiple sources, recall and organise information, and search back through large transcripts, websites, and white papers. They also allow me to iterate on ideas and refine wording multiple times until the message is clear and impactful. Call note-taking and action-item extraction have been game changers, enabling me to capture details I’d never have been able to track manually.
Overall, the ability to pull together diverse perspectives, distill them, and adapt content for specific audiences has elevated the quality and effectiveness of my work vs. the level of effort and time spent.”
Survey Respondent
What this tells us about L&D teams' habits, choices and behaviours to derive value from AI
Ok, so what can we learn from all this data?
This is where I see many research reports die.
They share lots of valuable data, yet provide no simplified insights on what we can take action on. Fret not, friend, I’m not going to leave you hanging.
To answer the obvious, what we do know is that most teams/pros are heavily LLM-based when it comes to AI tool usage, and the tool of choice is dominated by what’s approved in the workplace. That makes sense.
The bigger piece to talk about is what we can note from the way teams and individual pros get value from these tools.
What's most revealing is that understanding the value proposition also provides a framework for adoption (stick with me). I see the value of AI for L&D teams as 3 levels. Some teams sit in one level, while others move freely across all 3 as new tools emerge. There is no one right way, and you might make a first point of entry into any of these.
From a high level, the value AI can bring to your work as of 2026 is with efficiency, quality and strategy.
Let’s unpack each of these.
Level 1: The Efficiency Engine
From an adoption standpoint, this is what I class as the gateway drug to AI for L&D teams.
It's the most common and immediate value driver for most.
It’s about speed, not necessarily quality (see next level). The tantalising prospect of saving time on the most mundane of tasks is so incredibly alluring that even the biggest AI haters will struggle not to turn their head.
This is where we sit in what I class as “The Efficiency Engine”.
Here we see the benefits of freeing up time and automating our routine tasks. Once people experience this, they often want to know what else these tools can do.
It’s both a value driver and the entry point for creating behaviour change.
And I should point out that this wasn't about content creation alone. Many of you mentioned the speed to summarise, analyse and find niche research.
Level 2: The Quality Amplifier
I’ve always believed that any tool is only useful in the hands of a competent user, and this is no different with AI.
AI helping you to make your best work even better is highly admirable.
I know so many are obsessed with delegating work to sit on some mythical beach somewhere. Those people aren’t going to do much in life. Instead, those who use AI to amplify what they do today will be the winners.
This is why I’ve become rather obsessed with tools like NotebookLM.
What’s clear is that quality counts when working with AI, and knowing which niche tools can provide that is going to be your strategic advantage.
Speaking of strategic…
Level 3: The Strategic Partner
This is where I’ve always seen the real value of AI since day 1.
I do bash on teams using AI solely for content creation a lot, yet that’s only because I know how powerful LLMs can be as strategic partners.
Even 3 years removed from the launch of ChatGPT, I still see many LLMs vastly underutilised by L&D teams in this way.
“The ability to pull together diverse perspectives, distill them, and adapt content for specific audiences has elevated the quality and effectiveness of my work vs. the level of effort and time spent.”
Respondent
I see strategic partnering as bringing human thought/intelligence together with AI to uncover insights, points of view and develop ideas to do our best work. This comes through clearly from the survey responses.
Many of you referenced looking beyond AI for content creation and enhancing your cognitive processes by unpacking collaborative and critical thinking tasks with AI.
Use cases that surfaced included:
Acting as a sounding board, especially for solo pros
Challenging assumptions and expanding current perspectives
Refining and sense-making of thoughts
Facilitating critical analysis of data and scenarios
So, when we talk about value and helping teams recognise this, and supporting adoption journeys, this is a useful framing to consider.
The Most Interesting Case Study: Building a New Workshop Booking Engine
We’ve covered a lot of ground with AI assisting many of you as a thought partner and automation machine, but not so much as a builder.
In the past 18 months, the market has been flooded with more AI-powered coding apps than any one person can keep up with. They come with mixed results, and as always, heavily rely on the expertise of the user.
Perhaps one of the most remarkable stories shared in the survey came from a manager who needed to build a new booking engine for their company but lacked expertise in the required coding language.
Using Cursor, an AI-powered coding tool, they were able to accomplish this task, which would have otherwise been impossible for them. This wasn’t the only mention of coding-based tools, either.
What this tells me is that more L&D pros are experimenting firsthand to uncover “How does this thing work?” They might not always achieve their goal, yet there is a lot we can each learn from these experiences.
“Cursor has aided me in coding in a language I didn’t know beforehand, in doing so developing a new booking engine for the company from scratch.”
Respondent
Final thoughts
That's a wrap on the top insights into how high-performing L&D teams are crafting value with AI today.
Here’s a few more ways for you to get into this data:
A customised AI assistant trained on all the survey data to guide you through the insights and ask all of your questions. I thought there's no better way to unpack L&D teams' behaviours with AI than by using AI (be gentle with it, as it's still in a testing phase, so it will do odd things at times). It's built on Google's Gemini Pro LLM, so it has all those sexy thinking capabilities too.
It’s odd because the validation of “adoption” has many definitions dependent on the context and environment. The common pitfall is to measure adoption as ‘use of AI tools’ alone.
As we know with previous technology, usage alone doesn’t mean meaningful adoption.
Setting what adoption looks like in your organisation is not a task for the L&D team.
Yet, we have an opportunity to contribute to long term and meaningful adoption of AI across workforces as part of a wide collaboration in a community.
Let’s go beyond the veil of bullshit we see online.
Access to an AI tool alone means nothing, and putting on one-hour lunch and learns to “make people learn AI” is a comical up-skilling strategy.
If you're a long-time reader, you've heard me become a broken record when I talk about what it takes to nurture meaningful and long-term change.
We have much to consider with context, culture and constraints in each environment. No two workplaces are the same; that's why the cookie-cutter “adoption frameworks” make me laugh.
They’re a good point of inspiration but you shouldn’t follow them like a strict set of instructions.
Saying that, what is it we need to consider beyond tools?
Read on…
People, Systems and Tools
As you’ve probably guessed, launching new technology and tools alone rarely leads to meaningful adoption.
There’s a bigger ecosystem at play.
We have to consider:
1/ People
Where are people at today and how do we meet them?
Everyone will have a different understanding, maturity and receptiveness to something new and unknown. In AI’s case, we have a mix of emotions from “will this take my job” to “I want it to do all this stuff I hate doing”.
The most difficult part of a change process is people because we’re all so unpredictable.
2/ Systems
Quite simply, how we work today.
What are the tried, tested and trusted conscious and unconscious systems we have in place? This covers both how we execute tasks and how we think about executing those tasks (deep, I know).
We each follow different types of systems in our day to day.
Understanding what these are and how AI will impact those is key in this change.
3/ Tools
The part you’re most likely more familiar with.
Here, we should consider the tools in use today alongside new ones being deployed, and how to bridge the gap in both understanding and knowing when and where to deploy them.
Too many forget the ‘when and where’ part at their own peril.
Where you can add value
Source: BCG
For us to recognise where we can provide support and drive value, we must note what’s changing.
I think this framework from BCG can help us recognise the moments where performance support is most needed in AI transformation.
They propose it for navigating AI transformation at scale, and through an L&D lens, I see it as a map of what to focus on when deciding how best to support workforces.
It’s built on two key dimensions:
1️⃣ AI Maturity
It progresses from tool-based adoption by individuals, to workflow transformation, to full, agent-led orchestration. Most organisations, and even teams within them, operate across multiple stages at once, not in a linear path.
2️⃣ Workforce Impact
This spans how tasks are executed, to what skills are needed, to how teams are structured, to how organisational culture must evolve to support new ways of working.
While this covers the wider transformation AI brings across businesses, it acts as a roadmap for L&D.
A roadmap is often what we need because it's not uncommon for senior leaders to treat “training” (as they call it) as a boomerang thrown at will when they decide people need to know stuff.
The framework above provides a view of where the friction, pain points and problems exist in the cycle of change. That's where we should focus.
Map it out
I mentioned before not to blindly follow frameworks, and that advice holds here too.
This view from BCG is a useful foundation for each of us to think about “where can we add value”, but it will look different for each environment.
So, I’d recommend you map out what your organisational journey looks like today.
Explore the 3 pillars of tasks, talent and teams across your business and how/where AI is starting to and might impact these. It’s here you will uncover the friction and pain points where we can be of most service.
Some of that will be through tooling, no doubt.
Yet, I feel pretty safe in saying you’ll be spending a good deal of your time navigating changes within people and systems.
Final thoughts
There’s much to say, of course, but only so much attention span I can ask you to give.
I’m thinking of expanding some of this thought into a long-form video, if that sounds like something you’d like to see, let me know.
In the meantime, some additional resources to explore on this include: