
Everything You Need To Know About AI Agents For L&D (2025)

“Agent this, agent that” is literally all I see on LinkedIn these days.

Granted, it’s a total echo chamber of people mostly shouting that back at each other, but by God, it’s giving me a headache.

The hype, mostly driven by AI companies, is becoming laughable.

Don’t get me wrong, AI agents will be very useful and there’ll be some great applications in L&D, yet you’d think a sort of world peace is about to emerge by the way the ‘influencers’ talk on social.

So, this is my PSA (public service announcement) to you to say: “Don’t get worried about all the talk.”

I know it feels like you’re missing out on some great party, but you’re really not.

Don’t believe the AI Agent hype in L&D

Almost 90% of what you see paraded online is not a true agent solution. Not in the technical context, anyway.

Much like marketing teams decided to use the word “AI” everywhere post-2022, they’re doing the same thing by labelling everything an Agent.

Unfortunately, this has created a fractured understanding of what an agent is, and the definitions are always changing.

Not only this, but many are trying to run before they can walk.

I work with sooo many teams and companies that hardly know how to use a basic AI assistant to even 50% of its potential. Adding agents into that mix is a recipe for both confusion and mistakes.

A lot of people need to slow down

Pause… take a breath and find your centre (or whatever meditation teachers say).

Without this moment of pause, it’s incredibly hard for you to truly know what’s going to help you and what you don’t need to know.

And too many of us aren’t aware of all the options we already have today.

Agents are cool, but the current noise is lying to you about a lot of things.

So, let’s bring some clarity to all of this ↓

[Image: a film-scene meme of two characters reacting with confusion and disbelief to the term ‘AI agent’.]

Assistants vs Agents: What’s the difference?

Two terms you might hear techies mention with AI products are ‘AI assistants’ and ‘AI agents’.

Here’s the difference in clear, simple terms.

Let’s start with what we know – AI assistants like ChatGPT.

These are tools that help us with tasks through conversation. They can write, analyse, explain, and give suggestions based on what we ask.

AI agents take this a step further.

Instead of just helping through conversation, agents can actually complete tasks on their own. They follow instructions, use different tools, and make basic decisions to get things done.

The key difference is simple:

  • AI assistants help you with tasks
  • AI agents complete tasks for you

Both are valuable, but they serve different purposes. An assistant works with you through conversation, while an agent works independently based on your instructions.
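
If you’d rather see that difference in code than in words, here’s a minimal sketch. Nothing below is a real vendor API: `call_llm` is a stand-in for whichever model you use, and the two tools are made-up stubs.

```python
# Minimal sketch of assistant vs agent. Illustrative only - `call_llm` is a
# stand-in for whichever model API you actually use, and the tools are stubs.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; swap in your chat API of choice."""
    return "DONE (placeholder reply)"

# Assistant: one conversational turn. You read the answer and do the work.
def assistant(question: str) -> str:
    return call_llm(f"Help me with: {question}")

# Agent: given a goal, it loops (pick a tool, act, review) until it decides it's done.
TOOLS = {
    "search_policies": lambda q: f"(pretend policy search for '{q}')",   # hypothetical tool
    "draft_email": lambda brief: f"(pretend draft based on '{brief}')",  # hypothetical tool
}

def agent(goal: str, max_steps: int = 5) -> str:
    notes = []
    for _ in range(max_steps):
        decision = call_llm(
            f"Goal: {goal}\nProgress so far: {notes}\n"
            f"Choose a tool from {list(TOOLS)} with an input, or reply DONE."
        )
        if decision.startswith("DONE"):
            break
        notes.append(decision)  # a real agent would parse and execute the chosen tool here
    return call_llm(f"Summarise what was achieved for: {goal}. Notes: {notes}")

print(assistant("Draft a course outline on data privacy"))
print(agent("Find our holiday policy and email a summary to the team"))
```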

Use this info to impress the boss at your next meeting.

I’m not going to leave you with just this, though.

As I’m a tech nerd, I’ve filmed a quick video (see below) to show how agents work with examples from Google and Salesforce – enjoy.

What can AI agents do?

A lot, but maybe not as much as the local tech bros are promising.

Imagine having a personal assistant who not only follows your instructions but also takes the initiative to resolve problems independently.

AI agents are like that, except they exist in the digital world.

At their core, they’re designed to observe their environment, make decisions, and take actions using the tools available to them.

Unlike traditional software (or a plain LLM) that waits for you to give it a command, AI agents can think ahead, figure out what needs to be done, and act.

Sometimes without needing constant human input.

Think of them as a self-driving car.

Instead of waiting for a person to steer, brake, or accelerate, the car analyses traffic, makes decisions, and moves safely toward its destination.

AI agents work similarly but in a digital space, whether it’s automating workflows, analysing data, or even assisting with creative tasks.

The magic of AI agents lies in their autonomy and problem-solving abilities.

Even if you don’t give them step-by-step instructions, they can work out the best way forward to achieve a set goal.

They do this by following set rules and past experiences to decide the best way to complete a task. This makes them incredibly useful for businesses, customer support, research, and even personal productivity.

→ Get an example of this type of AI agent solution with this scenario I built to support common onboarding challenges between HR and Tech teams.

The many faces of AI agents

There was a time when an AI agent meant one thing.

Now, we’ve hit peak confusion thanks to marketing teams the world over.

Each one wants to tell you they’re “agentic”, and each wants you to use their AI agent. But…is it really an AI agent? And if it is, is it the right one for you?

Let’s unpack the types of AI agents, or what social media wants to tell you are AI agents in the market today:

Now, the reality of what you see online is 95% in the automation and AI workflow buckets.

I know every 22-year-old with a YouTube channel wants to tell you otherwise, but “true” AI agent solutions, right now, are rare. Even rarer are agents doing valuable work within organisations.

And when I say ‘agents’, I mean actual ‘agents’, not workflows.

I’m not being harsh. I think AI workflows and automations are very useful, just don’t call them “Agents”.

Before we move on, let’s talk about the Model Context Protocol, aka MCP.

Unless you’re a backend developer or some super nerd (like yours truly), you might never engage with MCP. Nonetheless, let’s take this as a learning moment to once again impress at your next team meeting.

Model Context Protocol Explained

To understand MCP, we need to understand the limitations of Large Language Models (LLMs) on their own, along with the challenges developers face when trying to make them useful.

Maybe this will make you feel a bit of empathy for your local tech team.

LLMs are good at tasks like writing text, answering questions based on their training data, or generating code snippets.

However, they can’t do anything meaningful in the real world on their own, such as sending an email, interacting with a calendar, or performing a specific task on your behalf.

So, we need to connect them to different tools and services.

We can do this through APIs…however, this relies on each service exposing an API, and every connection has to be built and monitored separately. One API with an LLM is easy, but connecting multiple tools to LLMs through individual APIs is difficult.

Now, MCP helps solve this problem by acting as a universal translator to simplify these connections.

Think of it as a layer between the LLM and all the different tools and services it might need to interact with. Instead of the LLM having to learn and manage every single service (through an API), MCP translates the different “languages” of all those services into a unified language for the LLM.
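
To make the “universal translator” idea a bit more concrete, here’s a rough sketch of the pattern (not the actual MCP spec): every service gets wrapped once behind the same shape, so the model-facing layer only ever has to learn that one shape.

```python
# Rough sketch of the idea behind MCP (not the actual protocol): wrap every
# service behind one common shape so the model-facing layer only learns that shape.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str           # what the model reads to decide when to use it
    run: Callable[[str], str]  # one uniform way to invoke any service

# Each service is wrapped once, behind the same interface (backends are pretend here)...
calendar_tool = Tool(
    name="calendar.create_event",
    description="Create a calendar event from a short text description.",
    run=lambda text: f"(pretend event created: {text})",
)
email_tool = Tool(
    name="email.send",
    description="Send an email described in plain text.",
    run=lambda text: f"(pretend email sent: {text})",
)

REGISTRY = {t.name: t for t in (calendar_tool, email_tool)}

# ...so the LLM side never learns each API separately: it lists tools and calls run().
def describe_tools() -> str:
    return "\n".join(f"- {t.name}: {t.description}" for t in REGISTRY.values())

def use_tool(name: str, request: str) -> str:
    return REGISTRY[name].run(request)

print(describe_tools())
print(use_tool("calendar.create_event", "Tea with the L&D team, Tuesday 10am"))
```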

Now, either you got that, or I confused the s**t out of you.

If the latter, check out this vid, which should resolve that.

To Agent or not to Agent, that is the question

Every tool has its time and place.

I say that too often. Much like LLMs, and AI in general, Agents aren’t the answer to everything. Knowing when (and when not) to call upon the powers of an AI agent is a skill in itself.

My best advice is actually stolen from an engineer at Anthropic (creator of Claude).

Barry Zhang (Applied AI team at Anthropic) gave what I class as a legendary answer to the growing trend of people trying to apply agents to every problem, even when simpler systems would suffice.

“Don’t go after a fly with a Bazooka”

Barry Zhang (Applied AI team at Anthropic)

Magnificent!

I see this so much these days with a lot of tech.

So many tasks can be done in a few minutes by a human, but we’ll spend hours trying to get AI to do them. Surely, that’s counterproductive to the goal?

Barry also shared this useful slide from one of his live talks (if you’re reading this, Barry, I’m not stalking you – promise!).

And to echo what Quentin Villard shared on LinkedIn, here’s a quick framework to figure out the best tool for the job (there’s a toy version in code after the list):

  • If a task requires interacting with external services or your digital environment and you haven’t set it up as a workflow or agent, you need to do it yourself. Use a degree of common sense here: if the task is simple or you enjoy it, use that supercomputer in your head, aka the brain.
  • Choose an AI workflow for repeatable, rule-based tasks where you want predictable automation.
  • Choose an AI agent for tasks where you have a goal and want the AI to dynamically figure out the steps, acting as a flexible assistant.
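
If you want to poke at that framework, here’s the toy version in code I promised. The questions and labels are my shorthand, not a formal model.

```python
# Toy version of the framework above - the questions and labels are my own
# shorthand, not a formal decision model.
def pick_approach(simple_or_enjoyable: bool, repeatable_with_clear_rules: bool,
                  goal_needs_dynamic_steps: bool) -> str:
    if simple_or_enjoyable:
        return "Do it yourself - the brain is still a great supercomputer."
    if repeatable_with_clear_rules:
        return "AI workflow - predictable, rule-based automation."
    if goal_needs_dynamic_steps:
        return "AI agent - give it the goal and let it work out the steps."
    return "Do it yourself for now (or rethink the task) until one of the above clearly fits."

# e.g. a monthly report that's built the same way every time:
print(pick_approach(simple_or_enjoyable=False,
                    repeatable_with_clear_rules=True,
                    goal_needs_dynamic_steps=False))
```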

Final thoughts

Of course, there’s much more to say about agents.

But for 95% of the humble humans in this world, this is what you need to know.

This space will continue to grow faster than my cups of tea can brew, but that doesn’t mean you need to be flying at the same speed.

Deep and meaningful understanding requires a moment or two to breathe.

Agents are here, they’re useful, and it will only become easier to access them in shared marketplaces.

As a bonus, here are a few more resources to shape your knowledge:

Go forth, human.

→ If you’ve found this helpful, please consider sharing it wherever you hang out online, tag me in and share your thoughts.




How To Build An Interactive AI Avatar (And 4 Ideas To Use It in L&D)

Ok, we’re getting super practical today.

Too much content talks about ‘what could be’, but you know I like to take a different approach.

I’ve always been a ‘show, not just tell’ type of guy.

If you want to shape the future of the learning industry and expand your career, you gotta be down in the action. This is the kinda stuff you’ll never see at a conference.

Today, you’ll learn how to create your own interactive avatar and put it to the test as part of the evolving learning experience with tech.

Now doesn’t that sound fancy…

Before we begin, let’s get clear on why we’re doing this.

A few weeks back, I shared this video on LinkedIn. I honestly thought most people knew about this tech. It’s been around for at least a year.

Turns out I was wrong.

A lot of people freaked out. They either thought I was some sort of dark wizard or the whole thing was staged with a scripted avatar. Neither is true, and you’ll see for yourself today.

To get the most out of today’s edition, you’ll need to do a bit of reading, watching and doing.

Lucky for you, I’ve structured this all in the window you’re currently staring at.

[Image: comparison of standard AI avatars and interactive AI avatars. Caption: How ChatGPT sees the difference.]

The different levels of AI Avatars

Comments on my LinkedIn post tell me I need to provide some context on the current state of AI avatar technology.

So, let’s break down the difference between an interactive avatar and a standard one.

Let’s start with ‘what you know’…

Your standard AI Avatars.

The ones you’ve probably seen everywhere the past few years. You know the score with these.

You pick a character, a voice and give it a script to voice over your terrible elearning project…oh, sorry, I misspoke…I meant your content 😈.

Anyway, this is what you probably use and know.

In L&D, I’ve often seen Avatars used as a way to elevate bad practices.

Too many turn that already crap course/powerpoint/pdf into a droning avatar and think that’s ‘job done’. In reality, it’s just crap in → crap out.

That’s not the fault of the tech – it’s ours.

→ We are falling victim to the limits of our ideas.

Like with any product, it has good use cases if designed well.

Now, Interactive AI Avatars are the opposite of this.

There are no scripts.

Yes, you still pick a character and a voice, or clone yourself if you’re one of the vain among us, but that’s where the similarities stop.

An interactive avatar runs on two sources:

  • A Large Language Model (LLM)
  • A knowledge base paired with system instructions

If you’ve created custom GPTs or built any general AI assistants, you’ll be familiar with this back-end setup.

Essentially, it’s an LLM with a face and voice.

The difference is that you refine its focus by providing sources in a knowledge base that it uses to interact with users.
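
For the curious, here’s roughly what that back end looks like as a sketch. The function names are placeholders I’ve made up, the retrieval is deliberately naive, and HeyGen’s (or anyone else’s) real API will look different.

```python
# Sketch of the back end of an interactive avatar: system instructions plus a
# knowledge base feeding an LLM, with a face and voice layered on top.
# The names here are placeholders, not HeyGen's (or anyone's) real API.

SYSTEM_INSTRUCTIONS = (
    "You are a friendly onboarding guide. Answer only from the provided knowledge. "
    "If the answer isn't there, say so and point the person to a human."
)

KNOWLEDGE_BASE = [
    "Printers are on floor 2, next to the kitchen.",
    "MFA is set up via the IT portal under 'Security'.",
    "Holiday is booked in the HR system and approved by your line manager.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Deliberately naive keyword retrieval - real products use proper search or embeddings."""
    words = set(question.lower().split())
    ranked = sorted(KNOWLEDGE_BASE, key=lambda doc: -len(words & set(doc.lower().split())))
    return ranked[:k]

def avatar_reply(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"{SYSTEM_INSTRUCTIONS}\n\nKnowledge:\n{context}\n\nQuestion: {question}"
    # A real platform sends `prompt` to an LLM, then renders the reply as speech and lip-sync.
    return f"(pretend spoken answer grounded in: {context})"

print(avatar_reply("Where do I set up MFA?"))
```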

Now we’ve got that out of the way.

You can check out the full “How to build an interactive AI avatar with HeyGen” video tutorial (skip to 3:31 for the tutorial only) and find my ideas on impactful ways to experiment with this tech in L&D below.

Where can Interactive AI avatars deliver impact?

While this all seems very cool, it doesn’t mean much without a performance application.

Here are a few ways to experiment with this:

👋 The Friendly Onboarding Assistant

I’ve previously covered how AI assistants can be a real help in closing the gap between different departments for new starters.

Starting a new job is such a nerve-wracking thing.

There are lots of questions to ask and stuff to know, and quality human time isn’t always available. I see an interactive avatar as the next level of a text-based onboarding assistant.

Imagine your new starter opens their laptop and is greeted by a branded avatar that says, “Hey, let’s help you get settled.”

They can ask anything. 

The questions we might miss but that are oh-so-important when you’ve just landed: stuff like where the printers are, who to speak to in ‘x’ team and how to set up MFA (that one can be hell!).

They don’t have to wait for days until HR gets back to them or feel like they’re annoying their manager. They get a real-time response and a friendly smile (if you instruct it to smile, of course).

This avatar doesn’t replace onboarding, it enhances the experience.

The ROI:

  • New hires get the answers they want fast.
  • Everyone hears the same message (no more “Steve forgot to tell me how to book holiday”).
  • You finally get data on what people actually want to know in their first few weeks.

🚨 Compliance & Policy Coach

Let’s be honest: nobody remembers anything from their annual compliance training.

I’ve been in this game for 17 years, and no one has proved me wrong yet.

You know this problem, too. People need to know what to do in the moment, not six months earlier in a test their colleague probably sent them the answers to over Slack.

So, what if they could talk to an avatar trained specifically on that data, one that can guide them so they don’t get sacked for sending that email?

This is modern-day performance support.

They can ask questions like: “Is this process breaking data rules?” or “Can I share this externally?” And instead of a vague response, the avatar references real policies.

The ROI:

  • More useful than clicking through 78 slides of death by PowerPoint once a year
  • It’s there when they actually need it
  • You can track every question, every response. That’ll please the audit committee

🛒 The Shop Floor Assistant

I get a lot of questions about “How do people on the retail frontline leverage AI?” – I think this is one idea to try.

I used to work in the global L&D team for one of the world’s largest retailers, so I’m familiar with the challenges of performance support for in-store colleagues.

They don’t have the same access to devices and apps as their head office counterparts.

We can change that with both online and offline AI solutions. A lot of things can go wrong on a shop floor, and no one wants to be working on a battlefield in the bread aisle.

An interactive avatar could work with teams to deploy new store layouts and answer questions about the latest products and new promotions.

Yes, you’d need to think about hardware access for in-store teams, and today’s avatar solution might not be the best option for this right now, but that doesn’t mean it’s not possible to achieve.

The ROI:

  • Real-time answers for colleagues and customers direct from live data
  • Better understanding of the questions and challenges in-store colleagues face day to day.

💬 The ‘Refresh My Skills’ Coach

Another evolution of a current text-based solution.

Creating a fine-tuned skills coach with an LLM is pretty easy. Now we can give it a face and voice to round out the experience.

Often, we all want a safe space to practice, and that’s not always accessible with other humans.

AI can provide a challenging sandbox to refresh and sharpen key skills at the point of need. Imagine you need to have a difficult conversation with a team member this afternoon. It’s been a while, you’re a little rusty and a bit nervous.

This is totally normal.

You want to get a bit of structure and practice in before the meeting, but finding another human to help with that within the time constraints is not easy. So, we could call upon an interactive avatar to:

  • Refresh our understanding of frameworks and models for difficult conversations
  • Roleplay the conversation to test your skills
  • Craft an action plan to prepare you for the conversation and create a meaningful experience for all participants.

The ROI:

  • Critical support with sensitive tasks when you need it most
  • An accessible and safe space to practice skills 24/7

[Image: interactive AI avatar interface showing knowledge-base uploads, tailored avatar personalities (April, Mia, Jack, Lily) and a world map highlighting 24/7 availability and scalability. Source: HeyGen]

Final thoughts

I cannot stress enough that this isn’t about replacing people.

Avatar companies love to use that tagline of “replace yourself”, but you don’t need to shape it this way.

It’s about designing support where it matters most in those micro-moments that often get missed. Interactive avatars won’t solve every problem, but when used intelligently, they can lift the load, enhance performance, and free humans to do more of the stuff only we can do.

You don’t need to, and shouldn’t, transform everything into an avatar experience because it’s easy. This is the same type of thinking which led us to the current mess of “Everything is a course”.

Just pick the one use case that solves a real problem in your org and test it.

Then build what’s most impactful.

→ If you’ve found this helpful, please consider sharing it wherever you hang out online, tag me in and share your thoughts.




How L&D Can Actually Transform with AI in 2025

The wall of AI noise on social media is deafening.

I know you feel that too.

It leads to a lot of insecurities, false expectations and an unrealistic understanding of the current state of AI, especially in L&D.

I know this because L&D teams tell me almost weekly.

So, today, we’ll explore “What’s really going on” with most of the L&D teams I collaborate with and how they are leveraging AI today.

Spoiler: They’re not building complex multi-step agentic workflows and sipping martinis on the beach while AI does it all. I didn’t want to get your hopes up.

The 4 Levels of Transformation For L&D with AI in 25/26

If you believe everything you see on social media, every “AI influencer” (which doesn’t mean much when it seems everyone is in AI today) has a billion agents doing their work while they make millions of dollars.

These statements provide my daily dose of amusement.

Appearances, as you know, can be deceiving.

It’s these types of posts that breed fear amongst many L&D teams I work with. Everyone thinks they’re behind, or at least that’s the perception. But as long-time readers will know, that all depends on your context.

Each team is on a different journey shaped by constraints, resources, and organisational realities.

One team’s small use case is a huge enhancer for another.

The gap between what you think the ‘majority’ of L&D teams are doing and what they’re actually doing is probably bigger than you think.

We’re going to shine a light on some of this today.

3 Years, 20 clients and a lot of data later

Being a tech geek in L&D has always worked in my favour.

Gen AI pumped that full of steroids, which has led to the fortunate opportunities to work with many L&D teams on the expanding topic of AI in L&D.

A question I can never escape when consulting with clients is:

“What are other L&D teams doing with AI? We feel so behind and need to accelerate” 

I get why they want some sort of maturity benchmark against fellow practitioners.

It’s not so easy to give that, though.

Everyone is on their own journey, influenced by the factors I shared earlier. What I can, and do share, is an ‘average maturity level’ of where most teams I’ve worked with play in 2025.

Here’s a snapshot of that ↓

[Infographic: the five-stage journey to AI maturity, with most teams currently sitting around level 2.5, the zone of activation and experimentation. Source: Anthropic x Asana AI at Work report (graffiti by me)]

They sit on a 2.5 level, if I can call it that.

A zone of both activation and experimentation.

Which I love to see because stage 1 was more common in late 2023 and most of 2024 for me. That’s not to say I still don’t encounter it, but it’s less of a resistance conversation and more of a “What’s the enormity of the possible with this stuff?”

I don’t ever see maturity as fixed.

Not with AI, anyway.

New advancements drop quicker than Ed Sheeran can break hearts with another emotionally torn tune. Moving back and forth through these levels is both expected and encouraged, imo.

This image represents a higher-level view of things in 2025.

So, let’s explore how L&D can transform with AI in the next year.

The L&D with AI transformation model

Ok, what I’m about to walk you through is built on:

  1. My experiences with clients
  2. What I see and hear from product partners
  3. Too much obsession with analysing market trends

It’s a viewpoint, observation, and my best guidance rolled into one.

You can disagree with it.

I believe the next year (and a bit) presents a golden opportunity for rewiring what we do and how we do things as an industry with AI. The fancy consulting firms would bill this as “hyper-transformation” or some insane level maturity model.

As a humble human, I don’t dare go down that route!

What is paramount to understand is that truly changing the face of L&D with AI is one of the biggest transformation projects our industry has ever seen.

Just like with social media and the internet before it.

Don’t fool yourself that this is only a story about technology.

I look at this journey in 2025 through 4 layers:

[Infographic: the four levels of transformation for L&D with AI in 2025. Level 1: Current State, AI as an Assistant. Level 2: Transition, AI as a Teammate. Level 3: Emerging, AI as a Tutor + Coach. Level 4: Future, AI rewires L&D. My view on 2025.]

Level 1: The Current State – AI as an Assistant

80% of the teams I consult play here.

It’s a good place to be, as we all have to start somewhere.

You’re (probably) using AI mostly as an assistant right now. By that, I mean the use of one of the many Large Language Models (LLMs) in ChatGPT, Gemini, Copilot, etc. You could even be doing this within an LXP or LMS if you have AI features bolted on.

At this level, we see the most use in L&D for generating content, analysing data, Q&A and simple info retrieval. Most other industries at this level are doing the same thing.

Level 2: Transition – AI as a Teammate

This is right where we belong as I type these words.

We’re not just looking at AI as a tool, but as a teammate, and exploring what that means for the work you do, how you structure your teams, and how you deliver value in the business.

What we’re starting to see is AI taking a more active role in the form of agents (or agentic AI, if you wanna get fancy about it).

AI agents are the next level of generative AI technology that now allows you to automate workflows and delegate tasks with a degree of autonomy.

A simple way to describe the difference between AI assistants and agents is:

  • AI assistants work with you on tasks
  • AI agents do the task for you (with guidelines and oversight)

(I have a non-techy guide to AI agents for L&D that you should check out to learn more).

There’s a lot of marketing BS around agents right now.

Agents in their true autonomous form are going to be pretty powerful. But don’t be fooled by all those “gurus” online with their n8n/Make/Zapier 150-step workflows.

That’s like a bargain basement version of its potential.

Autonomous agents take a task and do it for you based on the parameters and guidelines you put in place. A really simple way is to think about this like self-driving cars (which are also autonomous, fyi).

Self-driving cars are given guidelines/instructions on how to complete their tasks and the power to make decisions based on that data. They can respond to situations that happen in the moment through their own reasoning.

That’s how they know to take a different route or dodge when someone runs out in front of them.

These tools are becoming increasingly autonomous, like self-driving cars.

Agents are going to be transformational, not just for L&D, but society in general.

What’s really important to understand is that this is a transitional time because a lot of teams still use and look at AI as an assistant. But those that are forward-thinking are looking at how they use AI as a teammate.

Level 3: Emerging – AI as the Coach + Tutor

Level 3 is what I look at as emerging.

I say emerging, but really, this is happening now.

We’re moving to a place where AI takes on the creation of more adaptive and personalised learning experiences.

Maybe, we won’t need them anymore 😱.

We’re seeing a new evolution of these technologies in AI tutors and coaches.

[Note: you can explore more of my analysis on AI coaches and tutors in L&D, and a demo of how I use AI coaches in courses, too.]

I’m not saying AI-powered coaches and tutors will replace content libraries and platforms in 6 months. That might never happen, but I do see them challenging the status quo.

They’ll become a prominent piece of these experiences in the tech ecosystem, and of course they can coexist with content libraries and platforms.

What I think we’ll see is that, as more of our audience’s experience comes through AI tools, these are the kinds of experiences they’re gonna demand. We can see that demand already.

The reality is that more of us are spending time inside LLMs like ChatGPT.

This has set a new expectation for experiences.

We’ve seen this before with both Google search and social media delivering new experiences our audiences have come to expect. I’m not quite sure we ever rose to those levels as an industry.

AI provides a different dynamic here in not just consuming content but enabling the audience to create a personalised experience based on their context.

We can see suppliers providing AI coaching and tutoring solutions today (check out my friends at Sana).

It’s happening in schools, too. Check out this demo from a new startup that was released just last week. Many more will be coming, no doubt.

This level will be the era of moving from transactional to truly conversational experiences.

The question is, how will you survive and thrive in it?

Level 4: Future – rewiring L&D with AI

I label this level “Future”, but in reality, we’re talking end of this year.

This is all about how AI rewires the world of L&D and how we shape it. What I mean by ‘rewiring’ is redesigning what we do and how we do it.

For workplace L&D, this is a complete rethink of how ‘learning’ is designed, delivered and measured. If our audience’s experience is shifting with the use of AI tools, we have to meet them where they are.

We can’t keep pushing out the same stuff or subscribe to current thinking.

I don’t have all the answers to what this looks like, sorry. I have a bunch of ideas and experiments I’m running, but I gotta get paid, so I can’t be revealing all the secrets here.

To shape what I’m talking about here requires a certain level of confidence and comfort in the subject of AI. So don’t skip that step before you try to shape a new way.

Final thoughts

There is much more to say on all of this, but we’ll leave it here for today.

I cannot give you an exact timeline on how this plays out for you, your team and your company. No one can, as that all depends on your context.

I say this is all playing out across 2025 as the tech is maturing at a lightning pace, yet that doesn’t mean you’ll apply this at the same pace.

Hopefully, this brings you closer to the reality of what’s really happening today and where we’re going.

→ If you’ve found this helpful, please consider sharing it wherever you hang out online, tag me in and share your thoughts.




Weird And Wonderful Experiments With AI Avatars For Learning

My latest experiments with AI to rewire learning have taken me back to playing with AI Avatars.

The market has many AI avatar tools with varying degrees of quality (I have seen some horror shows). I’ve played around with a few but never really found a good use case for them.

The quality level has mostly put me off sharing anything built with them, but times change, and the tech has advanced…a lot.

In L&D, I’ve often seen Avatars used as a way to elevate bad practices.

Too many turn that already crap course/powerpoint/pdf into a droning avatar and think that’s ‘job done’. In reality, it’s just crap in → crap out, and that’s not the fault of the tech – it’s ours.

We’re falling victim to the limits of our ideas.

Faster and more might feel like the only option on the table with AI avatars, but that’s not the case.

We have to rewire how we do what we do.

Converting an ‘average’ static piece of content into something AI-enhanced is probably not your best call.

Grand Theft Avatars

I must admit, watching several deepfake-themed thrillers like the superb British drama The Capture has put me in a constant state of suspicion when it comes to using my likeness with any of these tools.

Sadly, this choice has been taken away from me in the last few months, as I’ve dealt with continued instances of my video content being stolen, translated into different languages, and reused across social platforms.

Yes, they even target little old me.

Apparently, I’m a small hit with Japanese audiences.

Those reservations aside, I always believe every tool has a time and place.

Stand-alone vs built-in

While I like stand-alone AI tools, having everything in one place for video editing is an advantage.

So, when I saw Descript, a video editing tool I use daily, introduce their v1 of AI avatars to their platform, I decided to jump back in and see where the tech is at today.

Let me clarify something here: I’m aware that better AI avatar tools exist, but I’m not looking to use several different apps chained together.

If I get a lot of value out of an existing product and they bring AI features that work well within that product, I’m all for that.

The ability to do everything I need with existing features I’m comfortable with, and have that additional AI layer in-app, is what gets more of my attention these days.

Plus, I don’t have all the money in the world for a bazillion app subscriptions.

As the feature is a v1, I’m not expecting the level of something like HeyGen or Synthesia.

I’d give my experience and the result a 7/10 — and there’s nothing wrong with a 7/10 in my book.

Here’s what I liked:

  • The lip-sync looks pretty damn good
  • Body and face movements didn’t feel distracting
  • The ease of writing my script and the transcription quality
  • How effortlessly the avatar worked with existing features — I did expect breaks, but had none

What could be better:

  • Avatar selection: currently, you can only pick from animated, otherworldly or digital-type characters. It’s a v1, so this is fine, and I didn’t want to provide my own avatar
  • Avatar visual quality: You can probably see this in the video. Although the video is output in 4K, the avatar looks pixelated at times
  • Over-exaggeration of words: I see this in a lot of these avatars, and you’ll see it in the video too. The pitch when pronouncing some words can be off.

In sum: I’m not assessing this as an AI avatar feature on its own. That 7/10 rating takes into account how it complements the existing product and the ease of use for me as a daily user.

→ What do you think? Check out the finished video and hit ‘reply’ to let me know.

Oh, I nearly forgot — I need to tell you about the use case.

The problem I’m solving

I binge (with AI) a lot of reports.

A lot of the time, I don’t want to keep writing another article or shoot another talking head video about new research, even if I’ve found it valuable.

A new report from Microsoft, covering how they think teams will evolve with AI, fell into this category.

It’s a great report, and I will write in more detail about it, but I wanted to get some initial thoughts out there, which I think you (yes, gentle reader — you) could benefit from, until I drink copious amounts of tea to distil those thoughts.

So that scenario, and Descript launching this feature, collided, and here we are.

I spent 15 minutes skim-reading the report (yes, I still do that), jotting down the key insights and ideas that stood out for me in my little notebook (and yes, I still do that too), before I spoke to my good AI friend in NotebookLM.

That chat involved NotebookLM reviewing the report alongside my notes, and us crafting a narrative together on what I felt was most useful to share. This became the script for the video, which I tweaked a lot in the editing phase. I felt it needed more of my infused sarcastic British tone.

All the transitions and media in the video were done within Descript’s editor.

3 hours later, you have the below.

I feel like it turned out ok.

So, will I be using these all the time? No.

But will I be more open to experimentation with Avatars depending on the context of my goals? 100%.

Bonus: How I created this video in Descript (step by step tutorial)

I imagine you might want to know how the above was created.

Fret not, I filmed a second video to show you how I used Descript to build the end-to-end production, including the avatar creation and editing.

Don’t say I never treat you!

AI avatar tools to test

I always say the best tool is the one that best fits your goals and priorities.

For me, Descript ticks this box right now as I already pay for the core product, but that doesn’t mean it’ll be right for you.

Other avatar focused platforms include:

  • HeyGen
  • Synthesia
  • Elai
  • Colossyan

If you’re already a power user of these, let me know how you use them.

Final thoughts

There are 3 points to take away today:

  1. Think about how you’re rewiring the way we craft learning experiences with AI – don’t do what you’ve always done (death to PDFs and PowerPoint)
  2. Getting quality features in a product you love using already can be more powerful than using several different tools combined. This is contextual, though
  3. The best way to ‘figure out’ how useful AI could be is to use AI

→ If you’ve found this helpful, please consider sharing it wherever you hang out online, tag me in and share your thoughts.




How AI Is Redefining the Way We Assess Learning

I got an out-of-the-blue message recently.

“I just saw this video where an AI agent completed a test for an employee, are you worried about this?”

There was no “Hey, Ross”, “How are you, Ross?”, “Hope life is good, Ross?” – just the question. I also find it incredibly unnerving to write about myself in the third person. This is how you know I write all of this and not AI. It’s way too polished to do any of that madness.

The sender’s apparent lack of care for me as a human aside, I have an answer.

I imagine it’s not the one you’d expect.

My take will be a little bit ranty, but nearly 20 years in the field has made me somewhat numb to dumb education and learning practices.

What was shown in the video (fyi, can’t find the video now, must have rage deleted the message) doesn’t worry me as much as the fact that the education system is broken.

Yes, that sounds like some lame statement made to court attention, but I’m not playing to an algorithm, and I’ll go down with this ship.

The problem with tests and quizzes in courses

If we confirm ‘skills’ and ‘understanding’ through quizzes, then it’s all just a game of who has the best memory, not who understands how to do ‘x’.

I know millions of institutions and corporate learning experiences worship at the altar of the almighty multiple choice exam as the measurement stick for human intelligence, skills and expertise.

Doesn’t mean it’s right.

Agentic AI thrives here because it’s a pretty easy process to tick a box of multiple-choice answers. There’s nothing inherently hard about that. It might be one of the easiest use cases of AI agents to date.

That is not the problem.

The actual problem is how we’ve shaped the process of measuring intelligence, skills and expertise.

What we should be doing is assessing critical thinking and how students/people approach and solve problems.

This is where AI tutors will play a big role. Instead of taking some stupid quiz or exam, you’ll talk with a tutor to explain concepts, provide analogies and break down your human chain of thought.

Thus testing your problem-solving capabilities and helping you identify blindspots.

The latest AI tech finally enables us to do something about this.

But let’s not stay in the rant too long…

So, instead of me shaking my tea cup at the world, let me share the enormity of the possible and how you can experiment with this in your learning experiences.

Why AI Coaches beat any test or quiz for real learning measurement

[Screenshot: a conversation with an AI coach discussing fundamental AI concepts and their implications.]

A few weeks back, I finished a Machine Learning Cert with Google.

And I really enjoyed it.

Yes, you read that right: someone actually enjoyed learning something. But what triggered this emotion? Simple: they treated me like an adult throughout the entire process, not just the stereotypical category of a ‘student’.

The most impactful way they did this was by removing all those worthless and kinda insulting tests and quizzes that pollute too many education and workplace learning products.

Instead, I was assessed based on what I understood from the course by talking with an AI assistant, which acted as a semi-assessor and coach.

No multiple-choice questions for me to get ChatGPT to answer here.

At the end of each module, I had to face what we could call the final boss: Google’s AI coach.

After hours spent learning about the wonderful components of machine learning, I’d talk with the AI coach to share what I learnt, answer its probing questions and explain how I would convey these learnings to others through examples and analogies.

It was unique compared to the industry standard, and that made it very rewarding.

Even as I write these words, my recollection is so positive, and it helped me cement my understanding of many of the core concepts in a way I know a multiple-choice test couldn’t have.

Now, you could read this and think, “This guy is crazy, isn’t a test better?” — NO!

The memory game is not a measure of intelligence

The problem we’ve created as a society is this odd ‘test your memory’ game that we use to assess an individual’s skills and expertise.

Completing a course on ‘x’ doesn’t mean you know how to do ‘x’ for real.

That goes for this very course I’ve shared with you.

I took the cert as a way to expand my understanding and capabilities in the ML realm, but don’t expect me to be building algorithms anytime soon.

Knowing how to do ‘x’ requires me to demonstrate that I can do it and explain the critical thinking behind it.

We need to spend more time prioritising how we solve problems, thinking critically and nurturing our human chain of thought, not being the top memory champion.

And I think AI can help us shape this.

[Infographic: old assessment methods (quizzes, tests, memory games) marked with a red cross versus modern methods enhanced by AI (AI Coach, AI Tutor, Human Chain of Thought).]

How to rewire courses with AI to measure real impact

Even before the Google cert, I had AI coaches in two of my online courses.

One in my performance consulting course and the other for my AI prompting masterclass.

Both are vital, imo, in helping students to take action on what they learnt and for me to validate whether they actually understand any of what they’ve done.

They’ve been transformative for my students.

Both coaches are used daily by new and returning students. They extend the value of the courses, make sure students get real-time support and keep making the content applicable for years to come.

This is the power AI can bring to education and learning if you go beyond content creation.

[Image: ‘Human Chain of Thought’ and a description of the problem-solving skills we should assess instead of tests and exams.]

Why do this instead of quizzes, tests, etc.?

Easy:

  • I think they’re utterly pointless, but I assume you figured that out already.
  • Because I want to develop each student’s Human Chain of Thought (HCOT).

I need to do a bit of explaining with Human Chain of Thought.

Early iterations of Large Language Models (LLMs) from all the big AI names you know today weren’t great at thinking through problems or explaining how they got to an answer.

That ability to break down problems and show its working is called the Chain of Thought technique.

This was comically exposed with any maths problem you’d throw at these early-stage LLMs.

They would struggle with even the most basic requests.

It’s a little different today, as we have reasoning models. These have been trained to specifically showcase how they solve your problems and present that information in a step-by-step fashion.
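
If you’ve never seen the technique spelled out, it’s as simple as it sounds. Here’s the classic illustration with the same question asked two ways; no particular model assumed.

```python
# The same question with and without a "show your working" nudge.
# Prompts only - swap in whatever chat model you like.
plain_prompt = "A course has 48 learners and 3/8 have finished. How many haven't finished?"

cot_prompt = (
    "A course has 48 learners and 3/8 have finished. How many haven't finished? "
    "Think step by step and show your working before giving the final answer."
)
# Expected working: 48 * 3/8 = 18 finished, so 48 - 18 = 30 haven't finished.
```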

We now expect all the big conversational AI tools to do this, so why don’t we value the same in humans?

Providing AI coaches that help my students contextualise, apply and truly understand what they’ve learnt amplifies this Human Chain of Thought.

So next time you design a learning experience, maybe ditch the tests and quizzes for a personalised AI coach/assessor (I’ll think of a better name in the future).
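
To make that less abstract, here’s a rough sketch of the kind of instructions that sit behind a coach/assessor like this. It’s illustrative only, not the exact prompt Google or I use.

```python
# Rough sketch of the instructions behind a coach/assessor of this kind -
# illustrative only, not the exact prompt Google or I use.

COACH_INSTRUCTIONS = """
You are a coach assessing understanding of {topic}, not a quiz master.
- Ask the learner to explain the core concepts in their own words.
- Probe their reasoning with "why" and "what would happen if" follow-ups.
- Ask for an analogy or a real example from their own work.
- Never use multiple-choice questions; never accept a memorised definition alone.
- Finish with: what they clearly understand, where the gaps are, and one next step.
"""

def build_coach_prompt(topic: str, transcript_so_far: str) -> str:
    return COACH_INSTRUCTIONS.format(topic=topic) + f"\nConversation so far:\n{transcript_so_far}"

print(build_coach_prompt("machine learning basics", "Learner: I think a model learns by..."))
```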

I plan to cover more on how to build these types of solutions in my next AI for solving real business problems boot camp. I only do these once or twice a year; you can join the waitlist to be first in line for the next one.

AI Tutor and Coach examples

I’ve used the word ‘coach’ a lot in this one.

We can throw AI tutors into that mix, too.

Here’s how I see the difference btw:

  • AI Tutor = Breaks down concepts and works in more of a professor style
  • AI Coach = Works with you in a live environment to solve challenges together. Basically, the new “Learning in the flow” but with AI.

Of course, these terms are interchangeable, and the capabilities can be merged.

Often, I find it’s easier to show you what I’m talking about with AI than to try to describe it, so here are some examples:

How I use AI coaches in my online courses

Using AI as a Tutor with Google AI Studio

Using AI as a Real-Time Coach with Google AI Studio

Final thoughts

I get that tests and quizzes are an easy way to measure “learning”, but that doesn’t make them useful.

AI now lets us reshape this.

You can create a similar AI coach to the one I had with Google, or the ones I use across my own courses.

Saying goodbye to completions as the measure of success is the new reality. Now, you can actually see whether people truly ‘get it’, not just if they finished something and passed the test.

→ If you’ve found this helpful, please consider sharing it wherever you hang out online, tag me in and share your thoughts.


Before you go… 👋

If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.

You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.