Categories
Artificial intelligence

The Hidden Impact of AI on Your Skills

The world has tons of reports/research on performance with Gen AI tools.

It has even more industry use cases by the day.

We know that, when used intelligently, these tools can enhance performance (see this, this, and this). Plus, it’s clear that we see the biggest short-term ROI in the boring and basic tasks.

I’ve found in L&D (and, to be fair, many industries) that we get distracted. We focus so often on the ‘shiny thing’ that we continually miss the point.

If AI ‘does it for you’, what happens to your skills?

Although I like the power, potential, and continued promise of AI tools, I’m troubled by the unexpected consequences of the manic pursuit of ‘AI at all costs.’

Especially, AI’s impact on skills.

I sense that we already over-rely on certain tools, and in doing so, we both create illusions of capabilities and fail to invest in moments of intentional learning.

Granted, a lot of this comes down to the intent and ability of human users.

But with all-time high levels of use across millions of Gen AI tools, and all-time low levels of AI literacy, we could be heading for a skills car crash of our own design.

Let’s unpack that.

📌 Key Insights:

  • Smart Gen AI use can expand skill capabilities for a limited time
  • In most cases, we aren’t improving skills; mastery requires more than AI alone
  • The majority of people will over-rely on AI tools and become ‘de-skilled’
  • AI tools can help us improve critical thinking processes
  • We must be more intentional in how we approach skill-building in an age of ‘do this task for me’

The Controversial Idea: Skills will be destroyed if we let AI do everything

So many people are scared that AI will take their job.

They think they’ll lose because AI tools can do the tasks better.

But what if you lose, not because AI does your job better, but because you over-relied on the temporary power it grants? You’d be the master of your own demise.

It’s easy to think, “That will never happen to me.” 

Maybe it won’t.

But I’d ask you to consider your use of AI tools today. My assumption is that most people use them with a ‘do this thing for me’ approach, rather than a ‘show me how to do this’ one.

Here exists a problem we aren’t paying enough attention to.

An AI-first approach will damage the capabilities and potential of skills (if you allow it).

Somewhat an observation for now, but a dark reality I’d like to avoid.

My thinking behind this comes both from real-world experience with consulting clients on Gen AI skills programs, and what I’ve seen in more advanced research this year.

An excellent piece of research from Boston Consulting Group has been one of my favourites on this topic. It unpacks, with an experiment involving 480 of their consultants, that Gen AI can increase productivity and expand capabilities.

That’s the headline, of course.

AI’s impact on skills: What we know today

The problem with most research and reports is that most people don’t read beyond the headline.

Hence why we have so many cult-like statements about Gen AI’s endless power. It is powerful, in the right hands. But any power comes at a cost.

For those willing to go deeper, we find both a bundle of exciting opportunities and critical challenges.

Here are some we haven’t discussed ↓

1/ Gen AI grants short-term superpowers

No surprise here, I think.

Gen AI tools grant easy access to skills we don’t possess. They can amplify the level of our current skills too. The BCG team coins this the ‘exoskeleton’ effect.

Explained in their own words:

“We should consider generative AI as an exoskeleton: a tool that empowers workers to perform better and do more than either the human or Gen AI can on their own.”

Being a nerd, I compare this to something like Iron Man.

For those not familiar with the never-ending Marvel films, Tony Stark is a character who has no superpowers (but is a highly intelligent human). To play in the realm of superheroes, he creates his own suit of armour that gives him access to incredible capabilities he doesn’t have as a human.

The caveat is that he needs the suit to do those things.

Essentially, using an AI tool is like being given a superpower you can use only for 20 minutes. It exponentially increases your abilities, but without it, you go back to your normal state. And everyone has the same access to this power.

BCG found the same in this research.

We could call this a somewhat false confidence dilemma.

This presents a few challenges to navigate:

  • How do we combat the illusion of expertise?
  • What happens when you don’t have access to AI?
  • How do we stop addiction to the ‘easy option’?

Spoiler: I don’t have all the answers.

However, this temporary boost in abilities often leads to another problem – the illusion of expertise.

2/ We have to fight the illusion of expertise

This is a big challenge for us.

Getting people to look beyond AI’s illusion of expertise.

You know what I’m talking about.

Now that everyone has access to creation tools, they all think they’re learning designers who can create their own amazing products. We both know how that’s going to turn out.

As an example from my own work, I can build a decent website with AI, which does the heavy coding for me. But I can’t do it without AI, not unless I learn how to do it myself.

Yes, I built x, but I’m not a software engineer.

There’s a big difference, and sadly, I see people falling into this trap already.

Now, with all this new tech, I don’t need to know the ‘why’ or ‘how’ behind something being built by AI.

But what does this mean for my skills?

A big part of skill acquisition focuses on the ‘why’ and ‘what’, in my opinion. I don’t need to know every little detail, but it helps to have a basic understanding.

I see a few unintended consequences if we don’t clearly define what is a ‘short-term expansion enabled by tech’ and what is ‘true skill acquisition’:

  • We over-rely on AI tools, and this over-reliance erodes critical thinking skills, a key element in real-world problem-solving
  • We lose context, a sense of understanding
  • Our human reasoning skills will erode in the face of “But AI can tell me”

Each of us will fall into different groups based on our motivations.

I’m not saying we’re all collectively going to become skill-less zombies addicted to a digital crack of Gen AI tools, but it will be a reality for some.

Image: a broken phone showing an AI assistant that a human has over-relied on for building skills.

What happens to human skills if we over-rely on AI?

This is a real grey area for me.

I’ve seen countless examples where too much AI support leads to less flexing of human skills (most notably common sense), and I’ve seen examples where human skills have improved.

In my own practice, my critical thinking skills have improved over the last two years of weekly AI use. It’s what I class as an unexpected but welcome benefit.

This doesn’t happen for all, though.

It depends on the person, of course.

BCG’s findings seem to affirm my thoughts that the default will be to over-rely. I mean, why wouldn’t you? This is why any AI skills program you’re building must focus on behaviours and mindset, not just ‘using a tool.’

You can only make smart decisions if you know when, why, and how to work with AI.

But we can learn with AI too

I (probably like some of you) have beef with traditional testing in educational systems.

It’s a memory game, rather than “Do you know how to think about and break down x problem to find the right answer?” Annoying! We celebrate memory, not thinking (bizarre world).

My beef aside, research shows partnering intelligently with AI could change this.

This article from The Atlantic and Google, which focuses on “How AI is playing a central role in reshaping how we learn through Metacognition”, gives me hope.

The TL;DR (too long; didn’t read) of the article is that using AI tools can enhance metacognition, aka thinking about thinking, at a deeper level.

The idea is, as Ben Kornell, managing partner of the Common Sense Growth Fund, puts it, “In a world where AI can generate content at the push of a button, the real value lies in understanding how to direct that process, how to critically evaluate the output, and how to refine one’s own thinking based on those interactions.”

In other words, AI could shift us to prize ‘thinking’ over ‘building alone.’

And that’s going to be an important thing in a land of ‘do it for me.’

To truly do so, you must know how.

Google’s experiments included two learning-focused examples.

In the first example, pharmacy students interacted with an AI-powered simulation of a distressed patient demanding answers about their medication.

  • The simulation is designed to help students hone communication skills for challenging patient interactions.
  • The key is not the simulation itself, but the metacognitive reflection that follows.
  • Students are encouraged to analyse their approach: what worked, what could have been done differently, and how their communication style affected the patient’s response.

The second example asks students to create their own chatbot.

Strangely, I used the same exercise in my recent “AI For Business Bootcamp” with 12 students.

Obviously, great minds think alike 😉.

It’s never been easier for the everyday human to create AI-powered tools with no-code platforms.

Yet, you and I both know that easy doesn’t mean simple. I’m sure you’ve seen the mountain of dumb headlines with someone saying we don’t need marketers/sales/learning designers because we can do it all in ‘x’ tool.

Ha ha ha ha is what I say to them.

Clicking a button that says ‘create’ with one sentence doesn’t mean anything.

To demonstrate this to my students, we spent 3 hours in an “AI Assistant Hackathon.” This involved the design, build, and delivery of a working assistant.

What they didn’t know is I wasn’t expecting them to build a product that worked.

Not well, anyway.

I spent the first 20 minutes explaining that creating a ‘good’ assistant has nothing to do with what tool you build it in and everything to do with how you design it.

Social media will try to convince you that all it takes is 10 minutes to build a chatbot.

While that’s true from a tech perspective, the product, and its performance, will suck.

Just because you can doesn’t mean you will (not without effort!)

When the students completed the hackathon, one thing became clear.

It’s not simple or easy to create a high-quality product, and you’re certainly not going to do it in minutes.

But, like I said, the activity’s goal was not actually to build an assistant, but rather to understand how to think deeply about ‘what it takes’ to build a meaningful product.

I’m talking about:

  • Understanding the problem you’re solving
  • Why it matters to the user
  • Why the solution needs to be AI-powered
  • How the product will work (this covers the user experience and interface)
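Those four questions can be captured as a simple pre-build brief. Here’s a minimal sketch in Python; the class, field names, and example answers are my own illustration, not something from the bootcamp or any particular tool:

```python
from dataclasses import dataclass

@dataclass
class AssistantBrief:
    """A pre-build design brief: if you can't answer these, you're not ready to build."""
    problem: str      # the problem you're solving
    user_value: str   # why it matters to the user
    why_ai: str       # why the solution needs to be AI-powered
    experience: str   # how the product will work (user experience and interface)

    def is_ready(self) -> bool:
        # Ready to build only when every question has a real answer
        fields = (self.problem, self.user_value, self.why_ai, self.experience)
        return all(f.strip() != "" for f in fields)

brief = AssistantBrief(
    problem="New starters can't find policy answers quickly",
    user_value="Cuts a 30-minute search down to seconds",
    why_ai="Questions arrive in natural language, not keywords",
    experience="A chat widget on the intranet homepage",
)
print(brief.is_ready())  # True
```

The point is the thinking, not the code: an empty field means an unanswered design question, and an unanswered design question means you shouldn’t be clicking ‘create’ yet.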

Most students didn’t complete the assistant/chatbot build, and that’s perfect.

It’s perfect because they learned, through real practice, that it takes time and a lot of deep thinking to build a meaningful product.

“It’s not about whether AI helped write an essay, but about how students directed the AI, how they explained their thought process, and how they refined their approach based on AI feedback. These metacognitive skills are becoming the new metrics of learning.”

Shantanu Sinha, Vice President and General Manager of Google for Education

AI is only as good as the human using it

The section title says it all.

Perhaps the greatest ‘mistake’ made in all this AI excitement is forgetting the key ingredient for real success.

And that’s you and me, friend.

Like any tool, it only works in the hands of a competent and informed user.

I learned this fairly young when a power drill was thrust into my hands for a DIY mission. Always read the instructions, folks (another story for another time).

Anyway, all my research and real-life experience with building AI skills has shown me one clear lesson.

You need human skills to unlock AI’s capabilities.

You won’t go far without a strong sense (and clarity) of thinking, and the analytical judgment to review outputs.

Going back to the BCG report, a few things to note that support this:

1/ Companies are confusing AI ‘augmenting’ with ‘skill building’

As we touched on earlier, AI gives you temporary superpowers.

Together (you and AI) you can do wonderful things. Divided, not so much (unless you have the prerequisite knowledge to do the task).

We can already see both companies and workers confusing their abilities to (actually) perform a task.

AI gives both a false sense of skills, and terror at the lack of them.

2/ Most people can’t evaluate AI outputs

Again, any of us can code with AI.

But that doesn’t mean we know what’s going on or how to check if it’s correct.

This is the trap anyone can fall into. Knowing how to validate AI outputs is critical. We need to pay more attention to this. You know, thinking about thinking, and all that.

An AI framework on when to use and when not to use.

3/ Without context, you’re doomed

Content without context is worthless.

That’s a general rule. Exceptions apply at times. Nonetheless, you need the context of when and when not to use AI tools to get results.

As we know, it’s not a silver bullet.

The solution to this is getting a better understanding of Gen AI fundamentals.

Another BCG report, in collaboration with Harvard, discovered that success in work tasks with AI came down to knowing the right time to call on those superpowers.


How to help humans use AI for REAL learning

Ok, we can see a potential problem if left unchecked.

Here are a few ideas, tools and actions to do something about it:

1/ Cover AI fundamentals

Too often ignored with people going straight to tools.

Yet, knowing how and why a technology works means you become the chess player, and not a chess piece that’s moved by every new model and tool.

The world has lots of resources to help you with this.

Here are some from my locker:

2/ Don’t confuse ‘do it for me’ with ‘learning to do’

While AI can enable individuals to complete tasks they wouldn’t be able to do independently, this doesn’t automatically translate to skill acquisition.

Help people recognise the difference.

To truly learn anything, you need a combination of:

  1. Understanding key concepts
  2. Engaging in practice
  3. Committing to improve

3/ Nurture your Human Chain of Thought

I introduced this concept in last week’s edition.

You might have heard me say “AI is only as good as the human using it” like a broken record.

It bears repeating here: you need human skills, clear thinking, and sound judgment to unlock AI’s full potential.

4/ Encourage critical thinking before and after using AI

Infographic illustrating a structured approach to engaging with AI tools, featuring boxes labeled 'Assess', 'Pre-Prompt', 'Output Analysis', 'Prompt', 'Role Reverse', and 'Challenge' with relevant questions and tasks for each.

Despite what social media gurus say, we all very much need to use our brains when working with AI.

If you want to do useful stuff, that is.

I’ve shared a system you can use to achieve this with all your AI interactions before. You’ll stand out from the digital zombies with this.

5/ Prompt an Engineer’s Mindset

BCG refers to this as the ‘engineer’s mindset’ as it originates mostly from engineering roles (both physical + digital).

I call it the ‘Builder’s mindset’, and I think this is a cheat code for life.

I would say I’m only as successful as I have been because of it. I learned it during my teenage years of coding in SQL and Java. It’s built around the principles of understanding the what, why, and how of building anything.

Back in the day, I used it to build SQL-based reporting applications.

I didn’t even think about building the app before I knew more about the consumer.

Simple things like:

  • Who are they?
  • What problems are they having?
  • Why are those problems happening?
  • What would this look like if it were easier for them?

Over the years, I’ve adapted this into all my work, especially writing.

As of today, before I begin any work, I ask:

  1. Why am I building this?
  2. What problem is it solving?
  3. Does it pass the ‘So What’ test?
  4. How will I build it?

I can only solve a problem or create a meaningful post/product/newsletter/video if I know the above.

Like a builder, you piece together an end goal.

When you reveal this, the next part is easy → Reverse engineer this process.

As this is such an important point, I need more than the written word to explain this.

So, here’s a short video where I explain how to use this framework:

Modern ways to reshape skill-building with AI

I’ve spoken a lot about AI coaches.

We can throw AI tutors into that mix, too.

Here’s how I see the difference between the two:

  • AI Tutor = Breaks down concepts and works in more of a professor style
  • AI Coach = Works with you in a live environment to solve challenges together. Basically, the new “Learning in the flow” but with AI.

Of course, these terms are interchangeable, and the capabilities can be merged.

FYI, today’s NL partner, Sana, is doing a great job in this department with their soon-to-be-released AI tutor. You should check that out.

Often, I find it’s easier to show you what I’m talking about with AI than try to describe it to you, so here are examples of both:

Using AI as a Tutor with Google AI Studio

Using AI as a Coach with Google AI Studio

In case you’re wondering, I use Google AI Studio to show these features because it’s easy to access for most people.

It’s a sandbox where you can experiment.

You shouldn’t use it for real work, though; treat it purely as a place to experiment. For Tutor and Coach tools fit for the workplace, more options are entering the market.

Final thoughts

So, will AI destroy or amplify your skills?

Only if you let it.

This is by no means a closed book. No doubt, I’ll cover more on this as time goes on.

For now, be smart:

  1. Craft your builder’s mindset
  2. Borrow superpowers but build real ones through practice.
  3. AI is powerful and has great potential, but don’t forget the unique human and technical skills you need to be ‘fit for life.’

Before you go… 👋

If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.

You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.


How To Turn Your Ideas Into Working Prototypes With AI

As generative AI advances show no sign of slowing down, neither do its growing uses for experience designers.

One that I’m growing fond of is the ability to build working prototypes for L&D ideas, apps and other creations.

No doubt, as I have, you spend a lot of time trying to help stakeholders visualise what ‘x’ thing could look like.

This normally involves scrappy drawings in a notebook or standing in front of people to do your best Bob Ross impression on a whiteboard.

People nod, yet it’s pretty hard to see how your idea would work and how users would engage with it. However, what if I told you a tool exists that can turn your ideas and scribbles into a semi-working prototype in 15 minutes? One you can share with users to get instant feedback.

Sounds like a scam, I know.

Thankfully, it’s quite real. I want to show you a handy feature from Claude (another LLM like ChatGPT) to achieve this.

Here’s how it works.

Getting started with Claude

Claude is another conversational AI tool like ChatGPT.

It works in much the same way, apart from a few features. Most of its team is made up of ex-OpenAI employees (the creator of ChatGPT), and the company is backed by a huge investment from Amazon.

Before we unleash you with building apps, you need to do the following:

  1. Visit Claude’s website and sign up for a free account
  2. Access your profile on the left-hand side of the screen at the bottom with your email/username
  3. Click ‘settings’ to access your ‘profile’ screen. Here, scroll to the bottom of this page and select ‘enable artifacts’

Artifacts is what we’re going to use to build interactive prototypes of your ideas. This allows Claude to generate code snippets, text documents, and loads of different designs.

The advantage is that it enables you to view semi-working apps in real time and tweak these.

How to transform your ideas into working prototypes

I’m not going to write out the step-by-step playbook here.

Instead, you can follow along to get everything you need with me in this video:

What can you use Claude Artifacts for?

A bunch of stuff.

Remember those whiteboard sessions and scribbles in your notebook from earlier? You can turn those into something a stakeholder can actually play with.

Instead of saying, “Here’s how it can work in my notebook scribbles,” you can say, “I’ve built a simple prototype to give you a feel for how it could work, try it here.”

A couple of ideas you could build:

  • A content library
  • Low-level LMS or LXP interfaces – handy if you wish to see how people behave with a new experience
  • Landing pages
  • Course pages
  • Simple apps
  • Websites

There’s much more you can do.

Everyone can build, but not everyone is a builder

As always, the power of new tools is in the hands of a skilled practitioner.

You will be able to build something of more value than the everyday employee as you know the context of your field. Look at this type of tool as another design assistant in how you build L&D tech solutions.

The barrier for entry to build stuff is getting lower every week.

That doesn’t mean everyone is a builder though (more on that in the future).



The Essential Ingredient For New Tech To Succeed

Every new technological innovation seems to unearth the same question throughout history.

Is this the end for humans?

It’s a tale as old as time. From the creation of the printing press to the attack of AI chatbots, it seems like we’re constantly on the brink of destruction.

There’s a huge plot twist here though.

It’s easy to think machines can do it all. But the real magic happens when humans are in the loop. If you’re worried about robots taking over, don’t be.

Let’s unpack how humans (including you) are the essential ingredient for tech adoption success.

What is the ‘human in the loop’?

If this were one of those Marvel films, this would be the moment when the superior spandex-laden superhero appears to save the day.

You might have heard of the concept ‘human in the loop’.

It’s commonly used to describe the essential human involvement required with any technology. Did you think all of those cool tools worked on their own?

If you haven’t, the term refers to human input into the development, training, and operation of AI systems. It’s about collaboration between man and machine, not one or the other. I believe this is the best way to work with these tools.

That’s why when I’m asked “Will x take my job?”, I reply “It depends”.

It depends on whether you’re building a human in the loop (HITL) into AI-assisted tasks, and the answer is – you should!

The HITL approach leverages that collaborative approach I mentioned to improve the accuracy, reliability, and adaptability of tech tools. You (the human) are the key ingredient in working with any technology. If your human skills suck, AI and other tools won’t help you much.

As humans, we provide key context.

Tools like Generative AI can do many wonderful things but it can’t apply those contextually.

Not right now, anyway.

So, if you’re sitting there worried about AI taking your job – don’t.

Until SkyNet rises and starts building Terminators, you have a clear place in the flow of work.

Why do we need ‘humans in the loop’ with technology?

Maybe you’re not quite sold on this concept.

Here’s where humans enhance the tech partnership:

  1. Accuracy and reliability
  2. Context and understanding
  3. Ethics and accountability
  4. Continuous improvement
  5. Trust and adoption

Without you, technology can’t benefit from any of this.

That means it’s not much use in the long-term.

Humans and AI collaboration case studies

Talk is cheap without action.

To honour that, here are 3 examples where a human in the loop with AI tools creates performance improvements for both.

🩺 Healthcare

When it comes to medical imaging, like the scans doctors use to spot things like tumours, AI is a powerful tool.

But it’s not perfect on its own.

That’s where human expertise comes in. Radiologists work alongside AI to double-check and refine anything an AI tool discovers. This partnership ensures that diagnoses are spot-on because, let’s face it, in medicine, there’s no room for error.

This has been common practice for some time, and generative AI models have only enhanced this partnership.

🌾 Agriculture

In the farming world, Hummingbird Technologies is a great example of human and AI teamwork.

They use drones and satellites to collect images of crops, but it’s the human experts who make sense of this data. Initially, data scientists manually annotated images to train their AI models. Later, they outsourced this task to a dedicated HITL workforce, allowing their in-house team to focus on model development and optimisation.

This approach not only sped things up but also made the predictions more reliable, helping farmers make better decisions.

Win!

🚗 Self-driving cars

Probably one of the most talked-about innovations of this decade.

They’ve not quite landed yet.

With self-driving cars, accuracy is everything, and that’s where humans come in. Developers use a HITL approach to process the massive amounts of data these cars generate, like video and sensor inputs.

Human annotators review and correct the AI’s work, especially in tricky situations where the AI might miss something.

This collaboration is critical to making sure these cars are not just smart but safe on the roads. If you’ve seen any of the horror stories where these innovations have gone wrong, you know how important this is.

The irreplaceable human element

I get that it’s hard to see this when social media is ablaze with inflated stories.

Most people use Gen AI tools for creation. That’s less than 5% of their potential, in my eyes. In reality, their overall potential is greatly untapped.

I compare the current state of AI use for work to giving a Ferrari to a 5-year-old. People don’t have the skills, experience or know-how to use it effectively.

That will change.

We’re talking years here, not days. I keep going back to this image from Oliver Wyman with the scaling model for AI adoption and ROI. Time is on your side.

What it really takes to scale AI and tech adoption success

Humans are here to stay

I read the same fear-filled headlines you do.

I don’t believe them, and you don’t have to either. Want to get the real answer to all of this? Then take time to experiment and research. I think you’d be surprised by what you discover.

I’ve written extensively on where my fellow industry practitioners will always add value no matter the technological innovation.

I see the same case for most industries.

That’s not to say I’m blind or foolish to the fact that some industries, and thus jobs, will be reshaped. This is the nature of life.

5 ways to bring ‘humans in the loop’ in your technology projects

This mostly won’t happen overnight.

Here’s how you can bring an intelligent human approach to your collaboration with technology.

1/ Start Small

Involve humans in tasks like data labelling and quality control. In L&D, this could mean using human reviewers to validate AI-generated content or training materials.

Your goal is to ensure that the information you feed tools to enhance your work is accurate and relevant. Getting this right from the beginning is of the utmost importance.

Start as you mean to go on, as ‘they’ say.

2/ Leverage human expertise for continuous improvement

Hopefully a no-brainer, but I have to call it out just in case.

Use feedback loops to review and refine AI outputs. You’re sitting on a wealth of data with AI generated outputs. Get clear on what’s good, bad and downright ugly. Make the improvements needed.

Take a page from the book of our friends in product.

Introduce retros to each AI-assisted project to scale the performance of your collaborations. Good work takes time. It’s not about getting it perfect from the start.
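As a sketch of what such a feedback loop can look like in practice (the function and verdict labels are my own illustration, not from any specific tool), the core idea is simply to run every AI output past a human reviewer and tally what comes back:

```python
from collections import Counter

def review_outputs(outputs, human_check):
    """Pass every AI output to a human reviewer and tally the verdicts.

    human_check(output) returns a verdict like 'good' or 'bad';
    the human makes the final call, not the model.
    """
    return Counter(human_check(o) for o in outputs)

# A trivial reviewer that flags outputs containing a known marker of error
outputs = ["Correct summary", "Hallucinated citation [fake]", "Correct answer"]
verdicts = review_outputs(outputs, lambda o: "bad" if "[fake]" in o else "good")
print(verdicts)  # Counter({'good': 2, 'bad': 1})
```

The tally is your retro material: when the ‘bad’ count climbs for a particular kind of task, that’s where the collaboration needs work.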

3/ Focus on contextual decision-making and independent thinking

The keyword here is ‘contextual’.

AI is not so good at this, not unless you’re awesome at prompting context-rich tasks to tools. Sadly, most people don’t do this.

Instead, encourage fellow humans to apply their judgment in situations where context is key. AI can suggest solutions, but humans should make the final call, especially in nuanced scenarios like personalised learning paths.

You have the context. AI is like the Robin to your Batman with decisions.

4/ Ensure practical ethical oversight

You and I aren’t going to solve the full scope of the ethical dilemmas with AI.

Yet, we can establish meaningful guidelines and checks to prevent the list of issues we’d rather avoid in AI outputs.

Human oversight is crucial for ensuring that AI recommendations align with your ethical standards. It boils down to ‘what goes in is what comes out.’ In other words, crap data inputs = crap data outputs.

Focus on quality not quantity.

5/ Invest in digital intelligence

If you’ve read my work for some time, you’ll know digital intelligence is one of the most underrated skills I endorse.

We live in an increasingly digital world at work but many people can barely operate their email app. It’s concerning.

It’s a no-brainer once more, but you must provide your team with the necessary support to effectively collaborate with AI tools.

This should focus on understanding how AI works, its limitations, and how to leverage it to enhance their work rather than replace it. After all, with great power comes great responsibility.

For tech adoption success you need to understand the pros and cons

Final thoughts

  • Humans (you) are the secret ingredient to successful technology collaboration.
  • The human touch remains irreplaceable in making contextual decisions and providing ethical oversight with AI.
  • Start small, leveraging human expertise for continuous improvement, and promoting contextual decision-making are key ways to bring humans into the loop with technology.
  • Embrace digital intelligence and invest in teams so they can effectively collaborate with AI tools to enhance their work rather than replace it.

The future is always human-powered!



6 Unique Ways L&D Teams Can Use Perplexity AI

I see too many posts just talking theory about AI for work.

Today, we change that by unpacking how to use tools for specific L&D tasks. We begin with 6 ways you can use Perplexity AI to work smarter in L&D immediately.

What is Perplexity?

Let’s start with the obvious.

Perplexity positions itself as an AI search engine. In essence, it uses AI to create a personalised answer for you from cited sources, versus Google giving you a list of links.

It’s an attractive idea.

Instead of trying to figure out ‘which link is right for you’, you’re given a new way to discover and interact with information.

As with other popular AI tools like ChatGPT, it has a conversational interface to ask questions, upload documents and produce a variety of outputs.

6 Unique Ways L&D Teams Can Use Perplexity AI for design, research and analysis

Getting Started with Perplexity

Instead of me banging on for paragraphs about setting up the tool, here’s a 4-minute video.

Technically, it can be 2 minutes if you put it on 2x playback speed like me (I like to live dangerously). Before hitting play, get your free account on Perplexity.

This covers:

  • Setup
  • Key features
  • Free vs Paid
  • Search vs Pro
  • Your first output
A beginner’s guide to Perplexity

Now we’ve completed the essentials.

Let’s unpack our 6 use cases ↓

6 unique ways to use Perplexity AI for improved research, design and analysis

1. The Case Study

A lot of the promise of Gen AI tools has been personalised learning experiences.

That has always excited me.

So, you can imagine my surprise when all I find are L&D pros and workforces using tools for content creation 99% of the time.

The potential is untapped.

Let’s change that.

Our industry likes a good old case study, so here’s how to build case studies in under 15 minutes with Perplexity:

  1. Identify your topic of interest.
  2. Build your prompt (see example in video)
  3. Review and provide feedback
  4. Ask for guided steps on how you can implement the knowledge
  5. Ask Perplexity to transform all outputs into a readable document.
  6. [Bonus] Turn outputs into a podcast script and use ElevenLabs to create a podcast you can use as an audio lesson

The last one is a bonus, but it’s pretty cool and takes personalised learning to another level.

Why read it yourself when an AI-generated voice can read it for you?

What a time to be alive.

This enables you to craft case studies for use in community channels, newsletters and workshops. I used to spend hours trawling Google for good case studies to anchor workshops around back in the day.

If only I had this back then.

I ran this example myself as research for the course arm of my business.

I created a case study on Kat Norton, creator of Miss Excel (who you’ve probably seen dancing around Excel sheets online somewhere) which I turned into a podcast as a lesson for others to learn from too.

How to build business case studies with Perplexity AI

2. The Research Assistant

Compelling, credible, evidence-based research from scratch.

Similar to case studies, our industry lives on evidence-based research.

AI has a bit of a love/hate relationship with facts and data. I mean, we know it likes to make up a good old fact or two, or three!

Perplexity’s strength comes into play here.

It indexes the internet every 24 hours to keep its data on point. I’m not saying you can 100% trust it (it is still probabilistic tech, after all), but it’s better than most AI tools on offer.

Let’s say you’re building a feedback experience and need industry-specific research.

You can use a prompt like this:


###Context###
I'm building a workshop focused on enhancing the feedback capabilities of mid- level managers in our technology department. To date feedback on their performance on this subject is incredibly mixed. Ideally, I want to unpack examples of feedback scenarios from big tech companies. My plan is to use these as a central part of the experience to breakdown 'what good looks like' ###Task### You will find 5 examples of feedback done well from top tech companies as outlined in my context. Present these as follows: - Title of company where example came from - 150 max overview of scenario - What the audience can learn from this - Steps to replicate 'what good looks like' ###Constraints### Only focus on examples from the last 5 years. They must be from technology companies only. Do not make up examples if you can't find any. Respond with 'I can't find this'.

Here’s the response I got. Pretty good, right?

As with all things Perplexity, knowing your sources in a clear-cut format is a great feature. Below are the sources used to create my response to this prompt.

How to find sources on Perplexity AI

Also, note the right-hand side of your screen, where a gallery of images appears.

Click ‘View More’ to get a nice curated list of relevant images for your search. Mine provided good feedback frameworks related to the output.

An example of research and analysis with Perplexity AI

You can use this to:

  • Research new learning tech
  • Improve old content
  • Build business proposals

And much more. Let your imagination run wild!

(Not sure how to write AI prompts? Get the guide on the best AI prompt framework for business tasks on the blog)

3. The Data Analyst

As an industry, we love to talk about data – a lot.

The thing is, none of us were trained to be data ninjas. We wear enough hats already. I’m all for being data-led and informed in making smart decisions for L&D.

Thankfully, we’ve entered a time where AI tools make this a lot easier for us. Perplexity isn’t unique in this feature. You can achieve this with ChatGPT and Claude. I’ll show you how to do that with their features in the weeks and months to come.

As with other AI tools, you can upload different document types to Perplexity including Excel, Google Docs and PDFs.

I spend too much time reading industry reports. I use AI tools to support me in analysing and uncovering the most useful insights.

Here’s an example prompt you can use with your data analysis:

###Context###

In the attached [enter file type here] you'll find [data type: comments, report etc] on [enter subject]

I need to [output: measure, analyse, understand] the [thing you need to know] to help me with [future task]

###Task###

Your task is to analyse this report and provide the following:

- Sentiment analysis of user feelings
- Trending data insights
- Catalogue mentions of the term [insert term]
- Provide 3 areas I should explore in more detail

Present this in a structured table format for easy viewing.

Play with this to align with your task.

AI tools are not just useful for analysing data. They can also act as an incredibly valuable co-pilot inside data-handling tools like Excel and Google Sheets, and specialist tools like Tableau.

Gone are the days of scouring pages of Google for that one formula to rule them all.

Just ask your local conversational AI tool to give you a step-by-step guide.

Quick hack for sentiment analysis of comments

When compiling articles like this, I do a lot of research on places like Quora and Reddit.

These threads can have hundreds, even thousands, of comments. Unless I had months to write these (which I don’t), these pieces wouldn’t be in front of you now.

To help with this, I discovered a quick and simple hack to get data from these sites into my chosen AI tool.

I use Microsoft Edge, but you can find this under the settings menu of any browser.

Click the three dots or settings menu in your browser → select ‘print’ → ‘save as PDF’ → upload the file to Perplexity or your AI tool of choice → ask it to review the document and provide the trending responses – voila!
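If the saved PDF or extracted thread is too long for your tool’s upload limit, you can split the text into smaller pieces first. Here’s a minimal Python sketch of the idea — the `chunk_text` helper and the 8,000-character limit are my own illustrative assumptions, not part of Perplexity or any browser:

```python
def chunk_text(text: str, max_chars: int = 8000) -> list:
    """Split a long comment thread into chunks that fit an upload limit.

    Splits on blank-line boundaries where possible, so individual
    comments aren't cut in half.
    """
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        # Start a new chunk if adding this paragraph would exceed the limit
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

# Example: split a scraped thread into uploadable pieces
thread = "\n\n".join(f"Comment {i}: some feedback text" for i in range(1000))
pieces = chunk_text(thread)
print(len(pieces), "chunks, largest:", max(len(p) for p in pieces))
```

You can then upload each chunk in turn and ask for a rolling summary of trending responses.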

4. The Market Researcher

Comparing new tech, whether L&D or not, is hard.

I have a moment every year when Apple releases 3 new phones in one day, where I stare at the screen and think, what’s the difference? Our little AI assistants can help make this easier.

What separates a tool like Perplexity from a Google search is personalisation.

I can’t use a Google search to say, “I’m a late-30-something male who is confident with tech. I’m looking for an iPhone with a large screen that fits my hands. It must have a top-of-the-range camera, the ability to run demanding business apps, and at least 5 years’ worth of updates.”

Google would have a seizure if you typed that in.

Conversational AI tools don’t.

This is where Perplexity will give you an advantage. As mentioned earlier, it indexes the internet daily for the latest info. Where Google gives a bazillion links to review, Perplexity will create a structured answer on a page with sources.

Should I buy x tool?

You can use this same method to compare L&D tools.

We’re drowning in so many, it’s difficult to know who is worth your time.

Here’s a prompt template to try:

###Context###

I'm a [insert role] at [describe your company and industry].

We're looking to add a new [tool description] to help us with:

- Problem 1
- Problem 2
- Problem 3

It must work with [insert current tech] and be able to support [specify audience and size]

###Task###

Your task is to identify and research 5 learning platforms that can solve the problems outlined in the context above.

They should be based in [enter territory or country] and specifically support [your industry]

Present the output in a table with:

- Supplier (include website URL)
- Location
- Price
- Key Features
- Previous customers
- Reviews

###Constraints###

Do not use your training data to fill in any missing pieces of information. If you can't find the information requested, leave the field blank.

5. Focused searching

Pinpoint answers with ‘focus’ mode.

This is a feature I’ve found with no other conversational AI tool.

It’s small, but I think rather useful. I’m talking about the ability to focus your query for the best results. Perplexity can refine its answer to only source information from:

  • YouTube
  • Social media
  • Academic libraries
  • Just its training data, with no web access

I’ve found use for this in searching social media comments to get a sentiment analysis of attitudes towards AI tools like Perplexity.

Sometimes it’s better to refine your search versus looking everywhere.

To access this, just click the ‘focus’ option within the input bar and select your source.

The ultimate decision between using Google and Perplexity AI for L&D teams

6. Replacing Google(?)

It’s probably never going to happen, but Perplexity is an AI-based search engine first.

It stops you from scrolling through links to figure out ‘what’s worth my time’. Instead, you get a round-up of up to 10 sources that build a personalised answer.

Google gives you results, but Perplexity will give you answers.

I know this is the angle Perplexity is aiming for, and it is good at it in some areas, but I think it will take more than that to take down the king. Google is integrating more of its Gemini AI across its search technologies as we speak.

Still, if you prefer a structured answer versus playing a game of links, Perplexity is a good option.

✍️ Final Thoughts

There’s not much more to say, friend.

Just keep experimenting and open your mind to the enormity of the possible. It would be a real shame to use powerful tech at only 1% of its potential.




Make AI Your Partner, Not The Problem 🤝

Get lifetime access to the only AI Crash Course designed for L&D Professionals. Join 500+ students to future-proof your skills and work smarter.

👉 Get started with my AI For L&D Pros Crash Course.


The Best AI Prompt Framework For Work (2026)

Prompts for AI can come in many forms for different outcomes.

Looking for a quick answer to a simple question? Use a 1-2 sentence prompt. Want to unpack a complex work task with layers of actions? You need a different approach.

A zero-shot prompt like “What’s the weather in Ibiza on average in August?” is a simple string of words for any conversational AI tool to answer. Asking one to review, ideate and share how to create a learning strategy for your company is not so straightforward.

To get the best results from a tool like ChatGPT, you need 3 things:

  • Context
  • Task outline
  • Constraints

3 prompting techniques you need to know

3 prompting techniques for the best AI prompt frameworks

Let’s take some of the techy terms you might have heard and translate them for humans.

1️⃣ Zero-shot

Useful for: Simple Q&A inputs

Example: On average, how long do Alaskan Huskies live?

2️⃣ One-shot

Useful for: When your task requires a specific output, such as a table or executive summary, with one example as guidance.

Example: “I want an executive summary on the current population of Alaskan Huskies in North America. The summary needs to follow our company format; here’s an example:

‘Insert your example’

Now, let’s create a first draft”

3️⃣ Few-shot

Useful for: Working on highly complex tasks, like data analysis or writing a report for senior leaders, using 2–3 examples as guidance.

Example: I need to produce an analysis of the most popular buying locations of Alaskan Huskies in North America from European origins. This will need to be in our preferred company format with supporting data visualisations.

Here’s a few examples to show what I want to achieve:

*Either insert the examples directly into the input bar or upload them as images, Docs or PDFs
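In code terms, the only difference between these three techniques is how many worked examples you pack into the conversation before your real question. Here’s a small Python sketch using the chat-message format most conversational AI APIs share — the example data is made up for illustration:

```python
def build_messages(task, examples=None):
    """Build a chat-style message list for zero-shot (no examples),
    one-shot (one example) or few-shot (two or more) prompting."""
    messages = []
    for user_input, model_output in examples or []:
        # Each example is a worked input/output pair shown before the real task
        messages.append({"role": "user", "content": user_input})
        messages.append({"role": "assistant", "content": model_output})
    # The actual request always goes last
    messages.append({"role": "user", "content": task})
    return messages

# Zero-shot: just the question
zero = build_messages("On average, how long do Alaskan Huskies live?")

# One-shot: a single formatting example first
one = build_messages(
    "Summarise Alaskan Husky population trends in our company format.",
    examples=[("Summarise Labrador trends in our format.", "## Summary\n...")],
)

print(len(zero), len(one))  # 1 message vs 3 messages
```

Few-shot is the same pattern with 2–3 pairs in the `examples` list.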

Note: No prompt is bulletproof. LLMs can behave in extraordinary and odd ways. If you find yourself hitting that brick wall, deploy these techniques.

How to write the best AI prompts for business tasks

This is a universal approach you can use with any tool.

Let’s unpack each of these:

Context

What does the LLM need to know to successfully support you?

Here are some ideas:

  • Your organisation
  • Team
  • Roles and the work they do
  • Specifics on the task
  • What have you done before?
  • What is the role it’s playing? [If role-playing or coaching]

Task

  • Outline the task
  • What does success look like?
  • What are the essential components of the task?
  • Keep it clear and simple
  • How should the output be structured? Bullets, sentences or paragraphs?

Constraints

  • What should the LLM not focus on?
  • What must it not consider?
  • Should it only use its training data or connect to the internet or both?
  • Should it only use the data you’ve provided?

Example of a powerful AI prompt for business tasks

If we put this all together, it can look something like this:

###Context###

I'm crafting my organisation's L&D strategy for the year ahead. We’re a scale-up business with 800 employees in 5 global offices. We’re limited in the resources and budget we have to deliver. Our goal is not to do everything, but to do the top 2-3 things that matter most.

Our strategy for the last few years has become stale and isn't working towards what we want to achieve.

Top things on our employees' minds include:

- Having the right skills for the role

- Learning from their peers

- Manager support and coaching


###Task###

You will help me get clarity on how I can work with my team to build a relevant and meaningful strategy for our organisation.

To do this, you will ask me questions to help get clarity to build a better picture of where we can go.


###Constraints###

Keep questions short and relevant. They should be quick-fire rather than in-depth. Let's keep questions to a maximum of 3 at a time.

You might notice I’m using ‘markdown’ structure and delimiters in these prompts. This helps AI tools better understand the instructions you provide by using headings, bullets and general formatted structure.

This works with all popular AI tools including ChatGPT, Claude, Perplexity and all the ones you know.

Why you should use delimiters in AI prompts

Delimiters help AI models recognise that what follows or is enclosed within them is a directive or special instruction that should guide its response.

Here are 6 ways they improve AI prompts:

  1. Clarifies Instructions: Separates commands or instructions from regular text, reducing ambiguity.

  2. Enhances Accuracy: Helps LLMs focus on the specific parts of the prompt, improving response relevance.

  3. Organises Complex Prompts: Structures detailed prompts into distinct sections for better comprehension.

  4. Ensures Consistent Formatting: Maintains desired format in outputs, like code or structured data.

  5. Highlights Key Elements: Marks important parts of the text, such as commands or keywords.

  6. Reduces Misinterpretation: Provides clear boundaries to prevent confusion in interpreting different content types.

Common Delimiters to use with AI tools

Here is a list of common delimiters that are often used effectively with large language models (LLMs):

  1. Triple Hash (###): Often used to separate sections or indicate instructions.
  2. Angle Brackets (<...>): Used to highlight specific tags or commands.
  3. Double Curly Braces ({{...}}): Can indicate placeholders or variables.
  4. Backticks (`...`): Commonly used to denote code or special terms.
  5. Double Hyphens (-- ... --): Sometimes used for comments or notes.
  6. Pipe Symbols (|...|): Useful for delineating choices or options.
  7. Square Brackets ([...]): Can denote optional elements or clarifications.
  8. Double Quotes ("..."): Used for direct quotes or exact text.
  9. Colons and Semicolons (: ... ;): To separate elements in lists or statements.
  10. Parentheses ((...)): Often used for additional information or clarification.
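To make the delimiter habit stick, it can help to assemble prompts programmatically rather than by hand. Here’s a small sketch using the triple-hash style from the examples above — the section names are just this article’s convention, not a requirement of any model:

```python
def build_prompt(context: str, task: str, constraints: str) -> str:
    """Assemble a prompt with ###-delimited sections so the model can
    clearly separate background, instructions and limits."""
    sections = {
        "Context": context,
        "Task": task,
        "Constraints": constraints,
    }
    # Each section gets a ###Name### header followed by its body
    return "\n\n".join(f"###{name}###\n{body}" for name, body in sections.items())

prompt = build_prompt(
    context="I'm building a feedback workshop for mid-level managers.",
    task="Find 5 examples of feedback done well at top tech companies.",
    constraints="Only use examples from the last 5 years.",
)
print(prompt)
```

The same function works for any of the prompt templates in this article; just swap the section bodies.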

You can learn more about that in this video.

Reasoning Models vs GPT Models: What’s the difference?

Not all AI models work the same way.

Some are designed to think deeply before responding, while others prioritise speed and efficiency.

The two main types you’ll come across are Reasoning Models (like o1 and o3-mini) and GPT Models (like GPT-4o). They have different strengths and are suited to different tasks. 

Here’s how they compare:

Reasoning Models: The Deep Thinkers

These models are built to process complex problems step by step before responding. Think of them like a senior colleague who takes their time to analyse a situation before giving you a well-thought-out answer.

What they do well:

→ Handle ambiguity and figure out what you mean, even if your instructions are unclear

→ Solve complex problems by planning multiple steps ahead

→ Make reliable decisions based on lots of scattered information

→ Ask clarifying questions rather than guessing when details are missing

Best used for:

  • Strategic problem-solving
  • Sifting through huge datasets to find key insights
  • Multi-step planning and execution
  • Debugging code and reviewing AI-generated responses
  • Visual reasoning tasks

GPT Models: Fast and Furious

GPT models, on the other hand, are built for speed and efficiency.

They work best when tasks are well-defined and don’t require deep reasoning.

What they do well:

→ Quickly generate content, summaries, and responses

→ Follow explicit instructions without overthinking

→ Handle repetitive tasks efficiently

Best used for:

  • Writing, editing, and summarising information
  • Answering straightforward questions
  • Generating content when speed matters more than accuracy

Think of them like a junior co-worker: great when given clear instructions, but not someone you’d rely on to figure out a complex problem without guidance.

When should you use a reasoning model?

It depends on your needs:

  • If accuracy and careful thinking are the priority → Use a Reasoning Model
  • If speed and cost matter more than deep problem-solving → Use a GPT Model

In many cases, the best approach is a mix of both. A reasoning model can do the heavy lifting—like analysing and strategising—while a GPT model handles execution tasks quickly and efficiently.
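That “mix of both” approach can be as simple as a routing rule in your workflow. A toy sketch — the model names and the `needs_deep_reasoning` flag here are illustrative stand-ins, not any provider’s official API:

```python
def pick_model(task: str, needs_deep_reasoning: bool) -> str:
    """Route a task to a reasoning model or a fast GPT-style model.

    Illustrative only: the returned names stand in for whatever
    reasoning/general models your provider offers.
    """
    if needs_deep_reasoning:
        return "reasoning-model"  # accuracy and multi-step planning
    return "gpt-model"            # speed and cost

# Strategy work goes to the deep thinker, execution to the fast model
plan_model = pick_model("Analyse survey data and propose a strategy", True)
draft_model = pick_model("Summarise this report in 5 bullets", False)
print(plan_model, draft_model)
```

In practice, the reasoning model’s output (the plan) becomes the input to the GPT model’s task (the draft).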

Now you know the difference, you can use the right tool for the right job.


The best AI prompt frameworks for reasoning models

How to Prompt Reasoning Models (7 tips)

1. Keep prompts simple and direct

Reasoning models work best with brief, clear instructions. Avoid unnecessary complexity.

2. Avoid chain-of-thought prompts

Prompts like “think step by step” or “explain your reasoning” are unnecessary. These models already reason internally, and such instructions may even reduce performance.

3. Use delimiters for clarity

Use markdown, XML tags, or section titles to separate different parts of the input. This helps the model interpret sections correctly.

4. Start with zero-shot, then try few-shot if needed

Begin with prompts that don’t include examples—reasoning models often don’t need them. If your task is complex, add a few clear input-output examples that align closely with your instructions.

5. Provide specific guidelines

Clearly define any constraints, such as “propose a solution with a budget under $500”, to ensure precise responses.

6. Be explicit about your end goal

Set clear success criteria and encourage the model to iterate until the response meets your expectations.

7. High-level guidance works best

Reasoning models perform well when given a high-level goal rather than micromanaged steps—trust them to figure out the details.

Get more best practice in OpenAI’s official documentation.


What ‘good’ looks like

Here’s an example of a prompt with OpenAI’s o1 reasoning model by Greg Brockman, President of OpenAI.

