
The YouTube Growth Playbook: 4 Proven Strategies For Creators

I run lots of experiments in a “how does X work?” style.

Sometimes I share these, and often I don’t. Mostly because I’m not sure people would care if it’s a non-L&D topic.

Anyway, I’ve inadvertently found myself growing my YouTube channel by 60% in subscribers and 240% in views over the last 8 months, with 2-3 hours of work a month.

Along the way, I’ve learnt a lot about how the platform works. So, I’ve crafted a sort of YouTube Growth Playbook. I have no plans to be a YouTuber, but I’m fascinated by how it all works.

After many conversations and lots of reflection, I think there’s a lot L&D design can learn from the YouTube design process.

From choosing the right idea to designing the user experience to packaging it for the viewer, I see so many similar traits with L&D.

That seems like something worthwhile for us to unpack.

The Experiment: Growing a YouTube Channel in 2-3 hours a month

As mentioned, I have no plans to be a YouTuber.

I just need somewhere to store the videos I share on LinkedIn and this newsletter. My solution is YouTube. I use it more as a storage locker than as a way to attract people to it.

At the start of the year, I became curious to learn ‘what makes people click’ on a video.

This inevitably sent me down a black hole of learning. I like to think we can agree that is a good type of black hole to fall into. I discovered that crafting high-quality YouTube videos is pretty similar to good end-to-end learning design.

You can transfer a lot of frameworks and skills from writing and marketing into this space.

What happened?

I’ve been running a growth experiment with YouTube for 8 months.

→ It’s increased subscribers by 60%.

→ It’s increased watch hours by 240%.

I’m not out to become a YouTuber, but I do love to learn how things work.

Since our world works on a lot of digital platforms, and we each use them to promote work and business, I like to know how this game is played.

Here are my 4 biggest lessons from taking my channel from 300 subs to over 1,000 (as of this post) in 8 months, with 2-3 hours input a month:

  • Writing is more important than you think: For scripts, descriptions with SEO, titles and hooks.
  • Good design matters: From your thumbnails to how the video looks and flows.
  • Experiment often: This is especially true with thumbnails, descriptions and titles. Some of the biggest players on this platform do this, and it’s something which has worked for me.
  • Build evergreen content, not just for today: Play the long game. I have a set of 4 to 5 high-performing videos that bring in 80% of subs and views. Why? Because they’re evergreen content that transcends the years with relevance.

Note: I could have achieved way more if I was dedicating myself to this, but that’s not my plan.

4 proven YouTube growth strategies for creators, designers and educators

After 8 months of experimentation, I landed on 4 principles that both drive growth on YouTube and make for good learning design.

Let’s unpack these in more detail.

Why writing is an essential skill for YouTube growth

Writing is more important than you think

You’re forgiven for thinking that a successful video is all about, well…good video.

It actually starts with good writing.

Writing is one of the most important skills you need. Without good writing (and editing) you cannot:

  • Get buy-in for your work
  • Build a compelling message
  • Explain complex problems
  • Design quality user experiences

And much more.

Specifically for YouTube, writing comes in:

  • Generating video ideas
  • Scriptwriting
  • Video titles
  • SEO-rich video descriptions
  • Transcripts and subtitles

We use almost all of this for good learning design.

Until you can clearly write down what you’re solving and how you’re solving it for the user, your product doesn’t have much hope.

One thing I like about the way YouTube Strategists (yes, they’re a real thing) think about design is the idea of ‘packaging’. The term describes how you put all the moving pieces together to craft an experience people will engage with.

What I can tell you is that this doesn’t stop when the video is filmed.

It’s about how you script your intro, the video title you choose, your video description and how you distribute it to your audience. I’ve found the best people obsess over these things with military precision, and that makes sense.

You cannot do well long-term without this approach.

This is no different in L&D. The work is not over when you finish building an experience (no matter if physical or digital). Think about how you will ‘package’ your product to drive value for users.

Good design matters

This transcends industries and products.

I’d go as far as to say it’s a ‘fact of life’.

In YouTube land, thumbnails, video titles and descriptions are gold. They’re what bring in the viewers, but without a well-designed video behind them, they’ll flop.

This is something I learned early on.

The dopamine video hit

I wrote a devilishly enticing title and designed a clickbaity thumbnail for a bare-bones video.

The goal was to see how much of an effect these variables have on a sub-par product. Long story short, the video did very well in the first 12-24hrs. It racked up a few thousand views, but the people spoke with downvotes.

Despite the high engagement in the first 24 hours, the video died.

Why? It was the most downvoted video I released, and YouTube takes note of that. You can have thousands of views and all the dopamine-spiking materials you want, but if many people start downvoting, YouTube stops recommending.

Lesson: You can’t hide rubbish with rainbow paint.

The big hits

I don’t have and don’t want any viral videos.

I detest the word ‘viral’ in this digital age. Instead, I focus on what I do best: ‘how-to’ educational videos. Probably most of the ones you’ve come to know me for.

The videos that have performed well for me checked these boxes:

  • They were well-planned with a clear structure for the viewer’s experience
  • Each had a short intro which explained ‘what you’ll get’ from this
  • They solved a problem that people in my niche face daily
  • They provided a painkiller for the pain the viewer was experiencing
  • Each had a clear and compelling title and description

Examples of two of my best-performing videos showing these strategies are below:

As you can see, nothing particularly special.

The main thing they do is solve a problem. Just like in L&D. So, solve the problem and you’ll do well.

Research-backed YouTube growth strategies

Instead of re-sharing everything I’ve ever learnt about this topic, here are a few resources from the people I’ve learnt from, along with their best advice:

Paddy Galloway: The (Original) YouTube Strategist

Until I came across Paddy, I’d never heard of the phrase ‘YouTube Strategist’.

That was 5 years ago.

Today, it’s pretty common for strategy teams to exist. What I enjoy about Paddy is his critical design mind. You can take how he thinks about YouTube and apply it to many industries. He’s one of those people who can see how it all works together.

The best video I’ve seen of him sharing his work is below.

Here are 2 of my favourite strategies he shares:

  1. The CCN (Core, Casual, New) Framework: It’s hard to capture everyone, and you probably shouldn’t try to. I love the old saying “If you design for everyone, you design for no-one”.

    What you can work on is how your target audience will experience content. Core fans will watch anything, casuals drop in from time to time, and new viewers are those discovering the experience for the first time.

    If we’re being honest, L&D for most companies is a numbers game, and this is something to keep in mind. Listen to Paddy’s explanation.


  2. What makes good video ‘packaging’: In the YT world, this refers to the thumbnail, description and title of the video, hence the word ‘package’.

    This is about discoverability.

    I wonder how many L&D teams think about this when their content or workshop sign-up is plastered on an LMS and company intranet.

    Will you risk your product being buried behind poor packaging? More on this from Paddy.

What makes good packaging w/ Aprilynne Alter

I stumbled on Aprilynne’s videos when researching how to create worthy thumbnails and titles that support user discoverability.

I didn’t want my hard video work to go unnoticed because the packaging sucked. Lucky for me, Aprilynne had done a ton of research on what works in this space.

As I watched the video, I couldn’t help but think about the similarities with filtering content on an LMS or LXP.

With so much on these systems, you need to help your best (and most important) content stand out.

The same design principles transcend YouTube.

There are two videos I recommend you check out. The first focuses on what makes a good YouTube thumbnail (which you can apply to your content system of choice), and the second unpacks the thought process behind this with the creator.

These strategies have worked well in my experiments so far.

What L&D audiences experience

Take a look at the below.

Looks familiar, right? You know what it is in 5 seconds – a typical course catalog.

This is what your audience experiences when you push them to a content system. There is nothing inherently bad about this image. It’s just not very clear, compelling or ultimately helpful for them.

An image of a learning management system course catalogue and how it could be improved by applying techniques used for YouTube content

I’m sure these could all be worthwhile courses for your audience, but they kinda all blend into the background.

As an example, we can see three science-related options for AI. Each contains a similar title and quite generic imagery. There’s not much to differentiate on that front.

However, the one clear difference is the higher-education brand names, which will probably sway a prospective viewer.

Out of the 3, I’d expect 90% to choose Harvard because it’s well-known and positioned as prestigious.

That doesn’t mean the others are worse, but the viewer may never know. Perhaps they’re better, but they’ve not been able to communicate that specificity on the screen, where it matters.

This is where ‘packaging’ is key.

The importance of good packaging

The one tile that stands out to me is the first one on the bottom left.

Yes, the Google one, but why?

  • The background image is minimal, but it contrasts with the same stock images used by the rest
  • It has a very clean visual look so I’m not struggling to make sense of it
  • The title is the most specific one on the screen. It tells me everything I need to make an informed decision in a few seconds. The others are a bit vague and require further investigation. This turns into Russian roulette, because I won’t spend time on them all. This is where brand familiarity helps.

A good test is to put this image into PowerPoint and create a second slide that’s blank (complete white space).

Swipe between the two slides, one with the image and the other with white space. Note where your eyes are naturally drawn on the image.

This is the course you’re most likely to click.

Use this to reverse engineer the reasons behind the decision. What attracted you and why?

Assume you’re not Harvard

With many off-the-shelf content libraries, you’ll be competing against more well-known brands.

Your job is to stand out in that sea to ensure company content is front and centre. So you need to package things differently. Build clear, compelling and specific imagery, titles and descriptions.

In sum: The little things matter more than you think.

Experiment

Probably the thing we should do more.

Not just in design, but with ongoing packaging of content and courses.

One thing I noticed about bigger YouTube creators is how they tweak the elements of their packaging if performance is not as expected.

Sometimes the thumbnail didn’t work or the title didn’t land; maybe the description wasn’t specific enough for the engine to rank it higher in searches.

So, they change it.

Perhaps days later, sometimes hours. They look at audience data to uncover what’s not clicking. You can do the same with courses and content.

If your work is not hitting the mark after a week, try:

  • New visual design – mix up your imagery
  • Change the title – we all know most of us just read headlines, so make yours compelling
  • Description – most learning platforms’ search engines work the same way YouTube’s does. They look for relevant keywords in both titles and descriptions against what a user has searched for. Make sure you have these words in yours (see the prompt sketch below).
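
If you want a head start, here’s a rough prompt sketch you could adapt for re-packaging an existing course (all the bracketed details are placeholders for your own):

###Context###

I have a course on [topic] aimed at [audience]. Its current title is '[title]' and its description reads: '[description]'. Views and sign-ups are lower than expected.

###Task###

Suggest 5 alternative titles and 3 rewritten descriptions that are clear, compelling and specific. Work in the keywords my audience searches for: [keywords].

###Constraints###

Keep titles under 60 characters and descriptions under 100 words. Do not invent features the course doesn't have.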

Often, it’s not about re-creating the product, it’s a matter of re-packaging it.

In sum: Don’t kill the product if it doesn’t work first time. Improve the packaging and distribution.

Create evergreen content

This phrase might be new to you, so allow me to explain.

Evergreen means something that is continually fresh and relevant. In a world littered with short-form content and experiences driven by instant gratification, you want to play the slow growth game.

Simply put, this means building products (I use this word here to describe content and courses) that last for the long term.

You do this by updating them for relevance.

Sadly, I don’t see this happen often in many L&D teams. The default is to create another new thing rather than update what you have. I tried not to mention AI today, yet I’ve found a good use case for it here: helping you update old content.

Try sharing your current content with the AI tool of choice and ask it to:

  • Check for relevance
  • Search for new research on the topic
  • Suggest improvements in style and structure
  • Pose thoughtful questions for the audience

I’ve found this helpful with my website.

You’re not asking the tool to ‘create this for me’; you’re working with it as a design assistant.
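
To make this concrete, here’s a rough prompt sketch you could adapt (the bracketed parts are placeholders, not a fixed recipe):

###Context###

Attached is a [course/article/workshop deck] on [topic], last updated in [year]. It's aimed at [audience].

###Task###

Act as my design assistant. Review the content and:

- Check it for relevance and flag anything outdated
- Search for new research on the topic from the last 2 years, citing sources
- Suggest improvements to the style and structure
- Pose 3 thoughtful questions I could add for the audience

###Constraints###

Don't rewrite the content wholesale. Present your suggestions as a prioritised list I can work through.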

In sum: Before you create something new, consider how you can refresh something old.


📝 Final thoughts

OK, friend, we got through a lot today.

  • Don’t let good products get lost with poor packaging
  • It’s the little things that matter: titles, descriptions and imagery
  • Experiment with distribution and packaging: Don’t kill the product, improve the packaging
  • Before building something new, refresh what’s old – this is (probably) the best option
  • Your work needs good design, packaging and distribution to succeed

Bonus: 🤖 Tool for you

To help you apply the topics we’ve covered today, I’ve built a custom AI assistant with all the videos, tips and knowledge sources in this edition.

It’s not going to make you a superstar YouTuber earning mega bucks, but it will help you put today’s thoughts into practice.

Try out the YouTube Strategist.




How To Turn Your Ideas Into Working Prototypes With AI

As generative AI advances show no sign of slowing down, neither do the ways we can use it as experience designers.

One that I’m growing fond of is the ability to build working prototypes for L&D ideas, apps and other creations.

No doubt you, like me, spend a lot of time trying to help stakeholders visualise what ‘x’ could look like.

This normally involves scrappy drawings in a notebook or standing in front of people to do your best Bob Ross impression on a whiteboard.

People nod, yet it’s pretty hard to see how your idea would work and how users would engage with it. However, what if I told you a tool exists that can turn your ideas and scribbles into a semi-working prototype in 15 minutes? One you can share with users to get instant feedback.

Sounds like a scam, I know.

Thankfully, it’s quite real. I want to show you a handy feature from Claude (another LLM like ChatGPT) to achieve this.

Here’s how it works.

Getting started with Claude

Claude is another conversational AI tool like ChatGPT.

It works in much the same way, apart from a few features. Its team is full of ex-OpenAI employees (OpenAI being ChatGPT’s creator), and it’s backed by a huge investment from Amazon.

Before we unleash you to build apps, you need to do the following:

  1. Visit Claude’s website and sign up for a free account
  2. Access your profile on the left-hand side of the screen at the bottom with your email/username
  3. Click ‘settings’ to access your ‘profile’ screen. Here, scroll to the bottom of this page and select ‘enable artifacts’

Artifacts is what we’re going to use to build interactive prototypes of your ideas. This allows Claude to generate code snippets, text documents, and loads of different designs.

The advantage is that it enables you to view semi-working apps in real time and tweak these.
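
As a taste of what a first prompt can look like, here’s a rough sketch (the workshop and its details are made up for illustration):

Build me a simple interactive prototype of a course landing page for a workshop called 'Feedback Fundamentals'. Include a hero section with the title and a one-line value proposition, a 3-bullet 'what you'll learn' section, and a sign-up button that shows a confirmation message when clicked. Keep the design clean and minimal.

Claude will generate the page as an artifact alongside your chat, and you can tweak it with follow-up messages.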

How to transform your ideas into working prototypes

I’m not going to write out the step-by-step playbook here.

Instead, you can follow along to get everything you need with me in this video:

What can you use Claude Artifacts for?

A bunch of stuff.

Remember those whiteboard sessions and scribbles in your notebook from earlier? You can turn those into something a stakeholder can actually play with.

Instead of saying, “here’s how it can work in my notebook scribbles”, you can say, “I’ve built a simple prototype to give you a feel for how it could work – try it here”.

A couple of ideas you could build:

  • A content library
  • Low-level LMS or LXP interfaces – handy if you wish to see how people behave with a new experience
  • Landing pages
  • Course pages
  • Simple apps
  • Websites

There’s much more you can do.

Everyone can build, but not everyone is a builder

As always, the power of new tools is in the hands of a skilled practitioner.

You will be able to build something of more value than the everyday employee because you know the context of your field. Look at this type of tool as another design assistant in how you build L&D tech solutions.

The barrier for entry to build stuff is getting lower every week.

That doesn’t mean everyone is a builder though (more on that in the future).




The Essential Ingredient For New Tech To Succeed

Every new technological innovation seems to unearth the same question throughout history.

Is this the end for humans?

It’s a tale as old as time. From the creation of the printing press to the attack of AI chatbots, it seems like we’re constantly on the brink of destruction.

There’s a huge plot twist here though.

It’s easy to think machines can do it all. But the real magic happens when humans are in the loop. If you’re worried about robots taking over, don’t be.

Let’s unpack how humans (including you) are the essential ingredient for tech adoption success.

What is the ‘human in the loop’?

If this was one of those Marvel films, this would be the time when the superior, spandex-laden superhero appears to save the day.

You might have heard of the concept ‘human in the loop’.

It’s commonly used to describe the essential human involvement required with any technology. Did you think all of those cool tools worked on their own?

If you haven’t, the term refers to human input into the development, training, and operation of AI systems. It’s about collaboration between man and machine, not one or the other. I believe this is the best way to work with these tools.

That’s why when I’m asked “Will x take my job?”, I reply “It depends”.

It depends on whether you’re building a human in the loop (HITL) into AI-assisted tasks. And the answer is – you should!

The HITL approach leverages that collaboration I mentioned to improve the accuracy, reliability and adaptability of tech tools. You (the human) are the key ingredient in working with any technology. If your human skills suck, AI and other tools won’t help you much.

As humans, we provide key context.

Tools like generative AI can do many wonderful things, but they can’t apply them contextually.

Not right now, anyway.

So, if you’re sitting there worried about AI taking your job – don’t.

Until SkyNet rises and starts building Terminators, you have a clear place in the flow of work.

Why do we need ‘humans in the loop’ with technology?

Maybe you’re not quite sold on this concept.

Here’s where humans enhance the tech partnership:

  1. Accuracy and reliability
  2. Context and understanding
  3. Ethics and accountability
  4. Continuous improvement
  5. Trust and adoption

Without you, technology can’t benefit from any of this.

That means it’s not much use in the long-term.

Humans and AI collaboration case studies

Talk is cheap without action.

To honour that, here are 3 examples where a human in the loop with AI tools creates performance improvements for both.

🩺 Healthcare

When it comes to medical imaging, like the scans doctors use to spot things like tumours, AI is a powerful tool.

But it’s not perfect on its own.

That’s where human expertise comes in. Radiologists work alongside AI to double-check and refine anything an AI tool discovers. This partnership ensures that diagnoses are spot-on because, let’s face it, in medicine, there’s no room for error.

This has been common practice for some time, and generative AI models have only enhanced this partnership.

🌾 Agriculture

In the farming world, Hummingbird Technologies is a great example of human and AI teamwork.

They use drones and satellites to collect images of crops, but it’s the human experts who make sense of this data. Initially, data scientists manually annotated images to train their AI models. Later, they outsourced this task to a dedicated HITL workforce, allowing their in-house team to focus on model development and optimisation.

This approach not only sped things up but also made the predictions more reliable, helping farmers make better decisions.

Win!

🚗 Self-driving cars

Probably one of the most talked-about innovations of this decade.

They’ve not quite landed yet.

With self-driving cars, accuracy is everything, and that’s where humans come in. Developers use a HITL approach to process the massive amounts of data these cars generate, like video and sensor inputs.

Human annotators review and correct the AI’s work, especially in tricky situations where the AI might miss something.

This collaboration is critical to making sure these cars are not just smart but safe on the roads. If you’ve seen any of the horror stories where these innovations have gone wrong, you know how important this is.

The irreplaceable human element

I get that it’s hard to see this when social media is ablaze with inflated stories.

Most people use Gen AI tools for creation. That’s less than 5% of their potential in my eyes. In reality, their overall potential is greatly untapped.

I compare the current state of AI use for work to giving a Ferrari to a 5-year-old. People don’t have the skills, experience or know-how to use it effectively.

That will change.

We’re talking years here not days. I keep going back to this image from Oliver Wyman with the scaling model for AI adoption and ROI. Time is on your side.

What it really takes to scale AI and tech adoption success

Humans are here to stay

I read the same fear-filled headlines you do.

I don’t believe them, and you don’t have to either. Want the real answer to all of this? Then take time to experiment and research. I think you’d be surprised by what you discover.

I’ve written extensively on where my fellow industry practitioners will always add value no matter the technological innovation.

I see the same case for most industries.

That’s not to say I’m blind or foolish to the fact that some industries, and thus jobs, will be reshaped. This is the nature of life.


5 ways to bring ‘humans in the loop’ into your technology projects

This mostly won’t happen overnight.

Here’s how you can bring an intelligent human approach to your collaboration with technology.

1/ Start Small

Involve humans in tasks like data labelling and quality control. In L&D, this could mean using human reviewers to validate AI-generated content or training materials.

Your goal is to ensure that the information you feed tools to enhance your work is accurate and relevant. Getting this right from the beginning is of the utmost importance.

Start as you mean to go on, as ‘they’ say.

2/ Leverage human expertise for continuous improvement

Hopefully a no-brainer, but I have to call it out just in case.

Use feedback loops to review and refine AI outputs. You’re sitting on a wealth of data with AI-generated outputs. Get clear on what’s good, bad and downright ugly. Make the improvements needed.

Take a page from the book of our friends in product.

Introduce retros to each AI-assisted project to scale the performance of your collaborations. Good work takes time. It’s not about getting it perfect from the start.

3/ Focus on contextual decision-making and independent thinking

The keyword here is ‘contextual’.

AI is not so good with this, not unless you’re awesome at prompting tools with context-rich tasks. Sadly, most people don’t do this.

Instead, encourage fellow humans to apply their judgment in situations where context is key. AI can suggest solutions, but humans should make the final call, especially in nuanced scenarios like personalised learning paths.

You have the context. AI is like the Robin to your Batman with decisions.

4/ Ensure practical ethical oversight

You and I aren’t going to solve the full scope of the ethical dilemmas with AI.

Yet, we can establish meaningful guidelines and checks to prevent the list of issues we’d rather avoid in AI outputs.

Human oversight is crucial for ensuring that AI recommendations align with your ethical standards. It can be boiled down to: what goes in is what comes out. In other words, crap data inputs = crap data outputs.

Focus on quality not quantity.

5/ Invest in digital intelligence

If you’ve read my work for some time, you’ll know digital intelligence is one of the most underrated skills I endorse.

We live in an increasingly digital world at work but many people can barely operate their email app. It’s concerning.

It’s a no-brainer once more, but you must provide your team with the necessary support to effectively collaborate with AI tools.

This should focus on understanding how AI works, its limitations, and how to leverage it to enhance their work rather than replace it. After all, with great power comes great responsibility.

For tech adoption success you need to understand the pros and cons

Final thoughts

  • Humans (you) are the secret ingredient to successful technology collaboration.
  • The human touch remains irreplaceable in making contextual decisions and providing ethical oversight with AI.
  • Starting small, leveraging human expertise for continuous improvement, and promoting contextual decision-making are key ways to bring humans into the loop with technology.
  • Embrace digital intelligence and invest in teams so they can effectively collaborate with AI tools to enhance their work rather than replace it.

The future is always human-powered!




6 Unique Ways L&D Teams Can Use Perplexity AI

I see too many posts just talking theory about AI for work.

Today, we change that by unpacking how to use tools for specific L&D tasks. We begin with 6 ways you can use Perplexity AI to work smarter in L&D immediately.

What is Perplexity?

Let’s start with the obvious.

Perplexity positions itself as an AI search engine. This basically means it uses AI to create a personalised answer for you from cited sources, versus Google giving you lists of links.

It’s an attractive idea.

Instead of trying to figure out ‘which link is right for you’, you’re given a new way to discover and interact with information.

As with other popular AI tools like ChatGPT, it has a conversational interface to ask questions, upload documents and produce a variety of outputs.

6 Unique Ways L&D Teams Can Use Perplexity AI for design, research and analysis

Getting Started with Perplexity

Instead of me banging on for paragraphs about setting up the tool, here’s a 4-minute video.

Technically, it can be 2 minutes if you put it on 2x playback speed like me (I like to live dangerously). Before hitting play, get your free account on Perplexity.

This covers:

  • Setup
  • Key features
  • Free vs Paid
  • Search vs Pro
  • Your first output
A beginner’s guide to Perplexity

Now we’ve completed the essentials.

Let’s unpack our 6 use cases ↓

6 unique ways to use Perplexity AI for improved research, design and analysis

1. The Case Study

A lot of the promise of Gen AI tools has been personalised learning experiences.

That has always excited me.

So, you can imagine my surprise when all I find are L&D pros and workforces using tools for content creation 99% of the time.

The potential is untapped.

Let’s change that.

Our industry likes a good old case study, so here’s how to build one in under 15 minutes with Perplexity:

  1. Identify your topic of interest.
  2. Build your prompt (see example in video)
  3. Review and provide feedback
  4. Ask for guided steps on how you can implement the knowledge
  5. Ask Perplexity to transform all outputs into a readable document.
  6. [Bonus] Turn outputs into a podcast script and use ElevenLabs to create a podcast you can use as an audio lesson

The last one is a bonus, but it’s pretty cool and takes personalised learning to another level.

Why read it yourself when an AI-generated voice can read it for you?

What a time to be alive.

This enables you to craft case studies for use in community channels, newsletters and workshops. I used to spend hours trawling Google for good case studies to anchor workshops around back in the day.

If only I had this back then.
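
For step 2, a starting prompt could look something like this sketch (swap the bracketed parts for your own topic and audience):

###Context###

I design learning experiences for [audience]. I'm looking for a real-world case study on [topic] to anchor a workshop around.

###Task###

Research and write a case study covering:

- The background and the problem faced
- What was done, step by step
- The results, with sources cited
- 3 lessons my audience can apply

###Constraints###

Only use verifiable sources from the last 5 years and link to them. If you can't verify something, say so rather than guessing.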

I ran this example myself as research for the course arm of my business.

I created a case study on Kat Norton, creator of Miss Excel (who you’ve probably seen dancing around Excel sheets online somewhere), which I turned into a podcast as a lesson for others to learn from too.

How to build business case studies with Perplexity AI

2. The Research Assistant

Compelling, credible and evidence-based research from scratch.

Similar to case studies, our industry lives on evidence-based research.

AI has a bit of a love/hate relationship with facts and data. I mean, we know it likes to make up a good old fact or two, or three!

Perplexity’s strength comes into play here.

It indexes the internet every 24 hours to keep its data on point. I’m not saying you can trust it 100% (it is still probabilistic tech, after all), but it’s better than most AI tools on offer.

Let’s say you’re building a feedback experience and need industry-specific research.

You can use a prompt like this:


###Context###

I'm building a workshop focused on enhancing the feedback capabilities of mid-level managers in our technology department. To date, feedback on their performance on this subject is incredibly mixed. Ideally, I want to unpack examples of feedback scenarios from big tech companies. My plan is to use these as a central part of the experience to break down 'what good looks like'.

###Task###

You will find 5 examples of feedback done well from top tech companies as outlined in my context. Present these as follows:

- Title of company where example came from
- 150-word max overview of scenario
- What the audience can learn from this
- Steps to replicate 'what good looks like'

###Constraints###

Only focus on examples from the last 5 years. They must be from technology companies only. Do not make up examples if you can't find any. Respond with 'I can't find this'.

Here’s the response I got. Pretty good, right?

As with all things Perplexity, knowing your sources in a clear-cut format is a great feature. Below are the sources used to create my response to this prompt.

How to find sources on Perplexity AI

Also, note on the right-hand side of your screen where a gallery of images appears.

Click ‘View More’ to get a nice curated list of relevant images for your search. Mine provided good feedback frameworks related to the output.

An example of research and analysis with Perplexity AI

You can use this to:

  • Research new learning tech
  • Improve old content
  • Build business proposals

And much more. Let your imagination run wild!

(Not sure how to write AI prompts? Get the guide on the best AI prompt framework for business tasks on the blog)

3. The Data Analyst

As an industry, we love to talk about data – a lot.

The thing is, none of us were trained to be data ninjas. We spin enough plates already. I’m all for being data-led and informed in making smart decisions for L&D.

Thankfully, we’ve entered a time where AI tools make this a lot easier for us. Perplexity isn’t unique in this feature. You can achieve this with ChatGPT and Claude. I’ll show you how to do that with their features in the weeks and months to come.

As with other AI tools, you can upload different document types to Perplexity including Excel, Google Docs and PDFs.

I spend too much time reading industry reports. I use AI tools to support me in analysing and uncovering the most useful insights.

Here’s an example prompt you can use with your data analysis:

###Context###

In the attached [enter file type here] you'll find [data type: comments, report etc] on [enter subject]

I need to [output: measure, analyse, understand] the [thing you need to know] to help me with [future task]

###Task###

Your task is to analyse this report and provide the following:

- Sentiment analysis of user feelings
- Trending data insights
- Catalogue mentions of the term [insert term]
- Provide 3 areas I should explore in more detail

Present this in a structured table format for easy viewing.

Play with this to align with your task.

AI tools are not just useful for analysing data. They can be an incredibly valuable co-pilot inside data-handling tools like Excel and Google Sheets, and specialist tools like Tableau.

Gone are the days of scouring pages of Google for that one formula to rule them all.

Just ask your local conversational AI tool to give you a step-by-step guide.

Quick hack for sentiment analysis of comments

When compiling articles like this, I do a lot of research on places like Quora and Reddit.

These threads can have hundreds, even thousands, of comments. Unless I had months to write these pieces (which I don’t), they wouldn’t be in front of you now.

To help with this, I discovered a quick and simple hack to get data from these sites into my chosen AI tool.

I use Microsoft Edge, but you can find this under the settings menu of any browser.

Click the three dots or settings menu in your browser → select ‘print’ → ‘save as PDF’ → upload the file to Perplexity or your AI tool of choice → ask it to review the document and provide the trending responses – voila!
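
The follow-up prompt can be as simple as this sketch (adjust the bracketed parts to suit):

###Context###

The attached PDF is a saved [Reddit/Quora] thread about [topic] containing several hundred comments.

###Task###

Review the document and provide:

- The overall sentiment of the comments
- The 5 most common opinions or themes, with rough counts
- 3 notable dissenting views worth reading in full

###Constraints###

Base everything on the attached document only. Quote short snippets as evidence for each theme.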

4. The Market Researcher

Comparing new tech, whether L&D or not, is hard.

I have a moment every year when Apple releases 3 new phones in one day, where I stare at the screen and think, what’s the difference? Our little AI assistants can help make this easier.

What separates a tool like Perplexity from a Google search is personalisation.

I can’t use a Google search to say, “I’m a late-30-something male who is confident with tech. I’m looking for an iPhone with a large screen that fits my hands. It must have a top-of-the-range camera, the ability to run high-demand business apps and longevity of at least 5 years’ worth of updates”.

Google would have a seizure if you typed that in.

Conversational AI tools don’t.

This is where Perplexity will give you an advantage. As mentioned earlier, it indexes the internet daily for the latest info. Where Google gives a bazillion links to review, Perplexity will create a structured answer on a page with sources.

Should I buy x tool?

You can use this same method to compare L&D tools.

We’re drowning in so many, it’s difficult to know which is worth your time.

Here’s a prompt template to try:

###Context###

I'm a [insert role] at [describe your company and industry].

We're looking to add a new [tool description] to help us with:

- Problem 1
- Problem 2
- Problem 3

It must work with [insert current tech] and be able to support [specify audience and size]

###Task###

Your task is to identify and research 5 learning platforms that can solve the problems outlined in the context above.

They should be based in [enter territory or country] and specifically support [your industry]

Present the output in a table with:

- Supplier (include website URL)
- Location
- Price
- Key Features
- Previous customers
- Reviews

###Constraints###

Do not use your training data to fill in any missing pieces of information. If you can't find the information requested, leave the field blank.

5. Focused searching

Pinpoint answers with ‘focus’ mode.

This is a feature I’ve found with no other conversational AI tool.

It’s small, but I think rather useful. I’m talking about the ability to focus your query for the best results. Perplexity can refine its answer to only source information from:

  • YouTube
  • Social media
  • Academic libraries
  • Just its training data, with no web access

I’ve found use for this in searching social media comments to get a sentiment analysis of attitudes towards AI tools like Perplexity.

Sometimes it’s better to refine your search than to look everywhere.

To access this, just click the ‘focus’ option within the input bar and select your source.

The ultimate decision between using Google and Perplexity AI for L&D teams

6. Replacing Google(?)

Probably never going to happen, but it is an AI-based search engine first.

It stops you from scrolling through links to figure out ‘what’s worth my time’. Instead, you get a round-up of up to 10 sources that build a personalised answer.

Google gives you results, but Perplexity will give you answers.

I know this is the angle Perplexity is aiming for, and it is good at it in some areas, but I think it will take more than that to take down the king. Google is integrating more of its Gemini AI across its search technologies as we speak.

Still, if you prefer a structured answer versus playing a game of links, Perplexity is a good option.

✍️ Final Thoughts

There’s not much more to say, friend.

Just keep experimenting and open your mind to the enormity of the possible. It would be a real shame to use powerful tech to only 1% of its potential.




Make AI Your Partner, Not The Problem 🤝

Get lifetime access to the only AI Crash Course designed for L&D Professionals. Join 500+ students to future-proof your skills and work smarter.

👉 Get started with my AI For L&D Pros Crash Course.


The Best AI Prompt Framework For Work (2025)

Prompts for AI can come in many forms for different outcomes.

Looking for a quick answer to a simple question? Use a 1-2 sentence prompt. Want to unpack a complex work task with layers of actions? You need a different approach.

Whereas a simple one-line prompt like “What’s the weather in Ibiza on average in August?” is a string of words any conversational AI tool can answer, asking it to review, ideate and share how to create a learning strategy for your company is not so straightforward.

To get the best results from something like ChatGPT, you need 3 things:

  • Context
  • Task outline
  • Constraints

3 prompting techniques you need to know


Let’s take some of the techy terms you might have heard and translate them for humans.

1️⃣ Zero-shot

Useful for: Simple Q&A inputs

Example: On average, how long do Alaskan Huskies live?

2️⃣ One-shot

Useful for: When your task requires a specific output, such as a table or executive summary, with one example as guidance.

Example: I want an executive summary on the current population of Alaskan Huskies in North America. The summary needs to follow our company format; here’s an example:

‘Insert your example’

Now, let’s create a first draft.

3️⃣ Few-shot

Useful for: Working on highly complex tasks, like data analysis or writing a report for senior leaders, using 2-3 examples.

Example: I need to produce an analysis of the most popular buying locations of Alaskan Huskies in North America from European origins. This will need to be in our preferred company format with supporting data visualisations.

Here’s a few examples to show what I want to achieve:

*Either insert the examples directly into the input bar, or upload them as images, Docs or PDFs.

Note: No prompt is bulletproof. LLMs can behave in extraordinary and odd ways. If you find yourself hitting that brick wall, deploy these techniques.

How to write the best AI prompts for business tasks

This is a universal approach you can use with any tool.

Let’s unpack each of these:

Context

What does the LLM need to know to successfully support you?

Here’s some ideas:

  • Your organisation
  • Team
  • Roles and the work they do
  • Specifics on the task
  • What have you done before?
  • What is the role it’s playing? [If role-playing or coaching]

Task

  • Outline the task
  • What does success look like?
  • What are the essential components of the task?
  • Keep it clear and simple
  • How should the output be structured? Bullets, sentences or paragraphs.

Constraints

  • What should the LLM not focus on?
  • What must it not consider?
  • Should it only use its training data or connect to the internet or both?
  • Should it only use the data you’ve provided?

Example of a powerful AI prompt for business tasks

If we put this all together, it can look something like this:

###Context###

I'm crafting my organisation's L&D strategy for the year ahead. We’re a scale-up business with 800 employees across 5 global offices. We’re limited in the resources and budget we have to deliver. Our goal is not to do everything, but to do the top 2-3 things that matter most.

Our strategy for the last few years has become stale and not working towards what we want to achieve.

Top things on our employees' minds include:

- Having the right skills for the role

- Learning from their peers

- Manager support and coaching


###Task###

You will help me get clarity on how I can work with my team to build a relevant and meaningful strategy for our organisation.

To do this, you will ask me questions to help get clarity to build a better picture of where we can go.


###Constraints###

Keep questions short and relevant. They should be quick-fire rather than in-depth. Let’s keep questions to a maximum of 3 at a time.

You might notice I’m using ‘markdown’ structure and delimiters in these prompts. This helps AI tools better understand the instructions you provide, through headings, bullets and general formatted structure.

This works with all popular AI tools including ChatGPT, Claude, Perplexity and all the ones you know.

Why you should use delimiters in AI prompts

Delimiters help AI models recognise that what follows or is enclosed within them is a directive or special instruction that should guide its response.

Here are 6 ways they improve AI prompts:

  1. Clarifies Instructions: Separates commands or instructions from regular text, reducing ambiguity.

  2. Enhances Accuracy: Helps LLMs focus on the specific parts of the prompt, improving response relevance.

  3. Organises Complex Prompts: Structures detailed prompts into distinct sections for better comprehension.

  4. Ensures Consistent Formatting: Maintains desired format in outputs, like code or structured data.

  5. Highlights Key Elements: Marks important parts of the text, such as commands or keywords.

  6. Reduces Misinterpretation: Provides clear boundaries to prevent confusion in interpreting different content types.

Common Delimiters to use with AI tools

Here is a list of common delimiters that are often used effectively with large language models (LLMs):

  1. Triple Hash (###): Often used to separate sections or indicate instructions.
  2. Angle Brackets (<...>): Used to highlight specific tags or commands.
  3. Double Curly Braces ({{...}}): Can indicate placeholders or variables.
  4. Backticks (`...`): Commonly used to denote code or special terms.
  5. Double Hyphens (-- ... --): Sometimes used for comments or notes.
  6. Pipe Symbols (|...|): Useful for delineating choices or options.
  7. Square Brackets ([...]): Can denote optional elements or clarifications.
  8. Double Quotes ("..."): Used for direct quotes or exact text.
  9. Colons and Semicolons (: ... ;): To separate elements in lists or statements.
  10. Parentheses ((...)): Often used for additional information or clarification.
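
To see a few of these working together, here’s a small illustrative sketch combining section headers, tags and placeholders in one prompt:

###Task###

Summarise the text inside the <article> tags in {{word_count}} words, written for [audience].

<article>
Paste your article text here.
</article>

###Constraints###

Use only the text between the tags. Output plain sentences, no bullets.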

You can learn more about that in this video.

Reasoning Models vs GPT Models: What’s the difference?

Not all AI models work the same way.

Some are designed to think deeply before responding, while others prioritise speed and efficiency.

The two main types you’ll come across are Reasoning Models (like o1 and o3-mini) and GPT Models (like GPT-4o). They have different strengths and are suited to different tasks. 

Here’s how they compare:

Reasoning Models: The Deep Thinkers

These models are built to process complex problems step by step before responding. Think of them like a senior colleague who takes their time to analyse a situation before giving you a well-thought-out answer.

What they do well:

→ Handle ambiguity and figure out what you mean, even if your instructions are unclear

→ Solve complex problems by planning multiple steps ahead

→ Make reliable decisions based on lots of scattered information

→ Ask clarifying questions rather than guessing when details are missing

Best used for:

  • Strategic problem-solving
  • Sifting through huge datasets to find key insights
  • Multi-step planning and execution
  • Debugging code and reviewing AI-generated responses
  • Visual reasoning tasks

GPT Models: Fast and Furious

GPT models, on the other hand, are built for speed and efficiency.

They work best when tasks are well-defined and don’t require deep reasoning.

What they do well:

→ Quickly generate content, summaries, and responses

→ Follow explicit instructions without overthinking

→ Handle repetitive tasks efficiently

Best used for:

  • Writing, editing, and summarising information
  • Answering straightforward questions
  • Generating content when speed matters more than accuracy

Think of them like a junior co-worker: great when given clear instructions, but not someone you’d rely on to figure out a complex problem without guidance.

When should you use a reasoning model?

It depends on your needs:

  • If accuracy and careful thinking are the priority → Use a Reasoning Model
  • If speed and cost matter more than deep problem-solving → Use a GPT Model

In many cases, the best approach is a mix of both. A reasoning model can do the heavy lifting—like analysing and strategising—while a GPT model handles execution tasks quickly and efficiently.

Now you know the difference, you can use the right tool for the right job.


The best AI prompt frameworks for reasoning models

How to Prompt Reasoning Models (7 tips)

1. Keep prompts simple and direct

Reasoning models work best with brief, clear instructions. Avoid unnecessary complexity.

2. Avoid chain-of-thought prompts

Prompts like “think step by step” or “explain your reasoning” are unnecessary. These models already reason internally, and such instructions may even reduce performance.

3. Use delimiters for clarity

Use markdown, XML tags, or section titles to separate different parts of the input. This helps the model interpret sections correctly.

4. Start with zero-shot, then try few-shot if needed

Begin with prompts that don’t include examples—reasoning models often don’t need them. If your task is complex, add a few clear input-output examples that align closely with your instructions.

5. Provide specific guidelines

Clearly define any constraints, such as “propose a solution with a budget under $500”, to ensure precise responses.

6. Be explicit about your end goal

Set clear success criteria and encourage the model to iterate until the response meets your expectations.

7. High-level guidance works best

Reasoning models perform well when given a high-level goal rather than micromanaged steps—trust them to figure out the details.

Get more best practice in OpenAI’s official documentation.
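
Putting those tips together, a reasoning-model prompt might look like this sketch (the scenario and numbers are made up for illustration):

###Goal###

Design a 6-month rollout plan for a peer-mentoring programme across our 5 global offices, within a budget of $20,000.

###What success looks like###

A plan a two-person L&D team could execute, with milestones, owners and the top 3 risks.

###Constraints###

Assume no new software purchases. If key information is missing, ask me up to 3 clarifying questions before answering.

Notice what’s not there: no ‘think step by step’ and no worked examples – just a clear goal, delimiters and explicit success criteria.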


What ‘good’ looks like

Here’s an example of a prompt with OpenAI’s o1 reasoning model by Greg Brockman, President of OpenAI.


Before you go… 👋

If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.

You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.