
How You Can Build a ‘100 Million Dollar’ Skillset

Let me throw a big question your way.

Are you working on building your 100-million-dollar skills?

I’m not saying these skills will magically deposit a cool $100 million in your bank account tomorrow (but wouldn’t that be something?).

The term “100-million-dollar skills” isn’t about actual dollar value (sorry).

It’s a metaphor borrowed from Alex Hormozi to describe the ultra-valuable skills that can elevate your career opportunities and overall wealth. It’s about focusing on the quality of skills, not just quantity.

Why High-Value Skills Matter

Historically, people chased better opportunities through fancy job titles.

Thankfully, the world has changed.

Niche skills make you stand out; consider them your career currency.

They’re the skills that separate you from the pack. I’m talking about the way you communicate, your ability to think critically, make sound judgments, and how you work intelligently with AI.

Why focus on high-value skills?

I’d hope the answer to this is obvious, but I’ll play ball anyway.

Like money, you can compound these skills to unlock better opportunities further down the road. We’re playing an infinite game in a finite space after all. Think of it as investing in a high-yield stock that keeps giving back yearly. Building skills in high demand ensures that you are a key player in your field with an advantage very few possess.

As Tim Ferriss said: “Don’t try to be one of, be the only”.

The 100-million-dollar skills principle in action

Speaking of Tim Ferriss, he’s a good example to explore on this.

For those who don’t know, Tim is the popular author and, I suppose, ‘productivity/self-help consultant’ behind the ‘4-Hour’ books and his podcast.

Tim’s career began in technology startups in Silicon Valley, a highly competitive environment where efficiency and rapid learning are crucial.

Although he found early success, Ferriss was overwhelmed by overwork and stress, pushing him to seek more efficient ways to manage his time and productivity.

Recognising the need for a change, Ferriss started focusing on what he terms “meta-learning”, a skill of learning how to learn efficiently and effectively.

He explored various techniques for time management, productivity, and personal optimisation, aiming to work smarter, not harder. This exploration led to the development of the “4-Hour” concept, which he first applied to his personal health and fitness routines.

I’ve always struggled to clearly define what Tim actually does.

His skill has always felt like ‘Tim Ferriss’ because he’s the only one doing the many things he does in the way he does them. Meta-learning sounds much better, though.

How Tim unlocked unique opportunities with niche skills

Ferriss’s breakthrough came with the publication of “The 4-Hour Workweek”.

A book that encapsulated his principles of lifestyle design and productivity. The book, which details how to outsource life tasks, automate business processes, and design an ideal lifestyle, struck a chord with a global audience tired of the traditional 9-to-5 grind.

Including me, back in 2015.

The success of “The 4-Hour Workweek” transformed Tim from a stressed entrepreneur into a leading voice in life hacking (do people still use this word?) and personal productivity.

His ability to distill complex subjects into actionable advice proved to be a high-value skill, setting him apart from other self-help authors.

Especially at the time, because many self-help authors acted like gurus, whereas Tim adopted a professor’s approach of showing, not just telling.

Building on his success, Tim continued to expand his niche skills into other areas, including cooking (“The 4-Hour Chef”) and fitness (“The 4-Hour Body”). Each project leveraged his meta-learning skills, showing others how to master complex skills quickly and efficiently.

He also launched a popular podcast, “The Tim Ferriss Show”, where he interviews world-class performers from diverse areas to share their experiences.

This podcast has run for over a decade with millions of listeners.

Today, Tim Ferriss is recognised not just for his books and podcast but for his unique approach to learning and productivity.

Like I said, he’s kinda known for doing Tim Ferriss stuff which no one else is even attempting.

His mastery of meta-learning, combined with his skill in communicating these concepts to a broad audience, has not only built his career but showcases the power of niche skills in creating a successful and influential career.

The 9-5 example

You might read the above example and think “That’s cool but I’m not going to be able to do that”.

I totally get that. Tim is in the 1% of that category.

So, what could this look like for us in the 9-5 game? 

Let me tell you the story of my pal, Dave. He’s a great guy and works a 9-5 (probably a few more hours here and there) like most of us.

On the surface, you might not think Dave is killing it in the Career Game.

But in reality, he’s crafted a set of skills which has turned him into an in-demand consultant able to command an annual salary of up to $150k. How is he doing this, you ask? AI? Quantum physics? World-class heart surgery?

No – his niche skills are Excel and data visualisation.

Were you expecting something sexier? Most people are. Dave is not doing anything revolutionary. He discovered early in his career that people are terrified of Excel.

They love the data output and beautiful visualisations, yet the sight of raw rows induces a sense of doom.

Dave didn’t see doom here; he saw opportunity.

He told me, “I saw an opportunity to scale something I could do well and tolerate what others couldn’t”. He quickly found his skills in demand in his first organisation because no one else wanted to tame the beast of Excel.

Dave became the Excel and data king 👑.

It turns out that people will pay kings very well. You could do this too, as could I. Everyone has access to and uses Excel. We all produce data in many apps, yet most of us suck at it. Dave understood that and built his 100-million-dollar skillset around it.

You can do this in any job and industry with the millions of apps we each use.

Let’s unpack the blueprint to do that together ↓

How to identify your High-Value Skills

Identifying which skills can catapult your career into that $100-million valuation starts with a good look at your current job and industry.

Ask yourself:

  • What skills are most admired and rewarded in my field?
  • Which abilities do top performers in my sector possess that I can develop?
  • How do my unique insights and capabilities stand out?

The idea is to zero in on skills that add significant value to your work and enhance your unique selling proposition.

Whether it’s exceptional project management, innovative problem-solving, or cutting-edge tech proficiency, these are the skills that can define a high-value career.

4 ways to compound these High-Value Skills

  1. Focused Learning: Pick one skill at a time to develop. Trying to master multiple skills simultaneously often leads to mediocrity. If critical thinking is your target, dedicate time to courses, books, and activities that enhance that skill specifically.
  2. Practical Application: Apply what you learn in real-world scenarios. If you’re improving your tech skills, work on projects that allow you to use new tools. Real-world application cements learning far more effectively than theory alone.
  3. Feedback and Iteration: Seek feedback from peers and mentors. Understand how your skills are perceived and where you can improve further.
  4. Network and Collaborate: Engage with others who excel in areas you aspire to master. Networking isn’t about swapping business cards anymore. It’s about exchanging ideas and strategies that can help refine your own skills.

Final Thoughts

Building your 100-million-dollar skills isn’t about adding more to your plate.

Be brutally specific about the 3–5 skills that can make the difference in your industry, or even cross-industry. Start today, focus deeply, and create your own opportunities.

Oh, and be like Dave.


Before you go… 👋

If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.

You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.


Where L&D Adds Real Value In The AI Noise

There’s a ton of talk about AI adoption.

It’s odd, because “adoption” has many definitions depending on the context and environment. The common pitfall is to measure adoption as ‘use of AI tools’ alone.

As we know with previous technology, usage alone doesn’t mean meaningful adoption.

Defining what adoption looks like in your organisation is not a task for the L&D team.

Yet, we have an opportunity to contribute to the long-term, meaningful adoption of AI across workforces as part of a broad, community-wide collaboration.

Let’s talk about that…

It takes more than access

Let’s go beyond the veil of bullshit we see online.

Access to an AI tool alone means nothing, and putting on one-hour lunch-and-learns to “make people learn AI” is a comical upskilling strategy.

If you’re a long-time reader, you’ve heard me become a broken record when I talk about what it takes to nurture meaningful, long-term change.

We have much to consider with context, culture and constraints in each environment. No two workplaces are the same; that’s why the cookie-cutter “adoption frameworks” make me laugh.

They’re a good point of inspiration but you shouldn’t follow them like a strict set of instructions.

That said, what do we need to consider beyond tools?

Read on…

People, Systems and Tools

As you’ve probably guessed, launching new technology and tools alone rarely leads to meaningful adoption.

There’s a bigger ecosystem at play.

We have to consider:

1/ People

Where are people at today, and how do we meet them there?

Everyone will have a different understanding, maturity and receptiveness to something new and unknown. In AI’s case, we have a mix of emotions from “will this take my job” to “I want it to do all this stuff I hate doing”.

The most difficult part of a change process is people because we’re all so unpredictable.

2/ Systems

Quite simply, how we work today.

What are the tried, tested and trusted conscious and unconscious systems we have in place? This covers both how we execute tasks and how we think about executing those tasks (deep, I know).

We each follow different types of systems in our day to day.

Understanding what these are and how AI will impact those is key in this change.

3/ Tools

The part you’re likely most familiar with.

Here, we should consider the tools in use today alongside new ones being deployed, and how to bridge the gap in both understanding and knowing when and where to deploy them.

Too many forget the ‘when and where’ part at their own peril.

Where you can add value

(Figure: BCG’s framework for navigating AI transformation. Source: BCG)

For us to recognise where we can provide support and drive value, we must note what’s changing.

I think this framework from BCG can help us recognise the moments where performance support is most needed in an AI transformation.

They propose it for navigating AI transformation at scale; through an L&D lens, I see it as a starting point for mapping where best to support workforces.

It’s built on two key dimensions:

1️⃣ AI Maturity

It progresses from tool-based adoption by individuals, to workflow transformation, to full, agent-led orchestration. Most organisations, and even teams within them, operate across multiple stages at once, not in a linear path.

2️⃣ Workforce Impact

This spans everything from how tasks are executed, to what skills are needed, to how teams are structured, to how organisational culture must evolve to support new ways of working.

While this covers the wider transformation AI brings across businesses, it acts as a roadmap for L&D.

A roadmap is often what we need, because it’s not uncommon for senior leaders to treat “training” (as they call it) as a boomerang that’s thrown at will whenever they decide people need to know stuff.

The framework above provides a view of where the friction, pain points and problems exist in the cycle of change. That’s where we should focus.

Map it out

I mentioned earlier not to blindly follow frameworks, and that advice is the same here.

This view from BCG is a useful foundation for each of us to think about “where can we add value”, but it will look different for each environment.

So, I’d recommend you map out what your organisational journey looks like today.

Explore the 3 pillars of tasks, talent and teams across your business, and how and where AI is starting to impact them (and might yet). It’s here you will uncover the friction and pain points where we can be of most service.
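
To make that concrete, here’s a purely hypothetical first pass for a single team (yours will look different):

  • Tasks: first drafts of reports are now AI-generated → friction: people don’t trust or verify the drafts → support: verification and editing skills, not just prompting.
  • Talent: analysts spend less hands-on time with the data → friction: core analysis skills risk eroding → support: deliberate practice and peer review to keep them sharp.
  • Teams: review cycles shrink from days to hours → friction: handoffs and quality checks haven’t caught up → support: redesign the workflow with managers, then build the learning around it.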

Some of that will be through tooling, no doubt.

Yet, I feel pretty safe in saying you’ll be spending a good deal of your time navigating changes within people and systems.

Final thoughts

There’s much to say, of course, but only so much attention span I can ask you to give.

I’m thinking of expanding some of this thinking into a long-form video; if that sounds like something you’d like to see, let me know.

In the meantime, some additional resources to explore on this include:




Why Skill Erosion is a Real Problem That No One Can Ignore

I kinda think of this post as a sequel to my analysis on “The Hidden Impact of AI on Your Skills”.

Somehow, it’s been a year since I hit publish on that one.

Isn’t it funny how time works? I remember so clearly spending months researching and putting all the pieces together to look deeper into the real impact of AI on skills so far, and now, here I am talking about it like some sort of ancient text.

My reminiscing aside…

The message of that piece was to think deeply about the over-reliance we will easily slip into with AI, and how easy it will be to convince ourselves we’re learning how to do something, when in reality, AI is doing it for us.

A year later, I only see more activity amplifying both.

That’s not to say there aren’t people rejecting total delegation to AI, and others finding the balance between artificial and human intelligence.

We’ll talk about some of those later.

Consequences

It’s such a serious sounding word, isn’t it?

Like something your parents would say to you.

Our choices can lead to consequences in many forms; that’s the risk we all take. And not to keep sounding like some old stoic, but life is essentially all about risk.

Back in October last year, when I spoke about AI over-reliance and the illusion of expertise, I only touched briefly on what the consequences of those choices could be.

A year later, it’s clear to me what that consequence is: skill erosion.

The Great Erosion of Skills

Do you remember just after the pandemic, when every headline was something like “The Great ‘x’ of blah blah?”

I’m happy to make a contribution to that movement 😂.

Jokes aside, you might be noticing that some people’s skill sets are eroding through lack of use, while others aren’t learning the skills at all. This is being driven by the change in the tasks we now deliver.

As AI gets better and better at completing a wide variety of tasks, it means we (as humans) do less in certain areas.

That is not always a bad thing.

Cognitive offloading of some tasks can amplify our ability to perform better in the workplace. A good example of this is GPS. Before we had GPS in our lives to guide us to destinations, we’d spend hours poring over gigantic maps with tiny text, trying to figure out the best route.

Now, at the touch of a button, we’re guided without having to activate one brain cell.

There’s another side to this coin, though.

Humans, for the most part, want to take the path of least resistance and favour instant gratification over the challenge (I’m no different here).

The problem is that real learning and thus improved performance are about navigating the challenges. It’s really hard to learn how, what and why if you don’t experience the struggle.

AI doesn’t take this away all on its own; how we use AI does.

In our quest for “more time”, “creative freedom”, “improved efficiency” and every other statement that tech CEOs blurt out about AI, we’ve become obsessed with the automation of everything.

This creates the consequences I’m talking about.

What we lose and what we gain

I always remember an old colleague saying, “You can have it all, you just can’t have it at the same time”.

While it was in relation to something else, I can’t help but think it fits well in this conversation.

I’ve found life to be a series of trade-offs.

If you say yes to one thing, you’re saying no to something else. It sounds like easy math (and it is), but it’s by no means a simple equation.

I’m not the first to consider the impact of AI in this way.

The folks at Gartner have been covering this as they look at what is putting future workforces at risk.

Here’s an excerpt to ponder:

Gartner predicts that by 2028, 40% of employees will be trained and coached by AI when entering new roles, up from less than 5% today.

While this shift promises faster onboarding and adaptive, scalable learning, it also means fewer chances for employees to learn from experienced peers. Junior staff, who once relied on mentorship and hands-on experience, will learn primarily from AI tools, while senior staff increasingly depend on AI to complete complex work.

This shift accelerates the loss of foundational skills and weakens expert mentorship and relationship development across the organization.

Source: Gartner

We have skills eroding through lack of practice and application and, it seems, skills failing to form at all as future generations enter the workforce.

Harold Jarche put it nicely when he said, “One key factor in understanding how we learn and develop skills is that experience cannot be automated”.

So, what can be done?

Are we doomed to roam the world skill-less and watch AI-powered tools suck the life out of the world itself? Of course not; there is a way, my fellow human hacker.

Strategies and tactics to prevent skill erosion

So, instead of moaning about the great wave of skill erosion, I’d rather focus on doing something about it.

The good news is there’s a lot we can all do.

If you haven’t already, you can find a ton of my guidance in these articles:

  1. The Hidden Impact of AI on Your Skills
  2. How to Stop AI from Hijacking Your Thinking

Saves me repeating myself like a broken record here.

Plus, the folks at Gartner offer some basic but useful actions for the workforce:

  • Watch for AI mistakes and rising error costs, and keep manual checks in place where AI is used most.
  • Retain your senior staff and encourage peer learning to slow skill loss.
  • Focus on roles at risk and review your talent strategies regularly to keep key skills strong.
  • Pair AI with human oversight and maintain manual checks as a backup for AI.
  • Encourage employees to continue exercising core skills (e.g., analysis, coding, problem-solving) even when AI tools are available — through simulations, rotations and shadowing.
  • Use AI simulations and adaptive training, but make sure people still learn from each other.

My question to you: What would you add?

Final thoughts

There’s much more to ponder on this.

Like with everything in this space, whether it happens or not is down to your individual choices and intentions. So, if you want to craft a career for the long haul, make smarter choices when it comes to your skills.




The Dangers Of Accepting What You See Online

Once upon a time, I enjoyed using social media.

Sounds weird to say nowadays, I know. Yet, when I first started using platforms like LinkedIn nearly 15 years ago, it was both a different time and place.

Platforms felt more conversational and less driven by clickbait. The algorithms weren’t so optimised for mass outrage.

I used to learn tons as a mid-twenty something trying to navigate the odd corporate world in London.

I had a ritual of saving posts to read later and call upon as a sort of personal learning system.

I adopted the same approach in the early days on Instagram.

Specifically trying to take my body from the grasshopper lightweight it was, to some form of decent-sized guy who didn’t look like he could slide through the cracks of doors.

Again, I’d absorb whatever good stuff I could.

Back in 2012, fitness influencers weren’t really a thing, so I didn’t have to cut through any noise.

Today, the story couldn’t be farther from this.

I only use LinkedIn these days, and even that is becoming a struggle because I’m met daily with disinformation, misinformation, smart people saying dumb things and a host of selfies desperately being used in the search for attention.

What concerns me most these days is so many people’s inability to ‘read beyond the headlines’.

Over the last few years, I’ve seen this most often in the sharing of an absurd amount of AI research and reports. Look, I understand the attention game and why people indulge in the shock factor to garner attention.

The problem is that these people and posts are proliferating an epidemic of often-incorrect statements, and viewers aren’t being nearly skeptical enough.

They say that AI is killing our critical thinking and analytical judgment, yet we seem to be doing that fine ourselves by not questioning what we see.

A great quote I keep in my notes folder reminds me of the need to both doubt and ask questions: “Don’t believe everything you think or see”.

So, my question is, when did we stop looking beyond the veil? And why don’t we ask questions anymore or do our own research?

I’m not expecting an answer, I’m just throwing it out there.

A few case studies

We see case studies on this almost daily.

There are probably hundreds at this stage, yet we have two recent ones which I’m sure most of you have seen. It’s going to feel like I’m picking on MIT here, but that’s not the intention.

It’s about how the data produced is being used by third parties, and how that impacts the global narrative.

The latest MIT Study, which claims 95% of organisations are getting zero return on Gen AI projects, has been doing the rounds in the last week. Now, while this makes a great clickbait headline and social post, we need to look deeper.

If more people went to the “Research and methodology” sections of reports, they would be surprised.

→ Ignoring this makes smart people say dumb things.

For this particular example, Ethan Mollick posted a great note on how this paper was researched. I’ve provided that section for you to check out below:

I’ll let you draw your own conclusions, yet I don’t feel 52 interviews over a 6-month timeline is a strong enough sample to be presented, as many people posting across social do, as “the global view”.

The devil is in the details, as they say.

Our second example, again from MIT (sorry, I do love you really), came back in June and gave rise to more clickbait and social discourse.

This has nothing to do with the research or the researchers themselves. They set out to see how using AI tools affected an individual’s ability to write essays. They conducted this with only 54 people and on this one task.

The main finding was that AI can help in the short term, but if you always use and rely on it, you’ll diminish your ability to write an essay without it.

Cool, sounds like common sense to me.

But that research gave rise to crazy headlines like this:

See how quickly that turned?

Did anyone reading these headlines, both in news apps or social posts, look beyond this? From what I see, no.

This is where the problem exists.

Even the lead researcher on this report called out the same thing and set the facts straight in their own post.

It’s not necessarily new, yet algorithm-based platforms are loving the attention it creates.

Like I said, this problem is not isolated to one report, it’s everywhere across social media and as such, society at large.

Posts with clickbait headlines and mass engagement are proliferating messages that are often misleading and, in some situations, harmful.

So, what can you do?

Become a skeptical hippo

Ok, what I’m not saying here is to become some kind of conspiracy theorist.

Instead, I want you to engage that powerful operating system that sits in each of our skulls. When you see headlines like we’ve covered or clickbaity posts, try the following:

1. Go to the primary source

  • Locate the actual report or paper (not just a blog post or tweet about it).
  • Even if you don’t read every detail, scan:
    • Abstract (what the study did and found).
    • Methods (how many people, what was tested, what tools were used).
    • Limitations (almost always at the end).

2. Ask 3 key questions

When you see a claim, pause and ask yourself:

  • Who was studied? (demographics, sample size, context).
  • What exactly was measured? (recall, ownership, not general intelligence).
  • How broad are the claims? (exploratory finding vs universal truth).

If the headline claims more than the study actually measured, that’s a red flag.

3. Notice the language

  • Headlines often use absolutes (“ChatGPT destroys learning!”).
  • Scientific reports usually use tentative language (“suggests,” “indicates,” “preliminary”). Spotting this mismatch helps you resist being pulled into the hype.

4. Slow down your consumption

  • Disinformation spreads because social media rewards speed + emotion.
  • Slow thinking (literally taking a minute to check the source or read the abstract) interrupts that cycle and gives you space to process critically.

TL;DR: Read beyond the headlines, ask the questions and embrace those skeptical hippo eyes.


📝 Final thoughts

While this might partly sound like a human raging against the machine, I hope my sentiments of ‘do your own research’ and ‘be more skeptical to reach your own conclusions’ come through.

I don’t believe we need to worry about AI killing our critical thinking and analytical judgment when we’re doing that just fine all by ourselves.




How To Stop AI From Hijacking Your Thinking

While I’m not on the doomsday train of “AI will destroy all human thinking” entirely on its own, I can’t ignore the level of stupidity that some humans exhibit when working with AI.

I shouldn’t be surprised, really, as the path of least resistance is paved with instant gratification, which is a dopamine daydream for the digitally addicted.

Still…

What happens to human thinking when so many outsource it to an artificial construct?

I’m saying this as much to myself as I am to you.

This is turning into a strange “dear diary” entry, but stick with me.

This is the end…or is it?

We both see the polarising views plastered across social feeds.

Logic seems to be lost in most conversations.

Most posts are either “AI will destroy your brain” or “outsource all your thinking to AI”. I don’t know about you, but I’m not cool with either of those options.

It doesn’t help when the majority blindly believe every headline that’s emotionally tweaked to grab attention. Taking time to look beneath the surface usually paints a different picture. Yes, I’m looking at you, MIT study.

Moving away from all the noise, only one question is worth asking right now:

Are you thinking with AI, or is AI shaping your thinking?

Maybe it’s doing both, and maybe you’re aware of that.

Saying all that, here are a few points I think are worth exploring.

What happens to ‘human thinking’ if we over-rely on AI?

This is a real grey area for me.

I’ve seen countless examples where too much AI support leads to less flexing of human skills (most notably common sense and deep thinking), and I’ve seen examples where human skills have improved.

In my own practice, my critical thinking skills have improved with weekly AI use over the last two years. It’s what I class as an unexpected but welcome benefit.

This doesn’t happen for all, though.

It depends on the person and their intent, of course.

Research and experiences seem to affirm my thoughts that the default will be to over-rely.

I mean, why wouldn’t you?

This is why any AI skills program you’re building must focus on behaviours and mindset, not just ‘using a tool.’

You can only make smart decisions if you know when, why, and how to work with AI.

One unignorable insight I’ve uncovered from collecting research over the last few years, and psychoanalysing it together with AI, is the importance of confidence in your own capabilities to enable you to think critically with AI.

Where are you playing?

This is the battleground of most social spaces today.

High-performing organisations and teams will be those that think critically with AI, not outsource their thinking to it.

Being a “Balanced Evaluator” is the gold standard. So, we could say that thinking about thinking is the new premium skill (more on that later).

The combination of high AI literacy (skills, understanding how AI works, limitations) with high trust (knowing the right tool for the job and a willingness to use it effectively) is not straightforward.

That’s where you come in as the local L&D squad.

To be here, you must critically engage with AI by asking when, how, and why to trust its output. This requires questioning, verifying, and a dose of scepticism that too many fail to apply but sorely regret when it backfires.

Also, don’t interpret “AI Trust” as blind faith. This is built through experimenting and learning how the best tools work.

What does meaningful learning with AI look like?

I (probably like some of you) have beef with traditional testing in educational systems.

It’s a memory game, rather than “Do you know how to think about and break down x problem to find the right answer?” We celebrate memory, not thinking (bizarre world).

My beef aside, research shows partnering intelligently with AI could change this.

This article, a collaboration between The Atlantic and Google, which focuses on “How AI is playing a central role in reshaping how we learn through Metacognition”, gives me hope.

The TL;DR (too long; didn’t read) of the article is that using AI tools can enhance metacognition, aka thinking about thinking, at a deeper level.

The idea is, as Ben Kornell, managing partner of the Common Sense Growth Fund, puts it, “In a world where AI can generate content at the push of a button, the real value lies in understanding how to direct that process, how to critically evaluate the output, and how to refine one’s own thinking based on those interactions.”

In other words, AI could shift us to prize ‘thinking’ over ‘building alone.’

And that’s going to be an important thing in a land of ‘do it for me.’

Side note: I covered my view on the future of learning ditching recall and focusing on human reasoning, in a previous post. You’ll find a bunch of examples showing this in action there.

To learn, you must do

The Atlantic article shared two learning-focused experiments by Google.

In the first, pharmacy students interacted with an AI-powered simulation of a distressed patient demanding answers about their medication.

  • The simulation is designed to help students hone communication skills for challenging patient interactions.
  • The key is not the simulation itself, but the metacognitive reflection that follows.
  • Students are encouraged to analyse their approach: what worked, what could have been done differently, and how their communication style affected the patient’s response.

The second example asks students to create a chatbot.

Coincidentally, I used the same exercise in one of my “AI for Business Bootcamps” last year.

It’s never been easier for the everyday human to create AI-powered tools with no-code platforms.

Yet, you and I both know that easy doesn’t mean simple.

I’m sure you’ve seen the mountain of dumb headlines with someone saying we don’t need marketers/sales/learning designers because we can do it all in ‘x’ tool.

Ha ha ha ha is what I say to them.

Clicking a button that says ‘create’ with one sentence doesn’t mean anything.

To demonstrate this to my students, we spent 3 hours in an “AI Assistant Hackathon.” This involved the design, build, and delivery of a working assistant.

What they didn’t know is that I wasn’t expecting them to build a product that worked.

Not well, anyway.

I spent the first 20 minutes explaining that creating a ‘good’ assistant has nothing to do with what tool you build it in and everything to do with how you design it, ya know, the A-Z user experience.

Social media will try to convince you that all it takes is 10 minutes to build a high-performing chatbot.

While that’s true from a tech perspective, the product and its performance will suck.

You need to think deeply about it

When the students completed the hackathon, one thing became clear.

It’s not as simple or easy as it looks to create a high-quality product, and you’re certainly not going to do it in minutes.

But, like I said, the activity’s goal was not to actually build an assistant, but rather, to understand how to think deeply about ‘what it takes’ to build a meaningful product.

I’m talking about:

  • Understanding the problem you’re solving
  • Why it matters to the user
  • Why the solution needs to be AI-powered
  • How the product will work (this covers the user experience and interface)

Most students didn’t complete the assistant/chatbot build, and that’s perfect.

It’s perfect because they learned, through real practice, that it takes time and a lot of deep thinking to build a meaningful product.

“It’s not about whether AI helped write an essay, but about how students directed the AI, how they explained their thought process, and how they refined their approach based on AI feedback. These metacognitive skills are becoming the new metrics of learning.”

Shantanu Sinha, Vice President and General Manager of Google for Education

AI is only as good as the human using it

Perhaps the greatest ‘mistake’ made in all this AI excitement is forgetting the key ingredient for real success.

And that’s you and me, friend.

Like any tool, it only works in the hands of a competent and informed user.

I learned this fairly young when a power drill was thrust into my hands for a DIY mission. Always read the instructions, folks (another story for another time).

Anyway, all my research and real-life experience with building AI skills have shown me one clear lesson.

You need human skills to unlock AI’s capabilities.

You won’t go far without a strong sense (and clarity) of thinking, and the analytical judgment to review outputs.

Embrace your Human Chain of Thought

Yes, I made up this phrase (sort of).

Let me give you some context…

Early iterations of Large Language Models (LLMs) from all the big AI names you know today weren’t great at thinking through problems or explaining how they got to an answer.

That ability to break down problems and display its thinking is called the Chain of Thought technique.

This was comically exposed with any maths problem you’d throw at these early-stage LLMs.

They would struggle with even the most basic requests.

It’s a little different today, as we have reasoning models. These have been trained to specifically showcase how they solve your problems and present that information in a step-by-step fashion.
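
To make the technique concrete, here’s an illustration of my own (not from any specific model’s documentation). Ask an early LLM “What’s 17 × 24?” and it would often blurt out a confident wrong number. A Chain of Thought prompt instead asks for the working:

“What’s 17 × 24? Break the problem into steps and show your working before giving the final answer.”

A good step-by-step answer looks like: 17 × 24 = (17 × 20) + (17 × 4) = 340 + 68 = 408. Same question, but the path to the answer is now visible and checkable.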

We now expect all the big conversational AI tools to do this, so why don’t we value the same in humans?

Those who nurture this will have greater command of their career.

So don’t ignore your Human Chain of Thought.

Focusing your energy on the ability to explain your reasoning is far more useful in a world littered with tech products that can recall info on command.

Tools to enhance, not erode your thinking with AI

A couple of useful tools and frameworks to get you firing those neurons from the most powerful tool at your disposal (fyi, it’s your brain).

1/ Good prompting is just clear thinking

Full disclosure: There’s no such thing as a perfect prompt.

They’re often messy, don’t always work the same way every time, and need continuous iteration.

Saying that, you can do a lot (and I mean a lot!) to set yourself up for success.

Here’s a (sorta) framework I use to help me think critically before, during and after working with AI.

Step 1: Assess

Can AI even help with your task? (It’s not magic, so yes, you need to ask that)

Step 2: Before the prompt

  • What does the LLM need to know to successfully support you?
  • What does ‘good’ look like?
  • Do you have examples?

And, most importantly, don’t prompt and ghost (there’s an example prompt just below).
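
Put together, a Step 2-style prompt might look something like this (a made-up example; swap in your own context):

I’m an L&D manager preparing a 30-minute session on spotting AI clickbait for a non-technical sales team. A good outcome is a one-page session outline with three activities, written in plain English. Here’s an example of the tone I want: “Activity 1: In pairs, find the boldest claim in this headline…”. Draft the outline.

Context, a definition of ‘good’ and an example, all before the model writes a word.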

Step 3: Analyse the output

  • Does this sound correct?
  • Is it factual?
  • What’s missing?

Step 4: Challenge & question

I’m not talking about a police investigation here. 

Just ask:

  • Based on my desired outcome, have we missed anything?
  • From what you know about me, is there anything else I should know about ‘x’? (works best with ChatGPT custom instructions and memory)
  • What could be a contrarian take on this?

Step 5: Flip the script

Now we turn the tables by asking ChatGPT to ask you questions:

Using the data/provided context or content (delete as needed), you will ask me clarifying questions to help shape my understanding of the material.

They should be critical and encourage me to think deeply about the topics and outcomes we’ve covered so far. Let’s start with one question at a time, and build on this.

This is a powerful way to develop your critical skills and how you collaborate with AI.
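
And if you like to tinker, here’s what that loop can look like in code. This is a minimal sketch, not a definitive implementation: it assumes the OpenAI Python SDK (pip install openai) with an API key set in your environment, and the model name, prompts and ask helper are my own illustrative choices.

# A rough sketch of the loop: assess, prompt with context, analyse, challenge, flip.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment
MODEL = "gpt-4o"   # illustrative choice; swap in whatever model you actually use

def ask(messages):
    # Send the running conversation and return the assistant's reply text.
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content

# Step 2: give the model context, what 'good' looks like, and an example.
messages = [
    {"role": "system", "content": (
        "You are helping an L&D consultant draft a one-page skills plan. "
        "Good looks like: plain English, three concrete actions, no jargon. "
        "Tone example: 'Block 30 minutes every Friday to...'"
    )},
    {"role": "user", "content": "Draft a plan for improving my data-visualisation skills."},
]
draft = ask(messages)
print(draft)  # Step 3: read it yourself. Does it sound correct? What's missing?

# Step 4: challenge and question rather than prompt-and-ghost.
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Based on my desired outcome, have we missed anything? "
                                "What could be a contrarian take on this?"},
]
follow_up = ask(messages)
print(follow_up)
messages.append({"role": "assistant", "content": follow_up})

# Step 5: flip the script and let the model quiz you, one question at a time.
messages.append({"role": "user", "content": (
    "Using the content above, ask me clarifying questions to deepen my understanding. "
    "One question at a time, building as we go."
)})
print(ask(messages))

The point isn’t the specific calls; it’s that the conversation is a loop you stay in, not a single prompt you fire and forget.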

P.S. Get more non-obvious insights and guidance on AI prompting in my short course designed specifically for busy people like you.

2/ Unpack the problem

Before you start building that next ‘thing’, check out this little framework, which has helped me to do my best work over the last decade.

3/ Partner with AI, don’t use it like a one-click delivery service

If I had a dollar for every time I said this, I’d be a billionaire by next year.

Often, it’s the small and simple actions that bring the most valuable results.

That’s not to say it’s easy to do.

In this video, I share how you can use AI to improve your critical thinking as a thought partner.

Final thoughts

There’s much more to say about this, friend.

But we’ll pause here for now.

Thinking is cool, and thinking about thinking is even cooler.

Let your brain dwell on that for a bit. AI can be an extension of your thinking, but never let it shape it.

Keep being smart, curious and inquisitive, as I know you are.


