I got an out of the blue message recently.
“I just saw this video where an AI agent completed a test for an employee, are you worried about this?”
There was no “Hey, Ross”, “How are you, Ross?”, “Hope life is good, Ross?” – just the question. I also find it incredibly unnerving to write about myself in the third person. That’s how you know I write all of this myself, not AI. AI is way too polished to do any of that madness.
The sender’s apparent lack of care for me as a human aside, I have an answer.
I imagine it’s not the one you’d expect.
My take will be a little bit ranty, but nearly 20 years in the field has made me somewhat numb to dumb education and learning practices.
What was shown in the video (fyi, can’t find the video now, must have rage deleted the message) doesn’t worry me as much as the fact that the education system is broken.
Yes, that sounds like a lame line to court attention, but this isn’t an algorithm and I’ll go down with this ship.
- The problem with tests and quizzes in courses
- Why AI Coaches beat any test or quiz for real learning measurement
- The memory game is not a measure of intelligence
- How to rewire courses with AI to measure real impact
- Why do this instead of quizzes, tests, etc.?
- AI Tutor and Coach examples
- How I use AI coaches in my online courses
- Using AI as a Tutor with Google AI Studio
- Using AI as a Real-Time Coach with Google AI Studio
- Final thoughts
The problem with tests and quizzes in courses

If we confirm ‘skills’ and ‘understanding’ through quizzes, then it’s all just a game of who has the best memory, not who understands how to do ‘x’.
I know millions of institutions and corporate learning experiences worship at the altar of the almighty multiple-choice exam as the measuring stick for human intelligence, skills and expertise.
Doesn’t mean it’s right.
Agentic AI thrives here because it’s a pretty easy process to tick a box of multiple-choice answers. There’s nothing inherently hard about that. It might be one of the easiest use cases of AI agents to date.
That is not the problem.
The actual problem is how we’ve shaped the process of measuring intelligence, skills and expertise.
What we should be doing is assessing critical thinking and how students/people approach and solve problems.
This is where AI tutors will play a big role. Instead of taking some stupid quiz or exam, you’ll talk with a tutor to explain concepts, provide analogies and break down your human chain of thought.
Thus testing your problem-solving capabilities and helping you identify blindspots.
The latest AI tech finally enables us to do something about this.
But let’s not stay in the rant too long…
So, instead of me shaking my tea cup at the world, let me share the scale of what’s possible and how you can experiment with this in your own learning experiences.
Why AI Coaches beat any test or quiz for real learning measurement

A few weeks back, I finished a Machine Learning Cert with Google.
And I really enjoyed it.
Yes, you read that right, someone actually enjoyed learning something. But what triggered this emotion? Simple: they treated me like an adult throughout the entire process, not just the stereotypical ‘student’.
The most impactful way they did this was by removing all those worthless and kinda insulting tests and quizzes that pollute too many education and workplace learning products.
Instead, I was assessed based on what I understood from the course by talking with an AI assistant, which acted as a semi-assessor and coach.
No multiple-choice questions for me to get ChatGPT to answer here.
At the end of each module, I had to face what we could call the final boss in Google’s AI coach.
After hours spent learning about the wonderful components of machine learning, I’d talk with the AI coach to share what I learnt, answer its probing questions and explain how I would convey these learnings to others through examples and analogies.
It was unique compared to the industry standard, and that made it very rewarding.
Even as I write these words, my recollection is so positive, and the process helped me cement my understanding of many of the core concepts in a way I know a multiple-choice test couldn’t have.
Now, you could read this and think, “This guy is crazy, isn’t a test better?” — NO!
The memory game is not a measure of intelligence
The problem we’ve created as a society is this odd ‘test your memory’ game that we use to assess an individual’s skills and expertise.
Completing a course on ‘x’ doesn’t mean you know how to do ‘x’ for real.
That goes for this very course I’ve shared with you.
I took the cert as a way to expand my understanding and capabilities in the ML realm, but don’t expect me to be building algorithms anytime soon.
Knowing how to do ‘x’ requires me to demonstrate that I can do it and explain the critical thinking behind it.
We need to spend more time prioritising how we solve problems, thinking critically and nurturing our human chain of thought, not being the top memory champion.
And I think AI can help us shape this.

How to rewire courses with AI to measure real impact
Even before the Google cert, I’ve had AI coaches in two of my online courses.
One in my performance consulting course and the other for my AI prompting masterclass.
Both are vital, imo: they help students take action on what they’ve learnt and let me validate whether they actually understand any of it.
They’ve been transformative for my students.
Both coaches are used daily by new and returning students. They extend the value of the courses, make sure students get real-time support and keep making the content applicable for years to come.
This is the power AI can bring to education and learning if you go beyond content creation.

Why do this instead of quizzes, tests, etc.?
Easy:
- I think they’re utterly pointless, but I assume you figured that out already.
- Because I want to develop each student’s Human Chain of Thought (HCOT).
I need to do a bit of explaining with Human Chain of Thought.
Early iterations of Large Language Models (LLMs) from all the big AI names you know today weren’t great at thinking through problems or explaining how they got to an answer.
That ability to break down problems and display the steps is called the Chain of Thought technique.
This was comically exposed with any maths problem you’d throw at these early-stage LLMs.
They would struggle with even the most basic requests.
It’s a little different today, as we have reasoning models. These have been trained to specifically showcase how they solve your problems and present that information in a step-by-step fashion.
We now expect all the big conversational AI tools to do this, so why don’t we value the same in humans?
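To make the contrast concrete, here’s a tiny sketch of the same question asked two ways. The prompt wording and function names are my own illustration, not from any model vendor’s documentation:

```python
# Illustration of the Chain of Thought idea: the same question, asked two ways.
# Early LLMs were typically prompted the first way; reasoning models are
# effectively trained to answer the second way by default.

def direct_prompt(question: str) -> str:
    """Ask for the answer only, with no visible reasoning."""
    return f"{question}\nAnswer with the final result only."

def chain_of_thought_prompt(question: str) -> str:
    """Ask the model to show its reasoning step by step before answering."""
    return (
        f"{question}\n"
        "Think through the problem step by step, showing each stage of "
        "your reasoning, then state the final answer."
    )

if __name__ == "__main__":
    q = "A course has 4 modules of 3 hours each. How long is the course?"
    print(direct_prompt(q))
    print(chain_of_thought_prompt(q))
```

The second prompt is, in miniature, what an AI coach asks of a human: show me how you got there, not just the answer.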
Providing AI coaches that help my students contextualise, apply and truly understand what they’ve learnt amplifies this Human Chain of Thought.
So next time you design a learning experience, maybe ditch the tests and quizzes for a personalised AI coach/assessor/I’ll think of a better name in the future.
I plan to cover more on how to build these types of solutions in my next AI for solving real business problems boot camp. I only do these once or twice a year, so you can join the waitlist to be first in line for the next one.
AI Tutor and Coach examples
I’ve used the word ‘coach’ a lot in this one.
We can throw AI tutors into that mix, too.
Here’s how I see the difference, btw:
- AI Tutor = Breaks down concepts and works in more of a professor style
- AI Coach = Works with you in a live environment to solve challenges together. Basically, the new “Learning in the flow” but with AI.
Of course, these terms are interchangeable, and the capabilities can be merged.
Often, I find it’s easier to show you what I’m talking about with AI than try to describe it to you, so here are some examples:
How I use AI coaches in my online courses
Using AI as a Tutor with Google AI Studio
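You can prototype a tutor like this in Google AI Studio with a system instruction, then call it from code. Here’s a minimal sketch, assuming the `google-generativeai` Python SDK, an API key in a `GEMINI_API_KEY` environment variable, and the `gemini-1.5-flash` model; the instruction wording is my own, not Google’s:

```python
import os

def tutor_instruction(topic: str) -> str:
    """System instruction that makes the model behave like a tutor:
    it breaks concepts down, professor-style, and checks understanding."""
    return (
        f"You are a patient tutor for {topic}. Break concepts into small "
        "steps, use analogies, and after each explanation ask the learner "
        "one question to check their understanding. Never just hand over "
        "answers to assessment questions."
    )

def make_tutor(topic: str):
    # SDK imported here so the prompt helper above works without it installed.
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel(
        "gemini-1.5-flash",
        system_instruction=tutor_instruction(topic),
    )
    return model.start_chat()  # multi-turn chat keeps the tutoring context

if __name__ == "__main__":
    chat = make_tutor("machine learning")
    print(chat.send_message("What is gradient descent?").text)
```

The point is how little is needed: the tutor behaviour lives almost entirely in the system instruction, which is exactly what you iterate on inside AI Studio.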
Using AI as a Real-Time Coach with Google AI Studio
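A coach differs from the tutor above in that it assesses and probes rather than explains. Here’s a hedged sketch of a console coaching loop, again assuming the `google-generativeai` SDK, a `GEMINI_API_KEY` environment variable and the `gemini-1.5-flash` model; the instruction wording and the concept list are illustrative, not my actual course setup:

```python
import os

def coach_instruction(module: str, concepts: list[str]) -> str:
    """System instruction for a coach that assesses a learner's own
    explanations instead of quizzing them with multiple choice."""
    bullet_list = "\n".join(f"- {c}" for c in concepts)
    return (
        f"You are a coach assessing a learner who just finished '{module}'.\n"
        "Ask them to explain, in their own words and with an analogy, each "
        "of these concepts:\n"
        f"{bullet_list}\n"
        "Probe vague answers with follow-up questions, point out blind "
        "spots, and never reveal a full answer before they have attempted it."
    )

def run_coach(module: str, concepts: list[str]) -> None:
    # SDK imported here so the instruction helper works without it installed.
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel(
        "gemini-1.5-flash",
        system_instruction=coach_instruction(module, concepts),
    )
    chat = model.start_chat()
    print(chat.send_message("I'm ready to be assessed.").text)
    while True:  # simple real-time loop: learner explains, coach probes
        answer = input("> ")
        if answer.lower() in {"quit", "exit"}:
            break
        print(chat.send_message(answer).text)

if __name__ == "__main__":
    run_coach("Intro to ML", ["gradient descent", "overfitting"])
```

Swap the console loop for your course platform’s chat widget and you have the shape of a final-boss conversation instead of a final exam.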
Final thoughts
I get that tests and quizzes are an easy way to measure “learning”, but that doesn’t make them useful.
AI now lets us reshape this.
You can create a similar AI coach to the one I had with Google, or the ones I use across my own courses.
Saying goodbye to completions as the measure of success is the new reality. Now, you can actually see whether people truly ‘get it’, not just if they finished something and passed the test.
→ If you’ve found this helpful, please consider sharing it wherever you hang out online, tag me in and share your thoughts.
Before you go… 👋
If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.
You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.

