Once upon a time, I enjoyed using social media.
Sounds weird to say nowadays, I know. Yet, when I first started using platforms like LinkedIn nearly 15 years ago, it was both a different time and place.
Platforms felt more conversational and less driven by clickbait. The algorithms weren’t yet optimised for mass outrage.
I learned tons as a mid-twenty-something trying to navigate the odd corporate world in London.
I had a ritual of saving posts to read later and call upon as a sort of personal learning system.
I adopted the same approach in the early days on Instagram.
Specifically trying to take my body from the grasshopper lightweight it was, to some form of decent-sized guy who didn’t look like he could slide through the cracks of doors.
Again, I’d absorb whatever good stuff I could.
Back in 2012, fitness influencers weren’t really a thing, so I didn’t have to cut through any noise.
Today, the story couldn’t be farther from this.
I only use LinkedIn these days, and even that is becoming a struggle because I’m met daily with disinformation, misinformation, smart people saying dumb things and a host of selfies desperately being used in the search for attention.
What concerns me most these days is so many people’s inability to ‘read beyond the headlines’.
I see this most often in the last few years with the sharing of the absurd amount of research and reports on AI. Look, I understand the attention game and why people indulge in the shock factor to garner attention.
The problem is that these people and posts are spreading an epidemic of often incorrect statements, and viewers aren’t being nearly skeptical enough.
They say that AI is killing our critical thinking and analytical judgment, yet we seem to be doing that fine ourselves by not questioning what we see.
A great quote I keep in my notes folder reminds me of the need to both doubt and ask questions: “Don’t believe everything you think or see”.
So, my question is, when did we stop looking beyond the veil? And why don’t we ask questions anymore or do our own research?
I’m not expecting an answer, I’m just throwing it out there.
A few case studies
We see case studies on this almost daily.
There are probably hundreds at this stage, yet two recent ones stand out, which I’m sure most of you have seen. It’s going to feel like I’m picking on MIT here, but that’s not the intention.
My issue is with how the data produced is being used by third parties and how that impacts the global narrative.
The latest MIT Study, which claims 95% of organisations are getting zero return on Gen AI projects, has been doing the rounds in the last week. Now, while this makes a great clickbait headline and social post, we need to look deeper.
If more people went to the “Research and methodology” sections of reports, they would be surprised.
→ Ignoring this makes smart people say dumb things.
For this particular example, Ethan Mollick posted a great note on how this paper was researched. I’ve provided that section for you to check out below:


I’ll let you draw your own conclusions, yet I don’t feel 52 interviews over a 6-month timeline is a strong enough sample to be presented, as many people posting across social media have, as “the global view”.
The devil is in the details, as they say.
Our second example, again from MIT (sorry, I do love you really), dates back to June and gave rise to more clickbait and social discourse.
This has nothing to do with the research or the researchers themselves. They set out to see how using AI tools affected an individual’s ability to write essays. They conducted this with only 54 people and on this one task.
The main finding was that AI can help in the short term, but if you always use and rely on it, you’ll diminish your ability to write an essay without it.
Cool, sounds like common sense to me.
But that research gave rise to crazy headlines like this:

See how quickly that turned?
Did anyone reading these headlines, whether in news apps or social posts, look beyond them? From what I see, no.
This is where the problem exists.
Even the lead researcher on this report called out the same thing and set the facts straight in their own post.
It’s not necessarily new, yet algorithm-based platforms are loving the attention it creates.
Like I said, this problem isn’t isolated to one report; it’s everywhere across social media and, as such, society at large.
Posts with clickbait headlines and mass engagement are spreading often misleading and, in some situations, harmful messages.
So, what can you do?

Become a skeptical hippo
Ok, I’m not saying you should become some kind of conspiracy theorist.
Instead, I want you to engage that powerful operating system that sits in each of our skulls. When you see headlines like we’ve covered or clickbaity posts, try the following:
1. Go to the primary source
- Locate the actual report or paper (not just a blog post or tweet about it).
- Even if you don’t read every detail, scan:
  - Abstract (what the study did and found).
  - Methods (how many people, what was tested, what tools were used).
  - Limitations (almost always at the end).
2. Ask 3 key questions
When you see a claim, pause and ask yourself:
- Who was studied? (demographics, sample size, context).
- What exactly was measured? (e.g. recall and essay ownership, not general intelligence).
- How broad are the claims? (exploratory finding vs universal truth).
If the headline claims more than the study actually measured, that’s a red flag.
3. Notice the language
- Headlines often use absolutes (“ChatGPT destroys learning!”).
- Scientific reports usually use tentative language (“suggests,” “indicates,” “preliminary”). Spotting this mismatch helps you resist being pulled into the hype.
4. Slow down your consumption
- Disinformation spreads because social media rewards speed + emotion.
- Slow thinking (literally taking a minute to check the source or read the abstract) interrupts that cycle and gives you space to process critically.
TL;DR: Read beyond the headlines, ask the questions and embrace those skeptical hippo eyes.
📝 Final thoughts
While this might partly sound like a human raging against the machine, I hope my sentiments of ‘do your own research’ and ‘be more skeptical to reach your own conclusions’ come through.
I don’t believe we need to worry about AI killing our critical thinking and analytical judgment; we’re doing that just fine all by ourselves.
Before you go… 👋
If you like my writing and think “Hey, I’d like to hear more of what this guy has to say” then you’re in luck.
You can join me every Tuesday morning for more tools, templates and insights for the modern L&D pro in my weekly newsletter.
