The Expert Look Part I: Why They Believe It
- TJ Ashcraft

- Mar 6
- 5 min read

There is a particular texture to the modern expertise feed.
Scroll for sixty seconds and you'll find the same insight — the same phrasing, the same three-step framework, the same confident assertion about leadership or learning or creativity — delivered by dozens of different voices, each presenting it as their own considered view. The carousel looks original. The caption sounds earned. The profile photo has the right kind of authority. But the content is identical, laundered through slightly different typography and a new author photo.
This is not primarily a story about dishonesty. Most of the people sharing that content genuinely believe they understand it. They read something, felt a flash of recognition, repackaged it, and posted it with confidence. The problem isn't that they're lying. The problem is that they've mistaken the feeling of understanding for understanding itself.
And that mistake is not random. It's predictable. It's documented. And the modern information environment is specifically designed to accelerate it.
The Illusion of Explanatory Depth
In the early 2000s, cognitive scientists Leonid Rozenblit and Frank Keil identified a phenomenon they called the illusion of explanatory depth: people routinely believe they understand how things work far better than they actually do. When asked to explain a zipper, a toilet, or a bicycle in step-by-step detail, most people discover — mid-explanation — that their understanding evaporates. What felt like knowledge was closer to familiarity.
The illusion runs deep because recognition and comprehension feel identical from the inside. Reading a clear, well-structured argument about confirmation bias or organizational change feels like learning confirmation bias or organizational change. The clarity of someone else's explanation becomes indistinguishable from your own grasp of the concept.
This is the cognitive machinery underneath the expertise feed. The person who read one clear LinkedIn post about "the five stages of psychological safety" and then wrote their own five-stage post isn't being cynical. They genuinely feel like they know this. The original author's clarity has transferred — not as understanding, but as a convincing simulation of it.
The Dunning-Kruger Gradient
The Dunning-Kruger effect — the finding that people with limited knowledge in a domain tend to overestimate their competence — is often cited as evidence of stupidity or arrogance. That reading misses the point.
The more precise finding is about the structure of learning itself. Early in acquiring any skill or domain of knowledge, you don't yet know what you don't know. The map of your ignorance is invisible to you because you haven't yet developed the framework to perceive it. Confidence peaks early — not because people are deluded, but because performing a skill and assessing that performance draw on the same cognitive tools, and those tools haven't been calibrated yet.
What the modern platform environment does is freeze people at that early peak. It rewards the confident post over the uncertain one. It amplifies the clear framework over the qualified argument.
The gradient exists naturally in all learning. Platforms turn it into an identity.
Content Laundering at Scale
What fills the feed — hundreds of posts repeating the same insight from the same probable source — is what happens when the illusion of explanatory depth meets algorithmic distribution.
The original idea, wherever it came from, gets read by someone who experiences that flash of recognition. They share a version. That version gets engagement. Others see the engagement, read the post, experience their own flash of recognition, and write their own version. The insight propagates not through original research or lived experience, but through the social proof of repetition.
Each iteration reinforces the previous one. By the fifth generation of the post, the idea feels self-evidently true — not because evidence has accumulated, but because exposure has. This is the illusory truth effect operating at platform scale: repetition manufactures credibility, and credibility manufactures confidence.
The person on generation fifteen genuinely believes they're sharing something they understand. They've seen it validated dozens of times. They've seen it resonate with their audience. The costume of expertise has become, in their experience, indistinguishable from the thing itself.
The Platform as Confidence Engine
Social platforms did not invent this dynamic. But they industrialized it.
The architecture of engagement — likes, shares, follower counts, algorithmic amplification — functions as a continuous confidence feedback loop. Post something that resonates and you receive immediate social validation. That validation feels like confirmation of understanding. The more it happens, the more certain you become — not because your knowledge has deepened, but because your social proof has compounded.
Research on what psychologists call "earned dogmatism" describes how people who have been recognized as credible in one domain develop an increased sense of license to hold strong opinions in adjacent domains — domains where their actual expertise may be thin or nonexistent. The feeling of being a credible voice generalizes beyond its actual radius. And in a platform environment that rewards reach over rigor, that generalization goes largely unchallenged.
This is the mechanism behind the wellness influencer who pivots to financial advice, the tech founder who pivots to geopolitical analysis, the life coach who becomes a political commentator. Each step feels internally justified — the confidence has been earned, just in a different room.
The Ethical Preview
None of this fully excuses the behavior. There is a threshold — blurry but real — between genuine self-deception and willful performance. Between someone who doesn't know what they don't know, and someone who has begun to suspect it but continues presenting as certain because certainty is more profitable.
That threshold is where psychology ends and ethics begins. And it's where the second part of this conversation goes. Because the question isn't only: why do people believe they're experts? It's: what does someone who presents themselves as an expert owe the audience that trusted them?
Terrain Lens: Reading Performed Confidence
When someone presents with authority — on your feed, in a meeting, in a headline — try this:
1. What is this asking me to feel before I think?
2. Is this confidence proportionate to the complexity of the subject?
3. Where did this idea originate — and how many steps removed is this version from the source?
4. What qualifications, caveats, or counterarguments are absent — and does their absence feel deliberate?
5. Is the platform rewarding this person for being right — or for sounding right?
6. Is this inviting understanding — or offering permission to stop thinking?
7. What would this argument look like if the author had to defend it to someone with deeper expertise?
What's one idea in your feed this week that appeared in multiple forms from different voices — and what does the uniformity of that content tell you about how it was actually understood?