The Terrain of Confirmation: How Bias Hijacks Visual Literacy and Feeds Anti-Intellectualism
- TJ Ashcraft

- Feb 20
- 4 min read


In an era overflowing with information, credibility is frequently determined by visual signals that seem like evidence. However, even with a solid understanding of visual literacy—knowing to inquire about framing, origin, and purpose—there remains a subtle influence that can still mislead us: confirmation bias.
Confirmation bias is the tendency to seek, interpret, and remember information in ways that favor what we already believe. (psy2.ucsd.edu) It doesn’t require bad faith. It doesn’t require low intelligence. It’s a default setting of human cognition—especially under stress, speed, and social pressure.
And right now, the terrain is engineered for it.
Where Confirmation Bias Shows Up in Visual Culture
Visual literacy teaches us to read the frame. Confirmation bias decides which frames we trust, often before we notice we’re choosing.
It shows up as:
1) Selective attention (what we even stop on)
We linger on images that confirm our storyline: the clip that “proves it,” the screenshot that validates our suspicion, the chart that flatters our certainty. Everything else becomes background noise.
2) Biased interpretation (what the image “means”)
Two people can watch the same footage and extract opposite truths—not because the footage is neutral, but because interpretation is guided by prior belief. Nickerson’s review details how people test hypotheses in ways that are partial to what they already think is true. (psy2.ucsd.edu)
3) Biased memory (what we remember later as “evidence”)
We remember the confirming example as representative and forget the disconfirming ones as exceptions.
This matters because much of modern persuasion is not about argument—it’s about curation. And confirmation bias turns curation into a self-sealing system.
Platforms Don’t Just Reflect Bias. They Scale It.
Confirmation bias is old. What’s new is the distribution system.
Research syntheses on echo chambers and filter bubbles show that algorithmic curation, network homophily, and engagement incentives can intensify selective exposure and reinforce existing attitudes—though effects vary by platform, measurement, and context. (Springer)
Translation: the feed doesn’t simply show you information. It shows you what keeps you there. And what keeps you there is often what confirms you—what feels familiar, righteous, or clarifying.
The Bridge to Anti-Intellectualism
Anti-intellectualism rarely begins as “I hate knowledge.” More often it begins as: “I don’t trust their knowledge.”
Here’s where confirmation bias becomes combustible.
When someone already suspects that experts are biased, corrupt, or “in on it,” confirmation bias makes them:
overweight anecdotes over base rates,
treat debunking as evidence of a cover-up,
interpret uncertainty as incompetence,
and prefer “common sense” narratives that feel internally consistent.
Motivated reasoning research helps explain why: people don’t just process information to be accurate—they often process it to protect identity, worldview, or belonging. (ScienceDirect)
So the terrain becomes a loop:
Overload → shortcuts → identity-protective interpretation → selective sharing → stronger prior belief → deeper distrust of expertise.
Not because people are incapable of thinking, but because thinking has become socially and cognitively expensive.
Confirmation Bias Has an Aesthetic
If anti-intellectualism has a look, confirmation bias has a style of “proof.”
You’ll recognize it:
Screenshots as receipts (context removed, timestamp absent, source untraceable)
Single data points as destiny (a chart without methods, a metric without denominators)
Before/after visuals as causality (correlation framed as explanation)
Compilation edits (quantity of clips used to imply inevitability)
Certainty aesthetics (clean typography, emphatic captions, confident delivery)
Visual literacy can identify these patterns. But confirmation bias decides whether we apply that literacy evenly—or only when it’s convenient.
The Share Button Is Where Bias Becomes Infrastructure
One of the most practical findings in misinformation research is also the most humbling: people often share inaccurate content not because they consciously believe it, but because they’re not thinking about accuracy in the moment of sharing. Interventions that simply prompt users to consider accuracy can improve the quality of what gets shared. (Nature)
This matters for confirmation bias because sharing is a commitment device. Once we publicly endorse a claim, it becomes part of our identity—and identity is sticky.
Terrain Lens: Reading Confirmation Bias
Try these questions the moment something feels “obviously true”:
What am I hoping this proves?
If this were false, what would I expect to see instead?
Am I applying the same standard to my side as the other side?
What context would most weaken this claim (source, date, full clip, base rate)?
Is the visual doing evidence-work—or vibe-work?
And one more, borrowed from the most scalable interventions:
Am I sharing this because it’s accurate—or because it’s satisfying? (Nature)
Redrawing the Map
Confirmation bias is not a moral failure. It’s a navigational hazard.
In a visually saturated environment, it quietly decides what counts as evidence, which experts feel trustworthy, and which doubts feel “reasonable.” Left unchecked, it doesn’t just distort our perception—it can harden into a posture where expertise itself becomes suspect, and method starts to feel like manipulation.
Visual literacy is the skill of reading the frame. But intellectual honesty is the willingness to ask: What if the frame I prefer isn’t the truest one?
The terrain is not asking us to become perfect thinkers. It’s asking us to become better navigators.
Where do you notice confirmation bias most in your own media diet—what topics, what sources, what visuals—and what would it look like to build one small “accuracy pause” into that routine?

