The Terrain of Confirmation: How Bias Hijacks Visual Literacy and Feeds Anti-Intellectualism

  • Writer: TJ Ashcraft
  • Feb 20
  • 4 min read

Updated: Mar 6


In an era overflowing with information, credibility is frequently determined by visual signals that seem like evidence. However, even with a solid understanding of visual literacy — knowing to inquire about framing, origin, and purpose — there remains a subtle influence that can still mislead us: confirmation bias.


Confirmation bias is the tendency to seek, interpret, and remember information in ways that favor what we already believe. It doesn't require bad faith. It doesn't require low intelligence. It's a default setting of human cognition — especially under stress, speed, and social pressure. And right now, the terrain is engineered for it.


Where Confirmation Bias Shows Up in Visual Culture

Visual literacy teaches us to read the frame. Confirmation bias decides which frames we trust, often before we notice we're choosing.


It shows up as selective attention — we linger on images that confirm our storyline: the clip that "proves it," the screenshot that validates our suspicion, the chart that flatters our certainty. Everything else becomes background noise. It shows up as biased interpretation — two people can watch the same footage and extract opposite truths, not because the footage is neutral, but because interpretation is guided by prior belief.


And it shows up as biased memory — we remember the confirming example as representative and quietly forget the disconfirming ones as exceptions.


This matters because much of modern persuasion is not about argument — it's about curation. And confirmation bias turns curation into a self-sealing system.


Platforms Don't Just Reflect Bias. They Scale It.

Confirmation bias is old. What's new is the distribution system. Research syntheses on echo chambers and filter bubbles show that algorithmic curation, network homophily, and engagement incentives can intensify selective exposure and reinforce existing attitudes — though effects vary by platform, measurement, and context.


Translation: the feed doesn't simply show you information. It shows you what keeps you there. And what keeps you there is often what confirms you — what feels familiar, righteous, or clarifying.


The Bridge to Anti-Intellectualism

Anti-intellectualism rarely begins as "I hate knowledge." More often it begins as: "I don't trust their knowledge."


When someone already suspects that experts are biased, corrupt, or "in on it," confirmation bias makes them overweight anecdotes over base rates, treat debunking as evidence of a cover-up, interpret uncertainty as incompetence, and prefer "common sense" narratives that feel internally consistent. Motivated reasoning research helps explain why: people don't just process information to be accurate — they often process it to protect identity, worldview, or belonging.


So the terrain becomes a loop:

 

Overload → shortcuts → identity-protective interpretation → selective sharing → stronger prior belief → deeper distrust of expertise.

 

Not because people are incapable of thinking, but because thinking has become socially and cognitively expensive.

Confirmation Bias Has an Aesthetic

If anti-intellectualism has a look, confirmation bias has a style of "proof." Screenshots used as receipts — context removed, timestamp absent, source untraceable. Single data points presented as destiny — a chart without methods, a metric without denominators. Before/after visuals framed as causality when they're showing correlation at best. Compilation edits where quantity of clips implies inevitability. And certainty aesthetics — clean typography, emphatic captions, confident delivery — doing the work that evidence should be doing.


Visual literacy can identify these patterns. But confirmation bias decides whether we apply that literacy evenly — or only when it's convenient.


The Share Button Is Where Bias Becomes Infrastructure

One of the most practical findings in misinformation research is also the most humbling: people often share inaccurate content not because they consciously believe it, but because they're not thinking about accuracy in the moment of sharing. Interventions that simply prompt users to consider accuracy before sharing can measurably improve the quality of what gets amplified.


This matters because sharing is a commitment device. Once we publicly endorse a claim, it becomes part of our identity — and identity is sticky.

 

Terrain Lens: Reading Confirmation Bias

The moment something feels obviously true — try this before you share it:

1.  What is this asking me to feel before I think?

2.  What am I hoping this proves?

3.  If this were false, what would I expect to see instead?

4.  Am I applying the same standard to my side as to the other side?

5.  Is the visual doing evidence-work — or vibe-work?

6.  Is this inviting understanding — or offering permission to stop thinking?

7.  Am I sharing this because it's accurate — or because it's satisfying?

 

Redrawing the Map

Confirmation bias is not a moral failure. It's a navigational hazard. In a visually saturated environment, it quietly decides what counts as evidence, which experts feel trustworthy, and which doubts feel "reasonable." Left unchecked, it doesn't just distort our perception — it can harden into a posture where expertise itself becomes suspect, and method starts to feel like manipulation.


Visual literacy is the skill of reading the frame. But intellectual honesty is the willingness to ask: What if the frame I prefer isn't the truest one?


The terrain is not asking us to become perfect thinkers. It's asking us to become better navigators.


Where do you notice confirmation bias most in your own media diet — what topics, what sources, what visuals — and what would it look like to build one small accuracy pause into that routine?

 Todd Ashcraft 2026 
