How Do You Know?

  • Writer: TJ Ashcraft
  • Mar 8
  • 7 min read

PART III OF III  //  VIBES VS. VERIFICATION

Not Rhetorically. Actually. How Do You Know?



We have spent two posts building toward this question. In Part I, we established that critical thinking has become a content format — that the performance of intellectual rigor has been decoupled from the practice of it, and that most audiences cannot tell them apart. In Part II, we established that using one AI platform to check another is not verification — it is the same problem wearing a lab coat.


Both posts ended with a challenge. Stop performing. Start checking. The referee is on the payroll. Go outside the loop.


Fine. But here is the question neither post fully answered, and the question this entire series has been circling:


How do you know?

Not as a rhetorical flourish. Not as a closing provocation designed to make you feel appropriately humbled before you move on with your day. As an actual, operational question that you should be able to answer about any claim you currently hold, any belief you are prepared to defend, any piece of information you are about to share.


How. Do. You. Know.


Sit with it for a moment before you answer. Because the answer most people give — when they give one at all — is not actually an answer to the question being asked.

 

The Answers That Are Not Answers

Here is the answer most people give: I read it somewhere credible.


Which raises the question: how do you know it was credible?


I've seen multiple sources say the same thing.


Which raises the question: how do you know those sources were independent of each other, rather than all drawing from the same original claim?


It aligns with what experts say.


Which raises the question: how do you know those experts were right, and how do you know the summary of their position you encountered accurately represented what they said?


It feels true. It matches what I know about how the world works.


Now we are getting somewhere. Because that last answer — uncomfortable as it is — is closer to honest than the first three. Most of what most people know, they know because it coheres with a prior framework they already held. The new information slotted in without friction. It felt like recognition rather than learning. It felt true.


That feeling is not worthless. Pattern recognition built from genuine experience is real. But it is also the single most exploitable feature of human cognition, and the entire information economy — algorithmic, commercial, political, AI-driven — is built on top of it. The feed does not need to convince you of something false. It only needs to show you something that feels like confirmation of something you already believe. Your own cognition does the rest.


The most dangerous misinformation never feels like misinformation. It feels like finally someone said it.

 

What Knowing Actually Requires

There is a word for the study of how we know things. Epistemology. It is a word that sounds like it belongs in a philosophy seminar, which is exactly why most people have decided it does not belong in their lives. That decision is costing them.


Epistemology is not abstract. It is the most practical question you can ask in 2026. In an environment where content is infinite, sources are opaque, AI generates fluent prose at scale, platforms reward confidence over accuracy, and the tools designed to help you check are structurally inclined to confirm — in that environment, the question of how you know what you know is not philosophical. It is survival.


Knowing something — genuinely knowing it, in a way that is defensible and honest — requires being able to answer three questions. Not satisfyingly. Not comfortably. Actually.

The first question: What is the original source of this claim, and is it accountable to anything?


Accountability matters. A peer-reviewed study is accountable to a methodology, a review process, and a field that can challenge and replicate it. A news report from a publication with an editorial standard is accountable to editors, corrections policies, and professional reputation. A social post, an AI output, a viral screenshot — these are accountable to nothing except engagement. The difference is not about prestige. It is about whether anyone with skin in the game has vouched for the accuracy of what you are reading.


The second question: What evidence would change this claim, and has anyone looked for it?


A claim that cannot be falsified is not a knowledge claim. It is a vibe. The history of every field that has produced reliable knowledge — science, medicine, law, investigative journalism — is a history of people actively trying to break their own conclusions and revising when they found the break. A claim that has only ever been tested by people who wanted it to be true has not been tested. It has been confirmed. Those are not the same thing.


The third question: Am I motivated to believe this, and how much is that motivation shaping my evaluation of the evidence?


This is the question nobody wants to ask. It is also the question that makes the other two possible. Because if you are strongly motivated to believe something — if it confirms your politics, your identity, your existing investment in a position — your evaluation of the evidence will be shaped by that motivation whether you acknowledge it or not. The research on motivated reasoning is unambiguous: smart people are not better at evaluating evidence objectively. They are better at constructing sophisticated justifications for conclusions they already wanted to reach. Intelligence, in this context, is a tool for rationalization as often as it is a tool for reasoning.


Knowing you are biased does not make you unbiased. It makes you a biased person who knows it. The work is still the work.

 

The Inventory

Here is the uncomfortable exercise. Pick something you know. Something you are confident enough in to have shared, argued for, or built other beliefs on top of. Something that functions as a load-bearing wall in your understanding of how the world works.


Now answer the three questions.


What is the original source, and is it accountable to anything?

Not the place you encountered it. Not the person who told you. The original claim, in its original form, made by a person or institution that had something at stake if they were wrong.


For most things most people believe, this chain ends in fog. Not because the belief is necessarily false — but because the chain was never traced. The claim arrived pre-packaged, pre-validated by its apparent ubiquity, and was installed without inspection. It felt true. It cohered. You moved on.


What evidence would change this, and has anyone looked?

If you cannot name the evidence that would change your mind, you do not hold a belief. You hold a commitment. Those are different cognitive objects. A belief can be updated. A commitment can only be defended or abandoned. Most people, most of the time, are operating on commitments they have labeled as beliefs — which means they experience any challenge to those commitments not as useful information but as attack.


And if you can name the falsifying evidence — good. Now ask whether anyone has looked for it seriously, with real methodological rigor, and what they found. Not whether someone has looked and confirmed your position. Whether someone has looked hard for the disconfirmation and either found it or failed to find it.


How motivated am I to believe this?

Scale of one to ten. Be honest. A ten is a belief you would be genuinely devastated to lose — one tied to your identity, your community, your sense of how the world is ordered. Those beliefs deserve the most scrutiny, because they are the ones your cognition will work hardest to protect. A one is a belief you hold lightly, that you would revise without distress if the evidence required it. You probably have very few of those.


The distribution of most people's beliefs — heavily weighted toward the high end of that motivation scale, where scrutiny is most needed and least applied — is not an accident. It is the product of an information environment specifically engineered to sell you identity-reinforcing content, because identity-reinforcing content generates the strongest engagement and the most durable loyalty. You did not arrive at your most strongly held beliefs through a neutral process. Almost no one did.

 

No Exit

Here is where this post refuses to give you what the previous two at least gestured toward: a framework. A set of steps. A Terrain Lens that lets you apply the question systematically and arrive at something that feels like resolution.


There is no resolution here. That is the point.


The question — how do you know? — does not have a final answer. It has a practice. And the practice is not comfortable, not efficient, and not compatible with the pace at which the feed delivers content and expects you to respond to it. The feed needs you to know things quickly. Knowing things carefully is slower than the feed allows, which is why the feed has trained you to mistake fast for thorough, confident for accurate, and shared for verified.


The discomfort this question produces — when held honestly, without the release valve of a framework or a five-step process — is not a problem to be solved. It is the correct response to the actual situation you are in. You are operating in an information environment that is producing more content, with more confidence, with less accountability, than any environment in human history. The appropriate response to that environment is not a technique. It is a disposition.


The disposition is this: I do not know as much as I think I know, the things I am most confident about deserve the most scrutiny, and the feeling of certainty is not evidence of accuracy. It is a signal to slow down.


Certainty is not the destination. It is the warning light.


Part I asked you to stop performing critical thinking and start doing it.


Part II asked you to stop using the loop to check the loop.


Part III is asking something harder.


It is asking you to hold the question — the actual, uncomfortable, unanswered question — and let it do its work. Not to resolve it into a method. Not to satisfy it with a source. Not to quiet it with the feeling of having thought carefully enough.


To just stay in it. Uncomfortable. Unresolved. Looking.


That is what knowing actually requires.


How do you know?

© Todd Ashcraft 2026