Same Promise, Different Machine
- TJ Ashcraft

AI Was Supposed to Give You Your Time Back. So Was Email. So Was the Smartphone. Notice the Pattern.
PART ONE OF THREE — THE ATTENTION TAX

The Guy Who Builds These Things
Francesco Bonacci builds AI agents for a living. His company makes them. He is, by any reasonable measure, exactly the kind of person AI tools were designed for: technically fluent, early adopter, professionally invested in making the technology work. He published an essay recently about what his days actually look like. Six worktrees open. Four half-written features. Two quick fixes that became rabbit holes. He ends every day exhausted, not from the work itself, but from managing the work. The agents are running. The output is accumulating. He is losing the plot entirely.
He called it vibe coding paralysis. A BCG research team studying 1,488 full-time workers published their findings in Harvard Business Review this past March and called it something else: AI brain fry, the mental fatigue that sets in when overseeing AI tools exceeds one’s cognitive capacity. Fourteen percent of AI-using workers in the study reported it. In marketing, 26 percent. Double digits across operations, engineering, finance, and IT.
Workers using AI heavily reported 14 percent more mental effort, 12 percent more mental fatigue, and 19 percent more information overload than their less AI-intensive peers. Productivity peaked at two or three AI tools used simultaneously, then dropped. Workers experiencing brain fry made 39 percent more major errors and reported 39 percent higher intent to quit than those who did not.
The tool designed to reduce cognitive load is generating a new category of cognitive load. The technology built to give workers time back is eating the time it was supposed to return. This is not a bug report. It is a pattern. And the pattern is older than AI.
You Have Heard This Before
Email arrived in the workplace in the 1990s with a clear value proposition. No more phone tag. No more waiting for the memo to circulate. Instant communication, asynchronous and documented. It would make organizations faster, leaner, more responsive. It would give knowledge workers control over their time.
What it delivered was a permanent expansion of the workday, a new ambient obligation to be reachable, and an inbox that became its own full-time job. The average knowledge worker now spends roughly 28 percent of their workweek managing email. The communication tool became the communication burden. The thing that was supposed to save time became the thing time gets spent on.
The smartphone arrived in the late 2000s with a different but structurally identical promise. The world in your pocket. Information anywhere, anytime. Freedom from the desk. What it delivered was the desk in your pocket. The boundary between work and not-work dissolved, not because workers chose to dissolve it but because the tool made the boundary impossible to enforce and the culture made enforcing it look like disengagement. Availability became the default. Disconnection became the exception that required justification.
Now AI. The pitch is cleaner and more ambitious than either predecessor. Hand off the grunt work entirely. Let the agents handle the repetitive cognitive tasks. Reclaim your focus for the meaningful, creative, high-judgment work that only humans can do. It is the most seductive version of the promise yet, because it is the most total. Not just faster communication. Not just portable access. Full delegation of the parts of work that drain you.
Every wave promises to give your attention back. Every wave finds a new way to consume it. At some point the pattern is the story, not the wave.
Why Delegation Became a Second Job
The BCG researchers offer a precise description of what goes wrong. Contrary to the promise of having more time to focus on meaningful work, juggling and multitasking can become the defining features of working with AI. The engineering manager in the study described it exactly: one tool helping weigh technical decisions, another generating drafts and summaries, bouncing between them, double-checking everything, moving no faster, brain just starting to feel cluttered. Not physically tired. Crowded. Working harder to manage the tools than to solve the problem.
This is the mechanism. Delegation requires oversight. Oversight requires attention. When the volume of delegation scales faster than the human capacity to oversee it, the net cognitive load does not decrease. It increases, with the added burden of monitoring something you are no longer fully doing yourself. You are not the worker anymore. You are the manager of workers who never sleep, never push back, and produce output at a pace that exceeds your ability to evaluate it.
The researchers compare it to handing someone who just learned to drive a Ferrari. The power is real. The infrastructure for using it safely does not exist yet. When cars transformed transportation, society built roads, signs, speed limits, and driver education. When email transformed office communication, norms developed slowly around response times and inbox management. No equivalent cognitive infrastructure exists for AI agents. Most organizations are rolling out adoption through top-down mandates and usage metrics: token consumption, lines of AI-generated code, number of agents deployed. They are measuring output. Nobody is measuring attention.
Meta includes lines of AI-generated code as a performance metric for engineers. The incentive structure rewards delegation and punishes scrutiny. The worker who slows down to evaluate what the agent produced is falling behind the worker who ships. Brain fry lives precisely in that gap, in the worker who refuses to let the output go unchecked and gets cognitively pummeled for the refusal.
The Other Failure Mode
Brain fry has a twin. The same BCG research group named it workslop: the hollow AI-generated memos, pitch decks, and presentations that flood inboxes when workers disengage and stop scrutinizing AI output. Where brain fry is cognitive overload, workslop is cognitive surrender. The worker stops caring. The output slips through unchecked. The meeting has an agenda nobody wrote and a summary nobody read and action items nobody remembers agreeing to.
Psychiatrist Gabriella Rosen Kellerman, a co-author of both the brain fry and the workslop studies, frames the distinction cleanly. Brain fry is the failure mode of engagement. Workslop is the failure mode of disengagement. Both trace to the same root: organizations deploying AI agents faster than they are equipping workers to use them sustainably.
The two failure modes together describe the full shape of the problem. You either hold on too tight and get exhausted, or you let go too completely and produce nothing real. The space between them, where AI use is sustainable, attentive, and genuinely productive, is narrow, ungoverned, and left entirely to the individual worker to find and maintain without support.
The organizations measuring token consumption are not measuring the right thing. The organizations measuring lines of AI-generated code are not measuring the right thing. Nobody has built the metric for the thing that actually matters, which is whether the human behind the tools is still thinking.
What This Moment Is Actually About
Brain fry is not the story. Brain fry is the latest symptom. The story is the promise, the one that gets made with every wave of technology that touches how we work, and the reliable gap between what the promise delivers and what it costs.
Email promised to free us from phone tag and gave us inbox anxiety. The smartphone promised to free us from the desk and gave us the desk everywhere. AI promises to free us from cognitive drudgery and is generating a new form of cognitive drudgery called managing the thing doing the cognitive drudgery. The baseline of what is expected keeps rising. The new surfaces to monitor keep multiplying. The anxiety about falling behind keeps compounding. The promise never quite arrives, but it is always close enough that the next version seems worth trying.
Anthropic co-founder Jack Clark said it plainly in a recent interview: there is no natural stopping point for this technology. It is going to keep getting better and the changes it brings are going to keep compounding with the rest of society. That is not a warning against adoption. It is a description of the environment. The environment does not have a pause button. The question is whether the people inside it are building with any awareness of what the pattern has done every time before.
The answer, so far, is no. Not meaningfully. Organizations that value work-life balance report 28 percent lower mental fatigue among their workers, according to the same BCG research. The simple organizational permission to step away is one of the most significant protective factors in the data. That finding should be unremarkable. It is instead a finding, which means the baseline assumption in most organizations is the opposite: AI adoption means more output, not the same output with more space around it.
The worker who gets brain fry is actually burning out less than peers who are less engaged with AI. They are over-engaged, not checked out. The intervention is not less AI. It is recovery time, clearer limits, workflows that treat human attention as a finite resource rather than a variable to be optimized upward. Brain fry is acute and recoverable. Step away and it dissipates. That reframe matters: this is not a verdict against AI. It is a design brief. The problem is solvable. The organizations solving it are the ones that decided the human behind the tool was worth engineering around.
This Is About You
You are reading this, which means you are probably using AI tools. You are probably using more of them than you were a year ago. You may have noticed something shifting in how your days feel: not the output, which may be higher, but the texture of the work itself. The managing. The checking. The sense that the loop of reviewing and approving and redirecting has quietly become the job, and the job has become harder to locate underneath it.
Ask the question the BCG study is really asking: what are you delegating, and what did delegation cost you? Not in hours. In attention. In the particular kind of thinking that happens when you are fully inside a problem rather than supervising the thing inside the problem. Is that thinking still happening? Do you know where in your day it lives? Or has the management of output crowded out the production of thought?
That question is not an argument against the tools. It is the argument for using them deliberately rather than at the pace the culture is setting. The culture is setting a pace calibrated to token consumption and lines of generated code. It is not calibrated to you.
The Terrain Lens — Applied
Pick one AI tool you use regularly. Not all of them. One.
Track how you use it for a single day. Not the output it produces. The attention it requires. Every time you check what it generated, every redirect you issue, every output you review and approve or reject, every moment you are managing it rather than doing the thing it is supposed to be doing for you. Add it up at the end of the day.
Then ask: is the attention this tool requires less than the attention the task would have required without it? Not the time. The attention. The quality of cognitive engagement per unit of work produced.
If the answer is yes, the tool is working. If the answer is unclear, or no, you have found your brain fry threshold. The technology did not tell you where it was. It was never going to. That is your job, and it has always been your job, with every tool every wave has delivered.
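If you want a concrete scaffold for the tally, here is a minimal sketch in Python. Everything in it is illustrative, not from the BCG study: the event categories and the comparison against your own unaided estimate are assumptions about how you might run the exercise, nothing more.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative categories of management attention, loosely mirroring
# the exercise: checks, redirects, and reviews of the tool's output.
EVENTS = ("check", "redirect", "review_approve", "review_reject")


@dataclass
class AttentionLog:
    """Tally every moment spent managing the tool rather than working."""
    entries: list = field(default_factory=list)

    def log(self, event: str, minutes: float) -> None:
        # Record one management interruption and roughly how long it took.
        if event not in EVENTS:
            raise ValueError(f"unknown event: {event}")
        self.entries.append((datetime.now(), event, minutes))

    def summary(self) -> dict:
        # Add it up at the end of the day, per category and in total.
        totals = {e: 0.0 for e in EVENTS}
        for _, event, minutes in self.entries:
            totals[event] += minutes
        totals["total"] = sum(totals[e] for e in EVENTS)
        return totals


def verdict(log: AttentionLog, unaided_estimate_minutes: float) -> str:
    # The end-of-day question: did managing the tool cost less attention
    # than your own estimate of doing the task without it?
    spent = log.summary()["total"]
    return "working" if spent < unaided_estimate_minutes else "at threshold"
```

The one honest input the sketch cannot supply is `unaided_estimate_minutes`: only you know what the task would have cost without the tool, and the comparison is only as good as that estimate.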
The promise was delegation. The reality is a negotiation. It has always been a negotiation. The workers who come out of this wave intact are the ones who figured that out before the tools figured them out.