The Commoditization of "Fake Knowledge Work"
The Thesis
"Fake knowledge workers" — those who pass along and organize information without adding original judgment — will be displaced by AI. As reasoning and action-taking capabilities grow, the "getting the job done" layer commoditizes.
What counts as "fake" knowledge work?
- Summarizing without synthesis
- Scheduling without strategy
- Formatting without editing
- Routing without deciding
- Researching without recommending
- Coordinating without leading
The common thread: transformation without addition. The information goes through you, but you don't change its trajectory.
The Opposing View: Maybe It's Not That Simple
Counter-argument 1: The Lump of Labor Fallacy
History shows new technology creates new work categories. ATMs didn't eliminate bank tellers; they made branches cheaper to run, so banks opened more branches and hired more tellers (for relationship work). Past automation waves (the industrial revolution, computerization) ultimately increased total employment.
Counter-argument 2: Judgment Is Everywhere
What looks like "just passing information" often includes micro-judgments invisible to outsiders: the admin who knows which emails deserve immediate attention, the analyst who knows which data points the VP actually cares about. Context and relationships are deeply embedded in the work.
Counter-argument 3: Preference for Humans
People may prefer human service for psychological and social reasons even when AI is cheaper: healthcare, education, hospitality. The human element isn't always about capability.
But This Time Might Be Different
Why AI displacement could be structurally different:
Speed of transition — Previous waves took decades. AI capabilities are compounding annually. Less time for labor market adjustment.
Breadth of impact — Past automation hit specific sectors (manufacturing, data entry). AI affects horizontal functions across all industries simultaneously.
Cognitive work at scale — First time the displacement targets the middle of the skill distribution, not just the bottom. White-collar, educated workers face displacement alongside service workers.
Complementarity vs. substitution — Past tech mostly augmented workers (spreadsheets made accountants more productive). AI can fully substitute for many tasks.
Possible Outcomes
Scenario 1: The Hollowed Middle
High-skill work (strategy, creativity, leadership) and physical-presence work (care, craft, service) survive. The middle — coordination, analysis, administration — hollows out. Leads to polarized labor market, wage divergence.
Scenario 2: The Supervision Economy
Fewer workers do more, with AI handling execution. But someone needs to supervise, validate, course-correct. Creates new "AI management" layer. Smaller workforce, higher-paid remaining jobs.
Scenario 3: The Abundance Dividend
If AI dramatically increases productivity, goods/services get cheaper. Basic needs cost less. Society can support more people doing non-market work: caregiving, community, creativity. Requires political/economic restructuring (UBI, shorter work weeks).
Scenario 4: New Work Categories Emerge (Historical Pattern)
Just as the internet created "social media manager" and "UX designer," AI creates new roles we can't yet name. The question is whether new jobs emerge fast enough to absorb displaced workers.
What "Displaced" Workers Could Do
Move up — From information-passing to judgment-adding. Become the reviewer, the strategist, the decision-maker.
Move sideways — To work requiring physical presence, human relationships, or real-world context that AI can't access.
Move into AI support — Training, evaluation, supervision of AI systems. Humans-in-the-loop roles.
Move into care/craft — Sectors where human presence is the value: healthcare, education, hospitality, artisanal goods.
Exit the traditional market — If abundance materializes, some people may not need traditional employment. Caregiving, community work, creative pursuits.
The Money Question
Will people make more or less?
- Winners: Those whose productivity is multiplied by AI (10x output with same effort)
- Losers: Those whose role is fully substituted
- Wild card: Whether productivity gains translate to higher wages or just higher profits
Historical pattern: productivity gains eventually raise living standards, but with significant lag and uneven distribution. The transition period is painful.
Key variable: Labor market power. If workers can capture productivity gains (through scarcity, skills, bargaining), wages rise. If substitution makes workers abundant, bargaining power evaporates.
Hypotheses to Consider: What's Actually Protected?
Initial framing: "fake knowledge work" (information-passing) gets replaced, "real judgment" survives.
A challenge to that framing: If judgment = pattern-matching on behavioral data, and AI has the data, then judgment also commoditizes. Maybe the protection isn't cognitive capability — it's data availability?
Hypothesis 1: Data availability as the filter
Categories that might be protected because AI can't learn the patterns:
| Category | Why it might be protected | Examples |
|---|---|---|
| Novel situations | No historical data to learn from | First-time decisions, frontier research, new market entry |
| Private contexts | Data legally/socially inaccessible | Therapy, legal, sensitive negotiations |
| Tacit/embodied | Judgment based on sensing, not logging | Relational work, craft, real-time reads |
| Fast-changing | Historical patterns decay quickly | Culture, markets, early-stage companies |
| Adversarial | Patterns get arbitraged away once known | Negotiation, competition, trading |
To validate: Are these categories actually resistant, or just slower to be learned?
Hypothesis 2: Goal-setters (not goal-achievers)
AI optimizes for given objectives. But who decides the objective?
- "Should we prioritize X or Y?"
- "What's the right framing for this problem?"
- "What do we actually want here?"
This is upstream of prediction. Resolving value conflicts, setting direction, deciding what matters. Possibly hardest to replace because there's no "correct" answer to learn.
To validate: Can AI eventually learn value preferences from enough decisions? Is goal-setting just slower pattern-matching?
Hypothesis 3: Taste
Taste is about what should be valued, not what is valued. Reasons it might resist automation:
- Generative, not predictive — creates new patterns, doesn't match existing ones
- Often contrarian — good taste sees value others miss; AI trained on majority preferences regresses to the mean
- Involves risk — committing to a position before validation
- Reflexive — what counts as "good taste" changes based on what others do
AI can learn what was valued. Taste decides what should be valued next. The difference between curation and creation.
To validate: Can AI develop "taste" through exposure to high-quality examples? Is taste just a form of pattern-matching we don't understand yet?
(See: Taste Is Conviction, Not Correctness)
Open Questions
- How long is the transition? 5 years? 20 years? Matters enormously for policy.
- Which new job categories will emerge? We can't predict, but history suggests they will.
- Will physical/service work become higher-status? Or will it remain low-wage with more competition?
- How much does "preference for humans" actually matter when AI is dramatically cheaper?
- What's the political response? UBI, retraining, work-sharing, or nothing?
- Is "judgment work" a transitional category? As behavioral data accumulates, does the protected space shrink?
- What work is structurally low-data? Not just "hard to automate now" but "hard to get data on ever"?
Devil's Advocate
Against the "massive displacement" view:
- We've been predicting technological unemployment since the Luddites. It has never materialized at scale.
- The labor market is adaptive. New needs emerge, humans find niches.
- AI capabilities are overhyped in the short term. Current agents still fail at many "simple" tasks.
- Organizational inertia is massive. Actual deployment of AI to replace workers will be slower than capability development.
Against the "don't worry" view:
- This time the capabilities are genuinely different. AI can now do cognitive work.
- Speed matters. Even if new jobs emerge, the transition could be devastating.
- The political economy is broken. Gains will concentrate unless actively redistributed.
Related
- Taste Is Conviction, Not Correctness — conviction as differentiator; mediocrity kills
- Agent.md as the Future of Software — the shift from procedures to goals
- Inference-Bridged Workflows — where LLM-powered tools create value
References
- Autor, D. — Work on labor market polarization and skill-biased technical change
- Brynjolfsson & McAfee — "The Second Machine Age" arguments
- Historical data: ATM/bank teller example from Bessen, "Learning by Doing"