Your AI Anxiety May Be More Dangerous Than AI Itself: Why Panic Destroys Judgment While Technology Evolves
A Counterintuitive Historical Truth
In 1930, British economist John Maynard Keynes penned a troubling prophecy in his essay "Economic Possibilities for our Grandchildren":
"We are being afflicted with a new disease... technological unemployment."
His meaning was straightforward: machines had become so capable that they would steal human jobs.
Doesn't this sound remarkably like the opening of every AI anxiety article you've scrolled past today?
Yet when Keynes made this statement, the global employed population stood at approximately 1 billion. Nearly a century later, after the world experienced wave after wave of "machines are stealing our jobs" panic (assembly lines, automation, computers, the internet), the global employed population has reached approximately 3.5 billion, according to International Labour Organization data: a 3.5-fold increase.
This reveals the core paradox this article explores: every major technological transformation triggers mass unemployment panic, yet total employment rises each time.
Is AI truly different this time? Perhaps. But before answering that question, I want to take you through how humanity has miscalculated this equation repeatedly over the past 200 years. Only by understanding the historical patterns of panic can you preserve your judgment amid today's information flood.
200 Years of "Crying Wolf": A Brief History of Technology Panic
The Machine-Smashing Workers: What Was the Luddite Movement? (1811-1816)
Let's turn the clock back to early 19th-century England.
Between 1811 and 1816, a massive movement erupted in central England—textile workers rushed into factories in groups, smashing new weaving machines. They called themselves "Luddites," named after a legendary worker leader, Ned Ludd. Today, the English word "Luddite" still describes those who resist new technology.
Why did these workers smash machines? Because new power looms allowed one worker to complete the work previously requiring five or six people. The workers' logic was simple: one machine replaces five people, so four-fifths of people must become unemployed. Using elementary mathematics, the reasoning seemed flawless.
Yet reality proved precisely the opposite.
According to economic historian Robert C. Allen's data in "The British Industrial Revolution in Global Perspective": during the Industrial Revolution, British cotton cloth prices dropped approximately 90-95%. What did this price collapse trigger? Demand explosion. British cotton consumption grew from approximately 2 million pounds in 1760 to 588 million pounds in 1850—a 300-fold increase.
This roughly 300-fold demand increase far exceeded the efficiency gains from machines. The result: the textile industry needed not fewer workers, but more.
Hidden within this is an economic pattern that recurs throughout history and is worth remembering. Let me break it down into three steps:
- Machines improve production efficiency → per-unit product cost declines
- Cost decline → price decline → people who previously couldn't afford it now can, demand surges
- Demand grows faster than efficiency improves → the industry ends up needing more workers, not fewer
Consider a restaurant introducing automatic cooking machines, reducing per-dish cost from 30 yuan to 10 yuan. If prices follow down, what previously sold 100 portions daily might now sell 1,000. While the back kitchen needs fewer chefs, you need more servers, purchasers, cleaners, food delivery workers—even possibly still chefs to handle dishes the automatic machines can't manage.
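The restaurant arithmetic above can be sketched as a toy calculation. All the figures (portions handled per worker, the tenfold demand jump) are hypothetical illustrations consistent with the example, not data from the article's sources:

```python
# Toy model of the restaurant example: efficiency up, price down, demand up.
# All numbers are illustrative, not real data.

def kitchen_staff_needed(daily_portions, portions_per_worker):
    """Workers needed to produce a given daily output (ceiling division)."""
    return -(-daily_portions // portions_per_worker)

# Before automation: 100 portions/day, each chef handles 25 portions.
before = kitchen_staff_needed(100, 25)   # 4 workers

# After automation: machines triple each worker's output (75 portions/worker),
# but the lower price grows demand tenfold to 1,000 portions/day.
after = kitchen_staff_needed(1000, 75)   # 14 workers

# Demand grew faster than efficiency, so total staffing rises.
assert after > before
```

The point of the sketch is the inequality on the last line: when the demand multiplier (10x) exceeds the efficiency multiplier (3x), headcount rises even though each worker produces more.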
Of course, some readers might ask: what if demand doesn't surge? What if efficiency improves but the market is already saturated? This is an excellent question—one we'll address later, as it represents precisely the key point of divergence in today's AI discussions. But first, let's complete the historical picture.
Why Didn't ATMs Eliminate Bank Tellers? (1970s to Present)
This case sits closer to us chronologically and proves more persuasive.
On June 27, 1967, the world's first Automated Teller Machine (ATM) was installed at Barclays Bank's Enfield branch in the United Kingdom. Its function was simple: automatic cash withdrawal. No queuing, no teller required.
The prophecy then mirrored the Luddites exactly: bank tellers were doomed.
What actually happened?
Boston University economist James Bessen studied this case in detail in his 2015 book "Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth." He discovered that from the 1970s, when ATMs began large-scale adoption, to 2010, the number of American bank tellers did not shrink; it grew from approximately 300,000 to 500,000-600,000.
Why? The logical chain unfolds as follows:
- ATMs replaced tellers' most basic work—cash deposits, withdrawals, and counting
- This meant the number of tellers each branch required fell from an average of roughly 20 to roughly 13
- Labor costs per location declined → opening a new branch became cheaper
- Banks therefore opened more branches to cover more communities
- More locations × each location still requiring several tellers = total teller count actually increased
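Bessen's logic can be checked with back-of-envelope numbers. The tellers-per-branch figures (about 20 falling to about 13) come from the chain above; the branch counts are purely illustrative assumptions, chosen only to show how the totals can land in the 300,000 to 500,000-600,000 range cited earlier:

```python
# Back-of-envelope version of the ATM story: fewer tellers per branch,
# but cheaper branches mean more branches. Tellers-per-branch figures
# follow the text (~20 -> ~13); branch counts are illustrative only.

def total_tellers(branches, tellers_per_branch):
    return branches * tellers_per_branch

# Before ATMs: a hypothetical 15,000 branches x 20 tellers each.
before = total_tellers(15_000, 20)   # 300,000

# After ATMs: each branch needs ~13 tellers, but lower per-branch labor
# cost lets banks open more branches (a hypothetical 43,000).
after = total_tellers(43_000, 13)    # 559,000

assert after > before  # total employment rises despite per-branch cuts
```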
Moreover, the content of tellers' jobs was qualitatively transformed. Previously, tellers spent 70% of their time counting cash and filling out forms; after ATM adoption, their core work became financial consulting, loan processing, and customer relationship maintenance: things ATMs couldn't do.
The key insight here: technology replaces not "people" but "tasks." A position comprises many tasks; machines take some over, making the remaining tasks all the more important and valuable.
Excel Didn't Eliminate Accountants: The Power of Demand Elasticity
Here's another example closer to your daily work.
In 1979, the world's first spreadsheet software VisiCalc was born. By 1985, Microsoft had launched Excel. Before this, a financial statement required accountants to spend hours calculating and checking manually. Spreadsheets transformed this process into minutes.
"Accountants are about to lose their jobs!"—that phrase again.
What actually occurred? Spreadsheets reduced financial analysis costs to extremely low levels. Previously, only large enterprises could afford professional accountants for financial analysis; now, even a small company with just 10 people needed (and could afford) it. The market suddenly opened wide. The total number of accountants increased rather than decreased, and their work content upgraded—from "getting numbers right" to "analyzing what the numbers mean."
Within this lies an economic concept called price elasticity of demand. Don't be intimidated by the terminology—it simply means: when something becomes cheaper, how much more will people buy? If a small price reduction causes people to buy much more, we say that thing has high demand elasticity. Cotton cloth, banking services, financial analysis—history proves these all have very high demand elasticity. When technology drives their prices down, market expansion far exceeds technology's replacement of individual positions.
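For readers who want the concept pinned down, here is a minimal numerical sketch of arc price elasticity (the standard textbook midpoint formula). The cotton-like and surgery-like numbers are invented for illustration:

```python
# Price elasticity of demand, sketched numerically.
# Elasticity = (% change in quantity) / (% change in price);
# |elasticity| > 1 means demand is elastic: a price cut expands the
# market faster than the price falls. All numbers are illustrative.

def price_elasticity(p0, p1, q0, q1):
    """Arc elasticity using midpoint percentage changes."""
    dq = (q1 - q0) / ((q0 + q1) / 2)
    dp = (p1 - p0) / ((p0 + p1) / 2)
    return dq / dp

# Elastic good (cotton-cloth-like): price halves, quantity grows tenfold.
elastic = price_elasticity(100, 50, 1, 10)

# Inelastic service (surgery-like): price halves, quantity grows only 10%.
inelastic = price_elasticity(100, 50, 100, 110)

assert abs(elastic) > 1    # market expands faster than price falls
assert abs(inelastic) < 1  # cheaper service barely creates new demand
```

The inelastic case previews the later discussion of demand saturation: when |elasticity| < 1, efficiency gains cut costs without expanding the market.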
Summary: Why Does Humanity Always Miscalculate This Equation?
If seeing patterns in history is so easy, why does every generation still make the same mistake?
MIT economics professor David Autor and his collaborators made a startling discovery in their paper "New Frontiers: The Origins and Content of New Work, 1940–2018":
Approximately 60% of existing American jobs in 2018 simply didn't exist in 1940.
Consider what this means. People in 1940 could never have imagined jobs like "social media manager," "UX designer," "data scientist," or "podcast producer"—because the industries these jobs depend on didn't yet exist.
This represents the fundamental bias in human prediction: we're very good at imagining "which existing positions will disappear"—that's a subtraction problem. But we're almost completely unable to imagine "which entirely new positions will emerge"—because that's a creation problem starting from zero.
Consider an analogy: you can accurately predict an old building will be demolished, but you're nearly incapable of predicting what will be built on that land five years later—perhaps a hospital, perhaps a theme park, perhaps a new type of space whose name you can't even pronounce today.
Therefore, when you read headlines like "AI Will Replace XX Million Jobs," remember: this represents at best half the answer. It only tells you the subtraction part, revealing nothing about the addition part—and history tells us addition often exceeds subtraction.
But This Time, Something Might Be Different—Changes in Speed and Target
At this point, I suspect some readers are thinking: you've talked about history all this time, but AI is different!
You have a point. Let's seriously examine what's different this time.
Speed: Unprecedented Adoption Rhythm
Let's arrange a set of numbers:
| Technology | Diffusion Milestone (different statistical methods) |
|---|---|
| Steam Engine | From invention to large-scale industrial adoption: approximately 80-100 years |
| Electricity | From invention to large-scale adoption: approximately 40-50 years |
| Personal Computer | From commercial use to household adoption: approximately 15-20 years |
| Internet | From commercial use to large-scale adoption: approximately 7-10 years |
| ChatGPT | Reaching 100 million registered users: approximately 2 months (per UBS/Reuters report) |
Note: Statistical methods vary across items (some measure industry adoption cycles, others measure user registration speed), provided only for sensing magnitude differences in technology diffusion speed.
Economic historian Paul A. David described a pattern in his classic 1990 paper "The Dynamo and the Computer": each generation of General Purpose Technology (a foundational technology, like the steam engine or electricity, that can penetrate nearly every industry) diffuses faster than the one before it. But AI's diffusion speed isn't just "a bit faster than the previous generation"; it's exponentially faster.
McKinsey's 2024 Global AI Survey shows: 72% of surveyed organizations have already adopted AI in at least one business area. This figure was only 55% in 2023—a 17 percentage point jump within one year.
Why does speed matter?
Recall the "subtraction and addition" discussed earlier. Historically, between "old jobs disappearing" and "new jobs emerging" during each technological transformation, there exists a time gap. In the steam engine era, this gap spanned decades—painful, yes, but a generation had sufficient time to adapt and transition. If AI's diffusion speed exceeds previous generations by an order of magnitude, this "transition period" may be dramatically compressed—new jobs will eventually emerge, but what happens to replaced people before they appear?
This isn't fearmongering; it's a serious policy and social question.
When Demand Doesn't Explode: The Other Side of Demand Saturation
The historical cases discussed earlier—cotton cloth, banking services, financial analysis—share a common characteristic: very high demand elasticity. When technology drives their prices down, the market explodes in response. But in reality, not all industries work this way.
When a market has approached saturation, or the service itself lacks price elasticity, efficiency improvements don't necessarily trigger a demand explosion. For instance, demand for high-end surgery depends primarily on patient numbers, not price levels: halving surgery fees won't double the number of people undergoing surgery. Certain professional legal service markets are also relatively fixed in total volume: cheaper lawyer fees won't make people sue over nothing. In these fields, if AI significantly improves each practitioner's efficiency while total market demand doesn't correspondingly expand, technological replacement may lead to a net reduction in jobs.
This precisely addresses the key question raised earlier: not all industries enjoy the high demand elasticity of cotton cloth and banking services. In fields where demand is relatively rigid, the virtuous cycle of "efficiency improvement → price decline → demand explosion → job growth" may never materialize. Recognizing this keeps us from mechanically applying historical patterns to every scenario.
White-Collar Workers Stand on the Front Lines for the First Time
This may represent the most essential difference between AI and past technological revolutions.
Over the past 200 years, automation's core objective has been replacing physical labor—spinning machines replacing hand spinning, assembly lines replacing manual assembly, automated factories replacing workshop workers. Blue-collar workers were the primary impact target of each technological revolution.
But this time, AI targets cognitive work—translation, programming, copywriting, legal document review, medical imaging diagnosis, financial analysis... work traditionally belonging to college-educated white-collar workers.
David Autor proposed a simultaneously unsettling and exciting viewpoint in his 2024 NBER working paper "Applying AI to Rebuild Middle Class Jobs" (Paper No. 32140):
AI potentially enables workers without elite training to execute expert-level tasks.
What does this mean? Imagine: contract review that previously required five years of law school training can now be 80% completed by a legal assistant with short-term training using AI tools. This threatens lawyers but empowers legal assistants.
Here lies a fascinating sociological phenomenon worth noting. Previously, when technology replaced blue-collar workers, the affected groups had relatively limited voice in public discourse. But this time, those affected by AI are journalists, programmers, designers, lawyers, university professors—people who happen to control microphones and keyboards. Therefore, the AI anxiety you feel partly stems from technology's inherent influence, but also partly from the affected groups' discourse power—their voice far exceeds that of 1811's textile workers.
This doesn't mean anxiety is fake or unworthy of serious attention. But it means: you need to distinguish two things—"AI will change employment structure" (this is fact) versus "you should immediately panic" (this is emotional reaction). The former deserves serious research and response; the latter, as I'll discuss next, may harm you more than AI itself.
The Real Cost of Anxiety—Panic Itself Is Harming You
Anxiety Economics: Who's Harvesting Your Panic?
Every technological panic spawns an industry: the industry selling panic antidotes.
According to multiple market research firms, the global AI education and training market in 2024 is roughly 5-10 billion USD, growing approximately 25-35% annually. It's a huge market, and it naturally contains quality educational resources. But it's also flooded with marketing pitches like:
"Don't learn AI, and you'll be eliminated within three years!"
"AI Era's New Iron Rice Bowl—Just 99 yuan / three-day crash course!"
These courses typically follow this playbook: use panic to drive your payment, teach you how to use a specific tool (like some AI writing assistant's operation process), then within months this tool updates and the specific operational skills you learned become worthless.
This resembles someone in 2000 selling courses teaching "how to make web pages with Dreamweaver"—the technology itself wasn't problematic, but what you paid to learn was the most easily outdated layer.
Rash Career Decisions: Lessons from the Internet Bubble
Let me tell a bigger story.
In the late 1990s, the internet wave swept everything. "The future belongs to the internet" sounded exactly like today's "the future belongs to AI." Large numbers of people made life decisions driven by this narrative—abandoning stable traditional industry jobs, rushing into internet companies, or spending large sums on training programs to learn programming.
In March 2000, the Nasdaq Composite Index (America's technology stock-heavy market index) peaked then plummeted, ultimately falling approximately 78%. Countless internet companies collapsed, large numbers of career-changers became unemployed, and their original stable jobs were gone forever.
The internet ultimately proved itself—of course. But those who made panic-driven decisions at wrong timing points bore real costs.
Today's situation contains a particularly ironic narrative reversal: just a few years ago, overwhelming voices said "everyone must learn programming." Now, equally overwhelming voices say "AI can write code, programmers are about to lose jobs." If you chase every wind direction, you'll forever be chasing, forever anxious, forever led by the nose by narratives.
Psychological Cost: Anxiety Itself Depletes You
This point is often overlooked, but may be most important.
Sustained technological anxiety depletes your cognitive resources—simply put, the energy you use for thinking, judging, and decision-making. When you spend one hour daily scrolling through AI anxiety articles, half an hour debating whether to enroll in courses, and the last fifteen minutes before sleep worrying about your career prospects, that time and energy could have been used for your current work, deep learning in your field, or accompanying your family.
Here lies a profound irony:
You're worried AI will affect your work, but anxiety itself has already affected your work.
AI hasn't replaced you yet, but panic has already weakened you. This isn't a rhetorical device—it's a psychological fact worth taking seriously.
Three Practical Principles—Maintaining Judgment Amid Uncertainty
After all this "don't panic," I don't want you thinking I'm saying "do nothing." Change is real, response is necessary. But the response method should be thoughtful, not panic-driven.
The following three principles, distilled from historical patterns, aren't quick fixes but thinking frameworks.
Principle One: Focus on "Capability Types" Rather Than "Specific Positions"
Have you noticed this career evolution chain?
Carriage Driver → Taxi Driver → Ride-Share Driver
Superficially, these are three completely different "positions." A carriage driver's skills involve harnessing and caring for horses; a taxi driver's skills involve driving cars and knowing routes; a ride-share driver's skills involve using navigation apps and managing online ratings. Specific skills differ completely, but the underlying capability type remains the same: efficiently and safely transporting passengers from point A to point B within a city while providing good service experience.
The same logic applies: an excellent translator's core capability isn't "knowing how to use a dictionary," but "understanding the cultural contexts behind two languages and converting between them accurately." AI translation tools replace the "dictionary lookup" task, but the ability to judge cultural context becomes even more valuable: the more AI translation proliferates, the scarcer people who can spot and correct AI's errors become.
Therefore, rather than asking "will my position be replaced by AI," ask: "What is the core capability type behind my position? Does this capability become more important or less important in the AI era?"
Principle Two: Do "Slow Variable" Things, Don't Chase "Fast Variables"
What's a fast variable? What's a slow variable?
- Fast variables: How to use a specific AI tool, syntax of a programming framework, a platform's algorithm rules. These change semi-annually, even quarterly.
- Slow variables: deep comprehension, judgment on complex problems, accumulated domain expertise, and the ability to collaborate and communicate with people. These take years, even decades, to build and won't be obsoleted overnight.
Consider an analogy: fast variables resemble waves on a river's surface—looking lively, each wave different. Slow variables resemble the riverbed's shape, determining water flow direction, changing only once every few decades.
Chasing waves on the river surface, you'll never catch up—because waves change faster than you. But if you understand the riverbed, you can predict water flow direction.
Specifically: spend three days learning some AI drawing tool, and that tool may be replaced by a better version within three months. But spend three years deeply understanding visual communication principles, color psychology, and user aesthetics, and that knowledge remains effective for the next decade; it is precisely the judgment that users of AI tools need most.
Principle Three: Allow Yourself to "Wait a Bit"
This principle sounds simplest but may be the hardest to follow in today's media environment.
Paul David made a famous observation in "The Dynamo and the Computer": electricity took nearly 40 years from invention to truly transforming factory production methods. The reason wasn't poor electricity technology, but that entire production processes, factory building designs, and worker skill systems needed reconstruction around the new technology.
AI operates similarly. ChatGPT has been out for only two-plus years. A technology's true social impact—exactly which jobs it will eliminate, which jobs it will create, which industries' work methods it will change—typically takes 3-5 years or even longer to see clearly.
Making panic-driven major life decisions within this time window (such as abandoning your deeply cultivated field, spending large savings on crash training courses) carries extremely high risk. This isn't because AI isn't important—AI is extremely important—but because the information you obtain today is insufficient to support correct long-term decisions.
Allowing yourself to say "I'm not certain yet, I'll observe a bit more" isn't laziness, isn't avoidance—it's the most rational strategy under conditions of insufficient information.
Conclusion: Returning to Keynes
Let's return to that 1930 prophecy from the beginning.
Keynes predicted "technological unemployment." Was he right? In a sense, yes—countless specific positions were indeed eliminated by technology over the past century. Typists, telephone operators, elevator operators, film developers... these professions have nearly completely disappeared today.
But inferring "human overall employment is doomed" from this would be gravely mistaken. While these positions disappeared, data analysts, software engineers, social media managers, UX designers, pet behavior consultants—professions unheard of decades ago—emerged.
I don't know what the "pet behavior consultant" of the AI era will be. No one does. That is precisely the point: if someone claims they know, you should be wary.
Therefore, this article's core message, I want to condense into one sentence:
Historically, every "machines are stealing our jobs" panic has ultimately proved exaggerated. But if panic drives you into wrong life decisions, the consequences for you personally are real.
ATMs didn't eliminate bank tellers. AI most likely won't eliminate you either.
But panic—unexamined, marketing-driven, independent-judgment-abandoning panic—it truly can harm you.
Remain curious, remain learning, remain calm. Then allow yourself to say:
"I'm not in a rush yet. Let me first do my current work well."
References
- Keynes, J. M. (1930). Economic Possibilities for our Grandchildren.
- Bessen, J. (2015). Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth. Yale University Press.
- Autor, D., Chin, C., & Salomons, A. (2022). New Frontiers: The Origins and Content of New Work, 1940–2018. NBER Working Paper No. 30389.
- Autor, D. (2024). Applying AI to Rebuild Middle Class Jobs. NBER Working Paper No. 32140.
- McKinsey & Company. (2024). The State of AI in Early 2024: Gen AI Adoption Spikes and Starts to Generate Value.
- Allen, R. C. (2009). The British Industrial Revolution in Global Perspective. Cambridge University Press.
- David, P. A. (1990). The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox. American Economic Review, 80(2), 355–361.
- ILO (International Labour Organization). World Employment and Social Outlook. Annual global employment data.