Your AI Anxiety May Be More Dangerous Than AI Itself
A Counterintuitive Historical Fact
In 1930, British economist John Maynard Keynes wrote a disturbing prophecy in an article titled "Economic Possibilities for our Grandchildren":
"We are being afflicted with a new disease... technological unemployment."
What he meant by "technological unemployment" was straightforward: machines were becoming so capable that they would take jobs away from humans.
Doesn't this sound exactly like the opening of every AI anxiety article you scroll through today?
But when Keynes wrote this, the global employed population was only about 1 billion. Nearly a century later, the world has weathered round after round of "machines stealing jobs" panic: assembly lines, automation, computers, the internet. Today, the global employed population is approximately 3.5 billion (according to International Labour Organization data), roughly 3.5 times the original figure.
This is the core paradox this article explores: every major technological transformation triggers large-scale unemployment panic, yet total employment increases each time.
Is AI truly different this time? Perhaps. But before answering this question, I want to take you through how humanity has miscalculated this equation repeatedly over the past 200 years. Only by understanding the historical patterns of panic can you preserve your judgment amid today's information flood.
200 Years of "Crying Wolf": A Brief History of Technology Panic
The Machine-Smashing Workers: What Was the Luddite Movement? (1811-1816)
Let's turn the clock back to early 19th century England.
Between 1811 and 1816, a massive movement erupted in central England: textile workers stormed factories in groups and smashed the new weaving machines. They called themselves "Luddites," after a perhaps-legendary worker leader named Ned Ludd. To this day, the English word "Luddite" describes someone who resists new technology.
Why did these workers smash machines? Because the new power looms let one worker do the work that had previously taken five or six people. The workers' logic was simple: if one machine replaces five people, then four-fifths of the workforce must end up unemployed. By elementary-school arithmetic, the calculation looks flawless.
But the reality was precisely the opposite.
According to economic historian Robert C. Allen's data in "The British Industrial Revolution in Global Perspective," British cotton cloth prices fell by roughly 90-95% during the Industrial Revolution. What did this price plunge trigger? A demand explosion. Britain's cotton consumption grew from about 2 million pounds in 1760 to 588 million pounds in 1850, an increase of roughly 300-fold.
This roughly 300-fold growth in demand far outstripped the efficiency gains from the machines. The result: the textile industry needed not fewer workers, but more.
Hidden within this is an economic law that repeats again and again, worth remembering. Let me break it down into three steps:
- Machines improve production efficiency → per-unit product cost decreases
- Cost decreases → price decreases → people who couldn't afford it before can now buy it, demand surges
- The demand surge outpaces the efficiency gain → more workers are needed, not fewer
Think of it like a restaurant introducing automatic cooking machines, cutting each dish's cost from 30 yuan to 10 yuan. If prices fall accordingly, a restaurant that sold 100 portions a day might now sell 1,000. The back kitchen no longer needs as many chefs, but you need more waiters, purchasers, cleaners, and delivery riders, and you may well still need chefs for the dishes the machines can't handle.
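To make the arithmetic concrete, here is a minimal sketch using the restaurant's hypothetical numbers. Treating the cost drop as a pure productivity gain, and the tenfold demand jump itself, are illustrative assumptions, not data:

```python
# Toy model of the three-step law, using the hypothetical restaurant
# numbers from the text. All figures are illustrative assumptions.

cost_before, cost_after = 30, 10              # yuan per dish
portions_before, portions_after = 100, 1000   # assumed daily demand response

# Simplification: treat the cost drop as a pure productivity gain,
# so each unit of labor now produces 3x as much.
efficiency_gain = cost_before / cost_after      # 3.0

# Demand grows 10x once prices follow costs down (assumed).
demand_gain = portions_after / portions_before  # 10.0

# Labor needed scales with output divided by productivity.
relative_labor = demand_gain / efficiency_gain
print(f"Labor needed is now {relative_labor:.1f}x the original")  # ~3.3x: more jobs, not fewer
```

The whole question, as we'll see, is whether demand really does grow faster than efficiency.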
Of course, some readers might ask: what if demand doesn't surge? What if efficiency improves but the market is already saturated? That is an excellent question, and we'll address it later, because it is precisely one of the key points of divergence in today's AI debate. But first, let's finish our tour of history.
ATMs Didn't Eliminate Bank Tellers—Why? (1970s to Present)
This case is closer to us and more convincing.
On June 27, 1967, the world's first Automated Teller Machine (ATM) was installed at Barclays Bank's Enfield branch in the United Kingdom. Its function was simple: automatic cash withdrawal. No queuing, no tellers needed.
The predictions at the time were identical to the Luddites': bank tellers were doomed.
What was the result?
Boston University economist James Bessen studied this case in detail in his 2015 book "Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth." He found that from the 1970s, when ATMs began to spread at scale, through 2010, the number of bank tellers in the United States did not fall. It actually grew, from roughly 300,000 to somewhere between 500,000 and 600,000.
Why? The logical chain goes like this (a toy calculation follows the list):
- ATMs replaced tellers' most basic tasks—cash deposits/withdrawals and counting
- This meant the number of tellers each branch needed fell from an average of about 20 to about 13
- Labor costs per branch decreased → opening a new branch became cheaper
- Banks then opened more branches to cover more communities
- More branches × each branch still needing several tellers = total teller count actually increased
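Here is that chain as a minimal back-of-the-envelope sketch. Only the tellers-per-branch figures (20 falling to 13) come from the text; the branch counts are hypothetical numbers chosen to illustrate the mechanism:

```python
# Toy version of the ATM arithmetic. Branch counts are hypothetical;
# only the per-branch teller figures (20 -> 13) come from the text.

tellers_per_branch_before = 20
tellers_per_branch_after = 13   # ATMs absorb the routine cash-handling tasks

branches_before = 1_000                      # assumed starting point
branches_after = int(branches_before * 1.6)  # assumed: cheaper branches, so 60% more of them

print(f"Before: {tellers_per_branch_before * branches_before:,} tellers")  # 20,000
print(f"After:  {tellers_per_branch_after * branches_after:,} tellers")    # 20,800, a net increase
```

Whether the increase materializes depends entirely on how many new branches cheaper operation makes viable; the mechanism, not the numbers, is the point.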
Moreover, the teller's job itself was qualitatively transformed. Tellers had previously spent 70% of their time counting cash and filling out forms; after ATMs proliferated, their core work became financial consulting, loan processing, and customer relationship maintenance: things ATMs couldn't do.
The key insight here is: technology replaces not "people," but "tasks." A position contains many tasks; machines take some of them, and the remaining parts actually become more important and valuable.
Excel Didn't Eliminate Accountants—The Power of Demand Elasticity
Here's another example closer to your daily work.
In 1979, the world's first spreadsheet software, VisiCalc, was born. In 1985, Microsoft launched Excel. Before then, a financial report required accountants to spend hours calculating and checking by hand. Spreadsheets turned that process into minutes.
"Accountants are going to be unemployed!"—that phrase again.
What actually happened? Spreadsheets reduced the cost of financial analysis to extremely low levels. Originally, only large enterprises could afford professional accountants for financial analysis; now, even a small company with only 10 people needed (and could afford) it. The market suddenly opened up. The total number of accountants increased rather than decreased, and their work content upgraded—from "getting numbers right" to "analyzing the meaning behind the numbers."
Within this lies an economic concept called demand elasticity (price elasticity of demand). Don't be intimidated by the terminology; it describes one simple thing: when something gets cheaper, how much more of it do people buy? If a small price drop triggers a large jump in purchases, we say the thing has high demand elasticity. Cotton cloth, banking services, financial analysis: history shows all of these have very high demand elasticity. Once technology drove their prices down, market expansion far exceeded technology's substitution of individual positions.
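For readers who like formulas, the idea can be sketched with a constant-elasticity demand curve. This is a back-of-the-envelope model, not an empirical estimate; the elasticity and productivity values below are invented for illustration (the low-elasticity case anticipates the saturation discussion later in the article):

```python
# Back-of-the-envelope: how labor demand responds when technology cuts
# prices, under a constant-elasticity demand curve Q ~ P^(-elasticity).
# All parameter values are illustrative assumptions.

def employment_ratio(price_drop: float, elasticity: float, productivity_gain: float) -> float:
    """Ratio of labor needed after the technology to labor needed before.

    price_drop:        fractional price decrease (0.5 means the price halves)
    elasticity:        price elasticity of demand (absolute value)
    productivity_gain: output-per-worker multiplier (2.0 means each worker produces 2x)
    """
    quantity_ratio = (1 - price_drop) ** (-elasticity)  # how much more people buy
    return quantity_ratio / productivity_gain           # labor = output / productivity

# High elasticity (cotton cloth, spreadsheets): demand explodes, jobs grow.
print(employment_ratio(price_drop=0.5, elasticity=3.0, productivity_gain=2.0))  # 4.0 -> 4x the jobs

# Low elasticity (a saturated market): demand barely moves, jobs shrink.
print(employment_ratio(price_drop=0.5, elasticity=0.3, productivity_gain=2.0))  # ~0.62 -> fewer jobs
```

The crossover is exactly where the article's argument pivots: jobs grow only when the demand response outruns the productivity gain.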
Summary: Why Does Humanity Always Miscalculate This Equation?
If the pattern is so easy to see in hindsight, why does every generation still make the same mistake?
MIT economics professor David Autor and his collaborators made a startling discovery in their paper "New Frontiers: The Origins and Content of New Work, 1940–2018":
Approximately 60% of existing jobs in the United States in 2018 did not exist at all in 1940.
Think about what this means. People in 1940 could never have imagined jobs like "social media manager," "UX designer," "data scientist," or "podcast producer"—because the industries these jobs depend on didn't exist themselves.
This is the fundamental bias in human prediction: we are very good at imagining "which existing jobs will disappear"—that's a subtraction problem. But we are almost completely unable to imagine "which brand new jobs will emerge"—because that's a creation problem starting from zero.
To use an analogy: you can accurately predict an old building will be demolished, but you can almost never predict what will be built on that land five years later—perhaps a hospital, perhaps a theme park, perhaps a new type of space whose name you can't even pronounce today.
So when you read headlines like "AI Will Replace XX Million Jobs," please remember: at best, this is only half the answer. It only tells you the subtraction part, knowing nothing about the addition part—and history tells us addition is often larger than subtraction.
But This Time, There Might Be Something Different—Changes in Speed and Targets
By this point, I suspect some readers are thinking: you've talked about history long enough, but AI is different!
You have a point. Let's seriously examine what's different this time.
Speed: Unprecedented Adoption Pace
Let's line up a few numbers:
| Technology | Diffusion Milestone (differing statistical bases) |
|---|---|
| Steam Engine | From invention to large-scale industrial adoption: approximately 80-100 years |
| Electricity | From invention to large-scale proliferation: approximately 40-50 years |
| Personal Computer | From commercial use to home proliferation: approximately 15-20 years |
| Internet | From commercial use to large-scale proliferation: approximately 7-10 years |
| ChatGPT reaching 100 million registered users | Approximately 2 months (according to UBS/Reuters report) |
Note: These milestones are measured on different statistical bases (some track industry adoption cycles, others user registration speed); they are offered only to convey the order-of-magnitude differences in diffusion speed.
Economic historian Paul A. David described a pattern in his classic 1990 paper "The Dynamo and the Computer": each generation of General Purpose Technology (underlying technologies, like the steam engine and electricity, that can penetrate almost every industry) diffuses faster than the one before. But AI's diffusion speed is no longer a matter of "a bit faster than the previous generation."
McKinsey's 2024 global AI survey shows: 72% of surveyed organizations have already adopted AI in at least one business area. This figure was only 55% in 2023—a 17 percentage point jump within one year.
Why does speed matter?
Recall the "subtraction and addition" discussed earlier. Historically, between "old jobs disappearing" and "new jobs emerging" there's always a time gap. In the steam engine era, this gap was decades; although painful, a generation had enough time to adapt and transform. If AI's diffusion speed is an order of magnitude faster than before, then this "transition period" might be greatly compressed—new jobs will eventually emerge, but before they emerge, what about the replaced people?
This isn't peddling anxiety; this is a serious policy issue and social issue.
When Demand Doesn't Explode: The Other Side of Demand Saturation
The historical cases above (cotton cloth, banking services, financial analysis) share one characteristic: all of them have very high demand elasticity, so when technology drove prices down, the market exploded in response. But in reality, not every industry works this way.
When a market is approaching saturation, or the service itself lacks price elasticity, efficiency improvements don't necessarily trigger a demand explosion. For example, demand for high-end surgery depends mainly on how many people are sick, not on price: halving surgical fees doesn't mean twice as many people will have surgery. The total market for certain professional legal services is likewise relatively fixed: cheaper lawyers don't make people sue each other frivolously. In such fields, if AI significantly improves each practitioner's efficiency while total market demand doesn't expand correspondingly, technological substitution can produce a net reduction in jobs.
This is precisely the key question we set aside earlier: not every industry has the high demand elasticity of cotton cloth or banking services. In fields where demand is relatively rigid, the virtuous cycle of "efficiency up → prices down → demand explosion → more jobs" may simply not appear. Recognizing this keeps us from mechanically applying the historical pattern to every scenario.
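In the same toy terms as before, the surgery example looks like this. Every number here is invented for illustration; the only premise carried over from the text is that demand is fixed by illness rather than price:

```python
# The saturated-market case: quantity is fixed by need, not price.
# All numbers are hypothetical.

surgeries_per_year = 10_000   # fixed demand, driven by illness rather than fees

throughput_before = 100       # surgeries per surgeon per year (assumed)
throughput_after = 200        # assumed: AI assistance doubles each surgeon's throughput

print(surgeries_per_year // throughput_before)  # 100 surgeons needed before
print(surgeries_per_year // throughput_after)   # 50 needed after: a net reduction
```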
White-Collar Workers Stand on the Front Line for the First Time
This may be the most essential difference between AI and past technological revolutions.
Over the past 200 years, the primary target of automation has been physical labor: spinning machines replacing hand spinning, assembly lines replacing hand assembly, automated factories replacing workshop workers. Blue-collar workers bore the brunt of each technological revolution.
But this time, AI is targeting cognitive labor—translation, programming, copywriting, legal document review, medical image diagnosis, financial analysis... these traditionally belong to white-collar workers with higher education.
David Autor proposed a viewpoint that is both unsettling and exciting in his 2024 NBER working paper "Applying AI to Rebuild Middle Class Jobs" (No. 32140):
AI has the potential to enable workers without elite training to perform expert-level tasks.
What does this mean? Imagine: previously, only lawyers with five years of law school training could do contract review; now, a legal assistant with short-term training using AI tools can complete 80% of the work. This is a threat to lawyers, but empowerment for legal assistants.
Here's a sociological phenomenon worth noting. When technology replaced blue-collar workers in the past, the affected groups had relatively little voice in public discourse. This time, those affected by AI are journalists, programmers, designers, lawyers, and university professors: the very people who control the microphones and keyboards. So the AI anxiety you feel comes partly from the technology's own impact, but also partly from the discourse power of the affected groups. Their voice is far louder than that of the textile workers of 1811.
This doesn't mean the anxiety is fake or not worth taking seriously. But it means: you need to distinguish between two things—"AI will change employment structure" (this is fact) and "you should immediately panic" (this is emotional reaction).
The former deserves serious research and response; the latter, as I'm about to say, may hurt you more than AI itself.
The Real Cost of Anxiety—Panic Itself Is Hurting You
Anxiety Economics: Who Is Harvesting Your Panic?
Every technology panic spawns an industry: the industry selling panic antidotes.
According to estimates from multiple market research institutions, the global AI education and training market in 2024 is worth approximately 5-10 billion USD and is growing about 25-35% annually. This is a huge market, and it naturally contains quality educational resources. But it is also flooded with marketing pitches like:
"If you don't learn AI, you'll be eliminated within three years!"
"New iron rice bowl of the AI era—only 99 yuan/three-day crash course!"
The playbook of these courses is usually the same: use panic to get you to pay, teach you the click-by-click operation of one specific tool (say, a particular AI writing assistant), and then, when the tool updates a few months later, the operational skills you paid for become worthless.
This is like someone in 2000 selling courses on "how to make web pages with Dreamweaver": the technology itself was fine, but what you paid to learn was the layer that goes stale fastest.
Hasty Career Decisions: Lessons from the Internet Bubble
Let me tell a bigger story.
In the late 1990s, the internet wave swept everything. "The future belongs to the internet" was exactly like today's "the future belongs to AI." Large numbers of people made life decisions driven by this narrative—abandoning stable traditional industry jobs, rushing into internet companies, or spending large sums to attend training classes to learn programming.
In March 2000, the Nasdaq Composite Index (the US stock index dominated by technology stocks) peaked and then collapsed, ultimately falling approximately 78%. Countless internet companies folded, large numbers of career-changers lost their jobs, and the stable positions they had left behind were no longer there for them.
Of course, the internet ultimately proved itself. But the people who made panic-driven decisions at the wrong moment bore real costs.
Today's situation features a particularly ironic reversal of narrative: just a few years ago, the overwhelming chorus said "everyone needs to learn programming." Now the same chorus says "AI can write code; programmers are about to be unemployed." If you chase every shift in the wind, you will be forever chasing, forever anxious, forever led by the nose by narratives.
Psychological Cost: Anxiety Itself Is Draining You
This point is often overlooked, but may be the most important.
Continuous technology anxiety drains your cognitive resources: simply put, the energy you use to think, judge, and decide. The hour a day you spend scrolling AI anxiety articles, the half hour agonizing over whether to sign up for a course, the last fifteen minutes before sleep spent worrying about your career: all that time and energy could have gone into doing your current work, learning your field deeply, or being with your family.
Within this lies a profound irony:
You're anxious about AI affecting your work, but anxiety itself has already affected your work.
AI hasn't replaced you yet, but panic has already weakened you. This isn't a rhetorical device—this is a psychological fact worth taking seriously.
Three Practical Principles—How to Maintain Judgment Amid Uncertainty
After so much "don't panic," I don't want you to conclude that I'm advising you to do nothing. The change is real, and responding to it is necessary. But the response should be thoughtful, not panic-driven.
The following three principles are distilled from historical patterns, not quick-fix prescriptions, but thinking frameworks.
Principle One: Focus on "Capability Types" Rather Than "Specific Positions"
Have you noticed this career evolution chain?
Carriage driver → Taxi driver → Ride-hailing driver
Superficially, these are three completely different "positions." A carriage driver's skills are harnessing and caring for horses; a taxi driver's are driving and knowing the streets; a ride-hailing driver's are using navigation apps and managing online ratings. The specific skills differ completely, but the underlying capability type is the same: moving passengers efficiently and safely from point A to point B in the city while providing a good service experience.
The same logic applies: an excellent translator's core capability isn't "knowing how to use dictionaries," but "understanding cultural contexts behind two languages and performing accurate conversion." AI translation tools replace the "using dictionaries" task, but the "cultural context judgment" capability actually becomes more valuable—because the more prevalent AI translation becomes, the scarcer people who can discover and correct AI errors become.
So, instead of asking "will my position be replaced by AI," better ask: "What is the core capability type behind my position? Does this capability become more important in the AI era, or less important?"
Principle Two: Do "Slow Variable" Things, Don't Chase "Fast Variables"
What are fast variables? What are slow variables?
- Fast variables: how to use a particular AI tool, the syntax of a particular programming framework, the algorithm rules of a particular platform. These change every six months, sometimes every three.
- Slow variables: deep understanding, judgment on complex problems, accumulated professional knowledge in a specific field, the ability to collaborate and communicate with people. These take years or even decades to build, and they are not eliminated overnight.
To use an analogy: fast variables are like the waves on a river's surface, lively to watch, every crest different. Slow variables are like the shape of the riverbed, which determines where the water flows and changes only once in decades.
Chase the waves and you'll never catch up, because they change faster than you do. But if you understand the riverbed, you can predict where the water will flow.
Concretely: spend three days learning a particular AI drawing tool, and three months later that tool may be displaced by a better one. But spend three years deeply understanding the principles of visual communication, color psychology, and user aesthetics, and that knowledge stays useful for the next decade. It is also precisely the judgment that users of AI tools need most.
Principle Three: Allow Yourself to "Wait a Bit"
This principle sounds simplest, but may be hardest to do in today's public opinion environment.
Paul David made a famous observation in "The Dynamo and the Computer": it took nearly 40 years from the invention of electricity to a real transformation of factory production. The reason wasn't that the technology was deficient, but that entire production processes, factory architecture, and worker skill systems all had to be rebuilt around it.
AI is the same. ChatGPT has been out for only a little over two years. A technology's true social impact (exactly which jobs it eliminates, which jobs it creates, which industries' working methods it changes) usually takes 3-5 years or longer to become clear.
Within this time window, making panic-driven major life decisions (such as abandoning a field you've cultivated for years, or spending large savings on crash training courses) carries extremely high risk. Not because AI isn't important (it is very important), but because the information available to you today is insufficient to support correct long-term decisions.
Allowing yourself to say "I'm not sure yet, let me observe a bit more" isn't laziness, isn't escape—this is the most rational strategy under conditions of insufficient information.
Conclusion: Returning to Keynes
Let's return to that 1930 prophecy from the beginning.
Keynes predicted "technological unemployment." Was he right? In a sense, yes—over the past century, countless specific positions have indeed been eliminated by technology. Typists, telephone operators, elevator operators, film developers... these professions have almost completely disappeared today.
But inferring from this that "overall human employment is doomed" would be completely wrong. Even as those positions disappeared, data analysts, software engineers, social media managers, UX designers, pet behavior consultants (professions unheard of decades ago) were emerging.
I don't know what the "pet behavior consultant" of the AI era will be. No one knows. This is precisely the point—if someone claims they know, you should remain vigilant.
So the core of this article, I want to condense into one sentence:
Historically, every "machines stealing jobs" panic has ultimately proven exaggerated. But if panic drives you into wrong life decisions, the consequences for you personally are very real.
ATMs didn't eliminate bank tellers. AI most likely won't eliminate you either.
But unexamined panic, the kind driven by marketing pitches and the surrender of independent judgment, really can hurt you.
Stay curious, keep learning, remain calm. Then allow yourself to say:
"I'm not in a hurry yet. Let me first do my current work well."