Your AI Anxiety May Be More Dangerous Than AI Itself: ATMs Didn't Eliminate Bank Tellers, But Panic Eliminated Your Judgment
A Counterintuitive Historical Fact
In 1930, British economist John Maynard Keynes wrote an unsettling prophecy in his essay "Economic Possibilities for our Grandchildren":
"We are being afflicted with a new disease... technological unemployment."
His meaning was straightforward: machines were becoming so capable that they would take over human jobs.
Doesn't this sound eerily familiar? It echoes the opening of nearly every AI anxiety article you've scrolled through today.
But here's the remarkable fact: when Keynes made this prediction, the global employed population was approximately 1 billion. Nearly a century later, after experiencing wave after wave of "machines stealing jobs" panic—assembly lines, automation, computers, the internet—the global employed population today stands at approximately 3.5 billion (according to International Labour Organization data). That's a 3.5-fold increase.
This reveals the core paradox this article explores: every major technological transformation triggers mass unemployment panic, yet total employment increases each time.
Is AI truly different this time? Perhaps. But before answering that question, let's examine how humanity has miscalculated this equation repeatedly over the past 200 years. Only by understanding the historical patterns of panic can you preserve your judgment amid today's information flood.
200 Years of "Crying Wolf": A Brief History of Technology Panic
The Machine-Smashing Workers: What Was the Luddite Movement? (1811-1816)
Let's turn the clock back to early 19th-century England.
Between 1811 and 1816, a massive movement erupted across the English Midlands and North: textile workers stormed factories in groups, smashing the new weaving looms. They called themselves "Luddites," after a legendary worker leader, Ned Ludd. Today, the English word "Luddite" still describes those who resist new technology.
Why did these workers smash machines? Because new power looms allowed one worker to complete the workload of five or six people. The workers' logic was simple: one machine replaces five people, so four-fifths of people must lose their jobs. Elementary school math—flawless.
But reality proved precisely the opposite.
According to economic historian Robert C. Allen's data in "The British Industrial Revolution in Global Perspective": during the Industrial Revolution, British cotton cloth prices fell approximately 90-95%. What did this price collapse trigger? An explosion in demand. British cotton consumption grew from approximately 2 million pounds in 1760 to 588 million pounds in 1850—a 300-fold increase.
A 300-fold increase in demand far outstripped the efficiency gains from the machines. The result: the textile industry needed not fewer workers, but more.
Hidden in this story is an economic pattern that recurs throughout history and is worth remembering. It breaks down into three steps:
- Machines increase production efficiency → unit product costs decline
- Costs decline → prices fall → people who previously couldn't afford it now can, demand explodes
- Demand grows by more than efficiency improves → the industry needs more workers, not fewer
Think of it like a restaurant introducing automatic cooking machines, reducing per-dish costs from 30 yuan to 10 yuan. If prices follow down, what once sold 100 portions daily might now sell 1,000. Though the kitchen needs fewer chefs, you need more waiters, purchasers, cleaners, delivery drivers—and possibly still need chefs to handle dishes the automatic machines can't manage.
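The restaurant arithmetic can be sketched in a few lines of Python. All figures come from the analogy above except the per-cook productivity numbers, which are my own assumptions for illustration:

```python
import math

def workers_needed(demand_units, units_per_worker):
    """Staff required to serve a given daily demand at a given productivity."""
    return math.ceil(demand_units / units_per_worker)

# Before the cooking machine: 100 dishes/day, one cook handles 20 dishes
# (the 20-dishes-per-cook figure is an assumption for illustration)
cooks_before = workers_needed(100, 20)    # 5 cooks

# After: the machine triples each cook's output (assumed), while the
# price drop pushes demand to 1,000 dishes/day
cooks_after = workers_needed(1000, 60)    # 17 cooks

# A 10x demand explosion outpaces a 3x efficiency gain
assert cooks_after > cooks_before
```

The whole historical pattern hinges on that last comparison: when demand growth outpaces the efficiency gain, headcount rises even as each worker produces more.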
Some readers might ask: what if demand doesn't explode? What if efficiency improves but the market is already saturated? Excellent question; we'll address it shortly, as it is precisely one of the key points of disagreement in today's AI debate. But first, let's complete the historical tour.
Why Didn't ATMs Eliminate Bank Tellers? (1970s-Present)
This case is closer to us and more persuasive.
On June 27, 1967, the world's first Automatic Teller Machine (ATM) was installed at Barclays Bank's Enfield branch in the United Kingdom. Its function was simple: automatic cash withdrawal. No queuing, no tellers needed.
The predictions then mirrored the Luddites: bank tellers are finished.
What actually happened?
Boston University economist James Bessen studied this case in detail in his 2015 book "Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth." He found that from the 1970s, when ATMs began large-scale adoption, to 2010, the number of bank tellers in the United States did not shrink; it grew from roughly 300,000 to 500,000-600,000.
Why? Here's the logical chain:
- ATMs replaced tellers' most basic work—cash deposits, withdrawals, and counting
- Each branch therefore needed fewer tellers: the average fell from about 20 to about 13
- Labor costs per location declined → opening a new branch became cheaper
- Banks opened more branches to cover more communities
- More locations × each location still needing several tellers = total teller count actually increased
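The five-step chain above reduces to simple multiplication. Here is a stylized sketch: the per-branch teller figures are the article's, but the branch counts are invented purely to show the mechanism, not Bessen's actual data:

```python
# Stylized ATM arithmetic: fewer tellers per branch, but cheaper
# branches mean more branches, so total tellers can still rise.
# Branch counts below are illustrative assumptions.

tellers_per_branch_before = 20
tellers_per_branch_after = 13     # ATMs absorb the cash-handling tasks

branches_before = 15_000
branches_after = 30_000           # cheaper branches -> banks open more

total_before = tellers_per_branch_before * branches_before   # 300,000
total_after = tellers_per_branch_after * branches_after      # 390,000

assert total_after > total_before
```

The substitution effect (20 → 13 per branch) is real, yet the expansion effect (more branches) dominates whenever branch count grows faster than per-branch staffing shrinks.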
Moreover, tellers' job content underwent qualitative transformation. Previously, tellers spent 70% of their time counting cash and filling forms; after ATM adoption, their core work became financial consulting, loan processing, and customer relationship maintenance—things ATMs couldn't do.
The key insight here: technology replaces not "people," but "tasks." A position contains many tasks; machines take some, making the remaining parts more important and valuable.
Excel Didn't Eliminate Accountants—The Power of Demand Elasticity
Here's an example closer to your daily work.
In 1979, the world's first spreadsheet software VisiCalc was born. By 1985, Microsoft had launched Excel. Before this, accountants spent hours manually calculating and checking financial statements. Spreadsheets turned this process into minutes.
"Accountants are going to lose their jobs!"—that phrase again.
What actually happened? Spreadsheets reduced the cost of financial analysis to extremely low levels. Previously, only large enterprises could afford professional accountants for financial analysis; now, even small companies with just 10 employees needed (and could afford) it. The market suddenly opened. The total number of accountants increased rather than decreased, and their work content upgraded—from "getting the numbers right" to "analyzing what the numbers mean."
Hidden here is an economic concept called price elasticity of demand. Don't be intimidated by the terminology; it describes one simple thing: when something becomes cheaper, how much more will people buy? If it becomes slightly cheaper and people buy much more, we say that thing has high demand elasticity. Cotton cloth, banking services, financial analysis: history shows these all have very high demand elasticity. When technology drives their prices down, market expansion far exceeds technology's substitution of individual positions.
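For readers who want the textbook version, the standard arc (midpoint) formula for elasticity can be written out directly. The numbers below are invented to mimic a highly elastic good like cotton cloth:

```python
def price_elasticity(p0, p1, q0, q1):
    """Arc (midpoint) elasticity: % change in quantity / % change in price."""
    dq = (q1 - q0) / ((q0 + q1) / 2)
    dp = (p1 - p0) / ((p0 + p1) / 2)
    return dq / dp

# Hypothetical elastic good: price falls by half, quantity demanded
# jumps fivefold (illustrative figures, not historical data)
e = price_elasticity(100, 50, 1000, 5000)

# |e| > 1: the market expands faster than the price falls, so total
# spending (and typically total employment) grows as prices drop.
# |e| < 1 would mean the market barely expands when prices fall.
assert abs(e) > 1
```

This is the quantitative dividing line the article keeps returning to: the virtuous cycle only runs when the elasticity magnitude exceeds one.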
Summary: Why Does Humanity Always Miscalculate This Equation?
If hindsight makes these patterns so easy to see, why does every generation still make the same mistake?
MIT economics professor David Autor and his collaborators made a startling discovery in their paper "New Frontiers: The Origins and Content of New Work, 1940–2018":
Approximately 60% of existing jobs in 2018 didn't exist at all in 1940.
Think about what this means. People in 1940 couldn't possibly imagine jobs like "social media manager," "UX designer," "data scientist," or "podcast producer"—because the industries these jobs depend on didn't exist themselves.
This is the fundamental bias in human prediction: we're very good at imagining "which existing jobs will disappear"—that's a subtraction problem. But we're almost completely unable to imagine "which entirely new jobs will emerge"—because that's a creation problem starting from zero.
To use an analogy: you can accurately predict an old building will be demolished, but you can hardly predict what will be built on that land five years later—perhaps a hospital, perhaps a theme park, perhaps a new type of space whose name you can't even pronounce today.
So when you read headlines like "AI Will Replace XX Million Jobs," remember: at best, this is only half the answer. It only tells you the subtraction part, knowing nothing about the addition part—and history tells us addition often exceeds subtraction.
But This Time, Something May Be Different—Changes in Speed and Target
By this point, I suspect some readers are thinking: that's all very well as history, but AI is different!
You have a point. Let's seriously examine what's different this time.
Speed: Unprecedented Adoption Rhythm
Let's arrange a set of numbers:
| Technology | Diffusion Milestone |
|---|---|
| Steam Engine | ~80-100 years from invention to large-scale industrial adoption |
| Electricity | ~40-50 years from invention to large-scale adoption |
| Personal Computer | ~15-20 years from commercial use to household adoption |
| Internet | ~7-10 years from commercial use to large-scale adoption |
| ChatGPT | ~2 months to 100 million registered users (per UBS/Reuters report) |
Note: Statistical methodologies vary across these items (some measure industry adoption cycles, others measure user registration speed), provided only to sense the magnitude of differences in technology diffusion speed.
Economic historian Paul A. David described a pattern in his classic 1990 paper "The Dynamo and the Computer": each generation of General Purpose Technology (an underlying technology, like the steam engine or electricity, that can penetrate nearly every industry) diffuses faster than the one before it. But AI's diffusion is no longer just "a bit faster than the previous generation"; it is dramatically faster.
McKinsey's 2024 global AI survey shows: 72% of respondent organizations have adopted AI in at least one business area. This figure was only 55% in 2023—a 17 percentage point jump in one year.
Why does speed matter?
Recall the "subtraction and addition" discussed earlier. Historically, between "old jobs disappearing" and "new jobs emerging" there's always a time gap. In the steam engine era, this gap was decades—painful, but a generation had enough time to adapt and transition. If AI's diffusion speed is an order of magnitude faster than before, this "transition period" may be drastically compressed—new jobs will eventually emerge, but what about people replaced before they appear?
This isn't fear-mongering; it's a serious policy and social issue.
When Demand Doesn't Explode: The Other Side of Demand Saturation
The historical cases mentioned earlier—cotton cloth, banking services, financial analysis—share a common characteristic: their demand elasticity is very high, and when technology drives prices down, market scale explodes accordingly. But not all industries are like this in reality.
When a market has trended toward saturation, or the service itself lacks price elasticity, efficiency improvements don't necessarily trigger demand explosions. For example, high-end medical surgery demand mainly depends on patient numbers, not price levels—surgery fees halving won't make twice as many people get surgery. Certain professional legal service market totals are also relatively fixed: lawyer fees becoming cheaper won't make people sue over nothing. In these fields, if AI significantly improves each practitioner's work efficiency while total market demand doesn't expand correspondingly, technological substitution may lead to net position reductions in that field.
This is precisely the key question promised earlier: not all industries have high demand elasticity like cotton cloth and banking services. For those fields with relatively rigid demand, the virtuous cycle of "efficiency improvement → price decline → demand explosion → job increase" may not appear. Recognizing this avoids simply applying historical patterns to all scenarios.
White-Collar Workers on the Front Line for the First Time
This may be the most essential difference between AI and past technological revolutions.
Over the past 200 years, automation's core goal has been replacing physical labor—weaving machines replacing hand spinning, assembly lines replacing manual assembly, automated factories replacing workshop workers. Blue-collar workers were the primary impact target of each technological revolution.
But this time, AI is targeting cognitive work—translation, programming, copywriting, legal document review, medical imaging diagnosis, financial analysis... these traditionally white-collar jobs requiring higher education.
David Autor made a point that is both unsettling and exciting in his 2024 NBER working paper "Applying AI to Rebuild Middle Class Jobs" (No. 32140):
AI could enable workers without elite training to perform expert-level tasks.
What does this mean? Imagine: contract review that previously required lawyers with five years of law school training can now be 80% completed by a legal assistant with short-term training using AI tools. This is a threat to lawyers but empowerment for legal assistants.
A fascinating sociological phenomenon deserves attention here. Previously, when technology replaced blue-collar workers, the affected groups had relatively limited voice in public discourse. But this time, those affected by AI are journalists, programmers, designers, lawyers, university professors—these people happen to control the microphone and keyboard. So the AI anxiety you feel partly comes from the technology's own influence, but also partly from the affected groups' discourse power—their voice is much louder than the textile workers of 1811.
This doesn't mean the anxiety is fake or not worth taking seriously. But it means: you need to distinguish two things—"AI will change employment structure" (this is fact) and "you should immediately panic" (this is emotional reaction). The former deserves serious research and response; the latter, as I'll say next, may harm you more than AI itself.
The Real Cost of Anxiety—Panic Itself Is Hurting You
Anxiety Economics: Who Is Harvesting Your Panic?
Every technological panic spawns an industry: the industry selling panic antidotes.
According to multiple market research institutions, the 2024 global AI education and training market scale is approximately 5-10 billion USD, growing approximately 25-35% annually. This is a huge market, naturally containing quality educational resources. But it's also filled with marketing pitches like:
"Don't learn AI, you'll be eliminated within three years!"
"New iron rice bowl in the AI era—just 99 yuan / three-day crash course!"
The playbook for these courses is usually the same: use panic to drive the purchase, teach you how to operate one specific tool (say, an AI writing assistant's workflow), and then, when that tool updates and iterates a few months later, the specific operational skills you paid for become worthless.
This is like someone in 2000 selling courses teaching "how to make web pages with Dreamweaver"—the technology itself wasn't problematic, but what you paid to learn was the layer most prone to obsolescence.
Rash Career Decisions: Lessons from the Internet Bubble
Let me tell a bigger story.
In the late 1990s, the internet wave swept everything. "The future belongs to the internet" sounded exactly like today's "the future belongs to AI." Large numbers of people made life decisions driven by this narrative—abandoning stable traditional industry jobs, rushing into internet companies, or spending large sums on training to learn programming.
In March 2000, the Nasdaq Composite Index (U.S. stock market index mainly composed of technology stocks) peaked and plummeted, ultimately falling approximately 78%. Countless internet companies collapsed, large numbers of career-changers became unemployed, and their original stable jobs were gone forever.
The internet ultimately proved itself of course—but those who made panic-driven decisions at the wrong timing bore real costs.
Today's situation has a particularly ironic narrative reversal: just a few years ago, the overwhelming message was "everyone must learn programming." Now, the equally overwhelming message is "AI can write code, programmers are going to lose jobs." If you chase every wind direction, you'll forever be chasing, forever anxious, forever being led by the narrative.
Psychological Costs: Anxiety Itself Is Draining You
This point is often overlooked but may be most important.
Continuous technology anxiety drains your cognitive resources—simply put, the energy you use for thinking, judging, and making decisions. When you spend an hour daily scrolling through AI anxiety articles, half an hour agonizing over whether to enroll in courses, and the last fifteen minutes before sleep worrying about your career prospects, that time and energy could have been used doing your current work, deeply learning your field, or accompanying your family.
A profound irony hides within this:
You worry AI will affect your work, but anxiety itself has already affected your work.
AI hasn't replaced you yet, but panic has already weakened you. This isn't a rhetorical device—it's a psychological fact worth taking seriously.
Three Practical Principles—Maintaining Judgment Amid Uncertainty
After saying so much about "don't panic," I don't want you to think I'm saying "do nothing." Change is real, response is necessary. But the response method should be thoughtful, not panic-driven.
The following three principles are distilled from historical patterns—not quick fixes, but thinking frameworks.
Principle 1: Focus on "Capability Types" Rather Than "Specific Positions"
Have you noticed this career evolution chain?
Carriage driver → Taxi driver → Ride-share driver
Superficially, these are three completely different "positions." Carriage drivers' skills were harnessing and caring for horses; taxi drivers' skills were driving cars and knowing routes; ride-share drivers' skills are using navigation apps and managing online ratings. Specific skills are completely different, but the underlying capability type is the same: efficiently and safely transporting passengers from point A to point B in a city, providing good service experiences.
The same logic applies: an excellent translator's core capability isn't "knowing how to use dictionaries," but "understanding cultural contexts behind two languages and performing accurate conversion." AI translation tools replace the "dictionary lookup" task, but the "cultural context judgment" capability becomes more valuable—because the more AI translation proliferates, the scarcer those who can discover and correct AI errors become.
So, rather than asking "will my position be replaced by AI," ask: "What is the core capability type behind my position? Does this capability become more important or less important in the AI era?"
Principle 2: Do "Slow Variable" Things, Don't Chase "Fast Variables"
What's a fast variable? What's a slow variable?
- Fast variables: How to use some specific AI tool, syntax of some programming framework, algorithm rules of some platform. These change every half year, even every three months.
- Slow variables: Deep understanding, complex problem judgment, specific domain professional knowledge accumulation, collaboration and communication abilities with people. These require years or even decades to build and won't be eliminated overnight.
To use an analogy: fast variables are like waves on a river's surface, looking very lively, each wave different. Slow variables are like the riverbed's shape, determining where water flows, changing once every few decades.
Chasing waves on the river surface, you'll never catch up—because waves change faster than you. But if you understand the riverbed, you can predict water flow direction.
Specifically: spend three days learning some AI drawing tool, and three months later that tool may be replaced by a better version. But spend three years deeply understanding visual communication principles, color psychology, and user aesthetic preferences, and that knowledge remains effective for the next decade; it is precisely the judgment ability that users of AI tools need most.
Principle 3: Allow Yourself to "Wait a Bit"
This principle sounds simplest but may be hardest in today's public opinion environment.
Paul David made a famous observation in "The Dynamo and the Computer": electricity took nearly 40 years from invention to truly transforming factory production methods. The reason wasn't poor electricity technology, but the entire production process, factory building design, and worker skill systems all needed reconstruction around the new technology.
AI is the same. ChatGPT has been released for only a bit over two years. Technology's true social impact—which jobs it will actually eliminate, which jobs it will create, which industry work methods it will change—usually requires 3-5 years or even longer to see clearly.
Making panic-driven major life decisions within this time window (like abandoning your deeply cultivated field, spending large savings on crash training courses) carries extremely high risk. This isn't because AI isn't important—AI is very important—but because the information you have today is insufficient to support correct long-term decisions.
Allowing yourself to say "I'm not certain yet, let me observe a bit more" isn't laziness, isn't avoidance—it's the most rational strategy under insufficient information.
Conclusion: Returning to Keynes
Let's return to that 1930 prophecy at the beginning.
Keynes predicted "technological unemployment." Was he right? In a sense, yes—countless specific positions were indeed eliminated by technology over the past century. Typists, telephone operators, elevator operators, film developers... these professions have nearly completely disappeared today.
But inferring "human overall employment is finished" from this would be completely wrong. While these positions disappeared, data analysts, software engineers, social media managers, UX designers, pet behavior consultants—occupations unheard of decades ago—emerged.
I don't know what the "pet behavior consultant" of the AI era will be. No one knows. This is precisely the point—if someone claims they know, you should remain vigilant.
So the core of this article, I want to condense into one sentence:
Historically, every "machines stealing jobs" panic ultimately proved exaggerated—but if you make wrong life decisions because of panic, for you personally, the consequences are real.
ATMs didn't eliminate bank tellers. AI probably won't eliminate you either.
But panic—unexamined, marketing-driven, independent-judgment-abandoning panic—it really may harm you.
Stay curious, keep learning, remain calm. Then allow yourself to say:
"I'm not in a rush yet. Let me finish the work at hand first."
References
- Keynes, J. M. (1930). Economic Possibilities for our Grandchildren.
- Bessen, J. (2015). Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth. Yale University Press.
- Autor, D., Chin, C., & Salomons, A. (2022). New Frontiers: The Origins and Content of New Work, 1940–2018. NBER Working Paper No. 30389.
- Autor, D. (2024). Applying AI to Rebuild Middle Class Jobs. NBER Working Paper No. 32140.
- McKinsey & Company. (2024). The State of AI in Early 2024: Gen AI Adoption Spikes and Starts to Generate Value.
- Allen, R. C. (2009). The British Industrial Revolution in Global Perspective. Cambridge University Press.
- David, P. A. (1990). The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox. American Economic Review, 80(2), 355–361.
- ILO (International Labour Organization). World Employment and Social Outlook. Annual global employment data.