Beyond AI Anxiety: Why Historical Patterns Suggest Calm Over Panic in the Age of Intelligent Automation
A Counterintuitive Historical Truth That Challenges Modern AI Fears
In 1930, the renowned British economist John Maynard Keynes penned an unsettling prophecy in his essay "Economic Possibilities for our Grandchildren." He described what he called a "new disease" afflicting society: technological unemployment. His argument was straightforward—machines were becoming so capable that they would inevitably steal human jobs.
Does this sound familiar? It should. This narrative echoes through virtually every AI anxiety article circulating today, nearly a century later. Yet here lies a profound paradox worth examining: Keynes made his prediction when the global workforce numbered approximately one billion people. Fast forward to today, after humanity has weathered successive waves of technological disruption—from assembly lines and automation to computers and the internet—and the global employed population has surged to roughly 3.5 billion, according to International Labour Organization data. That represents a 3.5-fold increase.
This historical reality forms the core question this analysis explores: why does every major technological transformation trigger widespread unemployment panic, yet employment totals consistently rise rather than fall with each wave? Is artificial intelligence genuinely different this time? Perhaps. But before accepting that conclusion, we must first understand how humanity has repeatedly miscalculated this equation over the past two centuries. Only by recognizing the historical patterns of technological panic can we preserve our judgment amid today's information deluge.
Two Centuries of Crying Wolf: A Brief History of Technology Panic
The Machine-Smashing Workers: Understanding the Luddite Movement (1811-1816)
Let us journey back to early 19th-century England. Between 1811 and 1816, the English Midlands witnessed a massive social movement—textile workers stormed factories in organized groups, systematically destroying new mechanized looms. They called themselves "Luddites," named after a legendary worker leader, Ned Ludd. Today, the term "Luddite" persists in English as a descriptor for those who resist new technologies.
Why did these workers destroy machines? The logic seemed mathematically irrefutable: new power looms enabled one worker to accomplish what previously required five or six workers. The workers' reasoning was simple arithmetic—if one machine replaces five people, then four-fifths of the workforce must become unemployed. Elementary mathematics, seemingly flawless.
Yet reality proved precisely the opposite.
According to economic historian Robert C. Allen's research in "The British Industrial Revolution in Global Perspective," cotton cloth prices in England plummeted by approximately 90-95% during the Industrial Revolution. This dramatic price collapse triggered something unexpected: explosive demand. British cotton consumption skyrocketed from roughly 2 million pounds in 1760 to 588 million pounds by 1850—an increase of nearly 300 times.
This three-hundred-fold demand surge vastly exceeded the efficiency gains from mechanization. The consequence? The textile industry required not fewer workers, but substantially more.
Embedded within this historical episode lies an economic pattern that recurs with remarkable consistency. We can distill it into three sequential steps:
- Machines increase production efficiency → per-unit product costs decline
- Cost reduction → price reduction → consumers who previously couldn't afford the product now can, triggering demand explosion
- Demand explosion magnitude > efficiency improvement magnitude → paradoxically requires more workers overall
Consider a modern analogy: imagine a restaurant introducing automated cooking machines, reducing the cost per dish from 30 yuan to 10 yuan. If prices follow suit, daily sales might jump from 100 portions to 1,000 portions. While the kitchen may require fewer chefs, the restaurant now needs more servers, procurement specialists, cleaning staff, delivery personnel—and may still require chefs to handle dishes the automated machines cannot prepare adequately.
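The restaurant arithmetic above can be sketched in a few lines of Python. All figures here are hypothetical illustrations (the same ones used in the analogy, plus an assumed per-worker throughput), not data from the article's sources:

```python
# Toy model of the recurring pattern: efficiency gains cut unit cost,
# lower prices expand demand, and total staffing can rise even though
# each worker now produces far more.

def staff_needed(daily_portions: int, portions_per_worker: int) -> int:
    """Workers needed to produce a given daily output (ceiling division)."""
    return -(-daily_portions // portions_per_worker)

# Before automation: 100 portions/day, each worker handles 10 portions.
before = staff_needed(100, 10)   # 10 workers

# After automation: each worker handles 25 portions (2.5x efficiency),
# but cheaper prices lift demand to 1,000 portions/day (10x demand).
after = staff_needed(1000, 25)   # 40 workers

# Demand growth (10x) exceeds the efficiency gain (2.5x), so headcount rises.
print(before, after)
```

The whole argument hinges on the ratio between the two multipliers: whenever demand grows faster than per-worker output, total employment in the sector expands.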
Some readers might reasonably object: what if demand doesn't explode? What if efficiency improves but the market is already saturated? This is an excellent question, and we will address it shortly, because it marks precisely the critical point of divergence in today's AI discussions. But first, let us complete our historical survey.
ATMs Did Not Eliminate Bank Tellers—Understanding Why (1970s to Present)
This case study offers greater contemporary relevance and persuasive power.
On June 27, 1967, the world's first Automated Teller Machine (ATM) was installed at Barclays Bank's Enfield branch in England. Its function was elegantly simple: automatic cash withdrawal. No queuing, no teller required.
The predictions mirrored the Luddites' logic perfectly: bank tellers were doomed.
What actually transpired?
Boston University economist James Bessen thoroughly investigated this case in his 2015 book "Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth." His findings proved revelatory: from the 1970s, when ATMs began large-scale deployment, through 2010, the number of bank tellers in the United States not only failed to decrease—it grew from approximately 300,000 to between 500,000 and 600,000.
The explanatory logic chain unfolds as follows:
- ATMs substituted for tellers' most basic tasks—cash deposits, withdrawals, and counting
- This meant each bank branch required fewer tellers, dropping from an average of approximately 20 to approximately 13 per location
- Reduced labor costs per branch → opening new branches became more economically viable
- Banks consequently opened more branches to cover more communities
- More branches × several tellers still required per branch = total teller count actually increased
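The logic chain above is simple enough to verify with back-of-the-envelope arithmetic. The per-branch teller figures come from the text; the branch counts below are hypothetical numbers chosen only to be consistent with Bessen's reported totals:

```python
# Sketch of the ATM logic chain: fewer tellers per branch, but enough
# additional branches that the total teller count rises anyway.

tellers_per_branch_1970 = 20   # pre-ATM average (from the text)
tellers_per_branch_2010 = 13   # post-ATM average (from the text)

# Hypothetical branch counts consistent with the ~300k -> ~550k trend
branches_1970 = 15_000
branches_2010 = 42_000

total_1970 = branches_1970 * tellers_per_branch_1970   # 300,000 tellers
total_2010 = branches_2010 * tellers_per_branch_2010   # 546,000 tellers

print(total_1970, total_2010)
```

The per-branch reduction (about 35%) is swamped by the near-tripling of branch count: cheaper branches made expansion economical, and expansion required more tellers overall.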
Moreover, the nature of teller work underwent qualitative transformation. Previously, tellers spent roughly 70% of their time counting cash and filling out forms. After ATM proliferation, their core responsibilities shifted to financial advisory services, loan processing, and customer relationship management—tasks ATMs simply could not perform.
The crucial insight here: technology substitutes not "people," but "tasks." A position comprises numerous tasks; machines absorb some subset, rendering the remaining tasks more important and valuable.
Excel Did Not Eliminate Accountants—The Power of Demand Elasticity
Consider an example closer to contemporary professional life.
In 1979, VisiCalc, the world's first electronic spreadsheet software, was born. By 1985, Microsoft had released Excel. Prior to this revolution, financial statements required accountants to spend hours performing manual calculations and verifications. Spreadsheets compressed this process to minutes.
"Accountants will become unemployed!"—the familiar refrain emerged once again.
What actually occurred? Spreadsheets reduced the cost of financial analysis to extraordinarily low levels. Previously, only large enterprises could afford professional accountants for financial analysis; suddenly, even small companies with merely ten employees needed—and could afford—such expertise. The market expanded dramatically. The total number of accountants increased rather than decreased, and their work content upgraded—from "ensuring numbers are correct" to "analyzing what the numbers signify."
This illustrates an economic concept called demand elasticity (price elasticity of demand). Do not let terminology intimidate you; it describes one fundamental phenomenon: when something becomes cheaper, how much more will people purchase? If a small price reduction triggers a large increase in purchases, we say that product exhibits high demand elasticity. Cotton cloth, banking services, financial analysis—history demonstrates these all possess remarkably high demand elasticity. When technology drives down their prices, market expansion far exceeds technology's substitution effect on individual positions.
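For readers who want the concept made concrete, here is a minimal sketch of the standard arc (midpoint) elasticity formula, with made-up numbers contrasting an elastic good (like cotton cloth) and an inelastic service (like surgery):

```python
# Arc (midpoint) price elasticity of demand:
# percentage change in quantity divided by percentage change in price,
# each computed against the midpoint of the before/after values.

def price_elasticity(p0: float, p1: float, q0: float, q1: float) -> float:
    pct_q = (q1 - q0) / ((q0 + q1) / 2)   # % change in quantity demanded
    pct_p = (p1 - p0) / ((p0 + p1) / 2)   # % change in price
    return pct_q / pct_p

# Elastic good: a 50% price cut, demand quadruples.
e_elastic = price_elasticity(p0=10, p1=5, q0=100, q1=400)

# Inelastic service: the same 50% price cut, demand rises only 10%.
e_inelastic = price_elasticity(p0=10, p1=5, q0=100, q1=110)

# A magnitude above 1 means elastic demand; below 1 means inelastic.
print(round(e_elastic, 2), round(e_inelastic, 2))  # ≈ -1.8 and ≈ -0.14
```

The sign is negative because price and quantity move in opposite directions; what matters for the employment argument is the magnitude. Only when it exceeds 1 does a price drop expand the market by more than the price fell.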
Synthesis: Why Does Humanity Consistently Miscalculate This Equation?
If recognizing these patterns retrospectively proves so straightforward, why does each generation repeat the same error?
MIT economics professor David Autor and his collaborators revealed a stunning finding in their paper "New Frontiers: The Origins and Content of New Work, 1940–2018":
Approximately 60% of existing jobs in the United States in 2018 did not exist in 1940.
Consider the implications. Someone in 1940 could not possibly have imagined "social media manager," "user experience designer," "data scientist," or "podcast producer"—because these roles depend on industries that simply did not yet exist.
This represents humanity's fundamental prediction bias: we excel at imagining "which existing positions will disappear"—that is a subtraction problem. But we remain nearly incapable of imagining "which entirely new positions will emerge"—because that constitutes a creation problem starting from zero.
Consider an analogy: you can accurately predict when an old building will be demolished, but you cannot possibly predict what will stand on that land five years later—perhaps a hospital, perhaps a theme park, perhaps a new type of space whose name you cannot even pronounce today.
Therefore, when you encounter headlines proclaiming "AI Will Replace XX Million Jobs," remember: this represents at best half the answer. It tells you only about the subtraction component, revealing nothing about the addition component—and history suggests the addition often exceeds the subtraction.
But This Time, Something May Indeed Differ—Changes in Speed and Target
At this point, certain readers are undoubtedly thinking: you have discussed history at length, but AI is genuinely different!
Your point merits serious consideration. Let us rigorously examine what specifically differs this time.
Speed: An Unprecedented Adoption Rhythm
Consider the following comparative data:
| Technology | Diffusion Milestone |
|---|---|
| Steam Engine | Approximately 80-100 years from invention to widespread industrial adoption |
| Electricity | Approximately 40-50 years from invention to widespread adoption |
| Personal Computers | Approximately 15-20 years from commercial use to household penetration |
| Internet | Approximately 7-10 years from commercial use to large-scale adoption |
| ChatGPT | Approximately 2 months to reach an estimated 100 million monthly active users (per a UBS estimate reported by Reuters) |
Note: These statistics employ different measurement methodologies (some measure industrial adoption cycles, others measure user registration speed), serving primarily to illustrate the order-of-magnitude differences in technology diffusion speed.
Economic historian Paul A. David described a pattern in his classic 1990 paper "The Dynamo and the Computer": each generation of general purpose technology (a foundational technology, like the steam engine or electricity, that penetrates virtually all industries) diffuses faster than its predecessor. But AI's diffusion has accelerated well beyond "slightly faster than the previous generation."
McKinsey's 2024 global AI survey reveals: 72% of respondent organizations have already adopted AI in at least one business domain. This figure stood at merely 55% in 2023—a seventeen percentage point jump within a single year.
Why does speed matter?
Recall the "subtraction and addition" framework discussed earlier. Historically, each technological transformation featured a time lag between "old position disappearance" and "new position emergence." During the steam engine era, this lag spanned decades—painful, yet providing a generation sufficient time to adapt and transition. If AI's diffusion speed exceeds previous generations by an order of magnitude, this "transition period" may compress dramatically—new positions will eventually emerge, but what happens to displaced workers before they appear?
This is not anxiety-mongering; it represents a serious policy and social question demanding thoughtful consideration.
When Demand Does Not Explode: The Other Side of Demand Saturation
The historical cases examined previously—cotton cloth, banking services, financial analysis—share a common characteristic: all exhibited exceptionally high demand elasticity, with technology driving down prices triggering market size explosion. However, not all industries conform to this pattern in reality.
When a market approaches saturation, or when the service itself lacks price elasticity, efficiency improvements do not necessarily trigger demand explosion. Consider high-end medical surgery: demand primarily depends on patient numbers, not price levels—halving surgery fees will not double the number of people undergoing procedures. Certain professional legal service markets also remain relatively fixed in total volume: reduced lawyer fees do not prompt people to frivolously initiate lawsuits. In such domains, if AI substantially improves each practitioner's work efficiency while total market demand remains unchanged, technological substitution may indeed produce net position reduction within that field.
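The saturated-market case can be illustrated with the same kind of toy arithmetic used earlier. The case counts and throughput figures below are purely hypothetical:

```python
# When total demand is fixed (a saturated market), efficiency gains
# translate directly into fewer positions rather than a bigger market.

annual_cases = 10_000            # total market demand, insensitive to price
cases_per_worker_before = 100    # throughput before AI assistance
cases_per_worker_after = 250     # throughput after AI boosts each worker

jobs_before = annual_cases // cases_per_worker_before   # 100 positions
jobs_after = annual_cases // cases_per_worker_after     # 40 positions

print(jobs_before, jobs_after)
```

Compare this with the restaurant and ATM examples: the arithmetic is identical, but with the demand multiplier pinned at 1x, the efficiency multiplier does all the work, and headcount can only fall.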
This addresses the critical question promised earlier: not all industries resemble cotton cloth and banking services with high demand elasticity. For domains with relatively rigid demand, the virtuous cycle of "efficiency improvement → price reduction → demand explosion → position increase" may fail to materialize. Recognizing this distinction prevents simplistic application of historical patterns to all scenarios.
White-Collar Workers Stand on the Front Lines for the First Time
This may represent the most essential distinction between AI and previous technological revolutions.
Over the past 200 years, automation's core objective has been substituting physical labor—spinning machines replacing hand spinning, assembly lines replacing manual assembly, automated factories replacing workshop workers. Blue-collar workers bore the brunt of each technological revolution's impact.
This time, AI targets cognitive labor—translation, programming, copywriting, legal document review, medical imaging diagnosis, financial analysis—work traditionally performed by university-educated white-collar professionals.
David Autor's 2024 NBER working paper "Applying AI to Rebuild Middle Class Jobs" (Working Paper No. 32140) proposes a view that proves simultaneously unsettling and exciting:
AI potentially enables workers without elite training to execute expert-level tasks.
What does this mean? Imagine: contract review work previously requiring five years of law school training can now be 80% completed by a legal assistant with short-term training, augmented by AI tools. This poses a threat to lawyers, yet empowers legal assistants.
A fascinating sociological phenomenon deserves attention here. When technology previously displaced blue-collar workers, the affected groups possessed relatively limited voice in public discourse. This time, those AI affects—journalists, programmers, designers, lawyers, university professors—happen to control the microphones and keyboards. Consequently, the AI anxiety you experience stems partly from technology's inherent impact, but also partly from the affected groups' discursive power—their voices dwarf those of 1811's textile workers by orders of magnitude.
This does not imply the anxiety is false or unworthy of serious attention. It means you must distinguish between two propositions: "AI will transform employment structure" (this is factual) versus "you should immediately panic" (this is an emotional reaction). The former warrants serious research and response; the latter, as I will explain next, may harm you more than AI itself.
The Real Cost of Anxiety—Panic Itself Is Harming You
Anxiety Economics: Who Is Harvesting Your Fear?
Every technological panic spawns an industry: the industry selling panic antidotes.
Multiple market research firms estimate the global AI education and training market at approximately 5-10 billion USD in 2024, growing 25-35% annually. This is an enormous market, and it does contain genuinely high-quality educational resources. But it also overflows with marketing narratives such as:
"Without learning AI, you will be eliminated within three years!"
"The new iron rice bowl of the AI era—three-day crash course for only 99 yuan!"
These courses typically follow a formula: use panic to drive your payment, teach you to operate one specific tool (such as an AI writing assistant's workflow), and then, within months, that tool updates and the operational skills you learned become worthless.
This resembles someone in 2000 selling courses teaching "how to build websites with Dreamweaver"—the technology itself was not problematic, but you paid to learn the most easily obsolete layer.
Rash Career Decisions: Lessons from the Internet Bubble
Let me tell a larger story.
In the late 1990s, the internet wave swept everything. The narrative "the future belongs to the internet" mirrored today's "the future belongs to AI" with eerie precision. Large numbers of people made life decisions driven by this narrative—abandoning stable traditional industry jobs, rushing into internet companies, or spending substantial sums on programming bootcamps.
In March 2000, the Nasdaq Composite Index (the technology-heavy U.S. stock market index) peaked and then collapsed, ultimately falling approximately 78%. Countless internet companies failed, large numbers of career-changers became unemployed, and the stable jobs they had left behind had already vanished.
The internet ultimately proved its value—but those who made panic-driven decisions at wrong timing bore real costs.
Today's situation contains a particularly ironic narrative reversal: just a few years ago, voices everywhere proclaimed "everyone must learn programming." Now, equally pervasive voices declare "AI can write code; programmers will become unemployed." If you chase every shift in the wind, you will be forever chasing, forever anxious, forever led by someone else's narrative.
Psychological Costs: Anxiety Itself Depletes You
This point often goes overlooked, yet may prove most important.
Sustained technological anxiety consumes your cognitive resources—simply put, the mental energy you use for thinking, judgment, and decision-making. When you spend one hour daily scrolling through AI anxiety articles, half an hour debating whether to enroll in courses, and the final fifteen minutes before sleep worrying about your career prospects, that time and energy could have been devoted to your current work, deep learning in your field, or spending time with family.
Here lies a profound irony:
You worry that AI will affect your work, yet anxiety itself has already affected your work.
AI has not yet replaced you, but panic has already weakened you. This is not rhetorical flourish—it represents a psychological fact worthy of serious consideration.
Three Practical Principles—Maintaining Judgment Amid Uncertainty
Having argued extensively for "do not panic," I do not wish to imply "do nothing." Change is real, and response is necessary. But the response should be thoughtful, not panic-driven.
The following three principles, distilled from historical patterns, offer not quick fixes but thinking frameworks.
Principle One: Focus on "Capability Types" Rather Than "Specific Positions"
Have you noticed this occupational evolution chain?
Carriage Driver → Taxi Driver → Ride-Share Driver
Superficially, these represent three completely different "positions." A carriage driver's skills involve harnessing and caring for horses; a taxi driver's skills involve driving automobiles and navigation; a ride-share driver's skills involve using navigation apps and managing online ratings. Specific skills differ entirely, yet the underlying capability type remains identical: efficiently and safely transporting passengers from point A to point B within an urban environment while providing quality service experience.
The same logic applies: an excellent translator's core capability is not "knowing how to use a dictionary," but "understanding cultural contexts behind two languages and performing accurate conversion." AI translation tools substitute the "dictionary lookup" task, but the "cultural context judgment" capability becomes more valuable—because as AI translation proliferates, those who can detect and correct AI errors become more scarce.
Therefore, rather than asking "will my position be replaced by AI," ask: "What is the core capability type underlying my position? Does this capability become more important or less important in the AI era?"
Principle Two: Do "Slow Variable" Work, Do Not Chase "Fast Variables"
What constitutes a fast variable? What constitutes a slow variable?
- Fast variables: how to use some specific AI tool, syntax of some programming framework, algorithm rules of some platform. These change every six months, even every three months.
- Slow variables: deep comprehension ability, complex problem judgment, specialized domain knowledge accumulation, collaboration and communication capabilities with people. These require years or even decades to build, and will not be eliminated overnight.
Consider an analogy: fast variables resemble waves on a river's surface—appearing lively, each wave different. Slow variables resemble the riverbed's shape, determining water flow direction, changing only once every few decades.
Chasing waves on the river surface, you will never catch up—because waves change faster than you can move. But if you understand the riverbed, you can predict water flow direction.
Specifically: spending three days learning some AI drawing tool means that tool may be replaced by a better version within three months. But spending three years deeply understanding visual communication principles, color psychology, and user aesthetic preferences means this knowledge remains valid for the next decade—and precisely constitutes the judgment capability AI tool users most need.
Principle Three: Allow Yourself to "Wait"
This principle sounds simplest, yet may prove most difficult within today's media environment.
Paul David made a famous observation in "The Dynamo and the Computer": electricity took nearly 40 years from invention to genuinely transforming factory production methods. The reason was not poor electricity technology, but rather that entire production workflows, factory building designs, and worker skill systems required reconstruction around the new technology.
AI follows the same pattern. ChatGPT has been released for only slightly over two years. Technology's true social impact—which positions it will actually eliminate, which positions it will create, which industries' work methods it will transform—typically requires 3-5 years or longer to become clear.
Making panic-driven major life decisions within this time window (such as abandoning your deeply cultivated field, spending substantial savings on crash training courses) carries extremely high risk. This is not because AI is unimportant—AI is profoundly important—but because the information available to you today is insufficient to support correct long-term decisions.
Allowing yourself to say "I am not yet certain; I will observe further" is not laziness, not avoidance—it represents the most rational strategy under conditions of insufficient information.
Conclusion: Returning to Keynes
Let us return to that 1930 prophecy.
Keynes predicted "technological unemployment." Was he correct? In one sense, yes—countless specific positions have indeed been eliminated by technology over the past century. Typists, telephone operators, elevator operators, film developers—these professions have virtually disappeared today.
But inferring from this that "human overall employment is doomed" would be profoundly mistaken. As these positions vanished, data analysts, software engineers, social media managers, user experience designers, pet behavior consultants—occupations unheard of decades ago—emerged.
I do not know what the "pet behavior consultant" of the AI era will look like. No one knows. This is precisely the point—if someone claims to know, you should remain skeptical.
Therefore, this article's core message condenses into one sentence:
Historically, every "machines stealing jobs" panic ultimately proved exaggerated—but if you make wrong life decisions due to panic, the consequences for you personally are real.
ATMs did not eliminate bank tellers. AI most likely will not eliminate you either.
But panic—unexamined, marketing-narrative-driven, independent-judgment-abandoning panic—can genuinely harm you.
Remain curious. Remain learning-oriented. Remain calm. Then allow yourself to say:
"I am not yet in a hurry. Let me first do my current work well."