Automation first claimed the horse. Then the autoworker. And now, perhaps, the rest of us?
Economists aren't effectively communicating how AI is going to reshape the economy. Many don't know which intellectual "crutches" to rely on to make sense of the technology's impact. Is it more similar to electricity? The internet? Or, as a broader phenomenon, similar to the creation of fire? The Industrial Revolution? Globalization?
The AI revolution is well underway. Breakthroughs in "chain-of-thought" AI models enable these systems to reason before responding. The most advanced models now rank among the world's top 200 competitive programmers, are capable of publishing peer-reviewed research, and surpass graduate-level knowledge benchmarks. AI is increasingly reshaping fields from medicine and IT to communications and beyond.
Tech giants can't build infrastructure fast enough. Microsoft revived the dormant Three Mile Island Nuclear Generating Station to power its data centers. Meanwhile, Nvidia, the computer chip designer, became the world's most valuable company last year. That's not to say all challenges have been solved—models still can't act autonomously, make truly new scientific discoveries, or coordinate at scale—but these are problems of engineering.
Thus, while some of these historical analogues are valuable benchmarks, the best way to understand the shape of what's coming is to develop a novel inside view: understand what the technology might become, and reason from there.
The Time Horizon Model
In Silicon Valley, where I live and where the cutting edge of AI development is found, the graphs people are paying attention to track the length of tasks an AI agent can complete autonomously. This represents how long the technology can—like a good employee—operate without external intervention. It's shorthand for self-directed work.

The key idea, where the American worker is concerned, is that your job is as automatable as its smallest fully self-contained tasks are. For example, call center jobs might be (and are!) very vulnerable to automation, as they consist of a day of roughly 10- to 20-minute tasks stacked back-to-back. Ditto for many types of freelance services, for paralegals drafting contracts, or for journalists rewriting articles.
Compare this to a CEO: even in a day broken up into similar 30-minute activities—a meeting, a decision, a public appearance—each activity requires years of experiential context that a machine can't yet simply replicate.
To illustrate the model more concretely, consider two IT professionals from the same Fortune 500 company: a technical support specialist and an IT systems architect. On paper, they look remarkably similar—both work in the technology department, both have computer science degrees, both interact with the same systems, and both might even have started at the company around the same time. The company pays them well (the specialist earns $85,000, while the architect makes $145,000), provides them the same benefits, and both are respected team members with "IT professional" on their business cards.
Yet their vulnerability to automation couldn’t be more different.
The technical support specialist spends their day handling user tickets—password resets, connectivity issues, software installation problems. Each task typically takes 5 to 20 minutes, follows standardized protocols from the company knowledge base, and concludes with a clear resolution. Though these tasks require concrete skills most people don't possess, the specialist addresses dozens of similar issues daily, with the company tracking performance by tickets closed per hour. AI systems are already demonstrating they can handle 70-80% of these discrete, repetitive tasks through chatbots and automated diagnostics.
In contrast, the IT systems architect designs and implements the company’s technology infrastructure over multi-month or multi-year horizons. They might spend a quarter planning a cloud migration, considering hundreds of interdependencies, negotiating with vendors, making trade-offs between cost and reliability, and building consensus across departments. While they use AI tools to generate code snippets or analyze data, combining stakeholder management, long-term planning, and technical decision-making requires maintaining context across months and weighing constantly shifting priorities. These longer time horizons and the contextual knowledge accumulated over years make their role dramatically more resilient to current automation approaches.
This pattern repeats across industries: the shorter the time horizon of your core tasks, the greater your automation risk.
Of course, this is not a perfect predictive model. There are other factors that matter, at least for this first round of AI automation. Those include:
Data availability. How much data is there of people doing your job, and how easily can it be collected? The more easily available or collectable the data of someone doing a task, the easier that task is to automate. Given this, we're likely to see a massive proliferation of "keylogger" or "screen-recording" software to collect "end-to-end" trajectories of humans completing tasks. These monitoring tools capture the exact sequence of actions human workers take to complete assignments—from keystrokes and mouse movements to planning procedures—creating datasets that AI systems can learn from. Companies are incentivized to implement such tracking because it transforms their existing workforce into training-data generators, allowing them to gradually automate processes while maintaining quality as AI models learn directly from their most experienced employees.
Relative employee power. Both Hollywood actors and dockworkers have been able to resist much automation because they have lots of power, particularly as unionized workforces. Actors can hold up entire movie productions, and dockworkers the entire maritime trade system. Ordinary Americans don't want to see either their movies or their purchases held up, and thus industrial action grants these workers lots of leverage. In contrast, a contractor who has very few rights under employment law and no individual power is much more likely to go first. Some white-collar professions with strong associations may be able to react quickly enough to be passed over by at least the first wave of AI.
The inherent humanity of what you do. We still watch humans play competitive chess and singers perform live. We enjoy humans doing those innately human tasks. No one is clamoring for robot sports competitions, even if prototypes are waiting in the wings to smash records. We should expect that people will continue to value human performers in many activities. The essential appeal of these experiences often lies precisely in their human limitations and imperfections—we appreciate the struggle, the emotion, and the sense of connection that comes from watching fellow humans perform at their best, something that perfectly optimized machines cannot yet replicate.
The trust factor. We'll likely continue to have human politicians (even if they have AI campaign managers) and human military officers managing nuclear weapons, because people will continue to trust only other people, not machines, with certain high-stakes decisions. This holds true on smaller scales too—we're likely to prefer human primary school teachers, caregivers, and counselors for roles where human connection and understanding are essential components of the relationship and service provided.
Remote work replaceability. The most vulnerable jobs are those that could be filled by a drop-in remote worker. If email, Slack, well-scoped tasks, the standard onboarding materials, and occasional manager check-ins are all that's required, a job becomes very straightforward to automate. And the lockdowns during the COVID-19 pandemic made clear that a very large fraction of the economy's work can be done remotely.
Putting these factors together, we can assess automation vulnerability through a simple framework:
| High vulnerability | Low vulnerability |
| --- | --- |
| Short time-horizon tasks (minutes to hours) | Long time-horizon tasks (days to months) |
| Abundant, accessible training data | Limited or difficult-to-acquire training data |
| Low employee bargaining power | High employee bargaining power |
| Limited inherent value to human performance | High inherent value to human performance |
| Tasks can be done entirely remotely | Tasks require physical presence or dexterity |
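To make the framework tangible, the table can be read as a rough scoring rubric. Below is a minimal illustrative sketch in Python; every factor scale, weight, and example score is a hypothetical chosen for illustration, not a measurement:

```python
# A minimal illustrative sketch of the vulnerability framework above.
# All factor scales and example scores are hypothetical, chosen only to
# show how the qualitative table might be turned into a rough index.

from dataclasses import dataclass


@dataclass
class Job:
    name: str
    task_horizon_hours: float  # typical length of the smallest self-contained task
    data_availability: float   # 0 (scarce) to 1 (abundant recorded trajectories)
    bargaining_power: float    # 0 (none) to 1 (strong union or association)
    human_value: float         # 0 (no inherent value to human performance) to 1
    fully_remote: bool         # could a drop-in remote worker do the job?


def vulnerability(job: Job) -> float:
    """Rough 0-1 automation-vulnerability index; higher = more exposed."""
    # Shorter task horizons score closer to 1; a full 8-hour task scores 0.
    horizon_risk = max(0.0, 1.0 - job.task_horizon_hours / 8.0)
    remote_risk = 1.0 if job.fully_remote else 0.3
    factors = [
        horizon_risk,
        job.data_availability,
        1.0 - job.bargaining_power,
        1.0 - job.human_value,
        remote_risk,
    ]
    return sum(factors) / len(factors)


support = Job("technical support specialist", 0.25, 0.9, 0.2, 0.1, True)
architect = Job("IT systems architect", 500.0, 0.2, 0.4, 0.3, False)

for job in (support, architect):
    print(f"{job.name}: {vulnerability(job):.2f}")
```

On these made-up inputs, the support specialist from the earlier example scores far higher (more exposed) than the systems architect, mirroring the pattern described above.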
In a somewhat ironic twist, the most vulnerable jobs are not those traditionally thought of as threatened by automation—like manufacturing workers or service staff—but the "knowledge workers" once thought to be automation-proof. And most vulnerable of all? The very Silicon Valley engineers and programmers who are building these AI systems.
Software engineers whose jobs consist of writing code as discrete, well-documented tasks (often pushed as standardized updates to a central repository) are essentially creating the perfect training data for the AI systems that will replace them. Indeed, OpenAI reports that its latest "Deep Research" model can successfully replicate 42% of OpenAI employees' "pull requests"—actual coding tasks previously done by its own engineers.

This might suggest that we'll first see large-scale automation not at McDonald's but in software engineering. Early AI models were limited to basic text completions—tasks that would take humans mere seconds. All ChatGPT could do at first was speak, an action people do without thinking.
Over time, the models have displayed the ability to complete longer and longer-horizon tasks. The best AI models today can code a simple web page from scratch or compile a referenced literature review: tasks that might take a skilled professional an hour or more to complete.
And the graph is exponential. As AI agents become able to act coherently and consistently toward a goal for longer periods, jobs that consist solely of discrete, well-described, short-horizon tasks—like a software engineer pushing changes to a codebase, subject to a human check—are the first targets.
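If those horizons keep doubling on a fixed cadence, the arithmetic compounds quickly. Here is a toy projection; the starting one-hour horizon and seven-month doubling time are illustrative assumptions, not figures from any benchmark:

```python
# Toy projection of exponential task-horizon growth.
# The starting horizon (1 hour) and doubling time (7 months) are
# illustrative assumptions, not reported measurements.

start_horizon_hours = 1.0
doubling_time_months = 7.0

for months in range(0, 61, 12):  # project five years, one row per year
    horizon = start_horizon_hours * 2 ** (months / doubling_time_months)
    print(f"+{months:2d} months: ~{horizon:6.1f} hours of autonomous work")
```

On those assumptions, a model that can work autonomously for an hour today would sustain multi-week projects within roughly four years.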
That means the very cognitive tasks once considered uniquely human—coding, writing, analyzing data—are precisely the ones most vulnerable to current AI capabilities, while physical labor requiring dexterity and adaptability remains challenging to automate.
Evolutionarily, this makes sense. Many white-collar jobs today involve skills that we, as a species, developed only very recently. If you went back in time 250 years and asked someone to write code or complete tasks in Excel, they'd probably burn you at the stake for witchcraft. While we may think of it as old hat today, writing code or drafting contracts is a fairly unusual activity for the human mind. In contrast, the more innate skills that come easily to us—reading expressions, moving dexterously, or thinking creatively—are much harder to develop in AI systems.
So What Does AI Mean for Workers?
Concretely, then, there are specific groups that are most vulnerable to this upcoming automation.
Young people looking for jobs in the future are most at risk. It's much easier for an employer to, for example, impose a hiring freeze on new junior talent and instead make existing workers more productive or have them manage end-to-end AI agents. Young people's entry-level jobs are the most automatable, and they have the least negotiating power and leverage to protest this coming change.
The legal profession offers a stark example of how automation's effects will differ depending on where it is directed. Consider the traditional breakdown of responsibilities within a law firm:
- Entry-level paralegals spending thousands of hours on document review, legal research, and drafting routine motions (15- to 60-minute discrete tasks) are already seeing significant portions of their work automated by AI systems.
- Mid-level associates who manage case strategy and discovery processes (multi-day horizon tasks) find AI enhancing their productivity rather than replacing them.
- Senior partners, who spend their time building client relationships and making high-stakes strategic decisions (weeks to months of coherent work), remain largely insulated from direct replacement.
It would, therefore, behoove young people to develop skills that are difficult to replace—sharpening social skills, perhaps, or investing time in out-of-school projects that put them in this CEO-like role. (And, while they're at it, persuading their parents to invest in a little Nvidia.)
Contractors and temporary digital workers, such as freelancers on sites like Upwork or Fiverr who handle one-off requests for companies and organizations, are similarly vulnerable. OpenAI recently released a benchmark, SWE-Lancer, explicitly designed to track what fraction of real freelance tasks its existing AI systems can perform. There is abundant data on many of these tasks, such as copywriting, and they are well-scoped with a clear metric of success, since the person commissioning the task, not the contractor, holds all the implicit knowledge and context.

In the short term, many junior white-collar fields, such as software engineering (where over ninety percent of US-based developers use AI tools in their day-to-day work) and paralegal work, will see massive productivity gains. But this development also reduces the need for such jobs.
Economists might traditionally argue that this vulnerability depends on whether the automation occurs in a growing industry that responds well to increased supply (like software engineering) or a shrinking industry more likely to lay off workers (like traditional marketing). But this distinction collapses in the face of overwhelming cost advantages. When AI can perform the same work at a fraction of human cost, each additional unit of demand will be filled by machines, not humans—regardless of industry growth. Think of the replacement of horses by automobiles: no matter how much transportation demand grew, horses couldn’t compete because cars were superior in nearly every dimension.
Once AIs can perform tasks on a long enough horizon, or successfully string together—"orchestrate"—a number of these shorter-horizon tasks to achieve a broader goal, the need for such workers largely disappears. The comparative advantage for humans won't be in doing the same work more cheaply, but in performing fundamentally different types of work that AI cannot yet approach. As for the horses, well, they simply retreated to their pastures; whether displaced human workers will enjoy that luxury remains to be seen.
High-skilled, specific blue-collar jobs like plumbers or lab technicians, or even primary school teachers, may therefore be more insulated against this round of automation. Robotics, while advancing fast, is progressing much more slowly than software, owing to both the scarcity of data and the difficulty of the core technical problem. A robot hand, for example, has many more degrees of freedom to move through, and far less data to learn from, than a next-word predictor like ChatGPT, which only has to choose one token at a time. This may not be the case for long, though. Today's robotics hardware may already have enough physical dexterity to perform many day-to-day tasks; the bottleneck may therefore be one of software and data, which seems to be clearing at a similarly exponential rate.
The jobs that will remain for now are the ones where the role’s humanity is inherently valuable—whether because of the high levels of trust we will not want to defer to AI systems (such as with politicians) or because we enjoy the fact that a human is performing that task, such as in much entertainment. Still, as AI progresses, these may ultimately be substituted by AI systems, even if not directly. We still watch humans play chess despite computers being superior players—but will this remain true when AI can create unlimited, personalized AR experiences tailored perfectly to our preferences?
Rethinking Work
These developments in AI raise more fundamental questions. What is the purpose of work in human life? Is it merely a means to economic security, or does it serve deeper psychological and social functions?
Work offers structure to our days, meaning to our individual lives, social connections, identity, and a sense of contribution to something larger than ourselves. Many find personal fulfillment and self-actualization through their careers, developing skills that bring satisfaction independent of financial rewards. For countless people, the question “what do you do?” is deeply tied to their sense of self and place in the world. Automation threatens not just livelihoods but these fundamental aspects of identity and purpose.
Beyond the individual, work has been foundational not just to survival but to our moral and social order. It provides the discipline that shapes character, the self-reliance that builds dignity, the productive contribution that earns respect within one’s community, and the legacy that connects generations. The family unit itself has been structured around productive roles, with work providing the means for parents to fulfill their most sacred duty: providing for their children. The prospect of widespread automation forces us to consider how these values can be preserved in a radically different economic landscape.
Sam Altman, the CEO of OpenAI, has spoken in vague terms about guaranteeing a universal basic income for all Americans in a world where AI replaces jobs en masse. But it may be unwise to rely on his word, or his benevolence. OpenAI claims that its new $500 billion "Stargate Project" might create hundreds of thousands of new jobs. And indeed, the growth of AI in the United States and the construction of power plants and data centers may cause a temporary resurgence of American manufacturing. However, these jobs, too, will likely be automated in the long run. The overall effect of AI on employment is unlikely to be positive.
Perhaps the ideal approach is not to choose between these imperatives—but to decouple them. Economic security could be guaranteed through baseline provisions, while new forms of meaningful participation—both inside and outside traditional market structures—are cultivated and valued. The non-economic dimensions of work require solutions that preserve opportunities for skill development, social connection, and meaningful contribution, even as traditional employment changes dramatically.
A Policy Response
The American worker will very soon need a new approach to economic security and meaningful participation. What could that look like?
We should first understand that building superintelligent AI systems and end-to-end AI agents is a technological and policy choice. The buildout of these systems will require intense capital investment and the support of the government. And the data centers on which those systems will be trained are being licensed and constructed now.
Outside of a few industries where the effects of automation are already apparent, union leaders and professional associations have remained silent on the issue of AI. Instead of being reactive, such leaders need to get ahead of AI's potential effects on their industry and consider options to protect their members.
We must recognize that we face a genuine choice about whether these systems are trained at all—which is not an inevitable technological destiny—and this decision should remain at the forefront of our policy conversations. If we do decide to go ahead, here are initial principles and policies that could guide our response:
1. Economic Security Without Dependency
The government, beyond UBI, should consider proposals that discourage the disempowerment of American workers. For example, it might implement an AI agent tax on companies that use the technology, the proceeds of which could fund a UBI program. Just as human economic activity and income are taxed, AI agents should be taxed similarly.
2. Democratic Access to Technology
Second, we might choose to establish a universal compute allocation—i.e., the provision of a certain amount of computing power to every American—in order to hold onto human agency and allow each person to have AI agents acting on their behalf.
3. Preserving Meaningful Human Involvement
Third, we might adopt "human-in-the-loop" requirements. Today, there are tasks for which the value of a human involved in the process remains critical—or at least positive. Where this is no longer the case, as in chess or Go, a human in the loop may be actively harmful to the objective of "winning the game," and thus there will be strong incentives for full automation. We can mandate that humans stay engaged in the process regardless, to promote meaningful work and oversight.
These changes are not inevitable. Just as globalization and the offshoring of the American industrial base in the 1990s were policy decisions, the choice to develop and deploy AI systems across the economy will be too. President Trump's recent executive order on AI explicitly encouraged creating AI to "promote human flourishing" rather than on sharply ideological grounds; that sentiment—putting people, rather than the technology, first as an end—is critical.
This will involve engaging with automation at the frontier and understanding which features of a job and its constituent tasks make it vulnerable or resilient to automation—namely, the length of its smallest fully self-contained tasks. Young people, contractors, and digital workers are particularly at risk. But when they become unemployed, the rest of us might be too. We will have to have a serious conversation about the protections the American worker, and the public more broadly, will get from this coming wave.
We should build technology insofar as it can improve our ability to realize human freedoms and liberties. And, perhaps, no further.