Young people have always struggled to find work. During the Great Depression, they stood first in the breadlines. In the Seventies, youth unemployment hit 18% in many countries. After the 2008 financial crisis, it doubled in places such as Spain and Greece. But each time, the economy recovered and young workers eventually found jobs.
This time might be different. Britain’s unemployment rate, according to figures released this week by the Office for National Statistics, is at its highest in four years. We are in a jobs recession. And it appears that young people are being disproportionately affected. Since ChatGPT was released in November 2022, entry-level jobs have dropped by one third — a development for which large language models have been blamed. Companies are increasingly replacing junior workers with AI that can write code, analyse data, and handle customer service. The first rungs on the white-collar ladder — research assistant, junior analyst and so on — are disappearing.
Previous unemployment crises were temporary. Workers, young and old, waited out the recession, then found their place in the job market. But AI doesn’t go away when the economy improves. It only gets better. Paradoxically, we may be on the verge of unprecedented economic growth alongside unprecedented unemployment.
Until fairly recently, it was thought that the first jobs to be automated by AI would be those requiring the least academic education: manual jobs, such as those in manufacturing and agriculture. That forecast turned out to be incorrect. The tasks once considered particularly challenging and uniquely human (writing, programming, art) are the ones most vulnerable to current AI capabilities. In contrast, many skills that come easily to us (moving dexterously, making emotional connections) are much harder to replicate with AI.
Consider the case of Annabel Beales, a copy editor. In 2023, she landed her “dream job”, editing copy for a garden centre, drawing on specialised knowledge from her university degree and formal training. But eight months in, her workload started to lighten, and she overheard her boss advising colleagues to “[j]ust put [their writing] in ChatGPT”. She was laid off six weeks later. It was Beales, a white-collar worker, who bore the brunt of AI, not her gardener colleagues.
Some economists warn that millions of jobs could vanish within a few years. Others argue new roles will emerge faster than old ones disappear. Tech leaders promise AI will free us from drudgery while expert forecasters fear mass displacement (and replacement).
But some patterns are already emerging about which jobs will disappear first and which might survive longest. Of particular importance is the matter of autonomy — the extent to which AIs can act without supervision. ChatGPT can write emails and generate reports, but someone still has to send those emails and act on those reports. That limitation is why tech leaders repeatedly insist AI will “augment” workers rather than automate them. But if demand doesn’t rise correspondingly, augmentation still reduces jobs: one person can now do the work of three.
But the real change will be autonomous AI agents. In June, one of us — Jason — attended an event in San Francisco organised entirely by a team of AI agents. As far as we know, it was the first of its kind, and it was quite delightful. Sure, the AI agents couldn’t yet book a proper venue — we met in a public park — or organise catering. But 26 people still showed up. The agents are now setting up their own merch store. In the Bay Area, commuters often pass adverts for the AI sales-agent startup Artisan, urging employers to “stop hiring humans.”
Not all challenges have been solved. AI models mostly still can’t act autonomously, make truly new scientific discoveries, or coordinate at scale — but these are all tractable engineering problems, and companies are hard at work solving them.
So how do you predict which jobs will disappear? The most important factor is the “time horizon” of a task: how long an average human needs to complete it.
In Silicon Valley, where cutting-edge AI is being developed, the graph that people watch most closely is the so-called “time horizon” of tasks that AI can do well. This measures how long the technology can operate like a good employee. METR, a research non-profit that evaluates AI capabilities, has been tracking this phenomenon. They’ve found that AI’s ability to complete longer tasks is doubling every 3.5 to 7 months. Existing models can handle tasks that take humans up to about 20 minutes with 80% reliability. At the moment, when put to work on tasks requiring more than four hours, the models almost always fail. But if this exponential trend continues, AI agents could tackle month-long projects within a decade.
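A rough back-of-the-envelope sketch bears this out — assuming, purely for illustration, that a “month-long project” means roughly 160 working hours, and taking the 20-minute starting point and METR’s doubling rates at face value:

```python
import math

# Back-of-the-envelope extrapolation of the doubling trend.
# Assumed figures (illustrative): current reliable task length of
# ~20 minutes, and a "month-long project" of ~160 working hours.
current_minutes = 20
target_minutes = 160 * 60  # one working month expressed in minutes

doublings = math.log2(target_minutes / current_minutes)  # roughly 9 doublings

for months_per_doubling in (3.5, 7):
    years = doublings * months_per_doubling / 12
    print(f"One doubling every {months_per_doubling} months: ~{years:.1f} years")

# Prints roughly 2.6 and 5.2 years -- comfortably within a decade,
# though only if the exponential trend continues to hold.
```

Under those assumptions, month-long tasks arrive in somewhere between three and five years — well inside the decade — but the figures depend entirely on the trend continuing.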
Call centre workers, for example, spend their days handling calls that take 10 to 20 minutes. Each call is a complete task an AI can master. The same goes for freelancers writing blog posts, paralegals drafting contracts, and analysts entering data.
Compare this, though, with a CEO whose day is broken up into activities — a meeting, a decision, a public appearance — each of which requires years of context that a machine can’t yet replicate. The CEO might spend 30 minutes deciding whether to acquire a competitor, but that decision draws on decades of experience with similar deals, knowledge of company culture, and understanding of market trends. It’s not really a 30-minute task.
This suggests the first rule of a longer-lasting job: increase the context needed to perform your work. The rule corresponds well to climbing the corporate ladder: the higher you go, the more your decisions depend on long-term strategic planning and years of accumulated knowledge that can’t be easily replicated.
But task duration alone doesn’t tell the whole story. Other factors matter too. Every support chat, every line of code, every written report creates data that can be used to train AI models; the more of that data a job generates, the faster AI learns to do it. Companies are already deploying “productivity monitoring” software that records employee screens and keystrokes: ostensibly for performance review, but really to capture how humans complete tasks.
Your particular industry might have some resilience. Hollywood actors stopped AI from using their likenesses because they could halt billion-dollar productions if their demands weren’t met. Dock workers keep automation at bay because they can shut down ports. White-collar professionals, learning from the dock workers, might organise quickly enough to survive the first round of automation. In contrast, freelancers and contractors — who have no collective bargaining power — might be left worse off. Indeed, Duolingo’s CEO announced they will “gradually stop using contractors to do work that AI can handle”, though he later pulled back after public backlash.
The transition, then, won’t happen overnight. At first, companies will keep employees as managers of AI agents, with each worker supervising automated systems that gradually take over their tasks. And some work will remain inherently human. We still watch people play chess even though computers have the beating of them. Live music, sports, and theatre draw audiences precisely because humans perform them. The appeal lies in limitation and struggle. We connect with the effort, not just the outcome.
Trust is important too. We’ll keep human politicians and military commanders for the reason that people are reluctant to delegate life-and-death decisions to machines. Parents want human teachers for their children, patients prefer human doctors for serious diagnoses, and clients choose human lawyers for high-stakes cases.
But preferences and levels of trust do change, eventually. Will we still want human entertainment when AI can create unlimited, personalised experiences tailored exactly to our tastes?
Perhaps the best insight into what is to come is lockdown-era remote work, the kind that became the norm during the Covid-19 pandemic. If your job can be done from home via email, Slack, and video calls, then it is likely that AI will soon be a competent substitute for you. The pandemic revealed how many roles require only digital tools and clear instructions. Those jobs became automation-ready overnight.
The enormous changes afoot demand deeper questions about work’s purpose. Beyond paycheques, jobs provide structure, identity, social connection, and meaning. The question “What do you do?” shapes how we see ourselves and how others see us. But the solution isn’t choosing between economic security and meaningful work — it’s decoupling them. Guarantee baseline economic security while creating new forms of meaningful participation outside traditional employment. Preserve opportunities for skill development, interpersonal connection, and societal contribution, even as jobs change dramatically.
The next couple of years are crucial. Building superintelligent AI should not be treated as inevitable. These systems require massive capital investment and government support. Data centres need licences and construction approval. And union leaders and professional associations have a role to play. Instead of reacting after automation hits, they should get ahead of it and protect their members proactively.
Right now, then, we face a genuine choice about whether to build these systems at all. If we proceed, here are some guiding principles.
First, we should tax AI agents like we tax human workers. Governments should use that revenue to fund programmes that maintain worker agency rather than creating helpless dependence.
Second, people should have democratic access to technology. Give everyone a compute allocation — enough processing power to run their own AI agents. We should prevent technology from concentrating power in fewer hands.
Third, we should preserve meaningful human involvement. Society should not stop at the low bar of requiring a human “in the loop”. Instead, we should pursue robust, effective techniques for supervising AIs even when their capabilities outstrip our own. When humans begin to drag down machines — as in chess — we lose the incentive to include them. But sometimes we need to choose human participation over pure optimisation.
These changes aren’t inevitable. Offshoring manufacturing in the Nineties was a policy choice. So is the deployment of highly capable AI across the economy. Earlier this year, in his speech outlining an “AI Opportunities Action Plan”, Keir Starmer proclaimed that the UK must “make it work for working people”. To succeed, his Labour government will need to understand what makes a job vulnerable: how short and self-contained its constituent tasks are, and therefore how quickly they can be automated. Young people, contractors, and digital workers face the highest risk. But when they become unemployed — a process that has already begun — the rest of us will follow.
We should build technology to promote human liberty and prosperity. If technology fails to meet those goals, then it is no real progress.