How exactly can AI take your job in the next few years? (Part 1)
Part 1: What AI will actually be able to do by 2027-2030
We keep hearing that AI is going to replace a bunch of jobs in the next few years, but no one really knows exactly how that’s going to happen. When will this start, and which exact jobs are in danger? What’s the plan for dealing with this? Right now, it seems like people are vaguely aware of this threat but continuing on with business as normal. If this is really coming in the next 24 months, shouldn’t we be a little more worried, and have a few more answers to these questions?
That’s what I wanted to figure out when I began writing this article. I wanted to cut through the vague proclamations and actually lay out a bit of a clearer path for what’s coming. Other than feeling like society as a whole was not treating this issue with the seriousness it deserved, my motivation was also personal; I wanted to know, how long can I continue doing the kind of work I am doing?
This article is broken down into two parts:
Part 1: What will AI actually be able to do in the next few years? (this post)
Current AI systems clearly aren’t ready to replace most of our jobs, so when will they be ready?
Part 2: So…will we lose our jobs?
Once we understand what AI will actually be able to do, we’ll look at what that means for jobs and the white-collar labor market as a whole. Will you lose your job? I’ll walk through some specific scenarios that are likely about to play out.
This article focuses only on white-collar jobs, as that is the main segment of the workforce that AI will impact in the next few years.
This post is part 1, and focuses on how the technology will change over the next 2-5 years. If you are familiar with most of this already, I suggest skipping to Part 2, as that may be more interesting to you.
Part 1: What will AI actually be able to do in the next few years?
Currently, our AI tools have many limitations, such as:
They don’t have long-term memory
We can only interact with them through chat interfaces, or in other narrow ways. We can’t interact with them in the same way we might with another coworker, i.e. through meetings, emails, Slack/Teams messages, etc.
They still can’t use a computer the same way humans can
They are still limited in their understanding of our instructions. They don’t usually understand the exact situation we face, because they don’t have all the context we have about our workplace and our personal lives. This makes them feel less intelligent.
Because of these limitations, we don’t truly feel like AI systems can replace humans. And it is also why current predictions of how AI will impact the future often focus on specific, narrowly defined tasks that AI will be able to automate.
Most studies that examine how AI will impact the workforce treat AI as a task automation tool. They examine what tasks humans do in their jobs, and try to categorize whether AI will be able to do those things, according to all the limitations we mentioned above. These studies usually lead to the conclusion that AI will only significantly impact a small proportion of the workforce in the near future.
But these predictions are looking at AI the wrong way.
Coworkers
ChatGPT was an “oh shit” moment for the world. It was a moment where everyone recognized that a new technology had been unleashed which was way different than anything else we had access to before.
Another such moment will arrive within the next 1-2 years. This technology will go by many names, usually involving the word “agent”, but I’m going to call it the remote AI coworker.
Imagine being able to purchase a product that can actually do everything a human can do on a computer. Like a truly effective remote coworker. Not only can it use a computer, it has the skills of a mid-level software engineer, it’s able to pass the US medical licensing exam, it aces the bar, and can crush basically any written tests you would give to a graduate university student.
This is the remote AI coworker. And it will be broadly available to the public between 2026 and 2028.
That is a bold statement, so let me back it up.
The trajectory of AI
“We are on course for [artificial general intelligence] by 2027. These AI systems will basically be able to automate all cognitive jobs (think: all jobs that could be done remotely).”
(Extremely) exponential progress
In 2023, researchers created one of the most challenging tests ever designed for AI, a test that even PhD experts could only score around 70% on. It contained difficult questions in biology, physics, and chemistry, and was expected to be a good way to measure AI progress for years to come. The top AI model at the time, GPT-4 (released in March 2023), scored only around 36%, which was only a bit better than random guessing.
18 months later, an AI model was released which outperformed PhD experts and scored almost 80% on this test.
In the last 2-3 years, it has become common for a new AI test to be declared “impossible to beat”, only for an AI model to defeat it within just a few months. This is because the algorithm that trains these models produces better results every time we give it more computing power and more data, and both of those are problems we can solve with more money.
“Money in, IQ points come out” - Jeremie Harris
Just 5 years ago, we were spending less than $1M per AI model. Today, we are training models on a $1B budget. By 2027, we will be spending hundreds of billions of dollars to train the next generation of AI models.
These performance improvements are starting to make their way to a new application of AI technology: agents.
One such test was developed to measure how well AI agents can perform real-world tasks from human jobs. It contains tasks such as navigating to websites, filling out forms, and contacting people to ask questions. These are multi-step tasks that require AI agents to use a computer proficiently and generally operate the way remote workers would in a workplace setting. The top AI model in May 2024 was only able to perform about 9% of these tasks correctly.
In January 2025, our top AI model is now able to complete 38% of the tasks correctly. We more than quadrupled the performance of AI agents within 8 months. Within the next year or two, we expect to reach scores of 80% or more on such tests.
In January 2025, the CEO of OpenAI stated in his personal blog, and in an interview with Bloomberg:
“In 2025, we may see the first AI agents “join the workforce” and materially change the output of companies.”
“I think [artificial general intelligence] will probably get developed during this president’s term"
That same month, Mark Zuckerberg stated,
“Probably in 2025, we at Meta…are going to have an AI that can effectively be a sort of mid-level engineer that you have at your company that can write code.”
The first versions of these will be released within 2025 (one already has been), and most people will dismiss these early agents as useless due to their propensity to make mistakes. But as we have seen, the exponential improvement in these systems will soon make these agents more reliable than human coworkers for most tasks.
And the odds are, this will happen between 2026 and 2028. Companies are already anticipating this. Salesforce has already stopped hiring software engineers.
Dario Amodei, CEO of the leading AI company Anthropic, stated recently:
“By 2025, 2026, maybe 2027, we would expect to see models that are better than most humans at most things”.
What will this technology actually be able to do?
Ok, but…how exactly will these plug into our workplaces? Specific examples of how certain occupations will use this technology can help us appreciate its real impact on the workforce.
The purpose of the case studies below is not to predict exactly how these jobs will change. Rather, they address a question I always had: “how exactly will AI do the things I do in my job today?”. I want to dive into specific tasks for this purpose, basically to sanity-check myself and make it clear that this technology can get deep into the weeds.
AI will impact basically every white collar occupation that primarily uses a computer, including fields with high training requirements such as medical, legal, and scientific research. But for the sake of this article (and my personal knowledge of how these jobs work), I will provide a quick case study into how the following occupations will be impacted:
Software engineers
Marketing associates
Customer support associates
But first, let’s set the context of how a company would actually start using these remote AI coworkers.
Hiring a remote coworker
Even if you hire a genius to do a job, they still need to figure out how the company actually does things and learn all the internal jargon, processes, and norms in order to be useful to their team. This is called onboarding. A significant portion of this onboarding will be done almost instantly with these remote AI coworkers, as they will integrate into the company’s digital repositories of information and ingest everything they can.
This initial “ingestion of data” will include:
All internal knowledge bases and wikis (e.g. Google Drive, Confluence, Notion)
All messages in internal communication platforms (e.g. Slack, email)
All project management boards and task tracking systems (e.g., Jira, Asana, Trello)
Code repositories, commit histories, and associated documentation (e.g., GitHub, Bitbucket)
Customer relationship management systems (e.g., Salesforce, HubSpot)
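To make this ingestion phase concrete, here is a minimal sketch of what it might look like under the hood: pull documents from each connected system into one searchable index the AI coworker can query. The connector functions, the `Document`/`KnowledgeIndex` types, and the sample contents are all illustrative assumptions, not a real product’s API.

```python
# Hypothetical sketch of the initial "ingestion of data" phase.
# Each connector stands in for an integration with one system above
# (Slack, Confluence, GitHub, etc.); contents are made up for illustration.
from dataclasses import dataclass, field


@dataclass
class Document:
    source: str   # e.g. "slack", "confluence", "github"
    text: str


@dataclass
class KnowledgeIndex:
    docs: list[Document] = field(default_factory=list)

    def ingest(self, docs: list[Document]) -> None:
        self.docs.extend(docs)

    def search(self, keyword: str) -> list[Document]:
        # Naive keyword lookup; a real system would use semantic search.
        return [d for d in self.docs if keyword.lower() in d.text.lower()]


def slack_connector() -> list[Document]:
    return [Document("slack", "Reminder: deploys are frozen every Friday")]


def wiki_connector() -> list[Document]:
    return [Document("confluence", "Onboarding checklist for new engineers")]


index = KnowledgeIndex()
for connector in (slack_connector, wiki_connector):
    index.ingest(connector())

assert index.search("deploy")[0].source == "slack"
```

The point of the sketch is that this step is mechanical: anything already written down somewhere digital can be indexed in one pass, which is why it happens “almost instantly” compared to a human reading the same material.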
After this initial phase of data ingestion, the AI co-workers will already be 75-80% ready to start fully contributing to their team. They will be “put to work” immediately, but there is a limiting factor which will mean they will take a few months before they are fully “up to speed” and able to contribute to their maximum capacity.
This limiting factor is the fact that most information is not stored anywhere digitally; it is stored in people’s heads. One of the reasons it takes months, sometimes years for new workers to start contributing to their full capacity is that they are constantly gathering information about the company from other coworkers through conversations and formal/informal meetings. The AI coworker will have to go through this same process, and we can call this the “longer” onboarding.
Let’s assume that for the case studies below, your personal remote AI coworker has been around for 6 months already, and has gone through this initial onboarding, as well as the “longer” onboarding.
Quick case study 1: Junior software engineer
As a junior software engineer, your job mostly involves picking up “tickets”, which are usually descriptions of features that need to be built in the software platform your team is building. A ticket could be as simple as “edit this web page’s font to be size 16”, or as complex as “design and implement a payment system for our online store”.
For most tickets:
The remote AI coworker will be able to take in a ticket, understand which parts of the codebase to modify, and make the required changes within a matter of seconds. Once it produces this code, it will need some guidance from humans to ensure that its solution matches all the “unsaid” requirements of the ticket. There are many things that humans assume and don’t write down, and these will often need to be specified by humans in the “review” process of these tickets.
Essentially, a human will mostly be around to “guide” the AI coworker to do its work properly.
For some tickets:
For some tickets, the AI model will either:
Make progress, but get stuck due to limitations in its knowledge or context required to solve the problem
Solve the problem incorrectly
This is where humans will be required to step in and help out, and unblock the AI systems whenever they get stuck.
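The review loop described above can be sketched as pseudocode-style Python: the agent drafts a change, a human reviews it, and either approves it or sends it back with the “unsaid” requirement spelled out; if the agent stays stuck, a human takes over. The `toy_agent` and `toy_reviewer` functions are stand-ins I invented for illustration, not how any real agent works.

```python
# Illustrative human-in-the-loop ticket workflow, under the assumptions
# stated above. `agent` drafts patches; `reviewer` supplies the unwritten
# requirements as feedback until the patch is acceptable.

def resolve_ticket(ticket, agent, reviewer, max_rounds=3):
    feedback = []
    for _ in range(max_rounds):
        patch = agent(ticket, feedback)
        approved, note = reviewer(patch)
        if approved:
            return patch
        feedback.append(note)  # human writes down the "unsaid" requirement
    return None  # agent is stuck: a human takes the ticket over entirely


# Toy stand-ins for a demo run:
def toy_agent(ticket, feedback):
    # Pretends to fix the patch once it has been told what was missing.
    return ticket + (" [16px]" if feedback else "")


def toy_reviewer(patch):
    if "[16px]" in patch:
        return True, ""
    return False, "font size must be 16px"


result = resolve_ticket("edit web page font", toy_agent, toy_reviewer)
assert result == "edit web page font [16px]"
```

Note where the human sits in this loop: not writing the patch, but supplying the context the ticket never stated, which is exactly the “guiding” role described above.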
Takeaway
Software engineers will transition to primarily be “managing” or “steering” AI models to do what needs to be done, rather than primarily writing code themselves.
Quick case study 2: Marketing associate
As a marketing associate with a few years of experience, you might handle tasks such as drafting text for campaigns, creating social media posts, analyzing campaign performance, and conducting basic research. You are also often communicating with other teams and marketing associates to do your work, so you need to have good interpersonal and communication skills.
Most tasks
AI will generate first-draft copies for emails, social media posts, landing pages, etc. Much of the repetitive writing will be done at the click of a button. It will also be able to effectively communicate with other team members through any platform used by the company, such as email, Slack, Teams, and also be able to attend meetings virtually.
Other tasks
When brainstorming entirely new campaign ideas or responding to nuanced marketing challenges (like crisis communications or highly specialized brand messaging), AI may produce content that is “off-brand” or too generic. A human will need to step in to refine or completely revamp the approach. It will also struggle with tasks like interpreting complex data that lacks context.
Takeaway
Marketing associates will move away from the grunt work of manual data collection and repetitive copywriting. Instead, they will spend more time as editors and strategic reviewers: tweaking the AI’s output to match the brand’s voice, verifying facts, and making final decisions on campaign direction. These are tasks that were usually reserved for higher-level marketing managers.
Quick case study 3: Customer support associate
Customer support associates often spend their days handling queries from customers via phone, email, or chat. These range from simple “Where’s my order?” questions to more complex troubleshooting or escalations.
Most tasks
The AI co-worker will handle basic customer queries such as tracking orders and providing estimated delivery times. For common or previously documented issues (e.g., resetting passwords, clarifying billing questions), AI will be able to offer step-by-step instructions and even schedule appointments.
In these straightforward cases, human support associates would primarily monitor AI interactions, stepping in to confirm unusual requests or placate an upset customer if the AI’s tone or approach starts to falter.
Other tasks
If a customer’s issue involves unique circumstances, the AI may not have enough context or may provide unhelpful answers - a human will need to take over in these cases. Some issues will require a personal touch and empathy as well, which a human will also need to step in for.
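The split described in this case study is essentially a triage rule: routine, documented queries go to the AI coworker, while unique circumstances or emotionally charged messages escalate to a human. Here is a deliberately minimal sketch of that logic; the topic and cue lists are invented for illustration.

```python
# Minimal triage sketch for the customer-support split described above.
# Keyword matching is a placeholder for whatever classifier a real
# system would use.

ROUTINE_TOPICS = {"order status", "password reset", "billing question"}
ESCALATION_CUES = ("angry", "lawyer", "cancel everything", "unacceptable")


def route_query(topic: str, message: str) -> str:
    if any(cue in message.lower() for cue in ESCALATION_CUES):
        return "human"  # needs empathy or a personal touch
    if topic in ROUTINE_TOPICS:
        return "ai"     # common, previously documented issue
    return "human"      # unique circumstances: AI lacks context


assert route_query("order status", "Where is my order?") == "ai"
assert route_query("order status", "This is unacceptable!") == "human"
assert route_query("custom refund", "My situation is unusual") == "human"
```

The interesting design choice is the default: when the system is unsure, it escalates to a human rather than guessing, which is the “monitoring and stepping in” role described above.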
Takeaway
The role of the customer support associate will shift from direct problem-solving to handling escalations and the more nuanced situations that the AI struggles with. While the AI coworker will handle a large volume of routine inquiries quickly and consistently, humans will still need to provide context and empathy.
Takeaways
The obvious takeaway is that in any kind of knowledge work, we will be transitioning away from doing the core work ourselves, and moving towards becoming “managers” of these remote AI coworkers. If you have a junior employee that now handles most of your routine tasks, you will no longer spend most of your time doing those tasks.
This is an entirely new way for humans to offer value in the economy.
It’s an entirely new world.
It can be easy to intellectually understand this but not truly feel the magnitude of it. If you still have doubts about the timelines of this transition, I encourage you to do a quick Google search for “AI agents” and switch to the News tab. This article might be a good place to start.
So what does this transition mean for our jobs? Will white collar workers still be able to offer value in this economy? Will we face mass layoffs? Read part 2 to find out.