Software is Eating Labor: How LLMs Are Transforming the Economics of Work
The “Software is eating labor” thesis is driving a lot of discussion and investment at the moment. For more context, see Sequoia’s analysis or this A16Z podcast episode.
The underlying idea is that LLMs let software do a lot of work that previously only people could do, and do it at a lower cost.
This expands the market for software, because for a growing set of tasks it becomes more economically sensible to have an AI do the work than to have a person do it.
You can roughly think of what people do as falling into three categories:
- Stuff which is basically words (e.g., talking and putting words into a computer)
- Stuff which is moving things around in the physical world (everything from waiting tables to brain surgery)
- Stuff which is forming relationships with other human beings
Previously, software could do a little of the stuff which is basically words, and very little of moving things around in the real world or forming relationships with other human beings.
It could facilitate moving things around in the world (e.g., the admin and planning component) and make it easier for humans to form relationships (e.g., CRM), but it could do little beyond that.
It could handle a little of the word-based work as long as it didn't need to actually understand those words: executing computer code a human had written, managing support tickets a human was dealing with, or storing HR data that a human had created and would interpret.
In the medium term, the big shift driven by large language models is that software can now understand words.
So where a task is mostly made up of understanding words and taking actions that amount to manipulating words, software can now do far more of the overall task.
Outside of software engineering, customer support is a good example. Whether it’s via email or phone, agents can now — in many situations — do as good a job as a person at conversing with someone asking for support, taking actions to solve their problem, and managing that back and forth.
Where they can't take the actions, or an action needs human oversight, they can hand it off to a person. So it isn't a full replacement, but maybe 90% of the work can be done by agents, which effectively means one human can handle 10x the load if they use an agent.
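As a sketch of that split, here's a toy routing function (everything in it is hypothetical: the intents, the allow-list, and the idea that routing reduces to a single intent label are illustrative assumptions, and a real system would sit behind an LLM and a ticketing API):

```python
# A toy sketch of the agent/human split in support.
# All names and categories here are hypothetical assumptions.

AGENT_CAN_HANDLE = {"password_reset", "invoice_copy", "shipping_status"}
NEEDS_HUMAN = {"refund_over_limit", "account_deletion", "legal_complaint"}

def route_ticket(intent: str) -> str:
    """Decide whether the agent resolves a ticket itself or
    escalates it to a person."""
    if intent in AGENT_CAN_HANDLE:
        return "agent"   # converse, take the action, close the ticket
    if intent in NEEDS_HUMAN:
        return "human"   # escalate with the full conversation context
    return "human"       # when unsure, default to a person

tickets = ["password_reset"] * 9 + ["refund_over_limit"]
routed = [route_ticket(t) for t in tickets]
print(routed.count("agent") / len(tickets))  # 0.9, the "90%" from above
```

The interesting design question is the default: an agent that escalates anything it isn't sure about trades away some of the 10x leverage for safety.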
So one way to categorize work done by humans at the moment is “how much of what they do is made up of manipulating words.”
Any given job typically breaks down into two components:
- The job itself (answering support tickets, brain surgery, waiting tables)
- The admin that goes with the job (planning, coordinating, incident reports, TPS reports, etc.)
For answering support tickets, imagine 10% of your time goes on admin and 90% on answering tickets. If 90% of the admin and 90% of the tickets fall into the category of "an AI can do this by manipulating words," then both components shrink by 90%, and one support person can handle about 10x the workload.
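To make that arithmetic explicit, here's a minimal sketch (the function and the fractions are illustrative, not measured data): the human time left over is whatever the AI can't absorb from each component, and the capacity multiplier is the reciprocal of that remainder.

```python
def capacity_multiplier(time_split, automatable):
    """How many times more work one person can handle if an AI
    absorbs the automatable share of each job component.

    time_split:  fraction of the person's time per component (sums to 1.0)
    automatable: fraction of each component an AI can do
    """
    # Human time remaining is whatever the AI can't take over.
    remaining = sum(t * (1 - a) for t, a in zip(time_split, automatable))
    return 1 / remaining

# Support example from above: 10% admin, 90% tickets,
# with 90% of each handled by agents.
print(capacity_multiplier([0.1, 0.9], [0.9, 0.9]))  # ~10x
```

Notice that the 10x only falls out because both components are 90% automatable; if the admin were untouched, the multiplier would drop to about 5.3x.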
Customer support and law are the two examples that get used a lot at the moment because they are somewhat outliers: they are mainly about manipulating words, and an awful lot of people do them. So there's a big opportunity to build LLM-powered software there.
These get noticed first primarily because they also have a very minimal component of "moving things around in the real world," and without a big leap forward in robotics, AI still struggles there.
But probably the more exciting opportunity is the much larger number of jobs that have an essential component of moving things around in the real world, yet where a disproportionate amount of time is spent not on that component but on admin or sub-tasks that are essentially moving words around.
Nursing is a great example: studies suggest nurses spend upwards of 25% of their time on generic administrative tasks. In hospitality, roles you'd typically consider primarily frontline can spend upwards of 33% of their time on such tasks.
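Plugging the nursing figure into the sketch from earlier makes the point (the 25% admin share comes from the studies above; the assumption that LLMs absorb 90% of that admin, and none of the physical work, is mine):

```python
# Reusing capacity_multiplier from the support sketch above.
# ~25% admin, 75% hands-on care; assume (hypothetically) that LLMs
# absorb 90% of the admin and none of the physical work.
print(capacity_multiplier([0.25, 0.75], [0.9, 0.0]))  # ~1.29x
```

A 1.29x multiplier sounds modest next to support's 10x, but applied across a workforce the size of nursing, it's a far bigger absolute pool of freed-up hours.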
This is where I think we’ll see some of the most impactful and economically significant advances from LLMs over the next 2-4 years.
If you found this interesting, you might also enjoy:
- AI and the Future of Software: Thoughts on LLMs and What’s Next — My broader thoughts on where AI is heading and what’s coming next
- Unreasonable Things I Believe About LLMs — Some contrarian takes on large language models