Are You Replaceable?
The answer might surprise you.
Last updated
Given what you know about AI, ML and the importance of data, we need to define a framework for spotting occupations that are likely to get automated. On the one hand, as an entrepreneur, it may mean your barrier to entry is lower. You won't have to hire as many people (or any, in some cases), and that equals more potential opportunity.
For employees, the situation is a bit more grim, since your paychecks could end up on the chopping block. And it won't be a matter of finding a new employer: you'll need to come up with a whole different career to pursue! Anyone who is mid-career or later will have to think long and hard about this subject.
Here is a simple set of questions that you can use as a heuristic for figuring this out:
Does the work involve predictable patterns of activity? If it's very predictable, then it's a prime target for all kinds of automation.
Are there already frameworks in place for handling the work? Frameworks often translate into structures that can be fed into an algorithm.
How difficult is it to gather data about what is being done? If it's easy to gather data, it may be easy to train a machine learning system.
In case you aren't sure about whether data is being (or even can be) collected, think about how many machines and/or pieces of software you interact with on a daily basis. This is important, because jobs that involve lots of tech are jobs where data can be easily collected.
A related question is: "How ambiguous is my work?" Even if there are lots of data points available, a job can be very difficult to automate when there's a lot of human-induced ambiguity involved. These types of jobs are usually heavy on human-to-human interaction, and tend to revolve around systems that are not easy to define.
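The heuristic above can be sketched as a toy scoring function. This is purely illustrative: the function name, weights, and the idea of multiplying by an "ambiguity discount" are my own assumptions, not anything the framework formally specifies.

```python
# Hypothetical sketch: turn the three heuristic questions (plus the
# ambiguity follow-up) into a rough automation-risk score.
# The weights below are illustrative assumptions, not a real model.

def automation_risk(predictable: bool, has_frameworks: bool,
                    easy_to_collect_data: bool, ambiguous: bool) -> float:
    """Return a rough 0.0-1.0 automation-risk score for a job."""
    score = 0.0
    if predictable:
        score += 0.35   # repetitive work is a prime automation target
    if has_frameworks:
        score += 0.25   # existing structure feeds directly into algorithms
    if easy_to_collect_data:
        score += 0.40   # data availability enables ML training
    if ambiguous:
        score *= 0.4    # heavy human ambiguity sharply lowers the risk
    return round(score, 2)

# Trucking: predictable routes, heavy instrumentation, little ambiguity.
print(automation_risk(True, True, True, False))    # high score
# Attorney: hard-to-capture data, lots of human ambiguity.
print(automation_risk(False, False, False, True))  # low score
```

The exact numbers are meaningless; the point is that the questions compose into a rough ranking, which is all a heuristic needs to do.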
Given this framework, let's walk through some examples and see if we can spot some potential targets for automation.
Trucking is a multi-billion dollar industry that relies heavily on human labor to make money. Companies need to move physical goods across land, and trucking does that by putting a human being behind the wheel of a large, loud truck that will get those goods to their destination.
Unfortunately for those companies, trucking has to contend with the fact that people aren't that great at driving. Drivers make navigational errors, don't get enough sleep, take drugs or alcohol, and get into crashes. Even if a driver does everything right, they have to contend with other drivers on the road who may create problems for them.
Modern technology has reduced the frequency of some of these problems, but the human element continues to be the problem. Fortunately for trucking companies (and unfortunately for truckers), it's now possible to gather substantial amounts of data about driving that can be used to automate the task.
Driving a truck is perhaps the definition of an instrumented job, since the entire role of the human revolves around interacting with a machine. The primary bottleneck is our ability to handle the uncertainty of a road environment filled with humans. Progress has been made, but there's still some way to go. Once that bottleneck has been addressed, driving jobs like truckers, bus drivers, cabbies, etc. will likely disappear or at least be much less common.
Fast food is a ripe target for automation because it's an assembly line for food. Getting food prepared and delivered to the customer is a well-defined, predictable, and repetitive process. It looks like this:
Customer orders food; in this case, we'll say he orders a burger with fries and a drink.
Employee A assembles a burger using a pre-determined recipe (grab buns, add 1 tbsp of ketchup, etc.).
Employee B cooks fries using a pre-determined recipe (drop fries into oil, cook for 30 seconds, add 1 tbsp of salt, etc.).
Employee A and B drop their finished products in a paper bag, which is grabbed by Employee C.
Employee C fills a cup (of a pre-determined size) with Pepsi/Coca-Cola/whatever, then hands the cup and bag to customer.
This is a simplification of the process, but not by much. Each step of that algorithm could be done better, faster, cheaper, and with more precision by a machine than by a human. As such, it's clear that fast food will eventually be a domain dominated by machines.
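The steps above map almost directly onto code, which hints at why the process is so automatable. Here's a toy sketch; every function name and recipe detail is invented for illustration:

```python
# Toy model of the fast-food workflow described above.
# Every step is a fixed, well-defined procedure -- exactly the kind of
# process machines handle well. Names and recipes are illustrative only.

def assemble_burger() -> str:
    steps = ["grab buns", "add patty", "add 1 tbsp ketchup"]
    return "burger(" + ", ".join(steps) + ")"

def cook_fries() -> str:
    steps = ["drop fries into oil", "cook", "add 1 tbsp salt"]
    return "fries(" + ", ".join(steps) + ")"

def fill_drink(size: str, brand: str) -> str:
    return f"{size} {brand}"

def fulfill_order() -> dict:
    # Employees A, B, and C in the text are just three fixed subroutines.
    bag = [assemble_burger(), cook_fries()]      # Employees A and B
    drink = fill_drink("medium", "cola")         # Employee C
    return {"bag": bag, "drink": drink}

order = fulfill_order()
print(order["drink"])
```

Notice there's no branching on anything ambiguous: no judgment calls, no human-to-human negotiation. That's the signature of work that fails the heuristic.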
Fast food giants like McDonald's and Wendy's have been working on this for years. It's one of the few occupations that I'm willing to put in the "definitely getting automated" category because the tech is already here and it just needs wider adoption.
I've met quite a few people who are confident that writing code will not be automated in the future. Not coincidentally, nearly all of those people were either already programmers or learning the trade. As a former programmer myself, I disagree.
The reason I say this is simple: there's a boatload of data out there about how programs are built (just look at Github if you don't believe me), and most modern programs tend to use the same kinds of components (UI, database, encryption, etc.). There's even a term in programming for how engineers are expected to structure their code to deal with specific types of problems: design pattern.
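To make the term concrete, here's a minimal version of one classic design pattern, often called "Strategy": a fixed structure into which interchangeable behavior is plugged. The domain (discounts) and all names are generic examples of mine, not from the text.

```python
# Minimal illustration of the "Strategy" design pattern: the surrounding
# structure stays fixed while a piece of behavior is swapped in.
# The discount example is invented purely for illustration.
from typing import Callable

def standard_discount(price: float) -> float:
    return price * 0.95

def loyalty_discount(price: float) -> float:
    return price * 0.80

def checkout(price: float, discount: Callable[[float], float]) -> float:
    """The checkout flow never changes; only the strategy plugs in."""
    return round(discount(price), 2)

print(checkout(100.0, standard_discount))  # 95.0
print(checkout(100.0, loyalty_discount))   # 80.0
```

It's exactly this kind of predictable, named structure that makes code an appealing target for pattern-hungry ML systems.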
Much of the economic opportunity in programming comes from building systems that take data from one place, transform it into something else, and then deliver it to end-users. This process is called Extract, Transform, Load (or ETL for short).
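A minimal ETL sketch shows how formulaic this work can be. The data source and the transformation rules below are invented for illustration; real pipelines differ in scale, not in shape.

```python
# Minimal sketch of the Extract, Transform, Load (ETL) pattern described
# above. The CSV source and the transformation rules are invented examples.
import csv
import io

def extract(raw_csv: str) -> list:
    """Extract: pull rows out of a source (here, an in-memory CSV)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list) -> list:
    """Transform: normalize names and drop empty records."""
    return [
        {"name": r["name"].title(), "sales": int(r["sales"])}
        for r in rows
        if int(r["sales"]) > 0
    ]

def load(rows: list) -> dict:
    """Load: deliver the result to end-users (here, a summary dict)."""
    return {r["name"]: r["sales"] for r in rows}

raw = "name,sales\nalice,30\nbob,0\ncarol,12\n"
result = load(transform(extract(raw)))
print(result)  # {'Alice': 30, 'Carol': 12}
```

Each stage is a well-defined mapping from inputs to outputs, which is precisely the kind of structure an ML system trained on thousands of similar pipelines could plausibly learn to generate.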
In theory, an ML system could be built that simply looks for patterns in open source code repos and then outputs highly optimized programs that can solve common problems. This could be particularly true for ETL problems, since designing those projects is about defining where your data comes from, what you want to do with it, and how you want to deliver the end product. With enough examples, it seems likely that an ML system could take the place of the army of programmers now doing that work.
GitHub Copilot has already demonstrated that this is the future trend for programmers. While it is ostensibly an "AI coding partner," with time there's potential for it to replace coders entirely.
However, there will still be room for some human programmers. At the top of that pyramid will be the people who build the ML systems: highly skilled, experienced professionals. They'll be the only ones who understand how to build and maintain such high-value systems, and their salaries will be high.
You can already see this dynamic in action by looking at the offerings from companies like Algorithmia. Their platform is designed as a marketplace that allows businesses to "borrow" algorithms (including ML algorithms) and use them for a fee, saving them the cost of hiring an expensive programmer to build a piece of functionality. Even though there are programmers making the algorithms sold on the platform, they're essentially replacing legions of other programmers by offering their work in this way.
The other side of the spectrum is not so pretty. For programmers who don't have much skill (or at least aren't skilled enough relative to the people building the ML systems), their time will most likely be spent customizing what the ML systems spit out. That work will not require massive amounts of brain power, which means it is unlikely to be highly paid.
This is all speculative, of course, but I think it's safe to say that programmers (outside of the elite-level coders) are in for a bumpy road ahead.
People who write the words you see on ads, landing pages, and other forms of marketing materials are quickly seeing their ability to survive in the marketplace disappear. With the appearance of ChatGPT and a variety of marketing-specific tools built on the OpenAI GPT systems, it's become trivial to generate this form of writing.
For now, the role of the copywriter has become that of a copyeditor: they feed some ideas into the machine, then edit or add to the output. While they can certainly still add some value in places, it's hard to argue that becoming a copywriter or hiring a full-time copywriter is a wise decision at this point.
Here's where things start to favor humans again. Attorneys have a job that's difficult to gather data about and involves all kinds of ambiguity. This is because their job revolves around understanding archaic, complex legal systems and persuasion. ML software can do things like sort through court rulings to find specific data, automate certain paperwork-related processes and rate lawyers based on their performance (using metrics like number of guilty verdicts), but capturing the value of an attorney on the whole will be hard.
Lawyers have to consult with clients, come up with a story based on incomplete or distorted evidence, and then persuade other human beings to believe that story. These are not easily defined, repetitive processes. Each case is going to present new challenges, and the data that can be collected for each one is not necessarily going to be helpful for future cases.
It is reasonable to assume that the repetitive grunt work done by paralegals will get automated away, but the attorneys themselves are going to be hard to replace. Their work is simply too ambiguous and too human-oriented.
Like lawyers, police officers have a job that's incredibly difficult to automate. This is because their work revolves around complex, ambiguous interactions between people. Couple that with an equally complex, ambiguous legal system which has to be followed and you'll quickly see how difficult it would be to build robot cops.
Consider how hard it would be to automate the most controversial (and yet central) component of the job: the use of force. Police officers are unique in that they have to make decisions, usually very quickly, about whether it's acceptable to pull out their guns and shoot other human beings. Their decisions are sometimes wrong (for a variety of reasons) and innocent people die.
Even though there are certainly cases where the officer is acting in bad faith, much of the time these wrongful shootings are due to the need to make split-second decisions about ambiguous situations. Each situation is unique, and it would be incredibly difficult to build a computer system (even an ML system) that could accurately detect all the nuances needed to make a "good" decision to shoot.
Officers have been, and will continue to be, augmented with technology (such as cameras on their cars and uniforms), but the core of their job will be difficult to automate for the foreseeable future.
An interesting idea that has proven to be quite accurate over the years is Moravec's paradox. The TLDR of this concept is that we're quite bad at building machines which do things we view as easy, but good at building machines that handle tasks we view as difficult.
Consider this: unless you're an infant (and if you're reading this as a baby, I'm impressed), walking, talking and grabbing things aren't tasks you have to think too hard about. We do them subconsciously, as they are highly evolved abilities that have been around since the beginning of our species. Nobody has to really teach you how to do any of those things, unless you've suffered some kind of injury and have to re-learn them.
On the other hand, playing chess and writing code are not evolutionarily ancient skills. Because they don't come naturally to us, we must be taught how to do them, and many people find both tasks very difficult.
Which category is harder to build machines for? It turns out to be the first, largely because we aren't even sure how to describe those kinds of actions to a computer. We've poured who-knows-how-much time and money into building robots that can walk like humans, but after decades of trying we still haven't nailed it.
Translate this to work, and you can see where the pattern is emerging. Blue collar jobs, which everyone thought would be the first to go, tend to be the hardest to automate. How can you create a machine that crawls under a sink and does the work of a plumber when just about every house and plumbing situation has its own weird wrinkles?
White-collar workers like designers and writers are rapidly losing their jobs to machines, despite the fact that everyone thought "creative" jobs would be the safest. It turns out that creativity isn't what matters; ambiguity is.
Despite all of this, it's worth keeping in mind that AI as it stands today is primarily a means of augmentation. Survival in this particular phase of the technology does require you to actively try to anticipate where total replacements will happen, but it also demands that you understand how to combine people and machines.
You can ask three questions to get a rough idea about how in danger a given profession is: Does the work involve predictable patterns of activity? Are there already frameworks in place for handling the work? How difficult is it to gather data about what is being done?
Jobs with high levels of ambiguity and uncertainty are the least likely to be automated, while those that are structure-heavy and prone to data collection are the most likely to be automated.
Moravec's paradox describes this dynamic well, as it predicts that skills we perform unconsciously will be far harder to automate than skills we have to train ourselves to do.
For right now, AI is mostly an augmentation technology and not a full-blown replacement for people.