Countering several recent reports that suggest AI will take away millions of jobs, researchers at MIT have found that it is still a lot cheaper to use humans for most jobs in the US.
In a paper titled "Beyond AI Exposure," MIT FutureTech, together with The Productivity Institute and IBM's Institute for Business Value, offers a detailed evaluation of tasks viable for automation, specifically in the realm of computer vision.
"We find that at today's costs, US businesses would choose not to automate most vision tasks that have 'AI Exposure,' and that only 23% of worker wages being paid for vision tasks would be attractive to automate," researchers said in the paper.
This report arrives amid growing assertions from various agencies and companies about the potential of AI to supplant human roles. An IMF analysis said that AI could affect up to 40% of jobs worldwide, with the figure potentially rising to 60% in more advanced economies.
A World Economic Forum survey indicated that nearly 75% of companies anticipate adopting generative AI, ranking it just behind humanoid and industrial robots in terms of expected job displacement.
However, this does not rule out AI eventually taking over human jobs. The MIT paper pointed out that AI could become cost-competitive with human labor if the costs of its deployment come down, or if it is deployed via AI-as-a-service platforms that operate at greater scale than individual firms. The overall conclusion of the paper is that AI job displacement will be slower than expected.
"The economics of AI can be made more attractive, either through decreases in the
cost of deployments or by increasing the scale at which deployments are made, for example by rolling-out AI-as-a-service platforms, which we also explore," the researchers said in the paper. "Overall, our model shows that the job loss from AI computer vision, even just within the set of vision tasks, will be smaller than the existing job churn seen in the market, suggesting that labor replacement will be more gradual than abrupt."
The paper also observed that the majority of earlier reports on this topic lack clarity on the specific timeline and scale of automation. This ambiguity stems from the fact that such predictions often fail to address the direct technical feasibility or economic practicality of AI systems. Instead, they rely on a comparative analysis of tasks against AI capabilities to suggest a potential for automation.
The MIT study is significant because the researchers followed an approach that took several important factors into account. They addressed three important shortcomings of AI exposure models to construct a more economically grounded estimate of task automation.
First, they conducted surveys with workers knowledgeable about the tasks to gauge the performance required from an automated system. Then, they developed a model to calculate the cost of constructing AI systems capable of achieving that performance, a crucial step given that highly precise systems can incur significant expenses. Finally, they assessed the economic attractiveness of adopting AI.
"Consider a small bakery evaluating whether to automate with computer vision," the paper said. "One task that bakers do is to visually check their ingredients to ensure they are of sufficient quality (e.g., unspoiled). This task could theoretically be replaced with a computer vision system by adding a camera and training the system to detect food that has gone bad."
For a small bakery employing five bakers with typical annual salaries of $48,000 each, the potential labor savings from automation amount to $14,000 yearly. Given that this sum falls short of the expenses associated with developing, deploying, and maintaining a computer vision system, it is economically unfeasible for this bakery to replace human labor with AI.
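The bakery comparison can be sketched as a simple break-even check. The headcount, salary, and labor-savings figures below come from the article; the annualized cost of the vision system is an illustrative assumption, since the article does not give one.

```python
# Break-even sketch of the bakery example. Labor figures are from the
# article; SYSTEM_COST is an assumed, illustrative annualized cost of
# developing, deploying, and maintaining a computer vision system.

NUM_BAKERS = 5
SALARY = 48_000        # typical annual salary per baker
LABOR_SAVINGS = 14_000 # yearly wages attributable to visual inspection
SYSTEM_COST = 30_000   # assumed annualized vision-system cost

# Share of total wages spent on the automatable task (~5.8%)
task_share = LABOR_SAVINGS / (NUM_BAKERS * SALARY)
print(f"Share of wages spent on the task: {task_share:.1%}")

# Automation is attractive only if it saves more than it costs.
if LABOR_SAVINGS > SYSTEM_COST:
    print("Automation is economically attractive")
else:
    print("Keep human labor")  # the article's conclusion for this bakery
```

Under these assumptions the savings fall short of the system cost, matching the paper's point that a task can be technically automatable yet still not worth automating.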