What does AI really mean for the future of knowledge work?
Extending the industrial revolution to white-collar labor
Industrializing AI is a periodical that I started to share my thoughts on applying AI to improve the performance of businesses, organizations, markets, and societies. I focus on the following themes:
What jobs can AI do well, and under what conditions?
What are the risk and compliance implications of AI adoption?
How can businesses measure ROI on AI initiatives?
In a more theoretical vein, inspired by my doctoral research circa 2004–2009, I also explore the potential for AI to extend the industrial revolution to knowledge work. AI may facilitate the analysis, standardization, and continuous improvement (“kaizen”) of the information processing, decision-making, and coordination tasks that we generally associate with white-collar professionals. This seems likely to unlock productivity gains and wealth creation, perhaps even comparable to those resulting from the rise and refinement of mass production in the 19th and 20th centuries.
Divergent signals: $920B annual upside vs. 95% pilot failure
Perspectives on AI in the mainstream media are wildly divergent. On Monday, one article cited Morgan Stanley research projecting that AI “could generate annual net benefits of roughly $920 billion for S&P 500 companies”, representing value creation potential “equal to 24-29% of current S&P 500 market capitalization.”
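At first glance those two figures may seem inconsistent, since $920 billion is only about 2% of the index’s market capitalization. The reconciliation, as I read it, is that the 24-29% figure capitalizes the annual benefit stream at a market-style earnings multiple. A back-of-the-envelope sketch in Python (the ~$50 trillion market cap is my own rough assumption, not a figure from the article):

```python
# Back-of-the-envelope reconciliation of the two Morgan Stanley figures.
# Assumption: S&P 500 market cap of roughly $50 trillion (a 2025 ballpark);
# the article does not state this number.
annual_benefit = 920e9   # projected annual net benefit, in dollars
market_cap = 50e12       # assumed S&P 500 market capitalization, in dollars

for pct in (0.24, 0.29):
    value_created = pct * market_cap                    # capitalized value
    implied_multiple = value_created / annual_benefit   # multiple on the stream
    print(f"{pct:.0%} of market cap = ${value_created / 1e12:.1f}T, "
          f"implying a {implied_multiple:.0f}x multiple on annual benefits")
```

An implied multiple of roughly 13-16x annual benefits is in the neighborhood of a market earnings multiple, which makes the two quoted figures consistent with each other.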
On the same day, another article cited an MIT NANDA study finding that “for 95% of companies in the dataset, generative AI implementation is falling short” with “the vast majority” of AI pilot programs “delivering little to no measurable impact.”
So will the current wave of Generative AI launch a new era of rapid economic growth, or simply mature into yet another technology among many, like databases or word processors or expert systems? At this point, it’s probably fair to say that no one really knows, but a growing body of rigorous research supports the view that AI’s impact will be far-reaching and profound, perhaps even rivaling the industrial revolution.
AI interviewers yield more offers and higher retention
One way to sort out the confusion is to look at rigorous academic research on AI applications in business. Happily, there is a growing volume of such research, including an illuminating paper also posted on Monday: “Voice AI in Firms: A Natural Field Experiment on Automated Job Interviews,” by Brian Jabarian (University of Chicago) and Luca Henkel (Erasmus University Rotterdam). Thanks to GenAI expert Ethan Mollick for calling attention to this paper. If you don’t already follow Ethan, you probably should – he stands out as one of the most insightful commentators on the uses and misuses of GenAI.
Working with a large recruitment process outsourcing firm in the Philippines, Jabarian and Henkel evaluate the effects of automating the job interview process using GenAI. In this experiment, over 70,000 job applicants were interviewed by either AI or a human recruiter, but human recruiters were still tasked with deciding whether to offer a job to a given applicant. The results are clear: AI already outperforms human interviewers, and the performance gap will almost certainly grow as the technology improves. Here are some of the key findings:
AI interviews are 12% more likely to lead to a job offer, because AI conducts more comprehensive interviews, eliciting more relevant information from the applicant.
After receiving a job offer, applicants interviewed by AI are 18% more likely to actually start the job.
After starting a job, applicants interviewed by AI are 17% more likely to be employed for at least one month.
When given a choice of an AI interviewer or a human interviewer, over two thirds of applicants chose AI.
71% of applicants interviewed by AI had a positive experience, compared to only 52% for applicants interviewed by a human.
Only 3.3% of applicants interviewed by AI felt discriminated against based on their gender, compared to 6.0% of applicants interviewed by a human.
Clearly, for this relatively formulaic interview task, AI performs better than humans. Moreover, these results were achieved even though the AI system encountered a technical failure in 7% of the interviews. As the AI system is refined, the performance differential relative to humans will almost certainly increase.
Interestingly, the use of AI to conduct job interviews did not shorten the cycle time from initial contact to starting a job, because human recruiters took longer to review AI interviews and make offer decisions than to decide based on interviews they had conducted themselves. To improve productivity, the logical next step could be to evaluate whether AI can make job offer decisions as well. While this might raise concerns about bias, the experimental results suggest that offer decisions made by AI could actually be more fair and effective than those made by humans.
Human recruiters underestimate AI performance
The paper provides an instructive demonstration of how to measure ROI on an AI initiative. Importantly, it also shows that such careful measurement is crucial, because subjective impressions of AI performance are likely to be mistaken. After the experiment, the researchers conducted a survey asking the human recruiters to evaluate the effectiveness of the AI interviews.
Recruiters dramatically underestimated the AI’s performance. 36% of human recruiters expected offer rates to be lower for AI interviews, vs. only 15% who (correctly) expected them to be higher. 48% expected applicants interviewed by AI to have lower retention rates, vs. only 13% who (correctly) expected them to be higher.
Returning for a moment to the MIT report mentioned above, could it be that many AI pilots are deemed failures based on the subjective perceptions of humans who are biased against AI? It’s a possibility worth considering, and it underscores the importance of robust, quantitative evaluation criteria for AI initiatives.
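To make that last point concrete, here is a minimal sketch of the kind of quantitative evaluation criterion such an experiment supports: a two-proportion z-test comparing outcome rates between a treatment arm (AI interviews) and a control arm (human interviews). The counts below are hypothetical, chosen only to illustrate a 12% relative lift at roughly the paper’s sample size; they are not data from the study.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts: job offers in the AI arm vs. the human arm
p_ai, p_human, z, p = two_proportion_ztest(4480, 35000, 4000, 35000)
print(f"AI offer rate: {p_ai:.1%} vs. human offer rate: {p_human:.1%}")
print(f"relative lift: {p_ai / p_human - 1:.0%}, z = {z:.1f}, p = {p:.1g}")
```

The same comparison applies to start rates and one-month retention. The point is simply that a randomized control arm and a pre-specified outcome metric leave far less room for the kind of subjective impressions that the recruiter survey revealed.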

