From Bruegel by Jeremy Bowles:
Who will win and who will lose from the impact of new technology on old areas of employment? This is a centuries-old question, but new literature, which we apply here to the European case, offers some interesting implications.
The key takeaway is this: even though the European policy impetus remains focused on bolstering persistently weak employment figures, there is an important second-order concern to consider. Technology is likely to dramatically reshape labour markets in the long run and to cause reallocations in the types of skills that the workers of tomorrow will need. To mitigate the risks of this reallocation, it is important for our educational systems to adapt.
Debates on the macroeconomic implications of new technology divide loosely between the minimalists (who believe little will change) and the maximalists (who believe that everything will).
In the former camp, recent work by Robert Gordon outlines the hypothesis that we are entering a new era of low economic growth, in which new technological developments will have less impact than past ones. Against him are the maximalists, like Andrew McAfee and Erik Brynjolfsson, who predict that dramatic economic shifts will result from the coming of the ‘Second Machine Age’. They expect a spiralling race between technology and education in the battle for employment, one which will dramatically reshape the kinds of skills required of workers. According to this view, the automation of jobs threatens not just routine, rule-based tasks but also, increasingly, jobs defined by pattern recognition and non-routine cognitive tasks.
It is this second camp – those who predict dramatic shifts in employment driven by technological progress – that a recent working paper by Carl Frey and Michael Osborne of Oxford University speaks to, and which has attracted a significant amount of attention. In it, they combine elements from the labour economics literature with techniques from machine learning to estimate how ‘computerisable’ different jobs are. The gist of their approach is to modify the theoretical model of Autor et al. (2003) by identifying three engineering bottlenecks that prevent the automation of given jobs: creative intelligence, social intelligence, and perception and manipulation. They then classify 702 occupations according to the degree to which these bottlenecks persist, that is, the degree to which technological advances – including machine learning (ML), developments in artificial intelligence (AI) and mobile robotics (MR) – will find them hard to overcome.
Using these classifications, they estimate the probability (or risk) of computerisation – that is, the probability that a job is “potentially automatable over some unspecified number of years, perhaps a decade or two”. Their focus is on “estimating the share of employment that can potentially be substituted by computer capital, from a technological capabilities point of view, over some unspecified number of years.” If a job presents the above engineering bottlenecks strongly, then technological advances will have little chance of replacing a human with a computer; if the job involves little creative intelligence, social intelligence or perception and manipulation, then there is a much higher probability of ML, AI and MR leading to its computerisation. These risks range from telemarketers (99% risk of computerisation) to recreational therapists (0.28% risk of computerisation).
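To make the logic concrete, the mapping from bottleneck scores to a risk of computerisation can be sketched in a few lines of Python. This is not Frey and Osborne's actual model – they train a Gaussian process classifier on O*NET job descriptors – and the logistic form, the weights and the example scores below are all invented for illustration.

```python
import math

# The three engineering bottlenecks identified by Frey and Osborne.
BOTTLENECKS = ("creative_intelligence", "social_intelligence",
               "perception_manipulation")

def computerisation_risk(scores, weights=(-3.0, -3.0, -3.0), bias=4.0):
    """Map bottleneck scores in [0, 1] to a toy probability of automation.

    High scores on any bottleneck push the risk towards 0; a job with few
    bottlenecks (scores near 0) gets a risk near 1. The weights and bias
    here are arbitrary placeholders, not estimated parameters.
    """
    z = bias + sum(w * scores[k] for w, k in zip(weights, BOTTLENECKS))
    return 1.0 / (1.0 + math.exp(-z))

# A routine, rule-based job (low on every bottleneck) scores a high risk,
# while a job built on social and creative skills scores a low one.
telemarketer_like = {k: 0.1 for k in BOTTLENECKS}
therapist_like = {k: 0.9 for k in BOTTLENECKS}
```

The design point the sketch captures is that risk is monotonically decreasing in each bottleneck: any occupation that leans heavily on creativity, social interaction or fine perception and manipulation is pushed towards the safe end of the scale.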