Many years ago, I first read Ray Bradbury’s story “There Will Come Soft Rains,” anthologized in The Martian Chronicles. The story, if you haven’t read it, offers a day in the life of what we now might call the ultimate “smart home” — the last house standing in a California suburb after nuclear devastation — as its robotic enhancements blandly and automatically go through their daily tasks in the absence of their living human overlords. The house even selects the poetry reading for the evening, the six couplets by Sara Teasdale that give the story its name.
When the story first appeared, in 1950, its most arresting message no doubt lay in its post-apocalyptic imagery: the silhouette of a man mowing his lawn, captured by the nuclear flash; the “radioactive glow” given off by the town at night. Far more resonant today, however, seem to me the pictures of automation replacing common human activity: the armies of robotic mice and “scrap rats” that scurry around to clean the house, alert to any detritus that blows in; the bathtub, automatically drawing a bath for the children; the house as “an altar with ten thousand attendants,” blithely serving, unconcerned that “the gods had gone away.”
While the spectre of automation, computers, and robots taking over human activity, and human jobs, has long been a staple of science fiction, mainstream discussion of the possibility — arguably the most pressing economic concern of our time — seems to have picked up steam only in the past couple of years. A watershed moment was a breathless Bloomberg report earlier this year on a 2013 study from the Oxford Martin School suggesting that 47% of the U.S. labor market might be vulnerable to replacement by automation within the next 20 years. The Bloomberg report, aided by the usual spiffy infographic, immediately went viral and sent people stampeding to the appendix of the Oxford paper to see where their own occupations stood on its spectrum of doom.
The results — while no doubt the source of considerable schadenfreude among audiologists, choreographers, personal trainers, and others whose jobs scored low on the "computerizable" index of the Oxford study — were surprising and sobering for the many others for whom the handwriting is apparently on the wall. Most striking, perhaps, is the wide range of positions thought to be at risk: not just factory jobs, so many of which have already fallen prey to robotic replacements, but other positions such as insurance underwriter (99% probability of replacement by automation), restaurant host/hostess (97%), restaurant cook (96%), compensation and benefits manager (96%), budget analyst (94%), and accountant (94%). People holding these latter jobs must find the conclusion especially vexing, as they have no doubt tended to view themselves as among the "knowledge workers" to whom the future is supposed to belong.
And, of course, there are professional drivers of various types, all of whom score high on the vulnerability index. Indeed, the incredible and rapid progress of self-driving cars serves as a metaphor for the larger societal dislocations that seem so suddenly to be coming into view. Only four years ago, when the project was first announced, the Google self-driving car struck one as a fanciful, utopian experiment. But as that experiment has proceeded, the supposedly innumerable, higher-order human capabilities involved in piloting a vehicle have fallen so rapidly to automated solutions that a time when self-driving vehicles will be not only permitted but required appears to be in sight. And while there seems little doubt that self-driving vehicles will bring many benefits to society, it seems equally clear that they are going to throw a lot of people out of work.
The driverless car is only the most visible example of the general trend: The rapid advances in processing power, storage, software and sensor development, and, yes, human talent have reached the point of mutual reinforcement, putting the previously distant dreams of artificial intelligence on the table for all to see and grapple with. And the question is, what does it all mean for the quotidian reality that most of us inhabit, so much of which is defined by our work?
The usual answer from Silicon Valley (which, perhaps not coincidentally, is the place that benefits most from the current trends) is that we will enter some sort of post-work utopia, or that the encroachment of automation on ever-increasing swaths of human value creation will "free us up" to do "higher-order tasks," whatever those might be. (At least, presumably, until machines develop to the point where they take over the higher-order work as well.) The most enthusiastic of the techno-Pollyannas (in this, as in so much else) is Kevin Kelly, the "senior maverick" at Wired magazine, who writes, at the end of a long essay on why we should embrace our robotic overlords:
We need to let robots take over. They will do jobs we have been doing, and do them much better than we can. They will do jobs we can’t do at all. They will do jobs we never imagined even needed to be done. And they will help us discover new jobs for ourselves, new tasks that expand who we are. They will let us focus on becoming more human than we were.
Let the robots take the jobs, and let them help us dream up new work that matters.
For me, there are several problems with this vision of a future of robotically enabled navel-gazing about "work that matters." For one, there seems little evidence that post-industrial capitalism (and particularly the Silicon Valley elites that appear to hold the reins right now) has much interest in it. There's at least some reason to believe that the march to automation has underlain the sluggish return to pre-recession employment levels and the troubling increase in income inequality that have characterized the past 15 years. The vested interests who have benefited have little reason to change things for the betterment of society at large, irrespective of the "change the world" rhetoric that gushes from the communications offices of Silicon Valley firms. The proverbial "one percent" might have the luxury to "dream up new work that matters," but it's hard to be so sanguine about the prospects for the rest of us.
(An interesting variant of Kelly’s vision, by the way, comes from Jaron Lanier, the author of Who Owns the Future? To his credit, Lanier is willing to envision a potential future of “hyper-unemployment, and . . . attendant political and social chaos” that might ensue if no steps are taken to align the current drive toward “software-mediated productivity” with human needs. His apparent “solution,” however, is that in the future, all of us will receive a stream of thousands of nanopayments from the Facebooks of this world, who will, for some reason, start compensating us for the reams of information we all now happily provide for free. This answer, in addition to seeming highly unlikely to come to pass, also strikes me as a rather barren view of human value-creation.)
Another problem with the proposals of insouciant futurists like Kelly, and many others who envision a radical reshaping of human work for the better, is that, while they invariably invoke the likelihood of a “painful transition” between the current, flawed now and the utopian later, they provide little information on how we might negotiate it. For most of us, the “work that matters” is the work we have now, which supports our families and our futures, and we won’t have the option of waiting for a better future when that work dries up. Even if Kelly’s vision were a plausible one, we face an economic version of the “uncanny valley” of robotics: We stand at the edge of an economic precipice, looking at the distant other side, with little indication of how to cross to it.
The economists' answer to the quandary attempts to take comfort from history, pointing to the jobs the Industrial Revolution destroyed and the new ones it ultimately created. Similarly, these voices argue, today's raft of new technologies will destroy many jobs but also create new ones. In an interview, Andrew McAfee, coauthor of The Second Machine Age, made the following observation about the impact of the industrial robot Baxter:
Baxter is taking away some routine manual work in factories. At the same time, he is going to need people to repair him, to add functionality, to train him. So Baxter and his kin are absolutely going to create labor.
This statement, of course, ignores some basic arithmetic: the number of people required to support Baxter must be far smaller than the number of people "he" displaces; if it weren't, Baxter would yield no labor savings, and there would be no reason to deploy him at all.
In the end, McAfee's optimism, expressed in the same interview, that we will, as with the Industrial Revolution, "wind up in another happy equilibrium," rings a bit hollow. The current disruption appears to be operating across a swath of human endeavor unprecedented in its breadth, potentially leaving large numbers of "ordinary" workers without many options. And each new advance in labor-displacing technology seems only to spawn new ideas on how to take things further and extend automation into previously off-limits areas of human endeavor.
We are left, then, with a somewhat darker view: a largely unmanaged replacement of human capital with automation, proceeding at once-unimagined speed, and without a political context equipped to deal with it. A recent post by Nicholas Carr notes work from a number of economists suggesting that we are actually starting to witness a decline in the demand for workers with high cognitive skills — the "higher-order tasks" that automation is supposed to free us for — and that this decline "has indirectly affected lower-skill workers by pushing them out of jobs that have been taken up by higher-skilled workers displaced from cognitive occupations." As computers increasingly take on analytical tasks and tasks formerly requiring human judgment, Carr suggests, it's possible that workers across the board "are being pushed down the skills ramp," not up.
While hardly an uncontroversial figure, the economist Lawrence Summers seems to have captured the crux of the baleful situation rather succinctly just a few weeks ago, at a “Conference on Inclusive Capitalism” organized by the Financial Times. Unencumbered by future visions dreamed up by an entitled Silicon Valley elite, Summers candidly stated that “We are seeing less and less opportunity for what average people — people lacking in certain skills — are going to be able to do.” And, he added, part of the answer lies in “channeling technological change so it reinforces the abilities of all, not just levers the abilities of those who are most able.”
“That is going to be the largest challenge for capitalism going forward,” Summers concluded. “And the first step is to recognize it.” Let’s hope that someone is listening.