Bursting the Optimistic Technology Bubble

By Evan Selinger | July 31, 2015

Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford

MARTIN FORD’s Rise of the Robots: Technology and the Threat of a Jobless Future is a long-form exercise in dystopian scenario planning. Given how that genre reads, reviewers have naturally focused on this book’s grim prognosis and radical solution for avoiding disaster. They’ve assessed the likelihood of innovation-driven capitalism collapsing under the weight of mass unemployment and decimated consumer purchasing power and confidence. And they’ve judged the cost and consequences of giving citizens a guaranteed minimum income as a safety net and as a risk-taking incentive.


Tempting as it is to offer my own predictions, I think other avenues need to be explored. In particular, there’s still much to learn from Ford’s technology-driven concern that “a great many people will do everything right […] and yet still fail to find a solid foothold in the new economy.”


Ford has thought long and hard about automation. Back in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (2009), he cast aspersions on the so-called “Luddite Fallacy,” a development economics concept often used to dismiss arguments about the end of work: “while technological progress will cause some workers to lose their jobs as a result of outdated skills, any concern that advancing technology will lead to widespread, increasing unemployment is, in fact, a fallacy.” Ford moved the conversation about jobs in an interesting direction by questioning whether history is really caught in an ever-repeating loop.


Think of it this way: The market seems resilient because new jobs are created when gains in technological efficiency render older ones obsolete. As the much-repeated story goes, after agricultural work became mechanized, society didn’t grind to a halt. On the contrary. While the rise of factory-style farming may have created temporary employment problems, over the long haul those who would have been farmhands were incentivized to get different jobs, including ones using new machines.


Sure, the world lost a sizable number of manually laboring hands that had been planting and harvesting on small, locally owned farms. Sustainability advocates then lamented eroding agrarian sensibilities, the diminished quality of modern agribusiness food, and the increasingly normalized maltreatment of animals. But, all this said, deploying a narrow neo-economic lens has meant focusing instead on the flip side of loss — that is, on how well the engine of prosperity chugs along. Wages rise and the price of many desired goods drops. It’s all good from this perspective.


But even if we perform the narrow economic squint, prosperity isn’t guaranteed to last forever. What if the familiar two-step pattern of old-job-ending/new-job-beginning gets disrupted and the old happy endings are no more? What if the majority of workers displaced by advancing technological modes of production don’t get opportunities because their hands, hearts, and minds are too slow or too costly to employ? What if meeting the demands of growth incentivizes companies to hire technological surrogates? Indeed, what if advanced machines stop needing human operators to get jobs done, and the soulless logic of capitalism fuels ever-growing inequality?


These questions may seem purely hypothetical. But Ford sees plenty of evidence that justifies soberly posing them at this particular historical moment, when “machines themselves are turning into workers, and the line between the capability of labor and capital is blurring as never before.”


On the one hand, old stories, like the one about agricultural work, are getting updated. For example, the Japanese are now using devices to identify ripe strawberries by color and pluck them within a matter of seconds. On the other hand, automation isn’t just taking over repetitive, assembly-style mechanical labor. In other words, it isn’t just limiting options for human employment in expected sectors. Rather, Ford claims that the future of the service industry writ large might well be affected, as exemplified by Momentum Machines. This San Francisco start-up is poised to introduce cost-cutting technology designed to whip up, in rapid-fire fashion, gourmet burgers “at fast food prices.” Such technology won’t just supply restaurants. Momentum Machines’s business plan includes stores and “perhaps even vending machines.”


More shocking, Ford proclaims that even white-collar jobs are becoming precarious. Everywhere he sees signs of their impending obsolescence. Forbes and other venues are publishing computer-generated news. Law firms are using eDiscovery software to analyze documents. Professors are handing off essays to computers to grade. The London Symphony Orchestra played a well-received musical composition created by a computer.


The technological developments driving these examples lead Ford to conclude “there are good reasons to believe that America’s economic Goldilocks period has […] come to an end.”


A “Goldilocks economy” exists “when growth isn’t too hot, causing inflation, nor too cold, creating a recession.” Like Goldilocks’s porridge, it is “just right.” Ford appropriates the related idea of a “Goldilocks period” from Jared Diamond’s account of 19th-century European settlers’ attempts at agriculture in Australia:


Like American economists in the 1950s, the Australian settlers assumed that what they were seeing was normal, and that the conditions they observed would continue indefinitely. They invested heavily in developing farms and ranches on this seemingly fertile land.


Within a decade or two, however, reality struck. The farmers found that the overall climate was actually far more arid than they were initially led to believe. They had simply had the good fortune […] to arrive during a climatic “Goldilocks period” — a sweet spot when everything happened to be just right for agriculture. Today in Australia, you can find the remnants of those ill-fated early investments: abandoned farm houses in the middle of what is essentially a desert.


Heated debate surrounds the future of the United States, and Ford marshals plenty of evidence to justify pessimism. To give two examples, he emphasizes that, “as of 2013, a typical production or nonsupervisory worker earned about 13 percent less than in 1973 […] even as productivity rose by 107 percent and the costs of […] housing, education, and health care have soared.” And the other example: “The first decade of the twenty-first century resulted in the creation of no new jobs” and “income inequality has since soared to levels not seen since 1929.”


Reviewing all of Ford’s statistics as well as chasing down interpretations that challenge his analyses would lead us down a well-traveled economist’s rabbit hole. To get a fresh perspective, we ought instead to consider whether Ford is right in identifying a techno-optimism bubble that impedes smart discussion of the economic consequences of innovation.


Ford only uses the word “bubbles” a few times in his book. An important use occurs when he differentiates recurring economic issues from the stark new problems that technology poses. Consider the following paragraph:


Among practitioners of economics and finance, there is often an almost reflexive tendency to dismiss anyone who argues that this time might be different. This is very likely the correct instinct when one is discussing those aspects of the economy that are primarily driven by human behavior and market psychology. The psychological underpinnings of the recent housing bubble and bust were almost certainly little different from those that have characterized financial crises throughout history […] It would be a mistake, however, to apply that same reasoning to the impact of advancing technology.


Here he runs the risk of conflating two different things: 1) whether something new is actually happening with technology itself; and 2) whether longstanding psychological tendencies make it hard to detect technological novelty. Indeed, if many readers find Ford’s skepticism ridiculous, maybe their ridicule goes beyond standard disbelief supported by wonky economic calculations. Maybe naysayers like Ford have a hard time being taken seriously because of the “social-psychological phenomena” that give rise to contagiously optimistic bubble-thinking.


Vincent Hendricks, professor of Formal Philosophy and director of the Center for Information and Bubble Studies at the University of Copenhagen, notes: “The term ‘bubble’ is no longer confined to just financial movements […] [I]t can refer to irrational, collective, aggregated behaviour, beliefs, opinions or preferences based on social proof in all parts of society.” According to Hendricks, “boom-thinking, group-thinking, herding [and] informational cascades” are some of the main mechanisms that lead to bubbles forming and not bursting until devastating damage occurs. Perhaps these or related mechanisms partially account for the tendency that most bothers Ford — the tendency to anchor the future in “lessons gleaned from economic history.” Those who anchor themselves this way don’t concede or recognize that “the question of whether smart machines will someday eclipse the capability of average people to perform much of the work demanded by the economy will be answered by the nature of the technology that arrives in the future.”


When seen through the lens of bubble phenomena, it does indeed seem that many people may be in denial about the likely devaluation of human labor with advancing information technology. Professionals and intellectuals of various stripes are happy to concede that low-skill and low-education jobs are vulnerable to automation (and, of course, globalization), but assume their own jobs will be shielded from technological takeover. They cling to the outdated belief, shouted across the marketplace and educational sectors, that investing in higher education of an intellectually demanding sort will ensure success in those jobs that computers can’t dominate.


It’s an understandable conviction. If it’s wrong, then today’s lucrative positions, now showered with social prestige, will be eliminated. Widespread advice about how to stay a step ahead of machines will come to seem hopelessly idealistic. And cherished convictions about sophisticated human judgment being irreducible to computational processes will be nothing more than Pollyannaish dogma. Ford challenges comforting status quo thinking on the subject by declaring:


As the technological frontier advances, many jobs that we could today consider nonroutine, and therefore protected from automation, will eventually be pulled into the routine and predictable category. The hollowed-out middle of the already polarized job market is likely to expand as robots eat away at low-wage jobs, while increasingly intelligent algorithms threaten high-skill occupations.


Ford does concede that some fields will better withstand the automation onslaught than others. For example, he characterizes healthcare as an especially robust sector. While those “areas of medicine” that “don’t require interaction with patients” (e.g., radiology) are on shaky ground, healthcare workers who physically and conversationally engage the sick are in much better shape.


In the end, however, Ford depicts resilient professions as rare. He foresees a tidal wave of apocalyptic destruction arising from potent overlaps between big data (exploding through cloud computing) and artificial intelligence (amplified by deep learning):


organizations are collecting incomprehensible amounts of information about nearly every aspect of their operations, and a great many jobs and tasks are likely to be encapsulated in that data — waiting for the day when a smart machine learning algorithm comes along and begins schooling itself by delving into the record left by its human predecessors.


Like escaped prisoners too busy focusing on where they’re going to notice the trail of clues they’re leaving behind for their captors, workers everywhere, according to Ford, are creating massive digital footprints that reveal how they deliberatively and intuitively solve problems. Ever-smarter machines will follow their tracks in order to turn the data into reliable, computational heuristics and problem-solving techniques.


And that, in a nutshell, is the root of Ford’s diagnosis of the problem. We’re human. In the absence of specialized training that pushes against our natural inclinations, we’re biased to see the world through human eyes. From such a perspective, today’s accomplishments will be met or exceeded by future generations of humans. But the world looks different from a machine’s perspective. The machine’s focal point will be massive databases and other digital repositories overflowing with enough raw material to create templates of skill that don’t require humans for their operation.


Ford’s bubble hypothesis suggests we’re too optimistic about the future for three reasons: we’ve got all-too-human sensibilities; we’re biased to believe experts, including economists, who channel such sensibilities; and there’s little incentive to depart from status quo thinking because, if we do, then we’re left with only two rational options — to pursue healthcare work; or, against the odds, to become precisely those disruptive entrepreneurs creating technologies that speed up our collective impotence. These aren’t great choices for a lot of us. As for shifting to a collective action solution, like demanding a guaranteed income for all, this seems inconceivable in the American political context.


One way to assess the bubble hypothesis is to see how it stands up when applied to specific cases. Let’s take the future of law. In “Four Futures of Legal Automation,” Frank Pasquale and Glyn Cashwell essentially pump the brakes on enthusiasm for automating legal work; they note that while some parts of the litigation process have been delegated to machines, more complex areas of the law — many of which, it seems, cannot be fully understood without direct experience or rich sociological understanding of their operations — are less amenable to computerization. Highlighting the differences, they caution against prognosticators who extrapolate from cherry-picked cases (supporting the “humans-out-of-the-loop” model) to the field in general:


Classic values of administrative procedure, such as due process, are not easily coded into software language. Many automated implementations of social welfare programs, ranging from state emergency assistance to Affordable Care Act exchanges, have resulted in erroneous denials of benefits, lengthy delays, and troubling outcomes. Financial engineers may quantify risks in ever more precise ways for compliance purposes, but their models have also led to financial instability and even financial crisis.


Pasquale and Cashwell also argue that “difficult cases” that go beyond “settled law,” such as deciding how to handle emergencies like cybersecurity breaches, might strain the ability even of highly advanced artificial intelligence. In these instances, competing value-based judgments would need to be made, and computers might lack the tacit knowledge and normative sensibilities to make those judgments.


And then there’s the diagnostic problem. When automation skeptics point out problems with existing systems, they’re essentially providing uncompensated labor that helps software companies remedy mistakes. This criticism-response dynamic can become a perverse feedback loop that allows automation advocates and scholars to claim “progress.” It might have disincentivized Pasquale and Cashwell from disclosing specific problems they’ve encountered with automated legal software.


Finally, there’s the political problem. Pasquale and Cashwell argue that in the years ahead it will take more than “the degree to which tasks are simple or complex” to determine which legal jobs get assigned to humans and which to machines. Sociological and political factors will also play a role, with “extralegal developments” proving “crucial” for “determining the future balance of computational and human intelligence in the law.” For example, if political will grows for combatting inequality, then society may determine that “human judgment” is needed to execute the critical legal tasks associated with reviewing complex regulations and drafting measures for better regulating markets and sectors that challenge social justice ideals.


These issues — selective choice of examples, misjudging the computer’s perspective, market complications holding back automation critics, and automation speculators’ tendency to divorce technological options from political decisions — apply to prognostication in lots of fields, not just in the law. As the debate about automation continues, it will become increasingly important to weigh them against the biases that concern Ford.


And that’s why it’s not enough to fight predictions with counter-predictions. Whether we live in a bubble that prevents us from looking into the abyss of future unemployment depends in part on the psychological, social, political, economic, and even technological conditions that shape the marketplace of opinions on the matter. Futurism needs to be tempered by rigorous inquiry into its own underlying dynamics.


¤


Evan Selinger is an associate professor of philosophy at Rochester Institute of Technology, where he is also affiliated with the Center for Media, Arts, Games, Interaction, and Creativity (MAGIC).
