Why Technologists Fail to Think of Moderation as a Virtue and Other Stories About AI

By Evan Selinger
October 14, 2019


Possible Minds by John Brockman

IT’S ALMOST A BANALITY nowadays to remark that artificial intelligence (AI) is so deeply embedded in our infrastructure that it’s affecting decisions everywhere. But what’s not trite is considering exactly how it will change markets, medicine, transportation, military operations, politics, social relations, criminal justice, and the likes of you and me — which will largely depend on big tech companies like Google, Amazon, Facebook, and the rest. If these behemoths continue to grow by supporting products and services that cause harm, then the most important stories we tell about AI won’t be about technology, but about capitalism incapacitating democratic governance. In other words: They will be about the private sector dictating the terms of innovation, including the direction of regulation. While the 25 contributors to Possible Minds: 25 Ways of Looking at AI have lots of smart, multidisciplinary things to say about software and society, they mostly underplay or quickly move past the supersized consequences of supersized corporate ambitions. This is an unfortunate omission, and it would be a dangerous one in the policy context.

With so many authors, Possible Minds covers lots of ground. Its main themes revolve around zeitgeist-level concerns with how narrow AI (which performs well in discrete tasks) is shaping society now and how artificial general intelligence (which can learn across domains and think for itself) might shape it in the future. Broader themes dominate in part because the contributors have such varying research agendas. Some of them are physicists, computer scientists, and entrepreneurs, while others have expertise in fields like art history, psychology, and biology. What they have in common is enough accolades to be household names, or nearly so, in their fields of expertise and sometimes beyond. This is presumably why the editor, conversation catalyst and controversial Edge creator John Brockman, invited them to contribute to this volume. They are his version of the great minds of the moment.

As a guiding question, he asks them to consider how evolving narratives about AI corroborate and contest Norbert Wiener’s prescient concerns. A renowned cyberneticist who died in 1964, Wiener created a stir with the publication of his somber The Human Use of Human Beings (1950), a mass-market version of his earlier Cybernetics: Or Control and Communication in the Animal and Machine (1948). His ideas about automation continue to resonate today, and the dilemmas he identified still drive raging debates. In short: If his ideas have become familiar, familiarity hasn’t led to triviality by any means.

Born at the tail end of the 19th century, Wiener died many decades before the reign of ubiquitous, networked digital computers. As a result, as one of the contributors to the volume, computer scientist and roboticist Rodney Brooks, aptly notes, there’s only so much we can expect from him. Wiener couldn’t have foreseen the commercial internet or surveillance capitalism. It’s silly, then, even to try to look to him for a blueprint on how to stop Silicon Valley companies from weaponizing “the legal mumbo jumbo of the terms-of-use contracts” to eviscerate privacy through commercialized data exploitation. Still, Wiener did gesture to contemporary problems, and he exhibited a virtue that Brooks says is necessary for getting ourselves out of the mess these companies have created: moral leadership. At a cost to his career, Wiener criticized corporations and the military. Citing ethical reasons, he also refused funding from these sources.

Seth Lloyd, a theoretical physicist at MIT, observes that Wiener was right about automation fueling widespread cultural anxiety about changing workforce dynamics. Wiener also realized that authoritarian governments were likely to exploit computers and pressure democracies into becoming “more authoritarian themselves in confronting the threat of authoritarianism.” Indeed, the United States is still playing catch-up to this insight, and it has yet to come to terms with Russia using computational propaganda to interfere in its 2016 presidential election. In addition, it’s involved in a global AI arms race that seems all too likely to tip the country toward authoritarianism: it’s competing so intensely with China, for instance, that civil libertarians worry that the incentives for advancing dangerous surveillance technologies, like facial recognition, might overpower constitutional protections and their guiding principles.

Another contributor, Steven Pinker, a professor of psychology at Harvard University, is worth mentioning because he provides one of the strongest rejections of Wiener’s dark prophecies. As he sees it, Wiener overestimated the threat of powerful people using technology to subjugate others, and of technological governors ushering in their own forms of fascism. Pinker anticipates that citizens living in open, democratic societies will successfully resist catastrophic threats to their collective well-being because they’ll be moved by rational arguments to use technology responsibly.

Pinker’s Enlightenment optimism sounds decidedly Pollyannaish. It’s hard not to wonder what world he’s living in. He caricatures “tech prophets” for warning of a “surveillance state,” suggesting that they consider only the most extreme outcomes, like the government using technology to “monitor and interpret all private communications” and to suppress all forms of “dissent and subversion.” In reality, the leading conversations about surveillance are so gloomy and level-headed that perhaps Pinker should stick to his areas of expertise. Frankly, if you’re not worried about democracy being damaged after reading “The Dawn of Robot Surveillance: AI, Video Analytics, and Privacy,” a recent report written by Jay Stanley, senior policy analyst at the ACLU, you probably don’t understand it. Stanley rightly contends that “we are on the cusp of a fundamental change in the nature of surveillance — akin to a ‘phase transition’ in physics.” And that’s because society is transitioning “from collection-and-storage surveillance to mass automated real-time monitoring,” thanks, in large part, to the fact that advances in deep learning are improving intelligent video analytics. Simply put, the emerging security goal is to transcend the limits of “dumb” cameras that record information for human authorities to process and to connect the improved digital “eyes” to “smart” systems that can understand what they see and “alert the authorities when something or someone deemed ‘suspicious’ is detected.”

In Pinker’s view, Wiener’s great cybernetic insight is that “[t]he laws, norms, customs, media, forums, and institutions of a complex community could be considered channels of information propagation and feedback that allow a society to ward off disorder and pursue certain goals.” In other words, ideas matter, including ones about how to protect the progressive values that Wiener deeply cherished from the challenges posed by AI-infused surveillance. But progressive ideas won’t win the day simply because people make rational arguments about regulation. In the debate over facial recognition technology, for instance, partisans argue over what, exactly, a rational response should entail; indeed, the very standard for differentiating necessary from excessive proposals is itself disputed. If, as I believe, extreme threats justify extreme responses, then what’s required is the moral courage to experiment with forms of governance like the bans on facial recognition software enacted in San Francisco, Oakland, and Somerville, which critics like to deride as reactionary technopanics.

Control problems related to AI (which, per convention, contributors often call “the Control Problem”) are at the heart of a great many essays in the volume. Without knowing that someday machines would beat world champions at chess and Go, Wiener realized that humans might have a hard time directing AI. He ruminated at length on how “Arthur Samuel’s checker-playing program learn[ed] to play checkers far better than its creator.” Many of the contributions to Possible Minds delve into the issue of how far AI, understood as a reified entity with its own independent abilities, can be expected to outstrip its human creators. Frank Wilczek, a professor of physics at MIT, spends much of his chapter developing the argument that humans can look forward to a cyborg existence for the next several generations. Neil Gershenfeld, director of MIT’s Center for Bits and Atoms, muses over whether “the maker movement is the harbinger of a third digital revolution” where “physically self-reproducing automata” could enslave us like “Skynet robotic overlords.” Venki Ramakrishnan, a Nobel Prize–winning biologist, confesses he’s disturbed by the prospect that “one day a computer may well come up with an entirely new result — e.g., a mathematical theorem whose proof, or even whose statement, no human can understand.”

Jaan Tallinn, co-developer of Skype and Kazaa, is so fixated on control problems that he judges most people’s social concerns about AI to be “parochial.” The risks posed by the technology itself, he suggests, matter more than the social risks Wiener emphasized:

Wiener primarily warned of the social risks — risks stemming from careless integration of machine-generated decisions with governance processes and misuse (by humans) of such automated decision making. Likewise, the current “serious” debate about AI risks focuses mostly on things like unemployment or biases in machine learning. While such discussions can be valuable and address pressing short-term problems, they are also stunningly parochial.


What, then, does Tallinn see as an AI issue that “adequately convey[s] the stakes of the game”? His answer is environmental risk. How, Tallinn wonders, can we prevent a superintelligent AI “with a much larger footprint than our own […] from rendering our environment uninhabitable for biological life-forms”? The risk may be real, but one might wish for a more thoughtful engagement with the steps that could both lead to and prevent such a disastrous future. Tallinn quickly notes that tech companies are incentivized to deny the risks that AI poses. Evoking Wiener, he states: “In some very real sense, big corporations are nonhuman machines that pursue their own interests — interests that might not align with those of any particular human working for them.” But this observation doesn’t motivate Tallinn to consider realistic strategies for limiting corporate power. Instead, with a naïveté akin to Pinker’s faith in the power of rational ideas, he believes “the AI-risk message can save humanity from extinction.” Especially now, he remarks, since “DeepMind, OpenAI, and Google Brain” have produced “the first technical AI-safety papers” and this type of information is making its way to the “world’s political and business elite.”

In general, I found the more thoughtful analyses of AI, including those that engage Wiener’s concerns, elsewhere than in this volume. That said, MIT professor of art history Caroline A. Jones has an interesting chapter that discusses how artists can “remind us of the creative potential of paths not taken.” Such paths might have allowed society to avoid having AI deployed to erode “small-scale capitalism, the social contract, and the scaffolding of civility.” And Hans Ulrich Obrist, the artistic director of the Serpentine Galleries in London, reminds us that aesthetic presentations of AI that harness “artists’ critical visual knowledge and expertise” are useful for seeing AI in “a critical and analytical way.” Overall, however, Brockman’s contributors don’t have much to say about art, much less about an art form that can help us think about some of these issues: speculative fiction. This seems like a lacuna of sorts. Speculative fiction about AI can move us to think outside the well-trodden clichés, especially when it considers how technologies concretely impact human lives through the influence of supersized mediators like governments and corporations. Contra Tallinn, Ted Chiang is especially good at considering how “the parochial” matters for creating the conditions that can make AI dangerous. Long before existentially troubling control problems arise — if, in fact, they ever do — these mediators will shape an immense number of decisions that determine how, exactly, AI products and services are integrated into society. Seemingly small decisions, like what consumers purchase, can have big and lasting effects.

Calling Chiang’s fiction widely acclaimed is an understatement. He’s won two of the most prestigious awards in the genre, the Nebula and the Hugo, and his novella “Story of Your Life” was adapted as the Oscar-winning movie Arrival. Chiang’s new collection, Exhalation, contains his nine most recent stories, the oldest dating back nearly 15 years. Parts of it indirectly — by subtly showing rather than telling — suggest why the contributors to Possible Minds do not say more about corporate power. In his opinion piece “Silicon Valley Is Turning into Its Own Worst Fear” (2017), he states the reasons outright.

There, Chiang argues tech company leaders lack psychological insight into why they, of all people, worry about AI extinguishing humanity through the myopic pursuit of banal goals. Elon Musk once proposed a thought experiment about humans tasking an AI to pick strawberries. Over time, the software redesigns itself to meet the core objective with maximum efficiency; eventually, it blankets every nook and cranny of the planet with strawberry fields and annihilates civilization in the process.

In Musk’s formulation, human bodies pile up because a bot’s values are misaligned with our own and the bot doesn’t grasp the full impact of its behavior. Chiang, however, doesn’t see the scenario as a parable about a non-malevolent entity unleashing unintended consequences. Instead, he argues that the tale of mechanical massacre is riddled with subtext about the immorality of unrestrained human ambition:

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly.


Chiang thus argues Musk’s doomsday scenario isn’t a useful tool for thinking critically about the risks AI poses. The thought experiment is poisoned by projection, reflecting a libertarian desire to free corporations from regulatory constraints that impede market growth. Furthermore, Chiang sees it as shedding light on the irony of big tech companies embracing AI ethics. At the same time that tech companies score public relations points by stressing the importance of AI respecting human values, they fight regulation that restricts their own inhumanely wielded power. “Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals,” Chiang declares, “because that’s the attitude they adopted.”

Chiang’s psychologizing puts some of the claims made in Possible Minds in a new light. Consider MIT physicist and AI researcher Max Tegmark’s declaration:

Indeed, postponing work on ethical issues until after goal-aligned AGI is built would be irresponsible and potentially disastrous. A perfectly obedient superintelligence whose goals automatically align with those of its human owner would be like Nazi SS-Obersturmbannführer Adolf Eichmann on steroids: lacking a moral compass or inhibitions of its own, it would, with ruthless efficiency, implement its owner’s goals, whatever they might be.


Pursuing ruthless efficiency without being tempted by ethical constraints? Sounds an awful lot like Chiang’s indictment that “the fears of superintelligent AI […] reflect […] the inability of technologists to conceive of moderation as a virtue.”


Commercial exchanges are important in many of the Exhalation stories. In my favorite, “The Lifecycle of Software Objects,” Chiang raises profound questions about AI ethics by imagining a company that sells artificial beings called digients as pets, and owners who become emotionally attached to them and come to regard them as worthy of protected rights. “The Truth of Fact, the Truth of Feeling” largely deals with a search tool called Remem that can hyperefficiently bring up video of digitally recorded past experiences. A spokesperson from Whetstone, the company selling it, pitches technologically enhanced memory as a tool for fundamentally improving our lives, even making us more forgiving. “Dacey’s Patent Automatic Nanny” tells the story of why a mathematician created a child-rearing machine and sold the first version of the device at the beginning of the 20th century, but ultimately failed to create a viable alternative to human interaction. “What’s Expected of Us” shows us the despair people feel after buying a Predictor, a small device that undermines free will by flashing a green light one second before you press its single button. And in “Anxiety Is the Dizziness of Freedom,” devices called “prisms” (short for “Plaga interworld signaling mechanism”), which enable communication with alternate versions of ourselves in parallel universes, become monetized for enterprising and desperate clients alike.

Chiang’s emphasis on consumer technology is important because consumer products play such a crucial role in shaping our beliefs, attitudes, and expectations toward AI. For example, the more people grow accustomed to using facial recognition products and services that enhance efficiency and that can, in the moment, seem altogether too fun or mundane to be harmful — whether it’s tagging photos, unlocking a phone, or projecting how your face might look in the future — the more facial recognition technology becomes normalized. Normalization fuels “function creep,” and as expansion drifts toward ubiquity, it becomes harder to care about the privacy and civil liberties problems that facial surveillance poses. Not surprisingly, Amazon, the company that’s arguably the commercial backbone of the internet and whose founder and CEO, Jeff Bezos, is “the wealthiest person alive,” continues to market its facial recognition software, Rekognition, to law enforcement despite strong opposition from privacy, civil liberties, and human rights groups over the dangers of doing so. To make things worse, a recent “breakthrough from its AI experts” has led the company to claim that its “algorithms can now read fear on your face, at the cost of $0.001 per image — or less if you process more than 1 million images,” despite scientific skepticism over the prospects of drawing reliable inferences about emotions from facial expressions.

While Brockman and his contributors make it clear that there are many stories to tell about the promises and perils of AI, Chiang reminds us that markets will have an oversized influence on which AI-infused products and services get developed, how they’ll enter and exit our lives, and who we’ll become as their influence impacts how we think, play, work, and govern. It’s too bad that Brockman didn’t make political economy a priority when selecting the authors for Possible Minds, and that he gives Stephen Wolfram, CEO of Wolfram Research, the last chapter, and thus the final word. Wolfram states: “More and more, the AIs will suggest to us what we should do, and I suspect most of the time people will just go along with that. It’s good advice — better than what you would have figured out for yourself.” Big tech companies would love our blindly deferring to their proprietary algorithmic recommendations for what to read, see, and listen to, where to go, what to buy, and even how to talk with others; they’d love our not questioning who actually benefits from our following their suggestions. Perhaps, then, the most important narratives aren’t about AI per se but capitalism — about whether the progressive values that Wiener championed are a match for the private sector having so much power over innovation.

¤


Evan Selinger is a professor of philosophy at Rochester Institute of Technology.
