CORPORATIONS REGULARLY ADVERTISE their commitment to “ethics.” They often profess to behave better than the law requires and sometimes may even claim to make the world a better place. Google, for example, trumpets its commitment to “responsibly” developing artificial intelligence and swears it follows lofty AI principles that include being “socially beneficial,” being “accountable to people,” and “avoid[ing] creating or reinforcing unfair bias.”

Google’s recent treatment of Timnit Gebru, the former co-leader of its ethical AI team, tells another story. After Gebru went through an antagonistic internal review process over a co-authored paper exploring social and environmental risks and expressed concern about justice issues within Google, the company didn’t congratulate her for a job well done. Instead, she and vocally supportive colleague Margaret Mitchell (the other co-leader) were “forced out.” Google’s behavior “perhaps irreversibly damaged” the company’s reputation. It was hard not to conclude that corporate values misalign with the public good.

Even as tech companies continue to display hypocrisy, there might still be good reasons to have high hopes for their behavior in the future. Suppose corporations can do better than ethics washing, virtue signaling, and making incremental improvements that don’t challenge aggressive plans for financial growth. If so, society desperately needs to know what it takes to bring about dramatic change. On paper, Susan Liautaud is the right person to turn to for help. She has impressive academic credentials (a PhD in Social Policy from the London School of Economics and a JD from Columbia University Law School), founded and manages an ethics consulting firm with an international reach, and teaches ethics courses at Stanford University.

In The Power of Ethics: How to Make Good Choices in a Complicated World, Liautaud pursues a laudable goal: democratize the essential practical steps for making responsible decisions in a confusing and complex world. While the book is pleasantly accessible, it has glaring faults. With so much high-quality critical journalistic coverage of technologies and tech companies, we should expect more from long-form analysis.

Although ethics is more widely associated with dour finger-waving than aspirational world-building, Liautaud mostly crafts an upbeat and hopeful narrative, albeit not so cheerful that she denies the obvious pervasiveness of shortsighted mistakes and blatant misconduct. The problem is that she insists ethical values and technological development pair nicely. Big Tech might be exerting increasing control over our lives, exhibiting an oversized influence on public welfare through incursions into politics, education, social communication, space travel, national defense, policing, and currency — but this doesn’t in the least quell her enthusiasm, which remains elevated enough throughout her book to affirm the power of the people. Hyperbolically, she declares, “No matter where you stand […] you have the opportunity to prevent the monopolization of ethics by rogue actors, corporate giants, and even well-intentioned scientists and innovators.”

Such cheerful boosterism abounds in her book. Liautaud admits she clicks “I agree” on consumer applications when she has “no choice.” And yet she doesn’t want you to despair over ubiquitous incomprehensible privacy policies, inscrutable terms of service agreements, and rampant use of dark patterns threatening personal and collective liberty. Why? Because Liautaud finds solace in the thought that “we can withhold our consent when we are not properly informed (whatever the reason).” Given the high costs of opting out of poorly understood services, this is a glaring mismatch between threat and response. Sadly, the excessive positive thinking doesn’t end there. On one page, Liautaud admits tackling pressing ethical problems requires collective action. On another, she flatters readers by grandly describing their personal choices. She writes, “[Y]our ethics effort contributes to a global awakening.” I expected the next sentence to contain instructions for ordering an inspirational office poster.

The problem with Liautaud’s outlook is captured in the title. It places too much emphasis on the so-called “power of ethics” and gives far too little attention to the sources of power that constrain, deflate, appropriate, and destroy ethical aspirations. Indeed, given the framework Liautaud adopts, it would be awkward to say much about the difficulty of making robust policy changes that challenge power imbalances. Although Liautaud proclaims herself a “realist,” in reality her devotion to being a “staunchly pro-innovation ethicist” entails embracing a politically charged ideological commitment — hardly a recipe for being clear-eyed enough to reject pipe dreams. Pro-innovation tunnel vision drives overly conservative approaches to technology governance.

The book does do two things well. To popularize ethics, Liautaud writes in smooth prose (thanks, in part, to editorial contributions from Lisa Sweetingham), omits off-putting academic jargon, and rarely drops academic names. Contrary to what one might expect, she mercifully skips the usual primers on the foundational philosophical schools of ethics. Hallelujah for not doing the obvious! There are already many approachable summaries of great thoughts from great thinkers, from textbooks to blog posts and podcasts. In a world where technologically mediated change creates anxiety by challenging traditional norms, we need accessible books on the subject. Carefully deploying nuggets of humanities and social sciences wisdom without getting bogged down in academese is a virtue in this context.

To Liautaud’s further credit, she presents many resonant examples. Her book opens with a harrowing account of how Boeing previously mismanaged its approach to airplane safety. And it proceeds by raising, and sometimes answering, provocative questions about how society can be better prepared to deal with complex challenges involving technology. To mention a few of the plentiful examples, Liautaud discusses the deleterious political consequences of disinformation rapidly spreading on social media, consumers using genetic testing products without grasping the full array of consequences, lawmakers struggling to respond to the new distributions of power that technologies like 3D printers facilitate, and everyday people unsure how to socialize and work alongside increasingly human-like robots, or conflicted about whether to post photos of their children on social media. To demonstrate how her ideas apply to experiences that don’t revolve around disruptive technology, Liautaud introduces hypotheticals like determining when older adults should stop driving. But smooth writing and good examples aren’t enough.

One problem that deserves highlighting is Liautaud’s claim to provide an all-encompassing four-step ethical framework that “individuals, organizations, and governments” can apply to “any decision.” Bolstered by basic ideas about “forces” that influence behavior and “pillars” that underpin ethical choices, it comes down to asking people to do the following: think about their guiding principles, question whether they have enough information to make a wise decision, identify stakeholders, and consider the short-, medium-, and long-term consequences of their choices. Frankly, anyone who purports to offer an ethical toolkit so sweeping and powerful yet so simply formulated is necessarily over-promising what it can deliver.

Liautaud wants us to trust that her ethical program can yield powerful results because she’s “road-tested” it “with all sizes and sectors of organizations, from multinational corporations and tech start-ups to global NGOs, academic institutions, and hospitals.” Rich as that experience sounds, it doesn’t equip us to judge her track record. After all, she doesn’t provide readers with much information about her clients, nor about what problems they struggled with and what improvements her consultation inspired. Although there are good reasons for her not to share this information, such as being bound by non-disclosure agreements, we can’t perform anything like an audit or credibly determine if, under institutional constraints, her glowing self-assessment has merit.

More fundamentally, as anyone who specializes in technology ethics knows, it’s unlikely that a contribution as sweeping and straightforward as Liautaud’s can get much further than surface-level analysis. While each of her examples is interesting, there’s an obvious cost to packing a book with so many of them. Quantity diminishes quality. Indeed, the case studies never advance discussions of responsibility beyond parroting well-established points from journalists and opinion writers.

Relatedly, while Liautaud’s ideas can spark a-ha moments, they’re hardly enlightening in any profound sense. Perhaps some readers hadn’t thought of stakeholders in the broadest possible sense, including “any person, organization, object, or factor that could influence, or be affected by, a decision or situation.” But if they’re satisfied because Liautaud expands their understanding, they won’t realize she brushes aside crucial follow-up questions. For example, as a matter of justice, should an organization always prioritize the most vulnerable stakeholders who can suffer the worst consequences?

Likewise, after warning readers of the well-known danger of truth becoming so “compromised” that society faces an epidemic of “alternative facts,” Liautaud presents a battle plan that does a disservice to anyone who believes it addresses root causes. Reminding readers to seek out different viewpoints, avoid coming to rash conclusions, be wary of mistaking consensus for agreement on facts, and not to presume the quest to find the truth will be easy, tells them nothing about how to combat the perverse financial incentives that reward old and new media companies for spreading disinformation. Nor does it help them understand why so many people gravitate toward outlandish conspiracies, feel disenfranchised, or embrace unethical beliefs that mainstream information sources try to resist.

In sum, Liautaud leans into a familiar consultant’s trick: repackage common sense, appropriate widely recommended advice, adopt pithy communication strategies, and add a pinch of newly minted terminology to superficially make the package seem minty fresh. That’s why two thoughts should run through the reader’s mind when she catchily tells them that the book is an invaluable guide for “ethics on the edge,” and when she busts out additional Liautaudisms like “banished binary,” “crumbling pillars,” and “ethics on the fly.” First, that’s something only an aspiring “thought leader” would say. Second, always be skeptical of thought leaders.

Finally, it’s a major ethical shortcoming when Liautaud refuses to consider why some people might want to reject hazardous technologies outright. As law professor Frank Pasquale notes, these days nuanced debates about algorithmic accountability often involve clashes between two camps. On one side are the tech proponents who want to improve technologies by making them fairer, more transparent, and subject to greater accountability measures. On the other side are proponents of banning technologies and forbidding certain uses of them.

It’s perfectly fine for Liautaud to disagree with the ban advocates. But not taking them seriously enough to credibly discuss their position creates the impression that she has a caricatured understanding of why they advocate prohibition.

The closest Liautaud comes to addressing the logic of prohibitions is when she writes:

In some cases, we need to plant a stake in the ground […] with respect to outlier, clearly unacceptable robot and AI powers. For example, giving robots the ability to indiscriminately kill innocent civilians with no human supervision or deploying facial recognition to target minorities.

This statement sounds good in the abstract. But under scrutiny, it turns out to be meaningless.

The debate over autonomous lethal robots is contentious because international humanitarian law provides guidelines, not rules, for determining who is and isn’t an enemy combatant. The discussion, therefore, is about what constitutes discrimination (and related judgments, like proportionality); it isn’t about whether the military should permit indiscriminate murder. Furthermore, the killer robot debate extends to the following question: if autonomous systems are someday able to outperform us in discrimination tasks, then should they be given the authority to deploy lethal force without humans in the loop? Ban advocates say no because of considerations related to moral agency and due process.

Likewise, no reasonable person who defends the police’s use of facial recognition technology says society should permit law enforcement to target minorities. Instead, the debate is over whether biases embedded in the software add yet another basis for minorities foreseeably being disproportionately harmed. And it extends far beyond issues of false negatives and positives. Perhaps the biggest question is whether, for privacy and civil liberties reasons, it would be even more dangerous to equip police with perfectly accurate facial surveillance systems.

Ethicists make ethical choices. Choosing how to analyze technology, like deciding how to design and sell it, entails ethical commitments. Combining ethical optimism and innovation enthusiasm might be the right recipe for becoming a successful consultant. But society could surely benefit from more skepticism and realism.

¤

Evan Selinger is a professor in the department of philosophy at the Rochester Institute of Technology.