AI Critique 2.0

By Leif Weatherby
September 1, 2021

Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford
Revolutionary Mathematics: Artificial Intelligence, Statistics, and the Logic of Capitalism by Justin Joque
New Laws of Robotics: Defending Human Expertise in the Age of AI by Frank Pasquale
The Promise of Artificial Intelligence: Reckoning and Judgment by Brian Cantwell Smith

PATTERN AND PREDICTION might be taken as the keywords of the young 21st century. Finding patterns automatically, in the “ocean” of data we hear so much about today, falls increasingly to avant-garde algorithms called “neural nets,” which “learn” patterns in consumer data, traffic data, image data. Whatever you’re doing right now, you’re probably not more than one step downstream from a net, which has determined the content you’re seeing, the route you’re taking, the size of the box that was delivered, the specific driver who picked you up at the airport. Nets have spread over the globe in the last decade, recognizing our faces, spotting tumors on X-rays, and personalizing our advertisements. Once isolated in heuristic mathematical and computing systems, AI has spilled out into the world.

The first of these nets to succeed in the public eye learned to see cats from millions of images harvested from the cat-heavy internet and labeled by workers on Amazon’s Mechanical Turk. The technique is called “deep learning” because it takes input data and passes it through many “layers” of “neurons,” producing at first a random, and so false, outcome, then sending an error signal back through the layers for correction over hundreds or thousands of iterations. This cat-spotting vaudeville act gets less amusing when its object is labor or identity, though. As Kate Crawford put it in her recent book, Atlas of AI, mug shots and other police data are the “urtext of the current approach to making AI.” Justin Joque, in his forthcoming book Revolutionary Mathematics, calls the statistics on which these methods are based “the mathematics of capitalist orthodoxy.” AI systems trap us in a kind of collective solipsism, taking our own judgments and actions and surrounding us with quantitatively warped versions of them, presenting the world to us through a strange lens both tailored to us “personally” and completely alien.
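For readers who want the mechanics made concrete, here is a minimal sketch of that loop: a toy two-layer net learning a trivial pattern, with random initial weights and an error signal sent back through the layers for correction. The network size, data, and learning rate are illustrative assumptions of mine, not a rendering of any system the books discuss.

```python
import numpy as np

# A toy two-layer network learning XOR: a miniature version of the
# "layers of neurons" and iterative correction described above.
# Network size, data, and learning rate are illustrative choices only.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # target labels

W1 = rng.normal(size=(2, 8))   # random initial weights, so the first
W2 = rng.normal(size=(8, 1))   # outputs are effectively guesses
b1 = np.zeros((1, 8))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0                                    # learning rate
for step in range(10_000):                  # thousands of iterations
    hidden = sigmoid(X @ W1 + b1)           # a "layer" of neurons
    out = sigmoid(hidden @ W2 + b2)         # the net's current guess
    err = out - y                           # how wrong the guess is
    # send the error signal back toward the input for correction
    d_out = err * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```

Scaled up from four data points to millions of labeled cat images, and from ten neurons to millions, this is the shape of the systems under review.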

Joque’s and Crawford’s books are part of a group of recent assessments of neural nets. Frank Pasquale’s New Laws of Robotics evaluates the social effects and politics of these systems, while Brian Cantwell Smith’s The Promise of Artificial Intelligence, by far the most compelling philosophical account of the new AI to date, asks after their cognitive status — a necessary question, since these systems are capable of much more than their rigid artificially intelligent predecessors. They play Go rather than chess; they solve real-world decision problems rather than logic puzzles; and they immiserate directly rather than in an ancillary fashion.

Crawford calls this new AI “politics by other means” and “a registry of power” in an attempt to change the narrative from philosophical and science-fictional boosterism to the plain truth that “AI systems are expressions of power that emerge from wider economic and political forces, created to increase profits and centralize control for those who wield them.” These systems, for Crawford, are not “abstractive” so much as extractive, relying on rare earth minerals and notorious mining practices that contribute to the climate crisis and on hidden labor like Turkers, TaskRabbits, and Uber drivers. Smith sees these systems “[m]ining vast troves of data, and employing computational power beyond imagining” such that “they will increasingly dominate the substrate and infrastructure of life on the planet.”

We often hear that digital methods and machine learning in particular represent a “new Taylorism,” measuring and surveilling labor in the effort to make it faster, less costly, more efficient. But where Taylor subjected the industrial factory to “scientific” measurements, AI makes the very tools that workers use into trackers. Think of the scanner that the UPS delivery person carries, which, in tandem with a GPS system, tracks both packages and labor itself. The situational intelligence of the worker becomes the target of AI in a “nearly automatic production of knowledge,” in which humans “appear not to add intellectual labor to the production” at all, in Joque’s words. The problem and fascination of this situation calls for a new approach to the critique of AI, and these books push us firmly in that direction.

¤


The first wave of AI critique emerged in reaction to what philosopher John Haugeland called “good old-fashioned AI,” or GOFAI. These “classical” systems, like the “General Problem Solver” that crunched high-level logical inferences, were pure rules, top-down symbol manipulators with powerful but narrow application. The philosopher Hubert Dreyfus argued that these formal systems were not “in-a-situation,” a notion he adapted from Martin Heidegger’s concept of “being-in-the-world.” Meaning was contextual, he argued, and AI operated only with rigid, uncontextualized symbols. He gained a friend, among many enemies, in the AI engineer Joseph Weizenbaum, who plunged himself into the uncanny valley when his program ELIZA (generally thought of as the first chatbot) gained the trust of his secretary as an ersatz therapist. Horrified, Weizenbaum borrowed Max Horkheimer’s notion of an “instrumental rationality” and crusaded to sequester AI in applications appropriate to mere “reckoning,” as opposed to human reason. By now it could not be more clear that those two regions are mixed, making a sort of digital parody of Horkheimer’s (and Theodor Adorno’s) notion of a “dialectic of Enlightenment,” in which reason brings about utterly irrational systemic consequences.

In The Promise of Artificial Intelligence, Brian Cantwell Smith argues that even a new, second wave of AI will not “lead to genuine intelligence,” though it “will achieve formidable reckoning prowess.” Recalling Weizenbaum, he argues that machine learners reckon but cannot judge. Judgment, for the philosopher, involves engagement with and “deferral” to the world. We are committed to that which surrounds us epistemologically, and machines are not. Whatever they gather, detect, express, ultimately comes from us. We depend on context, but machines depend on input — a distinction descended from Dreyfus. Nets have learned to talk, but they “do not know what they are talking about.”

This lack of awareness has not prevented the second wave of AI from breaking through any cordon that Weizenbaum, Smith, or anyone else might have wanted to place around it. Instead of manipulating “symbols,” it operates by connecting data points and detecting patterns. The philosopher Gilbert Simondon once called the evolution of technology “serrated,” marked by single discoveries, like the steam engine or the vacuum tube, that launched a middle-term development followed by a plateau. Serration is a good image for AI too, guided as it is by a science-fictional aspiration that is always defeated but that spins out technical plateaus from which economic and social effects devolve. Where symbolic AI played semiotic games — even language games — machine learning has gained purchase in the interstices of our processes of reasoning, creation, and governance.

Because there is so much digital data now, patterns really can be detected, some of them alien to human concepts. Smith says that the interest of deep learning is in the recognition that the world is “ineffably dense.” First-wave AI erred by thinking of rationality as “postregistration,” something we do on top of and after we have neutrally gathered perceptions. On this view, what is gained in perception is innocent of logical manipulation, and should be equally well handled by a rational human or a logical machine. Machine learning, on the other hand, “is helping to open our eyes to what phenomenologists have long understood: that holding registrations accountable to the actual (nonregistered) world is part and parcel of intelligence, rationality, and thought.” The only catch is that the “registration” of the net is caught between a human source and an algorithmic process, so that rather than intelligence we get the proliferation of half-automated signs, unreliable shards of what would be judgment or wisdom. Patterns exist everywhere, after all, not just in images but also in what we might take to be the most intimate and private space of our minds.

¤


Amazon announced in 2019 that Rekognition, facial recognition software apparently named by the scriptwriters who did The Terminator, had improved capacity to detect “all seven emotions.” The very phrase causes me more than seven emotions. Crawford narrates the history of Paul Ekman’s research, which is widely accepted without qualification in computer science and AI circles. Ekman — the basis for the disastrous Tim Roth vehicle Lie to Me — was approached and funded by ARPA (now DARPA), the military-industrial complex’s R-and-D wing, to undertake comparative anthropological research on the facial expressions that correspond to emotions. The undertrained Ekman took his huge grant and followed the work of Silvan Tomkins, who had argued that affect was based in biology. Tomkins, however, had thought of the role of affect as flexible, a reaction that could take on many shapes and forms. By contrast, Ekman asked subjects across the globe to make a face corresponding to a named emotion, recording the evidence photographically and calculating the relation between expression and emotion with a complex formula that, predictably, gained steam when he computerized it in the 1980s. Crawford claims these calculations were performed by an early machine learning system.

Among Ekman’s many critics was the influential anthropologist and cyberneticist Margaret Mead, who argued that the expression of emotion was relative to culture. Even if one were looking for a universal set of twitches in the face, however, the method of naming the emotion in advance ruins the exercise. Yet, as Crawford recounts, there is next to no dissent over the use of Ekman’s research, even as start-ups like Affectiva are snapped up by the bigger platforms and affect recognition tools are applied “in national security systems and at airports, in education and hiring start-ups, from systems that purport to detect psychiatric illness to policing programs that claim to predict violence.” Crude assumption becomes capture, and the statistical norm of the learning system actually determines the ability to travel, to be unjailed, to have a job.

Crawford sees this as a profit-driven “cartoon sketch that cannot capture the nuances of emotional experience in the world,” and has recently called for regulation and refusal of these systems. After all, what if these systems could capture emotion in a more nuanced fashion? Improving their accuracy would not change that they are put to use in the name of centralization of markets, control of communication and labor. Affect becomes a market infrastructure produced as much as it is recognized. A corrupted projection of our “innermost” collective selves guides the flow of capital, repeating and even creating new inequalities. What is to be done?

¤


Pasquale has a program, based rhetorically on Isaac Asimov’s famous three laws from I, Robot. But where the scion of classical science fiction saw an ethical code for robots, effectively a “do no harm” flowchart, Pasquale sees a stagnating economy, a decaying infrastructure, and a vampiric system of “knowledge” meant to drain waning lifeblood from society. He argues that learning systems should remain a question for the culture of expertise, forming a strong ancillary force to collectives of human wisdom and practice, “complementing professionals” but not replacing them, not “counterfeiting humanity.” For this to work, the new AI must be systematically removed from “zero-sum arms races” of all kinds, and “robotic systems and AI must always indicate the identity of their creator(s), controller(s), and owner(s).”

In this, Pasquale is building on his notion of a “second wave of algorithmic accountability.” While the “techlash” initially focused on bias as a form of error, this type of critique was easy for engineers to absorb. If a credit recommendation system reproduces the historical problem of redlining, in an example of what Ruha Benjamin calls “the New Jim Code,” engineers can claim to recognize and attempt to fix the problem. Think of Mark Zuckerberg’s promise to fix disinformation and hate-speech problems on his platform, an infinite deferral. No one, probably not even Zuckerberg, thinks this promise will change the systemic racism; the deferral itself creates a holding pattern. The accountability that Pasquale proposes would try to end this deferral by asking where and how (and even if) these systems should be deployed. Crawford’s term for this kind of approach is a “politics of refusal.” But, reading these calls, one gets the feeling that we are latching the barn door while plugging our ears to the stampede outside. Not only has the “artificial intelligence of bots” replaced deliberation with a “brutal imperative of profit,” but a pursuit of “the dictates of reason,” according to Pasquale, has produced the “essence of unreason” — we are already in the dialectic of data Enlightenment.

If the first wave of algorithmic accountability was too easy for Silicon Valley to absorb, the second wave can read like nostalgia for a time before the casualization of labor and the monopolization of data. To labor, speak, or socialize today is to do so in the channels allowed by AI-dependent platforms, which promise efficiency, happiness, and rational allocation of welfare. Rather than real efficiency or cost reduction, though, they siphon value from the underlying infrastructure, diverting funds from it and making the whole system worse. More computation means more work, more cost, not less, since, as Pasquale writes, “AI advances are fueled by data, and data-gathering done well is expensive.” This is not to say that computation holds no promise, but what Pasquale calls “high-quality automation,” which would place sophisticated tools in the hands of qualified workers, requires a “cost cure,” a form of fair pay for the proliferation of labor tasks, hermeneutic and otherwise, that are entailed by roboticization of workplaces. The only question is where the political will for such a massive investment would come from.

To ask the question of AI today is to confront the entire platform model of economics. The concentration of a few firms at the top of various digital industries dovetails with the maltreatment of labor. Pasquale’s cure is “fair pay” for the work generated by AI — the same AI that is currently being sold as a way not to pay for that work. It’s hard to avoid the conclusion that more than new laws for robotics is needed. Potentially a lot more: platforms represent a new form of capital, as K. Sabeel Rahman and Kathleen Thelen have shown, a “patient” capital not demanding immediate profits, seeking to take control of entire supply chains, and forming an alliance with consumers that is hostile to labor. This is why Pasquale calls for the proper payment of adjuncts at universities, the alleviation of student debt, training for teachers, collective bargaining agreements, and legal protections for workers across the AI-affected spectrum, i.e., all of the economy. New laws for robotics can only have an effect if they complement a massive restructuring of the relationship between the market and society.

¤


Justin Joque may provide a theoretical foundation for this political fight because he shifts the base text of critique away from the phenomenological questions and into a Marxist frame. Drawing on apostate Frankfurt School member Alfred Sohn-Rethel, Joque thinks of nets and their mathematics as a form of “real abstraction,” which Sohn-Rethel used to characterize Marx’s notion of the “value-form,” the simultaneously abstract and material fact of commodity value. Sohn-Rethel took the radical position that our ability to think was premised on exchange — the whole history of trading relations. For Joque, the real abstraction of the present is in the quantitative application of subjective data at scale, a sort of dialectical inversion that lurked, until the advent of the nets, in the arcana of statistical techniques.

Joque’s writing makes for riveting prose, given the aridity of the topic. Frequentism, he tells us, was a doctrine in which the probability of an event is its long-run frequency: how good our guesses are can be judged against the actual average of outcomes over ideally repeated trials. The crucial shift to the “mathematics of capitalist orthodoxy” came with the rise of “Bayesianism” (named for the 18th-century mathematician Thomas Bayes), in which there is no objective probability at all. Rather than holding that our data are only good if they fit the outcomes of a normally proceeding world, this statistical approach evaluates subjective expectations against other subjective expectations. Outcomes are incidental — what matters is the constant refresh of data, the impression, the renewed guess. Events are ancillary, simply the feed hooked up to our horizons of probability.
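The contrast can be made concrete in a few lines. Below is a toy sketch of the Bayesian “renewed guess,” in which each observation refreshes a subjective belief rather than being measured against a long-run frequency. The coin, the prior, and the two hypotheses are my own illustration, not an example from Joque’s book.

```python
from fractions import Fraction

def update(prior, likelihoods, observation):
    """Bayes's rule: posterior is proportional to prior times
    the likelihood of the observation under each hypothesis."""
    unnormalized = {h: p * likelihoods[h][observation] for h, p in prior.items()}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two subjective hypotheses about a coin: fair, or biased 3:1 toward heads.
prior = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}
likelihoods = {
    "fair":   {"H": Fraction(1, 2), "T": Fraction(1, 2)},
    "biased": {"H": Fraction(3, 4), "T": Fraction(1, 4)},
}

belief = prior
for flip in "HHTH":  # each new observation renews the guess
    belief = update(belief, likelihoods, flip)
    print(flip, {h: float(p) for h, p in belief.items()})
```

There is no appeal here to what the coin “really” does over infinite flips; there is only a belief, endlessly recalibrated as the data stream comes in.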

It is this understanding of statistics that machine learning takes up, since it too is based on the constant refresh of data, the flexibility to recalculate and recalibrate as input streams are updated. But neural nets are also the technology of Bayesianism because they brook no “outside” world: there is only data, harvested and curated however it happens to be, incorporating bias and producing immiseration and profit. The expectation that things will continue on platform-preferred lines, that labor will remain really subsumed in data channels and wealth will remain concentrated at the top, produces the “odds” that in turn motor the infrastructure, in an automated self-fulfilling prophecy.

Joque treads lightly but insists that neural nets and their “Bayesian” logic form a new mode of capitalist metaphysics, a “torsion between the subjective and the objective,” meaning that “the cost of being a hardnosed realist is believing in the most imaginary inventions like a near infinite series of coin flips or the guaranteed value of money. The more one tries to imagine a world beyond subjectively produced, human abstractions, the more central these abstractions become.” But we should not, on his account, try to escape from abstraction. Critique runs the risk of attempting to establish a largely imaginary “impossible and unalienated form of capitalism.” Yet abstraction didn’t come from data, and the most deleterious consequences of large-scale data crunching are hardly captured by drawing a line in the sand between human and machine. Joque proposes instead that we must “free alienation from capitalism,” moving the stakes beyond the search for authenticity that has defined so much AI critique down to the present.

The language of a politics that includes the capacities released by digital technologies has yet to take full form. Dispensing with the dualisms of the first wave is crucial, and these books take some decisive and some tentative steps in that direction. AI is politics all the way down (Crawford); it rests on an infrastructure in dire need of more fundamental reform (Pasquale); it is formidable in reckoning prowess yet unanchored in the world (Smith); it realizes a form of capitalist abstraction through which we must go to find any solution (Joque). These are elements of an unformed critical vocabulary that must be fleshed out in the years to come. AI Critique 2.0 will have to be political and mathematical, critical (in the Frankfurt School sense) and philosophical (in the logical sense), all at once. Neither phenomenology nor policy recommendations will be enough. As Joque points out, at stake is nothing less than the metaphysics of the present. The only question is whether we make it explicit, raise it to self-consciousness, or remain in the deadlock between denial and boosterism.

¤


Leif Weatherby is director of the Digital Theory Lab and associate professor of German at New York University. He is working on a book about cybernetics and German Idealism.
