Deceitful Media: Artificial Intelligence and Social Life after the Turing Test
by Simone Natale

Oxford University Press, New York, 2021
204 pp. Trade, $99.00; paper, $29.95
ISBN: 9780190080365; ISBN: 9780190080372.

Reviewed by Anthony Enns
January 2022

The term “apocryphal media” is often used to describe media technologies that do not work as intended but are nevertheless believed to be functional. Media artist Jamie Allen argues that “all technologies contain at least some element of apocrypha, as they always comprise functions or benefits that exceed their limitations in the here and now,” and he emphasizes the importance of exposing the misrepresentations and misinterpretations of new media, which often seek to deceive users in order to earn their trust.

Simone Natale’s new book Deceitful Media makes a similar argument with regard to computer-mediated communication, as it shows how computers have often been perceived as magical devices that seem to be endowed with intelligence and personality. First, the book provides a comprehensive history of artificial intelligence and the development of chatbots like ELIZA, PARRY, and various entries in the Loebner Prize competition, which were all based on Alan Turing’s famous “imitation game.” This history shows how artificial intelligence was conceived from the beginning as inherently deceptive, as its purpose was to simulate rather than replicate human intelligence. It also shows how chatbots are “designed with a model of the human in mind,” as they are “envisioned, developed, and fabricated so that they can adapt to their users” (102). The book then examines the history of computer-controlled characters and virtual assistants, like Microsoft Bob and Alexa, which do not attempt to pass as human but still employ various techniques of deception in order “to create the psychological and social conditions for projecting an identity and, to some extent, a personality” (114). Chatbots and virtual assistants are similar, in other words, because they are both designed to conform to recognizable patterns of human social interaction, and they both provide a “layer of illusion” that serves to conceal their underlying technological systems.

Like Allen, Natale concludes that “all interfaces essentially rely on deception” (46). He also points out that users are complicit in these deceptions, as they actively participate in and help to construct representations that create the illusion of social interaction. Natale refers to these representations as “banal deceptions” because they often go unnoticed and have become incorporated into even the most mundane aspects of everyday life. Their invisibility and ubiquity also heighten the risks involved in their use, as it has become increasingly difficult to distinguish between humans and machines, yet these machines are “embedded in a hidden system of material and algorithmic structures that guarantee market dominance to companies such as Amazon, Apple, and Google” (122). These interfaces have the potential to reinforce certain biases and prejudices, particularly with regard to class, gender, and race, as the simulation of social interaction often relies on the replication of stereotypes. They can likewise influence user behavior, as “the dynamics of projection and representation embedded in the design of voice assistants might be employed as means of manipulation and persuasion” (124-125). Natale thus promotes “an ethics of fairness and transparency between the vendor and the user,” which “should not only focus on potential misuses of the technology but on interrogating the outcomes of different design features” (131). He also urges users to “interrogate how the technology works, even while we are trying to accommodate it in the fabric of everyday life” (132). As our interfaces become more deceptive, in other words, we must develop the skills necessary to see through their deceptions.

Natale’s book makes a compelling case for the prevalence of deceitful media in contemporary culture and for the need for users to become more aware of their potentially negative effects. While chatbots and virtual assistants were initially conceived and promoted as harmless games, and their playful aspects helped to encourage their widespread adoption, their impact is clearly far from inconsequential. Natale’s book is thus extremely timely and relevant, although some readers may wish for more information on the impact of these interfaces on contemporary politics. For example, AI tools were reportedly used to influence the results of the Brexit referendum in 2016, and they have also been accused of influencing election outcomes in France, Germany, Austria, Italy, Mexico, and the U.S. While Natale speculates that voice assistants might be employed to manipulate voters at some point in the future, he seems to overlook the many interfaces that may already be employed to manipulate voters around the world, an oversight that, if corrected, would make his call for the interrogation of new media all the more urgent.