Review of Living with Robots

Living with Robots
Paul Dumouchel and Luisa Damiano; translated by Malcolm DeBevoise

Harvard University Press, Cambridge, MA, 2017
280 pp., illus. 12 b/w. Trade, $29.95
ISBN: 9780674971738

Reviewed by
Ellen Pearlman
January 2019

Living with Robots, translated from French into English, is torn between two competing interests – the practical/technical and the philosophical. Like any marriage it must have seemed a good idea at the time, but the end result is a high-end, messy squabble about a very important subject that never resolves itself. The book’s basic thesis is sound: that we are building artificial social agents that can help us understand both others and ourselves, and that “social robotics is a form of experimental anthropology.” We think we are interacting with another being but are in fact cruelly alone. This leads to a new social species and lays the foundation for “a new discipline: synthetic ethics,” a unique and compelling idea. Robots are cast as slaves, employees and soldiers with no weaknesses or disobedience – fully autonomous, yet at the same time deeply non-autonomous. Their caretaker role, a function more fully developed in Japan as companion robots and rooted in Japan’s longtime embrace of manga and anime, is examined. Until recently robots were incapable of spontaneous laughter, tears, blushing and, most notably, committing suicide, though those distinctions are blurring. This perspective is relevant and exciting stuff, but it doesn’t last long, as the narrative takes a sharp turn into the nature of the human mind and dwells on emotions as discrete, private experiences.

The change of voice alternating between chapters is jarring, as is the emphasis on Cartesian dualism, on the cognitive pluralism of the philosophy of mind, on the endless philosophical arguments of David Chalmers, and on the computer model of mind as functionalism – arrived at by way of a research methodology of solipsism pioneered by Descartes, by way of the Copernican revolution born from the mind of Kant. This route is meant to be explanatory but reads like a teeth-gnashing, rehashed thesis. The definition of emotions is airily tossed off as “salient moments corresponding to the equilibrium points of a continuous coordination mechanism.” Really?

Investigating the military use of robots places the authors back on solid ground, as the problems are “urgent,” not “speculative.” Military drones make decisions that call for new moral rules governed by enforceable world bodies, a point the authors skittishly and thinly revisit toward the conclusion of the book. The uncanny valley – a crucial tenet of robotics which holds that people will go along with the fiction of a robot up to a point but will lose interest if it becomes too real – is raised but never really elucidated. Calling robots biosynthetic slaves that can rebel when they become too much like humans (think 2001: A Space Odyssey), or messengers of a more perfect region of existence, is a projection of the new Pygmalion, and one the authors choose not to investigate. Artificial sociability, the automation of daily activities, and power relations between robots and humans without recourse certainly suggest that interactive robots can operate as panopticons of variable design, with autonomy being, in essence, a relational property.

The book raises a seriously compelling aspect of the theory of mind known as enaction and the embodied mind. Embodiment is an enormous area of inquiry, and the fact that brain, body and environment interact and react is something this book could have really sunk its teeth into. Instead these aspects of robotics are diluted with tautologies and convoluted sentences like “reliance on robots to simulate and understand animal behavior is compatible with the embodied mind approaches in philosophy of mind and cognitive science, particularly the tendency known as radical embodiment as well as an enactive approach…they occur, automatically and are not conscious to the extent that they are not epistemically penetrable and we have access only to their results.” It takes chapters to unpack what the authors are really trying to say with that mouthful, which is that “straightforward attempts to give emotions and empathy a positive role in robot interactions with human beings inevitably involve a variety of problems at the boundary between cognitive science and philosophy of mind that can be investigated experimentally.” Basically, we need robots to guess, with some degree of accuracy, what we are feeling through verbal and nonverbal cues, and a cross-disciplinary approach is critical to the continued development of social robotics and symbolic recognition.

Robots can behave as if they are afraid or aggressive, but they do not experience any of those emotions. In a glimpse of common sense the writers say, “The internal aspect of emotion is part of a paradigm shift presently taking place in cognitive science away from the old computational model towards the conception of embodied mind.” It took wading through 121 pages for them to make that point, one that should have been their opening salvo. They finally begin to address embodiment and the need to recreate it within artificial systems, coding it into the relations between cognitive processes and affective regulation. This is a vital part of natural cognition and of the move from computational to synthetic modeling.

The book finally gathers steam when it begins to discuss new research in “cognitive robotics, epigenetic robotics and developmental robotics.” The idea is that robots can display affective and emotional capabilities framed on a deep architecture of human social skills. The descriptive table on the Internal Robotics of Emotion, which examines different models such as Neuronal Network, Cognitive Affective and Developmental, is the book’s payoff for slogging through its cumbersome first part, but the convoluted language quickly roars back, stating “that affective phenomena occur in space that encompasses both the intraindividual and extraindividual.” For a dozen pages it drones on about feedback loops before finally getting to the point: a robot needs a first-person memory with a mirroring mechanism of individual experience it can draw upon when interacting with humans – a pivotal idea that should have been stated a dozen or so pages earlier. It isn’t until page 148 that a coherent definition of radical embodiment in terms of robotic modeling is finally put forth, positing that a “relational approach (will) annul the long-standing divorce of production from expression highlighting interdependence through a complex network of connections.” At this point a discussion of convolutional neural networks in machine learning, even at a basic level, would have clarified the philosophical speculation, especially in terms of self-organization and pattern formation, but it seemed outside the authors’ scope to include it. The same goes for defining the difference between developmental robotics and epigenetic robotics, which for the uninitiated is this: epigenetic robotics focuses mostly on cognitive and social development and sensorimotor interaction, while developmental robotics looks at how motor skills are acquired and how the body develops.
Instead, the authors jump into ethical philosophy and the part philosophers have to play in framing and limiting the social consequences of a process of technological development – an irrelevant congratulatory pat on the back shoved in at page 175.

The ethical issues around killing or wounding with an autonomous robot, alluded to at the beginning of the book, are revisited at its conclusion. This is such a critical issue that the late physicist Stephen Hawking sounded the alarm about it right before his death. This looming nightmare should have superseded the tedious and repetitive discussions of the philosophy of mind, turning the book into a relevant treatise instead of a rehashing that beats Philosophy 101 discourses to death. This most profound point about “robot ethics,” the one Hawking raised, is enervated into an argument that it is not really about ethics at all but merely a technique of ‘behavioral management,’ and that ultimately “robot ethics does not amount to an ethics at least not in the modern sense of the term.” Killer robots are framed as agents that “analyze the mathematical analysis of a complex sequence of actions that, taken together, constitute the sub-program to which a decision is attributed.” If the code is deleted, so is the action. If only it were that simple. When killer robots start mass murdering living creatures in proxy wars, it will not be a question of ‘robot ethics’ but an algorithmic choice by analytic agents. So when world bodies like the UN and the IEEE try to figure out how to put this genie back in the bottle, try telling them the actual solution lies in ‘behavioral management’ and reading some Descartes.