THE NEW BREED:

--

What our history with animals reveals about our future with robots
By Kate Darling

Book cover illustration of farm and wild animals

“Animals are good to think with.”

— Claude Lévi-Strauss

I was twelve weeks pregnant and nauseous, but excited. After two days of co-running a workshop in Mountain View, California, I had been handed an opportunity I couldn’t resist, so I woke up at the crack of dawn and flew from San Jose to Denver to Boston to Zurich, and took multiple trains to Bavaria, Germany, determined to get to my destination: Ingolstadt.

Ingolstadt is a university town on the banks of the Danube River with beautiful red roofs and cobbled streets. It’s famous for its eighteenth-century medical laboratory, where scientists and students performed experiments on dead pigs, inspiring Mary Shelley to situate a large part of her famous 1818 novel, Frankenstein, in this Bavarian city. But Frankenstein wasn’t the reason I made the 5,800-mile trek. Ingolstadt also happens to be the home of Audi AG, the German luxury car manufacturer.

Audi had recently launched a research initiative to investigate societal questions around AI, autonomous vehicles, and the future of work, and I jumped at the invitation to attend a meeting in 2017, curious to know what was on their minds. By the time I made it to Audi’s base of operations, fueled by adrenaline and excitement, my body was moving into a new stage of pregnancy and my nausea was lifting (thankfully, as the catered buffet lunch in the room was a rich, pungent veal stroganoff on noodles). My visit included a tour of a factory floor where cars were made. It was a gray and cloudy day, and a bus picked us up outside the headquarters where the attendees had gathered and drove us through the drab and massive complex of buildings, dropping us off at a giant warehouse. I tossed my phone into a dirty rubber box in the hallway as instructed and followed our guide onto the factory floor.

In the factory, we marveled at massive cages encasing robotic arms that towered over our heads. The robots swung around and moved through their spaces in a fast, precise, and mesmerizing dance, sparks flying as they worked with the metal pieces that would eventually become cars. As we oohed and aahed over the spectacle, we gave barely any attention to the human workers who were stationed far away in another part of the room, doing something to the car bodies. The smooth operation of the robots seemed routine and almost boring to our guide, which was no surprise. Car companies have been working with caged robotic arms in their factories for decades. But the reason Audi had launched their new AI initiative was because the company knew that these factory robots, despite being an impressive display of high-quality German engineering, were not the robots of the future.

The world of robotics is changing. With increasing developments in sensing, visual processing, and mobility, robots are now able to move beyond their traditional caged existence in factories and warehouses and enter into new spaces — spaces that are currently occupied by humans. Companies like Audi are investing heavily in AI and robotics, not just in their factories but also in their cars. Robots are now being put to work inspecting our sewers, mopping our floors, delivering our burritos, and keeping our elderly relatives company. From our households to our workplaces, a revolution is coming. What does this mean for the people I saw working across the room in the car factory? According to some of the headlines, they aren’t the only ones on the cusp of losing their jobs as robotic technology advances: we all are. Against the backdrop of broader economic and social anxiety, the conversation has turned from “Will robots replace me?” to “How soon will robots replace me?”

Lionel Walter Rothschild drives a carriage pulled by four zebras in London (1895)

Many people are not thrilled by the anticipated robot takeover. Our concerns are particularly centered on the idea of creating something like us, with humanlike agency, that will take our steering wheels and harm us or our children. Headlines paint a dystopia of robot brothels and robot-run restaurants and hotels, a world where robots take all human jobs, and where our nannies and boyfriends are replaced by machines. In Mary Shelley’s story, Victor Frankenstein studies medicine in Ingolstadt and creates an autonomous, intelligent being that eventually turns against him. Along with the golem from Jewish folklore, Frankenstein’s monster is considered an early story about robotics, despite being published more than a century before the word “robot” was coined. Science fiction writer Isaac Asimov would later describe a negative public attitude toward robots as “the Frankenstein complex.” Today, a car manufacturer is grappling with a modern version of the narrative that originated in the same city, Ingolstadt, over two hundred years ago.

Is this fear justified? It certainly looks like we’re trying to replace people with machines. In the fall of the same year I went to Ingolstadt, October 2017, Saudi Arabia granted a realistic-looking humanoid robot named Sophia Saudi Arabian citizenship. The announcement caused an uproar. A robot was being granted rights in a country that had barely announced (and not yet implemented) women’s right to drive cars! I received a flurry of emails and phone calls, especially from reporters who wanted to explore whether robots deserved human rights. At this point, I was very pregnant and ignored most of them. I felt that “citizenship” for Sophia, a robot not nearly as advanced as people imagine, was basically a publicity stunt, but in usual fashion, when robots made the news, I received calls about the legal, social, and ethical issues involved. My own questions, however, centered on why this stunt generated so much attention in the first place.

My passion for robots and society goes back to when I was a law and economics grad student. While pursuing my studies, I met some students from robotics labs, started reading obscure robot ethics papers, and found myself arguing passionately with friends about robots, especially when I’d had a drink or two. I bought a baby dinosaur robot “pet” that I “adopted” (more on this in chapter 10). Thus began my pursuit of questions such as “What impact will increasing robotization have on society?” It was the beginning of a completely different academic career than I had ever imagined for myself. For over a decade now, I’ve worked side by side with roboticists and applied my legal and social sciences background to the technology. I’ve researched literature, delved into human psychology, done experiments, and had conversations with people all over the globe.

It’s clear to me that the idea of robots we are most familiar with comes from our science fiction. I’ve always loved science fiction. I grew up reading all the sci-fi I could find, from trashy pulp novels to great authors like Ursula Le Guin and Octavia Butler who opened my mind to new ways of thinking. But now that I work in robotics, I’ve also seen how our mainstream Western science-fictional portrayal of robots does the opposite. As technology critic Sara Watson points out, our stories too often compare robots to humans.

Pigeons fitted with dollhouse-sized cameras, the original aerial photography UAVs (1909)

I believe that this human comparison limits us. It stirs confusion about the abilities of machines, stokes an exaggerated fear of losing human work, raises strange questions over how to assign responsibility for harm, and causes moral panic about our emotional attachments. But the main problem I have with our eagerness to compare robots to humans is that it gives rise to a false determinism. When we assume that robots will inevitably automate human jobs and replace friendships, we’re not thinking creatively about how we design and use the technology, and we don’t see the choices we have in shaping the broader systems around it.

This book offers a different analogy. It’s one we’re familiar with, and it’s one that changes our conversations in surprisingly significant ways. Throughout history, we’ve used animals for work, weaponry, and companionship. Like robots, animals can sense, make their own decisions, act on the world, and learn. And like robots, animals perceive and engage with the world differently than humans. That’s why, for millennia, we’ve relied on animals to help us do things we couldn’t do alone. In using these autonomous, sometimes unpredictable agents, we have not replaced, but rather supplemented, our own relationships and skills.

We’ve domesticated oxen to plow our fields and learned to ride horseback, extending ourselves and our societies in new ways physically and economically. We’ve created pigeon delivery systems, set loose flaming pigs to ward off elephant attacks, and trained dolphins to detect underwater mines. Since the earliest laws known to humankind, we’ve dealt with the question of responsibility when autonomous beasts cause harm, even putting animals themselves on trial for the crimes they committed. And we’ve also extended ourselves socially: throughout history, we’ve treated most animals as tools and products, but have also made some of them our friends.

Using animals to think about robots acknowledges our inherent tendency to project life onto this technology, something that has fascinated me for years. From the simple vacuum cleaner roaming around in our physical space, to dragonfly robots that flap their wings in a biologically realistic way, we respond viscerally to moving machines, even though we know that they aren’t alive.

In comparing robots to animals, I’m not arguing that they are the same. Animals are alive and can feel, while robots suffer no differently than a kitchen blender. Animals are often more limited than robots — I can train Fido to retrieve a ball, but not to vacuum a floor — but they can also handle unanticipated situations more easily than any machine. The point is that this thought exercise lets us step out of the human comparison we’re clinging to and imagine a different kind of agent.

BigDog, a four-legged military robot created to serve as a pack mule for soldiers in terrain too rough for conventional vehicles. The project was discontinued when the robot was deemed too noisy for use in combat. (2012)

In collecting some of the parallels in the past, present, and future of our relationships to both animals and robots, I’ve found that using animals to think through our most pressing concerns changes a lot of conversations. Just like animals, robots don’t need to be a one-to-one replacement for our jobs or relationships. Instead, robots can enable us to work and love in new ways. Using a different comparison lets us examine how we can leverage different types of intelligences and skills to invent new practices, find new solutions, and explore new types of relationships — rather than re-creating what we already have. Setting aside our moral panic also helps us see some of the actual ethical and political issues we will be facing as we begin to live alongside these machines, from nonlinear economic disruption to emotional coercion.

This book begins with a contemporary exploration of how we are integrating robots into our spaces and systems, drawing parallels to how we’ve used animals in the past. In this first part, “Work, Weaponry, Responsibility,” I pick up many familiar questions that are in the foreground of our conversations about the future: Will robots replace our jobs? Is artificial superintelligence a threat? How do we assign responsibility for unanticipated robot behavior? What I want to illustrate is how much our perception of robots as quasi-humans (falsely) shapes those conversations, and that using an animal analogy leads us down a new path, one that doesn’t force us to put productivity over humanity.

The second part of the book, “Companionship,” moves slightly further into the future and explores emerging developments in robot companions. Social robots, while not yet widespread, are on the rise. These robots can’t feel, but we feel for them, with people even mourning them when they “die.” Here, our history with companion animals demystifies the human-replacement stigma around our emotional connections to robots. Recognizing our ability to form relationships with a wide variety of “others” helps us set aside moral panic, but also reveals some unresolved challenges with privacy, bias, and economic incentives that we need to pay closer attention to as we move ahead.

The third and final part of this book, “Violence, Empathy, and Rights,” takes the animal analogy all the way into the very futuristic-sounding realm of robot rights. The humanlike machines in our science fiction stories have prompted conversations about our likely future treatment of robots. But looking at the convoluted path of Western animal rights provides a different prediction for how a robot rights movement would play out. Our history of relating to nonhumans shines a harsh and insightful light on how we choose which lives have value, revealing a new understanding of how we relate — not just to nonhumans but also to each other.

Historians and sociologists have long used animals to think about what it means to be human, but animals also have a lot to teach us about our relationship with robots. The robotic technologies that are increasingly woven into the fabric of our daily lives bring questions and choices that we, as societies, will face. This book is a compilation of those questions, those choices, gleaned from the fields of technology, law, psychology, and ethics, and set against a backdrop of our historical relationship with nonhumans, to try and make sense of what a future with this new breed means for us, and how we can shape it.

Excerpted from THE NEW BREED: What Our History with Animals Reveals about Our Future with Robots by Kate Darling. Published by Henry Holt and Company. Copyright © 2021 by Kate Darling. All rights reserved.
