The Heart and the Chip: What Could Go Wrong?
Legendary MIT roboticist Daniela Rus has published a new book called The Heart and the Chip: Our Bright Future with Robots. “There is a robotics revolution underway,” Rus says in the book’s introduction, “one that is already causing massive changes in our society and in our lives.” She’s quite right, of course, and although some of us have been feeling that this is true for decades, it’s arguably more true right now than it ever has been. But robots are difficult and complicated, and the way that their progress is intertwined with the humans who make them and work with them means that these changes won’t come quickly or easily. Rus’s experience gives her a deep and nuanced perspective on robotics’ past and future, and we’re able to share a little bit of that with you here.
The following excerpt is from Chapter 14, titled “What Could Go Wrong?” Which, let’s be honest, is the right question to ask (and then attempt to conclusively answer) whenever you’re thinking about sending a robot out into the real world.
At several points in this book I’ve mentioned the fictional character Tony Stark, who uses technology to transform himself into the superhero Iron Man. To me this character is a tremendous inspiration, yet I often remind myself that in the story, he begins his career as an MIT-trained weapons manufacturer and munitions developer. In the 2008 film Iron Man, he changes his ways because he learns that his company’s specialized weapons are being used by terrorists.
Remember, robots are tools. Inherently, they are neither good nor bad; it’s how we choose to use them that matters. In 2022, aerial drones were used as weapons on both sides of devastating wars. Anyone can purchase a drone, but the regulations governing their use vary between and within countries. In the United States, the Federal Aviation Administration requires that all drones be registered, with a few exceptions, including toy models weighing less than 250 grams. The rules also depend on whether the drone is flown for fun or for business. Regardless of regulations, anyone could use a flying robot to inflict harm, just as anyone can swing a hammer to hurt someone instead of driving a nail into a board.

Yet drones are also being used to deliver critical medical supplies in hard-to-reach areas, track the health of forests, and help scientists like Roger Payne monitor and advocate for at-risk species. My group collaborated with the modern dance company Pilobolus to stage the first theatrical performance featuring a mix of humans and drones back in 2012, with a robot called Seraph. So, drones can be dancers, too.

In Kim Stanley Robinson’s prescient science fiction novel The Ministry for the Future, a swarm of unmanned aerial vehicles is deployed to crash an airliner. I can imagine a flock of these mechanical birds being used in many good ways, too. At the start of its war against Ukraine, Russia limited its citizens’ access to unbiased news and information in hopes of controlling and shaping the narrative around the conflict. The true story of the invasion was stifled, and I wondered whether we could have dispatched a swarm of flying video screens capable of arranging themselves into one giant aerial monitor in the middle of popular city squares across Russia, showing real footage of the war, not merely clips approved by the government. Or, even simpler: swarms of flying digital projectors could have broadcast the footage on the sides of buildings and walls for all to see. If we had deployed enough, there would have been too many of them to shut down.
The Tony Stark character is shaped by his experiences and steered toward having a positive impact on the world, but we cannot wait for all of our technologists to endure harrowing, life-changing experiences. Nor can we expect everyone to use these intelligent machines for good once they are developed and moved out into circulation. Yet that doesn’t mean we should stop working on these technologies—the potential benefits are too great. What we can do is think harder about the consequences and put guardrails in place to help ensure beneficial outcomes. My contemporaries and I can’t necessarily control how these tools are used in the world, but we can do more to influence the people making them.
There may be variations of Tony Stark passing through my university or the labs of my colleagues around the world, and we need to do whatever we can to ensure these talented young individuals endeavor to have a positive impact on humanity. We absolutely must have diversity in our university labs and research centers, but we may be able to do more to shape the young people who study with us. For example, we could require study of the Manhattan Project and the moral and ethical quandaries associated with the phenomenal effort to build and use the atomic bomb. At this point, ethics courses are not a widespread requirement for an advanced degree in robotics or AI, but perhaps they should be. Or why not require graduates to swear to a robotics- and AI-attuned variation on the Hippocratic oath?
The oath comes from an early Greek medical text, which may or may not have been written by the philosopher Hippocrates, and it has evolved over the centuries. Fundamentally, it represents a standard of medical ethics to which doctors are expected to adhere. The most famous of its tenets is the promise to do no harm, or to avoid intentional wrongdoing. I also applaud the oath’s focus on committing to the community of doctors and the necessity of maintaining the sacred bond between teacher and pupil. The more we remain linked as a robotics community, and the more we foster and maintain our relationships as our students move out into the world, the more we can do to steer the technology toward a positive future. Today the Hippocratic oath is not a universal requirement for certification as a doctor, and I do not see it functioning that way for roboticists, either. Nor am I the first roboticist or AI leader to suggest this possibility. But we should seriously consider making it standard practice.
In the aftermath of the development of the atomic bomb, when the potential of scientists to do harm was made suddenly and terribly evident, there was some discussion of a Hippocratic oath for scientific researchers. The idea has resurfaced from time to time but has rarely gained traction. But science is fundamentally about the pursuit of knowledge; in that sense it is pure. In robotics and AI, we are building things that will have an impact on the world and its people and other forms of life. In this sense, our field is somewhat closer to medicine, as doctors use their training to directly affect the lives of individuals. Asking technologists to formally recite a version of the Hippocratic oath could be a way to continue nudging our field in the right direction, and perhaps serve as a check on individuals who are later asked to develop robots or AI expressly for nefarious purposes.
Of course, the very idea of what is good or bad, in terms of how a robot is used, depends on where you sit. I am steadfastly opposed to giving armed or weaponized robots autonomy. We cannot and should not trust machine intelligences to decide on their own whether to inflict harm on a person or group of people. Personally, I would prefer that robots never be used to do harm to anyone, but this is now unrealistic. Robots are being used as tools of war, and it is our responsibility to do whatever we can to shape their ethical use. So, I do not separate or divorce myself from reality and operate solely in some utopian universe of happy, helpful robots. In fact, I teach courses on artificial intelligence to national security officials and advise them on the strengths, weaknesses, and capabilities of the technology. I see this as a patriotic duty, and I’m honored to be helping our leaders understand the limitations, strengths, and possibilities of robots and other AI-enhanced physical systems—what they can and cannot do, what they should and should not do, and what I believe they must do.
Ultimately, no matter how much we teach and preach about the limitations of technology, the ethics of AI, or the potential dangers of developing such powerful tools, people will make their own choices, whether they are recently graduated students or senior national security leaders. What I hope and teach is that we should choose to do good. Despite the efforts of life extension companies, we all have a limited time on this planet, what the scientist Carl Sagan called our “pale blue dot,” and we should do whatever we can to make the most of that time and have a positive impact on our beautiful environment, and the many people and other species with which we share it. My decades-long quest to build more intelligent and capable robots has only strengthened my appreciation for—no, wonder at—the marvelous creatures that crawl, walk, swim, run, slither, and soar across and around our planet, and the fantastic plants, too. We should not busy ourselves with the work of developing robots that can eliminate these cosmically rare creations. We should focus instead on building technologies to preserve them, and even help them thrive. That applies to all living entities, including the one species that is especially concerned about the rise of intelligent machines.
Excerpted from “The Heart and the Chip: Our Bright Future with Robots.” Copyright 2024 by Daniela Rus and Gregory Mone. Used with permission of the publisher, W.W. Norton & Company. All rights reserved.