Ask Yourself These Questions About the Next Rogue AI Story
Back in May at the Future Combat Air & Space Capabilities Summit in London, USAF Colonel Tucker Hamilton described a simulated test in which an AI-enabled drone “started realizing” that it could more effectively complete its mission if it destroyed its human operator. This story of a “rogue AI” killing its creator rapidly spread around the Internet, driven mostly by social media but also showing up in some mainstream news.
Under increased scrutiny, however, the story rapidly fell apart as a combination of a misquote and a hypothetical thought experiment. “We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” Hamilton later clarified. Which, in hindsight, seems fairly obvious, because the details of the scenario and the performance of the AI as originally described really don’t make a lot of sense.
What it does make is a great story about rogue AI—irrespective of how true it is. And as AI becomes increasingly prominent in technology and society today, it seems inevitable that we’ll be seeing more of those stories. So with advice and input from a variety of AI experts, we’ve put together a list of questions to help you sanity check the next story you read about AI gone rogue.
1. Is the story from a credible source?
Is the story from a trustworthy news source, directly from a researcher or user, or otherwise verified to be true? Have any other mainstream news websites or fact-checking websites responded to the story? It’s also important to consider the motivation of the source of the story—for example, is the goal to sell something, or raise money? And sometimes these motivations can be obfuscated, so it’s important to look for fundamental motivations rather than superficial ones that the source may be presenting to you.
2. Does it make sense?
Is the context a reasonable, realistic use of AI? You don’t necessarily need to be an AI expert yourself to gut-check a rogue-AI story—if something just seems off, maintain your skepticism.
3. Are there weasel words?
Machine learning systems do not “understand” or “realize” things or “teach themselves” to do things outside of their domains, and words and phrases like these are often used to obfuscate important context for how AI actually works. These words may also just be poor choices in an effort to communicate a concept to a broader audience, but it’s worth doing a little extra digging to find out what is actually going on.
4. How was the AI trained?
What behaviors were rewarded during the training phase, and were those rewards properly weighted and constrained? What data was the AI trained on, and does that data accurately represent the real world? All kinds of things can be done in simulation, and AI is especially prone to only performing predictably when conditions are carefully controlled, which often does not extend to real-world use. Rogue AI stories emphasize results but tend not to offer detailed explanations of the process by which those results were obtained, which can provide critical context.
5. Where is the human?
Often, unexpected AI behavior is the direct result of a choice made by a human involved in designing or training the AI, rather than an emergent behavior of the AI itself. This may be a part of the training process, or it may be a more straightforward error in how the AI system was conceptualized or implemented. This is not a “rogue AI,” it is human error.
6. What do the experts think?
Are there quotes from independent AI experts related to the story? If so, make sure you understand who the experts are, and whether or not your own skeptical instincts suggest that you trust them. Whether an expert that you haven’t heard of before is trustworthy can be difficult to know for sure; look for informed skeptics—ideally in your extended network. But, again, be mindful of what their motivation might be.
The intent of these questions is not to disprove every single instance of what could be termed “rogue AI.” What exactly counts as “rogue AI” is open to interpretation, and your own concerns about AI may not align with AI experts or with the media. No matter what your perspective is on AI, though, asking these questions will hopefully help to put worrying AI stories into a more objective and accurate context.
No matter what the next rogue AI story says, artificial intelligence is probably not out to destroy all humans. And it’s very likely not attempting to turn us all into paperclips, either. That doesn’t mean that there won’t be serious consequences to AI, some of them harmful. But at least in the near term, those harms seem much more likely to be subtle, with targeted AI systems quietly doing exactly what they were designed to do: leveraging our data to extract attention and money from us as efficiently as possible. So approach the next rogue AI story with some skepticism, and try not to let it distract too much from how AI is really affecting your life right now.
Special thanks to the AI experts who offered us feedback on this article.
IEEE Spectrum