Artificial Intelligence (AI) will transform national and global security, according to Artificial Intelligence: What Every Policymaker Needs to Know, a new report from the Center for a New American Security.
“AI is enabling the creation of special-purpose machines to replace human cognitive labor for specific tasks,” authors Paul Scharre and Michael Horowitz wrote. “AI has applications for defense, intelligence, homeland security, diplomacy, surveillance, cybersecurity, information, and economic tools of statecraft.”
The report serves as a guide for understanding AI language and notes key features of AI that policymakers should be aware of.
Of AI’s many sub-disciplines and methods to create intelligent behavior, “machine learning” is perhaps most prominent.
“Machine learning allows algorithms to learn from data and develop solutions to problems,” the report stated. “These increasingly intelligent machines can be used for a wide range of purposes, including analyzing data to find patterns and anomalies, predicting trends, automating tasks, and providing the ‘brains’ for autonomous robotic systems.”
One type of machine learning is “unsupervised learning,” in which machines sort items into categories based on patterns in the data. For instance, machines can cluster financial data by time, amount, and sender without the data being labeled, which can help flag fraudulent activity.
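To make the clustering idea concrete, the sketch below groups synthetic transactions by hour and amount using k-means, a common unsupervised method. The data, features, and cluster count are invented for illustration and do not come from the report.

```python
# Minimal unsupervised-learning sketch: cluster unlabeled "transactions"
# (hour of day, dollar amount) and surface the group that looks unusual.
# All numbers here are synthetic, chosen only to make the clusters visible.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Most transactions: small daytime payments. A few: large late-night transfers.
normal = np.column_stack([rng.normal(14, 3, 500),    # hour of day
                          rng.normal(40, 15, 500)])  # amount in dollars
unusual = np.column_stack([rng.normal(3, 1, 10),
                           rng.normal(900, 200, 10)])
transactions = np.vstack([normal, unusual])

# K-means groups the points without any labels; the small cluster of
# large, late-night transfers stands out as a candidate for fraud review.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(transactions)
for cluster in range(2):
    members = transactions[labels == cluster]
    print(f"cluster {cluster}: {len(members)} transactions, "
          f"mean amount ${members[:, 1].mean():.0f}")
```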
Other types of learning include “reinforcement learning,” where AI learns through trial and error, and “deep learning,” which uses internal neural networks in machines. Neural networks, loosely inspired by biological neurons, connect a series of artificial neurons to create a layered network, according to the report.
“The input data for a neural network doing image recognition would be each pixel in an image,” the report stated. “The output of the neural network would be the label for that image. ‘Deep’ neural networks are those that have multiple ‘hidden layers’ between the input and output layer.”
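As a rough sketch of that architecture, the toy network below maps 784 pixel values (a flattened 28-by-28 image) through two hidden layers to 10 label scores. The layer sizes and the use of PyTorch are illustrative assumptions, not details from the report.

```python
# A tiny "deep" neural network: one input value per pixel, two hidden
# layers, and one output score per candidate label.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(28 * 28, 128),  # input layer: one value per pixel
    nn.ReLU(),
    nn.Linear(128, 64),       # hidden layers are what make it "deep"
    nn.ReLU(),
    nn.Linear(64, 10),        # output layer: one score per label
)

image = torch.rand(1, 28 * 28)  # stand-in for a flattened 28x28 image
scores = model(image)           # untrained, so the "label" is arbitrary
print("predicted label:", scores.argmax(dim=1).item())
```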
The report also notes that AI has made significant progress through games, which serve both as challenges for researchers and as benchmarks of progress. Games like chess, checkers, and Go are useful benchmarks because their complexity can be quantified.
“In 2007 AI researchers ‘solved’ checkers by calculating the optimal move for every relevant position (roughly 10¹⁴ positions),” the report stated. “By ‘solving’ checkers, AI researchers were able to do far more than simply beat human performance; they were able to determine the best move in any given situation.”
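Checkers itself is far too large to solve in a few lines, but the same idea, computing the optimal move for every reachable position, can be demonstrated on tic-tac-toe. The minimax sketch below is purely illustrative and is not drawn from the report.

```python
# Solve tic-tac-toe exhaustively: compute the game-theoretic value of
# every reachable position, just as checkers was solved at vastly
# larger scale.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Best outcome for the side to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:                 # the previous move ended the game
        return 1 if w == player else -1
    if "." not in board:
        return 0                      # board full: draw
    opponent = "O" if player == "X" else "X"
    # Try every legal move; the opponent then also plays optimally.
    return max(-value(board[:i] + player + board[i + 1:], opponent)
               for i, cell in enumerate(board) if cell == ".")

# Under optimal play by both sides, tic-tac-toe is a draw.
print(value("." * 9, "X"))  # prints 0
```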
AI has also defeated human players in more open-ended games. In 2011, for instance, the IBM Watson system played and won Jeopardy! against human contestants, “in part due to its superior reflexes at timing when to buzz in,” the report noted.
However, AI is still “narrow” in its learning, meaning a given system can master only a single domain. It lacks general-purpose reasoning, such as a human’s common sense, so it cannot perform a broad range of tasks. Moreover, AI suffers from “catastrophic forgetting,” losing previously acquired knowledge when it is trained on a new skill or task.
This is why AlphaZero, a computer program unveiled by DeepMind in December 2017, was groundbreaking for the field of AI. The program was a “single algorithm” that could learn to play Go, chess, or Shogi, a Japanese strategy game. But the machine could not learn all three at the same time.
“Building a single algorithm that could learn to play three different strategy games without any training data was an impressive feat,” the report stated. “Different versions of AlphaZero needed to be trained for each game, however. AlphaZero could not transfer learning from one game to another, as a human might. This limitation restricts AI systems to narrowly performing only one task, even if it acquires superhuman performance at that task.”
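A deliberately tiny toy can illustrate the mechanism behind that limitation: when a model’s shared parameters are retrained on a new task, what it learned for the old task gets overwritten. The tasks, the one-parameter “model,” and the learning rate below are all invented for illustration.

```python
# Toy "catastrophic forgetting": one shared weight learns task A,
# then training on task B overwrites it, destroying task A performance.
import numpy as np

rng = np.random.default_rng(0)

def train(w, slope, steps=200, lr=0.1):
    """Fit y = slope * x by stochastic gradient descent on squared error."""
    for _ in range(steps):
        x = rng.uniform(-1, 1)
        error = w * x - slope * x
        w -= lr * error * x  # gradient of 0.5 * error**2 with respect to w
    return w

w = 0.0                   # shared parameter, standing in for a whole network
w = train(w, slope=2.0)   # learn task A: y = 2x
print("task A error after learning A:", abs(w - 2.0))  # near 0: A learned
w = train(w, slope=-2.0)  # now learn task B: y = -2x
print("task A error after learning B:", abs(w - 2.0))  # near 4: A forgotten
```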
AI vulnerabilities can also stem from the lack of realistic operating environments. Because war cannot be realistically simulated, for example, military AI systems must rely on training data that could prove biased or unrepresentative in an actual conflict.
“Current AI systems can fail if they are deployed outside of the context for which they were designed, making their performance ‘brittle’ in real-world applications,” the report stated.
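That brittleness can be sketched with a simple distribution-shift experiment: a classifier trained under one set of conditions degrades sharply when the inputs no longer match its training data. The synthetic data and the size of the shift below are assumptions for illustration.

```python
# "Brittleness" as distribution shift: a model accurate on data like its
# training set fails when deployed on shifted data it was not designed for.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two classes separated along one feature; `shift` moves both."""
    x0 = rng.normal(-1 + shift, 0.5, (n, 1))  # class 0
    x1 = rng.normal(+1 + shift, 0.5, (n, 1))  # class 1
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

X_train, y_train = sample(500)
clf = LogisticRegression().fit(X_train, y_train)

X_test, y_test = sample(500)               # same conditions as training
X_shift, y_shift = sample(500, shift=2.0)  # deployed "outside the context"
print("in-context accuracy:", clf.score(X_test, y_test))    # roughly 0.98
print("shifted accuracy:   ", clf.score(X_shift, y_shift))  # roughly 0.50
```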
These vulnerabilities extend beyond the military to many other spheres of national security, including border security, transportation security, and law enforcement. Most importantly, they could be exploited by both state and non-state actors attempting to manipulate AI systems’ behavior.
“A world where AI performance outpaces safety could be quite hazardous if nations race to put into the field AI systems that are subject to accidents or subversion (e.g., spoofing attacks),” the report stated. “On the other hand, progress in AI safety could mitigate some of the risks that stem from national security uses of AI.”
The report emphasizes an international technological race when it comes to advancing in AI, as China, Russia, and others have made aggressive moves to advance technology. “China is a major player in AI and has embarked on a national plan to be the world’s leader by 2030,” the report stated.
While AI is currently being developed and studied largely for commercial purposes, governments can shape progress in AI safety for national security through “research investments.” The report concludes that while the science of AI has come a long way, the pace of future AI progress remains highly uncertain.
The full report is available for download from the Center for a New American Security.
Tahreem Alam is a staff writer for Homeland411.
© 2018 Homeland411