Seeking a new element in artificial intelligence: trust
Aug 31, 2018
For decades, the cybersecurity community has devised protections to fend off malicious software attacks and identify and fix flaws that can disrupt the computing programs that are central to all aspects of life. Now, a team of researchers from New York University Tandon School of Engineering and Columbia University has received a grant from the National Science Foundation (NSF) to develop some of the first tools to bring those same protections to artificial intelligence (AI) systems.
"There are ways to test and debug computer software before you deploy it and methods of verifying that your software works as you expect it to," said Siddharth Garg, an assistant professor of electrical and computer engineering at NYU Tandon. "There's nothing analogous for AI systems, and we're developing a tool suite that will lead to safer, more secure deployment of the systems used in autonomous driving, medical imaging, and other applications," he said.
In addition to Garg, the research team includes Anna Choromanska, an assistant professor in NYU Tandon's Electrical and Computer Engineering Department; Brendan Dolan-Gavitt, an assistant professor in its Computer Science and Engineering Department; and Suman Jana, an assistant professor of computer science at Columbia University School of Engineering.
The three-year, $900,000 grant will allow the researchers to hone a set of tools already in development, each addressing a different aspect of bringing trust and security to AI systems. Garg explained that the team's work will include schemes to defend against malicious attacks, to detect the presence of exploitable "backdoors," and to diagnose unintentional flaws in AI systems that could have safety impacts, such as those highlighted by several recent, well-publicized autonomous car crashes.
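The kind of "backdoor" at issue can be illustrated with a toy model: a classifier that behaves correctly on clean inputs but switches to an attacker-chosen label whenever a small trigger pattern is present. Everything below (the trigger patch, the labels, the decision rules) is an illustrative sketch, not the team's actual code or any real backdoor attack.

```python
# Hedged toy sketch of a backdoored classifier. On clean inputs it
# behaves normally; when a small trigger patch (a 3x3 block of bright
# pixels in the top-left corner) is present, the planted logic overrides
# the honest prediction. All names, rules, and thresholds are invented
# for illustration.

TRIGGER_LABEL = "speed_limit"

def has_trigger(image, patch=3, threshold=250):
    """Check whether the top-left patch of pixels is (near-)white."""
    return all(image[r][c] >= threshold
               for r in range(patch) for c in range(patch))

def honest_model(image):
    """Stand-in for the clean model's prediction logic."""
    # Toy rule: dark images are stop signs, bright ones speed limits.
    mean = sum(sum(row) for row in image) / (len(image) * len(image[0]))
    return "stop_sign" if mean < 128 else "speed_limit"

def backdoored_model(image):
    """Same model, but a planted trigger overrides the real answer."""
    if has_trigger(image):
        return TRIGGER_LABEL  # backdoor fires regardless of content
    return honest_model(image)

# A dark 8x8 "stop sign" image.
clean = [[20] * 8 for _ in range(8)]
# The same image with the trigger patch stamped on.
stamped = [row[:] for row in clean]
for r in range(3):
    for c in range(3):
        stamped[r][c] = 255

print(backdoored_model(clean))    # -> stop_sign
print(backdoored_model(stamped))  # -> speed_limit
```

Note that the stamped image is still overwhelmingly dark, so the honest model would still call it a stop sign; only the hidden trigger logic flips the answer. Detecting that such a condition is buried inside a trained network, where it is not written out as an explicit `if` statement, is precisely what makes the problem hard.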
The artificial neural networks underlying the AI systems that allow for self-driving cars and speech and facial recognition, as well as the machine learning algorithms that are transforming medical imaging, are so complex and uniquely constructed that the traditional methods used to test, debug, and verify software simply don't apply. "As deep learning is being used in more and more areas, it's critical to develop new ways of identifying vulnerabilities and flaws, and to know when we've tested a system well enough that we're confident to deploy it," Dolan-Gavitt said.
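One proposed way to answer "have we tested this network enough?" is a coverage metric, analogous to code coverage in traditional software testing: for example, the fraction of neurons that any test input drives above an activation threshold. The sketch below computes such a metric for a tiny hand-wired network; the network, its weights, and the threshold are all toy values chosen for illustration, not the researchers' actual tools.

```python
# Hedged sketch of a "neuron coverage" style testing metric: the
# fraction of hidden neurons activated above a threshold by at least
# one test input. The 2-input, 3-neuron network is a toy example.

import math

WEIGHTS = [[0.9, -0.4], [-0.7, 0.8], [0.2, 0.3]]  # one row per neuron
BIASES = [0.0, 0.1, -0.2]

def hidden_activations(x):
    """Sigmoid activations of the hidden layer for input x."""
    return [1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            for w, b in zip(WEIGHTS, BIASES)]

def neuron_coverage(test_inputs, threshold=0.6):
    """Fraction of hidden neurons driven above `threshold` by any test."""
    covered = set()
    for x in test_inputs:
        for i, a in enumerate(hidden_activations(x)):
            if a > threshold:
                covered.add(i)
    return len(covered) / len(WEIGHTS)

tests = [(1.0, 0.0), (0.0, 1.0)]
print(neuron_coverage(tests))  # covers 2 of the 3 hidden neurons
```

A low coverage score signals that parts of the network's internal behavior have never been exercised, which is one concrete way to quantify the "tested well enough" question Dolan-Gavitt raises.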
Source and top image: NYU Tandon School of Engineering
Top image shows: NYU Tandon researchers turned the stop sign outside their office into a speed limit sign by sticking on a Post-it note; the machine learning software had been modified through a backdoor.