Artificial Intelligence Neural Network Learns When It Should Not Be Trusted

Summary: MIT researchers have developed a way for deep learning neural networks to rapidly estimate confidence levels in their output. The advance could enhance safety and efficiency in AI-assisted decision making.

Original author and publication date: Daniel Ackerman – November 22, 2020

Futurizonte Editor’s Note: AI neural networks learn when not to trust themselves. How soon before they learn not to trust us?

From the article:

A faster way to estimate uncertainty in AI-assisted decision-making could lead to safer outcomes.

Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid in decision-making. But how do we know they’re correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.

They’ve developed a quick way for a neural network to crunch data, and output not just a prediction but also the model’s confidence level based on the quality of the available data. The advance might save lives, as deep learning is already being deployed in the real world today. A network’s level of certainty can be the difference between an autonomous vehicle determining that “it’s all clear to proceed through the intersection” and “it’s probably clear, so stop just in case.”

Current methods of uncertainty estimation for neural networks tend to be computationally expensive and relatively slow for split-second decisions. But Amini’s approach, dubbed “deep evidential regression,” accelerates the process and could lead to safer outcomes. “We need the ability to not only have high-performance models, but also to understand when we cannot trust those models,” says Amini, a PhD student in Professor Daniela Rus’ group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
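To make the idea concrete, here is a minimal sketch of what a single-pass uncertainty head can look like, assuming PyTorch. It is not the authors' released code, only an illustration of the general deep evidential regression recipe: the network predicts the four parameters of an evidential (Normal-Inverse-Gamma) distribution, and both the prediction and its uncertainty are read off directly, with no sampling or ensembling.

```python
# Hedged sketch of a deep-evidential-regression-style output head (PyTorch).
# All names (EvidentialRegressionHead, prediction_and_uncertainty) are
# illustrative, not from the paper's codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialRegressionHead(nn.Module):
    """Maps backbone features to the four evidential parameters."""

    def __init__(self, in_features: int):
        super().__init__()
        # One linear layer produces gamma, nu, alpha, beta for each input.
        self.linear = nn.Linear(in_features, 4)

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.linear(x).chunk(4, dim=-1)
        nu = F.softplus(log_nu)              # nu > 0
        alpha = F.softplus(log_alpha) + 1.0  # alpha > 1
        beta = F.softplus(log_beta)          # beta > 0
        return gamma, nu, alpha, beta


def prediction_and_uncertainty(gamma, nu, alpha, beta):
    """Point prediction plus aleatoric and epistemic uncertainty estimates."""
    prediction = gamma
    aleatoric = beta / (alpha - 1)           # expected noise in the data
    epistemic = beta / (nu * (alpha - 1))    # uncertainty from limited evidence
    return prediction, aleatoric, epistemic


if __name__ == "__main__":
    # Toy usage: stand-in features from some backbone network.
    features = torch.randn(8, 32)
    head = EvidentialRegressionHead(32)
    gamma, nu, alpha, beta = head(features)
    pred, aleatoric, epistemic = prediction_and_uncertainty(gamma, nu, alpha, beta)
    print(pred.shape, aleatoric.shape, epistemic.shape)
```

Because the uncertainty comes out of the same forward pass as the prediction, the cost is essentially that of a single inference, which is what makes the approach attractive for split-second decisions compared with ensembles or repeated sampling.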

“This idea is important and applicable broadly. It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model,” says Rus.

READ the complete original article here