
A new way to think about machine learning: As a failure

Summary: A researcher examines when, and whether, we should consider artificial intelligence a failure

Original author and publication date: University of Houston – November 23, 2020

Futurizonte Editor’s Note: As the article rightly notes, an AI failure could be deadly.

From the article:

Machine learning has delivered amazing results, but there have also been failures, ranging from the relatively harmless to the potentially deadly. New work suggests that common assumptions about the cause behind these supposed malfunctions may be mistaken, information that is crucial for evaluating the reliability of these networks.

Deep neural networks, multilayered systems built to process images and other data through the use of mathematical modeling, are a cornerstone of artificial intelligence.

They are capable of seemingly sophisticated results, but they can also be fooled in ways that range from relatively harmless — misidentifying one animal as another — to potentially deadly if the network guiding a self-driving car misinterprets a stop sign as one indicating it is safe to proceed.
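
To make the idea of a multilayered network concrete, here is a minimal sketch in Python, assuming the PyTorch library is available; the SmallImageNet name, the 28x28 input size, and the layer widths are illustrative choices, not anything specified in the paper.

import torch
import torch.nn as nn

# Illustrative toy classifier: the architecture below is an assumption for
# demonstration, not a model from the research described in the article.
class SmallImageNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(),                 # turn a 28x28 grayscale image into a vector
            nn.Linear(28 * 28, 256),      # first learned transformation
            nn.ReLU(),
            nn.Linear(256, 128),          # a second layer: the "multilayered" part
            nn.ReLU(),
            nn.Linear(128, num_classes),  # one score per possible label
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

model = SmallImageNet()
scores = model(torch.randn(1, 1, 28, 28))  # one random stand-in image in, class scores out
print(scores.argmax(dim=1))                # highest-scoring label (untrained, so arbitrary)

The network is untrained here; the point is only the layered structure of mathematical transformations that the article refers to.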

A philosopher with the University of Houston suggests in a paper published in Nature Machine Intelligence that common assumptions about the cause behind these supposed malfunctions may be mistaken, information that is crucial for evaluating the reliability of these networks.

As machine learning and other forms of artificial intelligence become more embedded in society, used in everything from automated teller machines to cybersecurity systems, Cameron Buckner, associate professor of philosophy at UH, said it is critical to understand the source of apparent failures caused by what researchers call “adversarial examples”: inputs that lead a deep neural network to misjudge images or other data because they fall outside the training inputs used to build the network. They are rare and are called “adversarial” because they are often created or discovered by another machine learning network, in a sort of brinksmanship within the machine learning world between ever more sophisticated methods for creating adversarial examples and ever more sophisticated methods for detecting and avoiding them.
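
To illustrate how an adversarial example can be manufactured, the sketch below uses the fast gradient sign method, a standard gradient-based recipe; the article describes examples that are often created or discovered by another network, so treat this as the simplest stand-in. The single-layer model, the epsilon budget, and the fgsm_example helper are assumptions for the demonstration, not methods from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Any differentiable classifier would do; an untrained single-layer stand-in here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def fgsm_example(image, true_label, epsilon=0.03):
    # Perturb each pixel by at most epsilon in the direction that raises the loss.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()  # tiny, targeted nudge
    return perturbed.clamp(0.0, 1.0).detach()        # keep pixel values in a valid range

x = torch.rand(1, 1, 28, 28)   # stand-in for a real 28x28 image
y = torch.tensor([3])          # stand-in for its true label
x_adv = fgsm_example(x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())  # labels may now disagree

The per-pixel change is small enough that a person would see the same image, yet it is aimed along the direction that most increases the model's loss, which is why such inputs can flip a prediction.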

“Some of these adversarial events could instead be artifacts, and we need to better know what they are in order to know how reliable these networks are,” Buckner said.

In other words, the misfire could be caused by the interaction between what the network is asked to process and the actual patterns involved. That’s not quite the same thing as being completely mistaken.

READ the complete original article here