Data Ethicist Cautions Against Overreliance on Algorithms

Key idea: It’s no secret that algorithms built by humans often perpetuate the same biases that went into them.

Original author and publication date: University of Oregon – June 2, 2022

Futurizonte Editor’s Note: We are digitizing our prejudices and transferring them to AI. Good job, humans!

From the article:

Pigeons can quickly be trained to detect cancerous masses on x-ray scans. So can computer algorithms.

But the potential efficiency of outsourcing the task to birds or computers is no excuse for getting rid of human radiologists, argues University of Oregon philosopher and data ethicist Ramón Alvarado.

Alvarado studies the way that humans interact with technology. He’s particularly attuned to the harms that can come from overreliance on algorithms and machine learning. As automation creeps more and more into people’s daily lives, there’s a risk that computers devalue human knowledge.

“They’re opaque, but we think that because they’re doing math, they’re better than other knowers,” Alvarado said. “The assumption is, the model knows best, and who are you to tell the math they’re wrong?”

It’s no secret that algorithms built by humans often perpetuate the same biases that went into them. A face-recognition app trained mostly on white faces won’t be as accurate on a diverse set of people. And a resume-ranking tool that gives greater weight to Ivy League educations might overlook talented people with unconventional, less quantifiable backgrounds.
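The mechanism behind such skew can be illustrated with a toy experiment. The sketch below (purely hypothetical; the groups, features, and 500-to-10 training split are invented for illustration, not drawn from the article) trains a one-feature threshold classifier almost entirely on data from one group, then measures accuracy on both groups:

```python
import random

random.seed(0)

def sample(group, n):
    # Hypothetical toy data: the feature separates the two classes at a
    # different point for each group, standing in for demographic variation.
    shift = 0.0 if group == "A" else 1.5
    return [(random.gauss(label * 2 + shift, 0.5), label)
            for label in (0, 1) for _ in range(n)]

def fit_threshold(data):
    # Learn a single cutoff: the midpoint between the two class means.
    m0 = sum(x for x, y in data if y == 0) / sum(1 for _, y in data if y == 0)
    m1 = sum(x for x, y in data if y == 1) / sum(1 for _, y in data if y == 1)
    return (m0 + m1) / 2

def accuracy(threshold, data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# Train almost entirely on group A -- analogous to a face-recognition
# dataset dominated by one demographic.
train = sample("A", 500) + sample("B", 10)
t = fit_threshold(train)

print(f"accuracy on group A: {accuracy(t, sample('A', 1000)):.2f}")
print(f"accuracy on group B: {accuracy(t, sample('B', 1000)):.2f}")
```

Because the learned cutoff sits where group A's classes divide, the model scores well on A but misclassifies much of group B, even though nothing in the code is explicitly "biased"; the skew lives entirely in the training data.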

But Alvarado is interested in a more nuanced question: What if nothing goes wrong, and an algorithm actually is better than a human at a task? Even then, harm can still occur, Alvarado argues in a recent paper. He calls this harm “epistemic injustice.”

The term was coined by feminist philosopher Miranda Fricker in the 2000s. It’s been used to describe benevolent sexism, like men offering assistance to women at the hardware store (a nice gesture) because they presume them to be less competent (a negative motivation). Alvarado has expanded Fricker’s framework and applied it to data science.

He points to the impenetrable nature of most modern technology: An algorithm might get the right answer, but we don’t know how; that makes it difficult to question the results. Even the scientists who design today’s increasingly sophisticated machine learning algorithms usually can’t explain how they work or what the tool is using to reach a decision.

READ the full article