Building trust in human-machine teams

Summary: Human-machine teaming is at the core of the Department of Defense’s vision of future warfare, but successful collaboration between humans and intelligent machines—like the performance of great teams in business or sports—depends in large part on trust. Indeed, national security leaders, military professionals, and academics largely agree that trust is key to effective human-machine teaming.

Original author and publication date: Margarita Konaev and Husanjot Chahal – February 18, 2021

Futurizonte Editor’s Note: Let’s be honest: at times we hardly trust humans. Why, then, should we trust machines? Just because they are intelligent?

From the article:

From the factory floor to air-to-air combat, artificial intelligence will soon replace humans not only in jobs that involve basic, repetitive tasks but also in roles that demand advanced analytical and decision-making skills. AI can play sophisticated strategy games like chess and Go, defeating world champions who are left dumbfounded by the unexpected moves the computer makes. And now, AI can beat experienced fighter pilots in simulated aerial combat and fly co-pilot aboard actual military aircraft. In the popular imagination, the future of human civilization is one in which gamers, drivers, and factory workers are replaced by machines. In this line of thinking, the future of war is marked by fully autonomous killer robots that make independent decisions on a battlefield from which human warfighters are increasingly absent.

Although not unfathomable, such scenarios are largely divorced from both the state of the art in AI technology and the way in which the U.S. military is thinking about using autonomy and AI in military systems and missions.

As the former director of the Joint Artificial Intelligence Center, Lt. Gen. Jack Shanahan, has made clear, “A.I.’s most valuable contributions will come from how we use it to make better and faster decisions,” including “gaining a deeper understanding of how to optimize human-machine teaming.”

Yet research by the Center for Security and Emerging Technology on U.S. military investments in science and technology programs related to autonomy and AI has found that a mere 18 of the 789 research components related to autonomy and 11 of the 287 research components related to AI mentioned the word “trust.”

Rather than studying trust directly, defense researchers and developers have prioritized technology-centric solutions that “build trust into the system” by making AI more transparent, explainable, and reliable. These efforts are necessary for cultivating trust in human-machine teams, but technology-centric solutions may not fully account for the human element in this teaming equation. A holistic understanding of trust—one that pays attention to the human, the machine, and the interactions and interdependencies between them—can help the U.S. military move forward with its vision of using intelligent machines as trusted partners to human operators, advancing the Department of Defense’s strategy for AI as a whole.

In recent years, the U.S. military has developed many programs and prototypes pairing humans with intelligent machines, from robotic mules that can help infantry units carry ammunition and equipment to AI-enabled autonomous drones that partner with fighter jets, providing support for intelligence collection missions and air strikes. As AI becomes smarter and more reliable, the potential ways in which humans and machines can work together with unmanned systems, robots, virtual assistants, algorithms, and other non-human intelligent agents seem limitless. But ensuring that the autonomous and AI-enabled systems the Department of Defense is developing are used in safe, secure, effective, and ethical ways will depend in large part on soldiers having the proper degree of trust in their machine teammates.

Trust is a complex and multilayered concept, but in the context of human-machine teaming, it speaks to an individual’s confidence in the reliability of the technology’s conclusions and its ability to accomplish defined goals.

Trust is critical to effective human-machine teaming because it affects the willingness of people to use intelligent machines and to accept their recommendations. Having too little trust in highly capable technology can lead to underutilization or disuse of AI systems, while too much trust in limited or untested systems can lead to overreliance on AI. Both present unique risks in military settings, from accidents and friendly fire to unintentional harm to civilians and collateral damage.

READ the complete article here