When scientific information is dangerous

Key idea: A new study shows the risks that can come from research into AI and biology.

Original author and publication date: Kelsey Piper – March 30, 2022

Futurizonte Editor’s Note: What are we really teaching AI and how dangerous for us is what AI is learning?

From the article:

One big hope about AI as machine learning improves is that we’ll be able to use it for drug discovery — harnessing the pattern-matching power of algorithms to identify promising drug candidates much faster and more cheaply than human scientists could alone.

But we may want to tread cautiously: Any system that is powerful and accurate enough to identify drugs that are safe for humans is inherently a system that will also be good at identifying drugs that are incredibly dangerous for humans.

That’s the takeaway from a new paper in Nature Machine Intelligence by Fabio Urbina, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins. They took a machine learning model they’d trained to find non-toxic drugs, and flipped its directive so it would instead try to find toxic compounds. In less than six hours, the system identified tens of thousands of dangerous compounds, including some very similar to VX nerve gas.
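The mechanics of that "flip" are simpler than they might sound. In scored drug-discovery searches, candidate molecules are ranked by a learned objective that rewards activity and penalizes toxicity; inverting the penalty turns the same machinery toward harm. Here is a minimal, purely illustrative sketch (not the authors' actual code; the model functions are toy stand-ins, and all names are hypothetical):

```python
# Hedged sketch: how a single sign flip in a scoring objective turns a
# "find safe, active compounds" search into a "find toxic compounds" search.
# The "models" below are toy hash-like functions, not real predictors.

def predicted_activity(molecule: str) -> float:
    """Toy stand-in for a learned bioactivity model (hypothetical)."""
    return (sum(ord(c) for c in molecule) % 97) / 97.0

def predicted_toxicity(molecule: str) -> float:
    """Toy stand-in for a learned toxicity model (hypothetical)."""
    return (sum(ord(c) * 7 for c in molecule) % 89) / 89.0

def score(molecule: str, avoid_toxicity: bool = True) -> float:
    # The dual-use pivot: negating the toxicity term makes the same
    # machinery reward exactly what it was built to avoid.
    sign = -1.0 if avoid_toxicity else 1.0
    return predicted_activity(molecule) + sign * predicted_toxicity(molecule)

def top_candidates(molecules, avoid_toxicity: bool = True, k: int = 3):
    """Rank candidates by the (possibly flipped) objective."""
    return sorted(molecules,
                  key=lambda m: score(m, avoid_toxicity),
                  reverse=True)[:k]
```

The point of the sketch is that nothing about the model itself has to change: the knowledge of what makes a compound toxic is already inside it, and one parameter decides whether that knowledge is used for avoidance or for pursuit.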

“Dual use” is here, and it’s not going away
Their paper hits on three interests of mine, all of which are essential to keep in mind while reading alarming news like this.

The first is the growing priority of “dual-use” concerns in scientific research. Biology is where some of the most exciting innovation of the 21st century is happening. And continued innovation, especially in broad-spectrum vaccines and treatments, is essential to saving lives and preventing future catastrophes.

But the tools that make DNA faster to sequence and easier to print, or make drug discovery cheaper, or help us easily identify chemical compounds that’ll do exactly what we want, are also tools that make it much cheaper and easier to do appalling harm. That’s the “dual-use” problem.

Here’s an example from biology: adenovirus vector vaccines, like the Johnson & Johnson Covid-19 vaccine, work by taking a common, mild virus (adenoviruses often cause cold-like infections), editing it so it can no longer make you sick, and swapping in a bit of genetic code for the Covid-19 spike protein so your immune system learns to recognize it.

That’s incredibly valuable work, and vaccines developed with these techniques have saved lives. But work like this has also been spotlighted by experts as having particularly high dual-use risks: that is, this research is also useful to bioweapons programs.

“Development of virally vectored vaccines may generate insights of particular dual-use concern such as techniques for circumventing pre-existing anti-vector immunity,” biosecurity researchers Jonas Sandbrink and Gregory Koblentz argued last year.

READ the full article