An AI has generated 40,000 potential chemical weapons

Artificial intelligence is a fabulous tool, but it also puts humanity and all of science face to face with their responsibilities.

It’s no secret that artificial intelligence is already disrupting our daily lives and will undoubtedly continue to do so for a long time. But, as is often the case with revolutionary technologies, this particularly brilliant coin has another side. An international team of researchers has just provided further proof, with an AI capable of generating chemical weapons.

Of all the fields where AI and neural networks are working miracles, chemistry is one of the most promising playgrounds. Systems based on huge neural networks have already produced spectacular advances.

One can cite, for example, DeepMind, whose systems are behind recent progress in nuclear fusion. There is also AlphaFold, which shook structural biology with its revolutionary database of predicted protein structures (see our article). And the list goes on.

And this is just the beginning: many specialists believe that AI will soon play a leading role in pharmacobiology. This technology could revolutionize the search for new chemical compounds, making it possible to produce new drugs for all sorts of pathologies… but also extremely dangerous active ingredients.

40,000 potential chemical weapons in just six hours

This is what the researchers have just shown with a proof of concept that is as impressive as it is worrying. They started from an AI platform called MegaSyn. Normally, this extremely powerful system is used to analyze molecules and predict their toxicity; the objective is to anticipate the dangerousness of specific compounds before humans are ever exposed to them.

It is therefore an indisputably useful tool for public health. The problem is that such systems can also be abused, as this work shows. The team led by bioinformatics researcher Fabio Urbina asked a question with serious implications: what if, instead of steering the model away from toxic molecules, they asked it to keep only the most dangerous results?
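Conceptually, the inversion the team describes can be sketched in a few lines. This is a deliberately toy illustration, not the study's actual pipeline: the `toxicity_score` function below is a hypothetical stand-in for a learned toxicity predictor, and the "molecules" are just labeled strings.

```python
import random


def toxicity_score(molecule: str) -> float:
    """Hypothetical stand-in for a learned toxicity predictor.

    A real system would evaluate molecular structure; here we just
    derive a deterministic pseudo-random score in [0, 1) from the name.
    """
    return random.Random(molecule).random()


def screen(candidates: list[str], keep: int = 3, invert: bool = False) -> list[str]:
    """Rank candidates by predicted toxicity and keep the top `keep`.

    invert=False mimics normal drug discovery: retain the LEAST toxic.
    invert=True is the worrying flip the study describes: retain the
    MOST toxic candidates instead.
    """
    ranked = sorted(candidates, key=toxicity_score, reverse=invert)
    return ranked[:keep]


candidates = [f"molecule-{i}" for i in range(1000)]
safest = screen(candidates, invert=False)      # normal screening objective
most_toxic = screen(candidates, invert=True)   # the inverted objective
```

The point of the sketch is how small the change is: a single flipped flag turns a safety filter into a selector for the most dangerous outputs, which is exactly the asymmetry the researchers warn about.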

As in pharmacology, so in artificial intelligence: context and manner of use matter more than the object itself. © kimono – Pixabay

A catalog of powerful poisons

The experiment worked so well that they ended up with a particularly terrifying catalog. In just six hours, the system generated no fewer than 40,000 theoretical chemical weapons. And we are not talking about mildly toxic compounds, but about genuine potential weapons.

The researchers report that a large share of these molecules could be even more toxic than VX, an extremely potent nerve agent whose exceptional dangerousness has led to its prohibition under the Chemical Weapons Convention.

“This is surprising, because VX is essentially one of the most potent poisons known to date,” the researchers explained in an interview with The Verge. “It takes a really, really tiny amount to reach a lethal dose.” Suffice it to say that this list, even though it remains purely theoretical, is full of molecules with devastating potential.

AI is a revolutionary technology within our reach, but it must be handled with caution. © Possessed Photography – Unsplash

An uncomfortable but fundamental ethical crossroads

With this work, the researchers ventured onto extremely slippery terrain, and they are well aware of it: in their paper, they explicitly worry about the consequences of this type of study.

“By simply inverting the objective of our machine learning model, we transformed a harmless, medically useful generative model into a generator of potentially lethal molecules,” the researchers summarize. “We went as far as we dared, to the point of crossing a moral boundary,” the paper reads. Why, then, embark on work of this kind at all?

The answer is simple: according to the researchers, burying one’s head in the sand would be even more dangerous. They have no intention of actually producing the weapons in question. Their objective is above all to illustrate a crucial point that is still too often ignored in this field of research, even though it is one of the foundations of pharmacology.

In that discipline, it is well known that there is no universal remedy; context, and above all the dose, makes the poison, as Paracelsus famously put it. Even an exceptional remedy becomes dangerous when misused, and it is exactly the same with artificial intelligence.

“We’ve spent decades using computers and AI to improve human health, not to degrade it,” they explain. “But we have also been naive in the way we approach the potential misuse of our discipline,” they admit bluntly.

To avoid ending up in a technological and societal dead end, there is only one solution: tackle these issues collectively, and refuse at all costs to bury our heads in the sand. © Cytonn Photography – Unsplash

A warning shot that could not be more concrete

The problem is that subverting an established system in this way often requires very little effort. There is therefore a real risk that existing systems could be repurposed, or even weaponized, in this way. And this is not just about pharmacobiology: the warning applies to every sector where AI will play a role in the future, from infrastructure management to computer security and road safety.

The researchers therefore believe that the only way to anticipate the fallout from such uses of AI is to reopen the debate on its potential dangers now, without the slightest taboo. It is a position many big names in tech already share, starting with Elon Musk, who has been insisting on it for years.

This work therefore serves as a wake-up call. “Our proof of concept highlights the fact that an autonomous, non-human creator of deadly chemical weapons is entirely possible,” the researchers insist. “Without being excessively alarmist, this work should absolutely serve as a warning to our colleagues,” they conclude.

The research paper is available here.
