Killer robots: the new weapons being developed without oversight


Some weapons powered by artificial intelligence are becoming increasingly autonomous, and many civil society organizations are calling for them to be banned. “It is not acceptable for a machine to decide to kill humans,” insists Jonathan Bannenberg, a researcher at GRIP (Group for Research and Information on Peace and Security) in Brussels.

Paris Match. In military jargon, killer robots are referred to as “LAWS”. What does the term cover?

Jonathan Bannenberg. The acronym stands for Lethal Autonomous Weapons Systems. These killer robots incorporate artificial intelligence sophisticated enough to search for targets, select them and attack them (known as “engagement”) without human intervention.

Do such weapons already exist, or are they at this stage just prototypes, or even sci-fi-inspired fantasies?

These are not fantasies but an increasingly palpable reality. In March 2021, a report by the UN Panel of Experts on Libya revealed that loitering munitions had been used against Haftar-affiliated troops by Government of National Accord forces. These were Kargu-2 quadcopter drones, which use learning algorithms capable of giving them full autonomy. The Turkish manufacturer of these weapons, the company STM, claimed that in the Libyan theater of operations the Kargu’s autonomy had been limited to navigation and target identification, while the engagement, the final decision to attack, had remained under human control. A doubt remains: we do not know whether, during this conflict, killer robots decided on their own to kill human beings for the first time. Either way, the bottom line is that removing final human approval is technically possible.


Can you give another example: the South Korean SGR-A1?

Absolutely. This sentry-gun robot, manufactured by Samsung, is deployed in the demilitarized zone between the two Koreas. For now it is still operated by humans via camera links, but technically this weapon could very well be switched to autonomous mode.

Jonathan Bannenberg, researcher at GRIP: “The trend is very clear: we are moving towards ever greater autonomy for all weapon systems.”

Are LAWS being used in the war in Ukraine?

According to some reports, the Russians have used KUB loitering munitions. However, as in Libya, a human apparently remained in the loop. There is an ethical, moral, even strategic barrier that still seems to hold… but for how long? Generally speaking, the trend is very clear: we are moving towards ever greater autonomy for all weapon systems. Many more examples could be cited, involving ships, various land vehicles… The arms industry is working on the development of LAWS in many countries.

“For artificial intelligence, targets, whether people or things, are just data to be processed, strings of 0s and 1s”

What arguments do the militarists who defend this “evolution” put forward?

Some of them cry exaggeration: we are not yet at full autonomy, they say. They also point out that the military themselves are not necessarily in favor of it. Even if this last point is not entirely false, this discourse neglects the risks. Others defend these “advances” by arguing that they allow more precise strikes with less collateral damage, and that LAWS make it possible to limit military losses: war can be waged with fewer soldiers. And then there is the classic arms-race argument: if we don’t follow the movement, we will be overtaken by the others. But we need to look critically at these narratives, which present themselves as self-evident while also being an extension of industry lobbying.


The counter-argument?

First, there is the question of digital dehumanization: it is not acceptable for a machine to decide to kill humans. For artificial intelligence, targets, whether people or things, are just data to be processed, strings of 0s and 1s. It does not differentiate between a “who” and a “what”. How would a killer robot distinguish a fighting soldier from a surrendering one? Or a soldier from a civilian? The machine is conditioned by its initial programming, which may carry biases, and in a changing environment one can question its ability to make appropriate decisions. One proposed solution is machine learning: such software improves its performance from the data it collects, which leads it to perform new tasks that were not initially programmed, but the same biases are likely to be replicated. In addition, roboticists admit that a large part of the unknown remains in this area. This compounds the questions about the transparency of the chain of command and the possibility for users to hide behind the machine so as not to have to answer for possible abuses. In case of error, who is legally responsible? The designer of the artificial intelligence? The organization that used it? Not to mention the risks of hacking, and the possibility that such weapon systems, which require few personnel to operate, could fall into the hands of non-state entities or terrorist groups.

“If the international community does not react quickly, we risk being in a situation of fait accompli, without any normative framework setting a red line”

Moreover, doesn’t the possibility of waging war while limiting military losses lower the threshold for entering armed conflict?

This is indeed a major objection. The development of LAWS heralds a world that could become even more conflict-ridden. We can draw the connection with what is happening in Ukraine: what would happen if the Russians used swarms of killer robots to limit their human losses?

Do you have the feeling we are at a turning point in the history of armaments?

A pivotal moment, yes. There have been three revolutions in warfare: gunpowder, nuclear weapons and now artificial intelligence, in which the arms industry is investing heavily. If the international community does not react quickly, we risk facing a fait accompli, without any normative framework setting a red line. It is not as though warning signals have not been sounding for several years: many voices in civil society have already been raised to demand the prohibition of such weapons, or at the very least regulation guaranteeing that they will always remain under meaningful human control. What is missing is real political will at the global level.


Isn’t this being discussed internationally?

Yes, of course. The subject has been discussed within the UN framework for more than eight years, in diplomatic exchanges that take place under the Convention on Certain Conventional Weapons (CCW). But the CCW operates by consensus, and highly militarized states invested in the development of these weapons, including the United States and Russia, are blocking the progress of the debates by turning consensus into a right of veto. They claim this debate is premature, while many civil society organizations, international public opinion and scientists call for preventive action. In terms of arms control, we are in a way reliving the scenario of the nuclear bomb: voices were raised against the risks posed by atomic weapons without waiting for Hiroshima.

“Belgium blows hot and cold. This is disappointing, as our country once played a pioneering role in the processes that led to treaties banning landmines and cluster munitions.”

What can Belgium do to carry some weight in this debate?

Our country chaired the group of governmental experts within the framework of the CCW in 2021 and spared no effort to try to revive discussions on an international treaty banning killer robots. Last March, after another failure of the discussions within the UN framework, Belgium signed a joint declaration with 22 other states expressing “deep disappointment”. The text reiterates that machines should never decide on the use of lethal force; only humans should. At the same time, however, the war in Ukraine has greatly tightened ties within NATO, which is dominated by the United States, a country opposed to a ban treaty and advocating only the development of rules of good conduct. It cannot be said with certainty that this explains it, but the Belgian government’s position has very recently become more wait-and-see: a proposed parliamentary resolution affirming that our country should play a more active role on the international scene to achieve a ban on killer robots was due to be voted on a few weeks ago by the National Defense Committee. To our surprise, we learned that the item was no longer on the agenda. There seems to be some reluctance at the federal government level.

Why?

The executive now wants prior consultation on the issue of killer robots within NATO. In other words, Belgium blows hot and cold. This is disappointing, as our country once played a pioneering role in the processes that led to the treaties banning anti-personnel mines and cluster munitions. It should do the same in this debate on killer robots, even if that means joining other states in an alternative forum to the CCW, in the hope of creating sufficient critical mass to finally move the lines.
