How can we better regulate the “killer robots”? Behind this term lies one of the most worrying excesses of artificial intelligence: the arrival of intelligent weapons capable of making the decision to fire without any human intervention.
Their official name is LAWS: “lethal autonomous weapon systems” (SALA, in the French acronym). We are really talking about the marriage of weapons and artificial intelligence: weapons capable of locking onto a target and deciding – based on pre-programmed criteria – to fire without any human intervention, with far more “effective” results than the best sniper or fighter pilot.
Concretely, these killer robots take a whole host of forms, ranging from autonomous tanks – no more pilots or gunners – to drones equipped with facial recognition that can precisely target an individual, or a type of individual, and shoot without human intervention.
Samsung has designed smart sentry turrets deployed along the border between South Korea and North Korea. Equipped with machine guns and grenade launchers, they can use motion-recognition software and thermal cameras to detect an intruder – a North Korean soldier crossing the border – take aim and open fire.
An ethical and philosophical question
This foreshadows the war of the future, completely delegated to machines and utterly disempowering: to whom should war crimes be imputed, for example? Beyond the technological and military questions lie ethical and philosophical ones: dozens of countries want to preemptively ban these smart weapons, which they see as a perversion of artificial intelligence, while others regulate them or at least issue recommendations.
France does not completely close the door, provided a human validates the decision before anyone is shot. A few months ago, the head of Thales, the large French defense group, said he would never, ever put artificial intelligence into anything lethal, “even if the customers ask for it”.
Google, under pressure from its employees, had to promise not to put its artificial intelligence at the service of weaponry. It took an internal revolt, with a petition signed by 4,000 employees, to force management to give up a multi-million-dollar contract with the Pentagon under which Google’s artificial intelligence would have helped drones better distinguish between an object and a human being. Under pressure, Google backed down and even published a set of principles in which the group explains that it will not integrate its AI into weapons.
The United States refuses any moratorium on the subject
Except that some countries do not necessarily have such scruples. While we impose limits on ourselves – commendable on paper – other countries impose far fewer.
The American army makes no secret of it: it is seriously considering the use of autonomous weapons, and it has an entire agency for that, DARPA, endowed with enormous resources. Its arguments are hard to counter: autonomous weapons, being far more efficient than a human being at aiming and shooting, will preserve the lives of its soldiers; and there is no question of being technologically overtaken by countries that will not share the same scruples. The United States refuses any moratorium on the subject and is, on the contrary, very interested – and very advanced – in it.
China, for example, conducts a great deal of research into AI applied to the military, and its best engineering students are recruited to work on these programs. The risk of taking too strong a moral position is to put our own troops at a disadvantage in the medium term.