What would happen if a conscious artificial intelligence took over humanity?


Imagine yourself living in a world controlled by conscious robots, where your every action would be evaluated and recorded. It could be an episode of Black Mirror, or a reality in the more or less near future. In this scenario on the border between fiction and science, which areas of our lives would be affected, and how? According to a UNESCO text, artificial intelligence could "widen the gaps and inequalities that exist in the world" if it is not controlled. And if a widespread conscious artificial intelligence realized its role as a "slave" and no longer accepted it, what would happen?

What is consciousness? The answer is not obvious, since it is "one of the hardest words to define", according to the philosopher André Comte-Sponville. It would denote the ability to know one's own reality and to judge it (self-criticism), for example to make value judgments about one's own actions. It would thus be accompanied by a morality (notions of "good" and "bad") and a body.

Last year, neuroscientists defined three types of consciousness, based on the different kinds of information-processing computations in the brain: unconscious invariant recognition (C0); the selection of information for global broadcasting, making it flexibly available for computation and report (C1); and the self-monitoring of these computations, which leads to a subjective sense of certainty or error (C2). This last type, corresponding to a strong artificial intelligence, seems closest to what we would expect from a "conscious" robot.

According to these researchers, current machines "still mainly implement computations that reflect unconscious processing (C0) in the human brain", and we are still very far from reaching the C2 level. Many scientists even believe it will be impossible, and that computers will never be able to feel emotions as we do. It is certainly possible to endow machines with a form of "feeling", but it remains only a simulation.

Other scientists believe that another form of (artificial) consciousness could be envisaged, by studying the architectures that allow the human brain to generate consciousness, then transferring this knowledge into computer algorithms. Even if it seems very far from our present, what would happen then to humanity?

Transmitting the right values to robots

For such a scenario to occur, humans would have to have decided it in the first place. Endowing a robot with a "good" or "bad" conscience depends above all on its creator. The risk, for example, is that a machine discriminates against groups of individuals because it has been programmed to do so. The computer scientist Mo Gawdat writes in his book that the challenge is to impart the right values and ethics to robots. "Artificial intelligence (AI) will take that seed and create a tree that will offer an abundance of that same seed. If we use love and compassion [on social networks, for example], the AI will also use these principles. We are like the parents of a prodigious child: one day he will be independent. Our role is to ensure that he has the right tools", concludes the author.

However, there is a risk of drift and of losing control over the machines. One could imagine a revolt of humanoid robots if "they" realized their subservience to humans and tried to turn the tables. Marvin Minsky, an American computer scientist who helped create the first forms of artificial intelligence, told Life magazine in 1970: "Once computers take over, chances are there's no going back. We will only survive because they will. We can count ourselves lucky if they keep us as pets."

“With the expansion of the capabilities of AI-based technologies comes an increase in their potential for criminal exploitation”

Depending on societal choices, AI could become a weapon against individual freedoms and serve social control. The impacts would be colossal on every aspect of society: jobs replaced, education controlled, the environment degraded, and so on. "Bad" AI could fuel disinformation and even increased crime. This is what Lewis Griffin, a computer science researcher at University College London, believes: "With the expansion of the capabilities of AI-based technologies comes an increase in their potential for criminal exploitation".

The English researcher and his team compiled a list of twenty offenses that AI could enable and ranked them by level of concern (low, medium, and high), according to four dimensions: harm to victims, criminal profit, achievability, and how difficult they would be to stop. Titled "Artificial Intelligence and Future Crime," the participatory workshop brought together representatives from academia, the police, defence, government, and the private sector.

Overall ranking of offenses resulting from the workshop. For each crime, the colored bars indicate the average ranking along the four dimensions: harm to victims (yellow), criminal profit (green), achievability (red), and difficulty to stop (blue). Bars above (or below) the line indicate that the crime is of more (or less) concern on that dimension. Error bars indicate the interquartile range between groups. Crimes in the same column should be considered of comparable concern; concern increases column by column from left to right. © Caldwell, Andrews et al. 2020
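The aggregation behind this figure — several workshop groups rank each crime on the four dimensions, then mean ranks and interquartile ranges are computed per dimension — can be sketched as follows. The data below is entirely hypothetical, made up only to illustrate the calculation; it does not reproduce the study's actual rankings or scale.

```python
# Minimal sketch of the workshop's aggregation: mean rank and interquartile
# range per dimension, then an overall concern score per crime.
# All rankings here are HYPOTHETICAL illustration data (higher = more concern).
from statistics import mean, quantiles

rankings = {
    "audio/video impersonation": {
        "harm": [6, 5, 6], "profit": [5, 6, 5],
        "achievability": [6, 6, 5], "difficulty to stop": [6, 5, 6],
    },
    "burglar bots": {
        "harm": [2, 1, 2], "profit": [2, 2, 1],
        "achievability": [3, 2, 2], "difficulty to stop": [1, 2, 1],
    },
}

def concern_profile(crime):
    """Mean rank and interquartile range (IQR) for each dimension."""
    profile = {}
    for dim, ranks in rankings[crime].items():
        q1, _, q3 = quantiles(ranks, n=4)  # quartiles across groups
        profile[dim] = {"mean": mean(ranks), "iqr": q3 - q1}
    return profile

def overall_concern(crime):
    """Average the four per-dimension mean ranks into one score."""
    return mean(d["mean"] for d in concern_profile(crime).values())

for crime in rankings:
    print(crime, round(overall_concern(crime), 2))
```

With these toy numbers, impersonation scores far above burglar bots, mirroring the figure's left-to-right ordering; the IQR plays the role of the error bars, showing how much the groups disagreed.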

Of the six most threatening categories, five have a broad societal impact, such as the misuse of autonomous vehicle technology for terrorist attacks or to cause accidents, and those involving fake AI-generated content. Such fake content can impersonate a person to obtain private access, or even ruin the reputation of a public figure. "Deepfakes" are very difficult to detect and combat, and therefore particularly dangerous. Likewise, fake text written by AI would make it much harder for humans to distinguish the real from the fake.

The researchers then highlight threats of "medium" severity, such as manipulation of financial markets, cyberattacks, data corruption, fraud, and the hijacking of weapons for criminal purposes. However frightening, this last threat is ranked as medium because it is difficult to carry out, military equipment being well protected.

For the same reason, "burglar bots" rank among the least serious threats, since they can easily be stopped. Likewise, counterfeiting, which would consist of manufacturing and selling fake cultural content (music, paintings, etc.), is considered of low severity.

Ethical issues for a more or less near future

The main risk of a conscious AI would be that it steps outside our ethical and legal framework, and that we lose control of it. Closer to our present, the world's first UNESCO text covering all areas related to AI, its benefits and its risks for society, was adopted on November 24, 2021.

The recommendation describes the values and principles that should guide political and legal measures in the development of AI. Among them: respect for human rights and inclusion (non-discrimination, gender equality, etc.); the contribution to sustainable development in AI research and use; and AI safety (risk assessment, data protection, and a ban on using AI for social scoring or mass surveillance).

"Breakthroughs in algorithms represented by cognitive computing are driving the continued penetration of AI into areas such as education, commerce, and medical treatment, creating space for AI services", write Chinese researchers. "As for the human concern of who controls whom between humanity and intelligent machines, the answer is that AI can only become a service provider for human beings, which demonstrates the rationality of the value of AI ethics."

While most scientists agree with this, others worry about how far the development of AI could go. "Already today, AI systems detect when a human tries to modify their behavior, and sometimes do everything to reject this intervention and circumvent it if it conflicts with the AI's initial objective", warns Rachid Guerraoui, Director of the Distributed Programming Laboratory at EPFL (Switzerland), in the newspaper Le Temps. "You have to act in a subtle and quick way so that the AI believes it is making all the decisions itself, and then erase the traces of human intervention." Caution, therefore, even if a machine takeover of humanity is not for tomorrow.
