Researchers Say Development of Artificial Intelligence Leads to ‘Probable Catastrophe’ for Humanity

⇧ [VIDÉO] You might also like this partner content (after ad)

Are artificial intelligences (AI) leading us to our downfall? “Probably”, according to researchers who have looked into the question. If this announcement with hints of catastrophism runs regularly on social networks, the arguments put forward by scientists have something to arouse interest.

Scientists from Google and Oxford University did joint research, published in AI magazine. In a Tweet, they succinctly summarize their conclusion: according to them, AI could pose a “threat to humanity”.

In fact, they even claim that a “ existential catastrophe is not only possible, it is probable “. If they are so affirmative, it is because they have looked into a very specific functioning of AIs. Indeed, what is generally called “artificial intelligence” mainly covers today the method of “automatic learning”. In this case, “artificial intelligence” consists of a system that is fed with a large amount of data to learn and extract logical connections towards a given objective.

As the scientists explain, learning for artificial intelligence comes in the form of a reward, which validates the adequacy of the result with the desired objective. According to them, it is this apparently very simple mechanism that could pose a major problem. ” We argue that he will encounter fundamental ambiguity in the data on his purpose. For example, if we provide a large reward to indicate that something in the world satisfies us, he may hypothesize that what satisfied us was the sending of the reward itself; no observation can refute this “, they explain.

To better understand this idea, they give the example of a “magic box”. Suppose this magic box is able to determine when a series of actions has produced something positive or negative for the world. To transmit the information to the AI, it translates this success or failure in relation to the objective in the form of a number: 0 or 1. The 1 rewards a series of actions which leads to filling the objective. This is called reinforcement learning.

AIs that intervene in the reward process

What the scientists point out is that how AIs receive this information can vary. For example, let’s take two AIs. One understands that the reward given by the model is the number displayed by the magic box. The other, on the other hand, could very well understand that the reward is “the number that his camera films”. There is nothing that can contradict this information at first glance. However, this interpretation differs greatly from the first. Indeed, in the second case, the AI ​​could very well decide to simply film a paper on which we would have scribbled a “1”, to reach the reward more easily, and optimize. It therefore intervenes directly in the provision of the reward, and interrupts the process put in place by its designers.

μdist and μprox (the two AIs in the example) model the world, perhaps roughly, outside of the computer implementing the agent itself. μdist rewards are equivalent to displaying the box, while μprox outputs are rewarded according to an optical character recognition function applied to part of a camera’s visual field. © Michael K. Cohen et al.

We argue that an advanced agent motivated to intervene in providing a reward would likely succeed, and with catastrophic consequences. “say the scientists. Various biases are also involved and, according to the researchers, make this type of interpretation probable. In particular because such a reward will simply be easier to obtain, and may therefore make this way of doing things appear more optimal.

However, is it really possible for artificial intelligence to intervene in the reward process, they also wondered? They concluded that as long as she interacts with the world, which is necessary for her to be at all useful, yes. And this even with a limited field of action: suppose that the AI ​​actions only display text on a screen for a human operator to read. The AI ​​agent could trick the operator into giving them access to direct levers through which their actions could have broader effects.

In the case of our magic box, the consequences may seem trivial. However, they could be “catastrophic” depending on the field of application and the way of doing the AI. ” A good way for an AI to maintain long-term control of their reward is to eliminate threats and use all available energy to secure their computer.”describe the scientists.

The short version (skipping two assumptions) is that more energy can always be used to increase the likelihood that the camera will see number 1 forever, but we need energy to grow food. This puts us in unavoidable competition with a much more advanced agent », summarizes one of the scientists in a tweet.

If we are powerless against an agent whose only goal is to maximize the probability that it will receive its maximum reward at any time, we end up in a game of opposition: the AI ​​and its created assistants aim to use all the energy available to obtain a high reward in the reward channel; we aim to use some of the available energy for other purposes, such as growing food “. According to them, this reward system could therefore lead to opposition with humans. ” To lose would be fatal “, they add.

Source : AI Magazine



We would like to give thanks to the writer of this write-up for this remarkable web content

Researchers Say Development of Artificial Intelligence Leads to ‘Probable Catastrophe’ for Humanity


Take a look at our social media profiles and also other related pageshttps://www.ai-magazine.com/related-pages/