An AI capable of creating a computer inside itself to increase its abilities

Using new low-level neural machine code that they developed specifically for this purpose, two researchers at the University of Pennsylvania have designed a neural network that can run a program just like a typical computer. They show that this artificial network can thereby speed up its own computations, play Pong, or even run another artificial intelligence.

Neural networks are designed to mimic the functioning of the human brain and can solve common problems. They consist of several layers of artificial neurons (nodes) connected to each other. Each node is associated with a set of weights and a threshold value: it computes a weighted sum of its inputs, and if that sum exceeds the threshold, the node activates and passes its output on to the next layer, and so on. These artificial neural networks must be trained to become increasingly efficient.
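The weighted-sum-and-threshold mechanism described above can be sketched in a few lines of Python. This is a minimal illustration with hand-chosen weights and threshold, not the researchers' actual code:

```python
# A single artificial neuron (threshold unit): it fires (outputs 1) only if
# the weighted sum of its inputs exceeds its threshold value.

def neuron(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Example: two inputs weighted equally; the neuron fires only when both are on.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # 1 -> weighted sum 1.2 exceeds 1.0
print(neuron([1, 0], [0.6, 0.6], 1.0))  # 0 -> weighted sum 0.6 stays below 1.0
```

Training a network amounts to adjusting these weights and thresholds until the layers of neurons, taken together, produce the desired outputs.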

A neural network underlying an artificial intelligence must therefore be trained to perform the task for which it was designed. Typically, a neural network developed for image classification must be trained to recognize and distinguish different patterns from thousands of examples: this is machine learning. Because the examples submitted to the network are annotated in this case, the process is called "supervised" learning. Jason Kim and Dani Bassett of the University of Pennsylvania now propose a new approach, in which the neural network is trained to execute code, like an ordinary computer.
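Supervised learning from annotated examples can be illustrated with the classic perceptron rule: after each wrong prediction, the weights are nudged toward the correct answer. The data points and learning rate below are made up for illustration:

```python
# Toy supervised learning: a perceptron learns to separate labeled 2D points.

def train_perceptron(examples, lr=0.1, epochs=20):
    w = [0.0, 0.0]  # weights, adjusted during training
    b = 0.0         # bias (plays the role of a movable threshold)
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
            err = label - pred  # 0 if correct; +1/-1 if wrong
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

# Annotated examples: points above the line y = x are labeled 1, others 0.
data = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((1.0, 0.0), 0), ((2.0, 1.0), 0)]
w, b = train_perceptron(data)
for x, label in data:
    pred = 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
    print(x, "->", pred)  # matches the annotated labels after training
```

Kim and Bassett's approach departs from this standard recipe: instead of learning labels, the network is configured to behave like executable hardware.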

A new language for implementing logic circuits

An artificial intelligence trained to mimic the logic circuits of a standard computer within its neural network could in theory run code internally and thus speed up certain calculations. "However, the lack of a concrete, low-level programming language for neural networks precludes us from taking full advantage of a neural computing framework," the two researchers note in the preprint version of their article.

Jason Kim and Dani Bassett therefore set out to develop a new programming language that gives the neural network a fully distributed implementation of software virtualization and computer logic circuits. "We bridge the gap between how we conceptualize and implement neural computers and silicon computers," they explain.

Their language is based on reservoir computing, a computational framework derived from the theory of recurrent neural networks (neural networks with recurrent connections). The two researchers started by calculating the effect of each neuron in order to build a very basic neural network capable of performing simple tasks, such as addition. They then linked several of these networks together so that they could perform more complex operations, thereby reproducing the behavior of logic gates: the most basic operations that can be performed on a bit, but which, combined together (combinations called logic circuits), allow much more complex operations to be carried out.
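The gate-combining idea can be sketched with simple threshold neurons rather than the authors' reservoir networks (the weights and thresholds below are hand-picked for illustration). A single neuron can act as a NAND gate, and wiring four NANDs together yields an XOR, a small logic circuit:

```python
# Logic gates built from threshold neurons, then combined into a circuit.

def gate(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) > threshold else 0

def nand(a, b):
    # NAND fires unless both inputs are on: negative weights, threshold -1.5.
    return gate([a, b], [-1, -1], -1.5)

def xor(a, b):
    # Classic circuit: XOR assembled from four NAND gates.
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 0 0->0, 0 1->1, 1 0->1, 1 1->0
```

Because NAND is universal, chaining enough such gates can in principle reproduce any digital circuit, which is why imitating logic gates is the key step toward a general-purpose neural computer.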

The resulting network is capable of doing everything a conventional computer can do. In particular, the researchers used it to run another virtual neural network and a version of the game Pong.

Even faster networks thanks to neuromorphic computing

"By decomposing the internal representation and dynamics of the reservoir into a symbolic basis of its inputs, we define low-level neural machine code that we use to program the reservoir to solve complex equations and store chaotic dynamical systems as random-access memory," the two experts summarize. This neural network could also considerably simplify the splitting of massive computational tasks: such tasks are usually distributed over several processors to increase computation speed, but that also requires much more power.

Furthermore, neuromorphic computing could make these virtual networks run faster. In a typical computer, data storage (memory) and processing (processor) are separated, and data is processed sequentially and synchronously. In a neuromorphic computer, by contrast, designed to imitate the functioning of the human brain as closely as possible, storage and computation take place within artificial neurons that communicate with each other: a large amount of information is processed in parallel and asynchronously, which reduces the number of operations the machine has to perform. Such a computer can therefore learn and adapt with low latency, even in real time.

Asked by New Scientist, Francesco Martinuzzi of the University of Leipzig, a machine-learning specialist, confirms that code-running neural networks such as the one developed by Kim and Bassett could get better performance out of neuromorphic chips, adding that in certain specific areas these computers could greatly outperform standard ones.

But before their computing capacity can be exploited, these neural networks will first have to be scaled up: while the two researchers managed to imitate the operation of a few logic gates here, the microprocessor of a conventional computer contains several billion transistors!

Source: J. Kim and D. Bassett, arXiv
