American scientists from the University of Pennsylvania have just published on arXiv the results of their work on artificial intelligence and artificial neural networks. They created a new programming language that mimics the logic circuits of a standard computer within a neural network, so that the network becomes capable of running a program. This artificial network can then play games, increase its calculation speed and even launch the execution of another artificial intelligence within it!
What is an artificial neural network?
An artificial neural network is a computing system whose functioning is inspired by the human brain. The concept, which seems recent, is actually quite old: it was invented in 1943 by the mathematician Walter Pitts (1923-1969) and the neurophysiologist Warren McCulloch (1898-1969), both from the University of Chicago.
It took more than 60 years, however, along with the rise of Big Data and massively parallel processing during the 2000s, to obtain the computing power necessary for running complex neural networks.
Generally, an artificial neural network is “built” from a large number of processing units organized in layers and working in parallel. The first layer receives the raw information, processes it, then transmits the processed information to the next layer of artificial neurons, and so on. The last layer is responsible for producing the results.
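As a toy illustration (not code from the study), this layer-by-layer flow can be sketched in a few lines of Python, using arbitrary example weights:

```python
import math

def layer(inputs, weights, biases):
    """One layer of artificial neurons: each neuron computes a weighted
    sum of its inputs plus a bias, passed through a tanh activation."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny two-layer network: the first layer processes the raw inputs,
# the last layer produces the result (all weights are arbitrary here).
raw = [0.5, -1.0]
hidden = layer(raw, weights=[[0.8, -0.2], [0.3, 0.9]], biases=[0.1, -0.1])
output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
print(output)
```

Each call to `layer` plays the role of one "fraction" of neurons described above, handing its processed values to the next.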
Algorithms, which are sequences of instructions and operations for solving a problem, allow the neural network to learn from new data. Through the neural network, the computer learns to perform a task from examples that serve as training.
Much like a young human brain, an artificial neural network cannot simply be programmed to perform a task. It, too, has to learn!
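To illustrate what "learning from examples" means here, a minimal perceptron sketch can be written in Python. This is a hypothetical illustration of the general principle, not the researchers' setup:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Adjust weights from labelled examples using the perceptron rule:
    nudge each weight in proportion to the prediction error."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learning the OR function purely from training examples:
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(examples)
```

No one programmed the OR rule explicitly; the weights were discovered from the examples alone.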
Currently, there are several types of artificial neural networks. They are generally categorized according to the number of “layers” between the input of raw information and the output of results. The simplest networks are called feed-forward, followed by recurrent neural networks and then by the more complex, but increasingly used, convolutional neural networks.
(Also read: This is how artificial intelligence helps mathematicians)
A new programming language
Future microprocessors will combine computer technology and artificial intelligence for real-time information processing! Source: SweetBunFactory/Shutterstock
The two researchers from the University of Pennsylvania decided in their work to teach an artificial neural network to execute computer code like a conventional computer.
By imitating the logic circuits of a computer, an artificial intelligence should be able to execute computer code and thus accelerate its calculation speed. To get there, the researchers set out to create a simple programming language that neural networks can interpret.
Their new programming language allowed them to add a full implementation of software virtualization and logic circuitry within a neural network. This language is based on “reservoir computing”, an approach derived from the theory of recurrent neural networks.
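The reservoir-computing idea can be sketched as follows: a fixed, random recurrent network (the "reservoir") is driven by an input signal, and only a simple linear readout of its internal states is trained. This is a minimal sketch with arbitrary sizes and parameters, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: its internal weights are never trained.
N = 50                               # reservoir size (arbitrary here)
W_in = rng.normal(size=(N, 1)) * 0.5
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep dynamics stable

def run_reservoir(inputs):
    """Drive the reservoir with a 1-D signal and collect its states."""
    x = np.zeros(N)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in[:, 0] * u)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is fitted (ordinary least squares), here to
# predict the next sample of a sine wave.
u = np.sin(np.linspace(0, 8 * np.pi, 400))
target = np.roll(u, -1)
S = run_reservoir(u)
readout, *_ = np.linalg.lstsq(S, target, rcond=None)
pred = S @ readout
```

The appeal of this approach is that training reduces to a cheap linear fit, while the fixed recurrent reservoir supplies the rich dynamics.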
To start, the two researchers calculated the effect of each neuron and created a basic neural network capable of performing very simple calculations such as addition. To go further, they linked several of these basic networks so that the slightly more complex network could perform more difficult operations.
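The idea of chaining simple neuron circuits into arithmetic can be sketched with threshold neurons: a single neuron can act as a NAND logic gate, and NAND gates compose into a one-bit adder. This is a toy illustration with weights of our own choosing, not the paper's programming language:

```python
def neuron(inputs, weights, bias):
    """A threshold neuron: outputs 1 if the weighted sum exceeds 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# One neuron implements a NAND gate...
def nand(a, b):
    return neuron([a, b], weights=[-2, -2], bias=3)

# ...and NAND gates compose into a half adder: one-bit addition.
def half_adder(a, b):
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # sum bit (XOR built from NAND)
    c = nand(n1, n1)                    # carry bit (AND built from NAND)
    return s, c
```

For example, `half_adder(1, 1)` returns `(0, 1)`: a sum bit of 0 and a carry of 1, i.e. 1 + 1 = 10 in binary, computed entirely by small neuron circuits wired together.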
Finally, by combining these more complex networks, they achieved a neural system capable of performing much more difficult operations, as a conventional computer does, such as playing the game Pong or operating another artificial neural network.
(Also read: Here is the first wireless communication between a human and a computer!)
The rise of neuromorphic computing
These neural networks and this new programming language should simplify the splitting of complex computational tasks. Very often, this kind of operation is distributed over several processors in order to increase the speed.
Moreover, the use of these new neural networks combined with neuromorphic computing could allow a much faster speed of operation.
Neuromorphic computing involves neuromorphic chips. These are chips that mimic the functioning of the human nervous system.
Conventional computers have an architecture in which the information storage part (the computer’s memory) is separated from the processing and calculation part (the computer’s microprocessor). This mode of operation, far removed from that of the human brain, processes information sequentially and synchronously.
A neuromorphic processor works differently. In the neural network that makes up a neuromorphic processor, artificial neurons communicate with each other, processing information asynchronously and in parallel. Information storage and calculation are handled together within the neural network. These systems therefore have the ability to process information irregularly, which allows them to adapt to events.
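This event-driven behaviour can be illustrated with a leaky integrate-and-fire neuron, the basic unit of many neuromorphic chips. The following is a simplified sketch with arbitrary parameters, not a model of any particular chip:

```python
import math

def lif_neuron(spike_times, tau=10.0, threshold=1.0, weight=0.4):
    """Leaky integrate-and-fire neuron: its potential decays between
    incoming spikes and it fires only when enough spikes arrive close
    together -- computation driven by events, not by a fixed clock."""
    v, last_t, output_spikes = 0.0, 0.0, []
    for t in spike_times:
        v = v * math.exp(-(t - last_t) / tau) + weight  # decay, then add
        last_t = t
        if v >= threshold:
            output_spikes.append(t)  # the neuron fires and resets
            v = 0.0
    return output_spikes
```

With closely spaced input spikes, such as `[0, 1, 2]`, the potential accumulates and the neuron fires; with sparse spikes, such as `[0, 50, 100]`, the potential leaks away between events and the neuron stays silent. Between spikes, nothing at all is computed.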
A neuromorphic processor is more powerful, faster and much less energy intensive than a traditional computer. A computer equipped with this type of processor will be able to learn and adapt with very low latency.
The fact of being able to process a large amount of information in parallel allows for much greater speed in terms of the execution of the calculations.
However, the researchers still have to scale this approach up to the size of a full computer, because they carried out their work on a network of only a few artificial neurons.
(Also read: Is there a computer chip that simulates our brain?)