Artificial intelligence: the many faces of deep learning

Contemporary artificial intelligence technologies mainly belong to the deep learning family. The term “learning” refers to the fact that the relationship sought between input data (images, texts, sounds, etc.) and outputs (a label, a new image, a new sound, etc.) is obtained by adjusting millions or billions of parameters against known input/output examples. The adjective “deep” refers to the way the elementary computing units, likened to neurons, are organized: they are arranged in successive layers, more or less deep in the computation, and the connection strengths between them are set during learning. Several variants exist depending on the task to be performed. Here is a non-exhaustive list.
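As a minimal sketch of this idea of learning by adjusting parameters (a hypothetical toy example, not drawn from the article): a “model” with just two parameters can be fitted to known input/output pairs by gradient descent, the same principle that deep networks apply to millions or billions of parameters.

```python
import numpy as np

# Toy illustration: "learning" the relationship y = 2x + 1 from examples
# by adjusting two parameters (a weight and a bias) with gradient descent.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)         # known inputs
y = 2.0 * x + 1.0                        # known outputs

w, b = 0.0, 0.0                          # parameters, initially arbitrary
lr = 0.1                                 # learning rate
for _ in range(500):
    pred = w * x + b                     # current guess
    grad_w = np.mean(2 * (pred - y) * x) # gradient of the mean squared error
    grad_b = np.mean(2 * (pred - y))
    w -= lr * grad_w                     # nudge parameters toward a better fit
    b -= lr * grad_b

print(w, b)  # approaches 2.0 and 1.0
```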

Convolutional Networks

They are among the oldest deep networks, developed in the late 1980s to recognize handwriting. They also popularized artificial intelligence when, in 2012, one of them beat all other systems in an image recognition competition, consigning methods that had been refined for years to oblivion. They are still used for a wide range of image processing tasks in biology, physics, etc. The successive layers of “neurons” they contain identify increasingly abstract patterns in the images, which makes it possible to classify them. Their so-called “supervised” learning requires large quantities of images annotated by humans.
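A minimal sketch of this layered structure, assuming the PyTorch library is available (the sizes and layer choices here are illustrative, not those of the networks mentioned above): early convolutional layers detect local patterns such as edges, deeper ones combine them into more abstract features, and a final layer produces class scores.

```python
import torch
import torch.nn as nn

# Small convolutional classifier for 28x28 grayscale images (toy example).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # layer 1: low-level patterns
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: more abstract features
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # class scores, e.g. digits 0-9
)

x = torch.randn(1, 1, 28, 28)   # one grayscale image
print(model(x).shape)           # torch.Size([1, 10])

# Supervised learning: predictions are compared against human-provided labels.
loss_fn = nn.CrossEntropyLoss()
label = torch.tensor([3])       # an annotated example
loss = loss_fn(model(x), label)
loss.backward()                 # gradients are used to adjust the weights
```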


Transformers

Invented by a Google team in 2017 to improve translation systems – the algorithm had to take better account of the context around the word to be translated – these neural networks have proven very effective at other tasks, including natural language understanding. Trained on billions of texts from the web, they are now among the largest pieces of software ever made, with hundreds of billions of parameters. Their training method is simple and unsupervised: guessing the next word in a sentence. Yet their power can surprise, because they then prove able to translate, convert one programming language into another, and hold conversations almost as natural as those between humans… The best-known Transformers are GPT-3 (OpenAI), Gopher and LaMDA (Google), YaLM (Yandex), WuDao (Beijing Academy of Artificial Intelligence) and Bloom (from an international consortium). In research, Transformers can help navigate large bibliographic databases, produce intelligible summaries of articles, or interpret mathematical formulas.
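A minimal sketch of the “guess the next word” objective, again assuming PyTorch (a hypothetical toy model with a tiny vocabulary; the real systems named above have hundreds of billions of parameters): the target at each position is simply the text shifted by one word, so no human annotation is needed.

```python
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 1000, 64, 16

embed = nn.Embedding(vocab_size, d_model)           # word ids -> vectors
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
to_vocab = nn.Linear(d_model, vocab_size)           # vectors -> word scores

tokens = torch.randint(0, vocab_size, (1, seq_len)) # a "sentence" of word ids
# Causal mask: each position may only attend to the words before it.
mask = nn.Transformer.generate_square_subsequent_mask(seq_len - 1)

hidden = encoder(embed(tokens[:, :-1]), mask=mask)
logits = to_vocab(hidden)                           # a prediction for each next word

# Unsupervised objective: the target is the same text shifted by one word.
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
loss.backward()
```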

