AI masters language. Should we trust what it says?

You are sitting in a comfortable chair by the fireplace on a cold winter night. Maybe you have a cup of tea in your hand, maybe something stronger. You open a magazine to an article you wanted to read. The title suggested a story about a promising – but also potentially dangerous – new technology about to go mainstream, and after reading just a few sentences, you find yourself drawn into the story. A revolution is coming in artificial intelligence, asserts the author, and we must, as a society, better anticipate its consequences. But then the strangest thing happens: you notice that the author has, apparently deliberately, omitted the very last word of the first .

The missing word jumps into your consciousness almost spontaneously: “the very last word of the first paragraph.” There is no sense of an internal search query in your mind; the word “paragraph” just appears. It may seem like second nature, this exercise in filling in the blank, but doing it makes you think about the layers of knowledge embedded in the thought. You need proficiency in English spelling and syntactic patterns; you need to understand not only the dictionary definitions of words but also how they relate to one another; and you need to be familiar enough with the high standards of magazine editing to assume that the missing word is not just a typo, and that editors are generally loath to omit key words from published articles unless the author is trying to be clever – perhaps trying to use the missing word to make a point about your intelligence, the speed with which a human speaker of English can conjure just the right word.

Before you can pursue this idea further, you are back in the article, where you find that the author has taken you to a building complex in suburban Iowa. Inside one of the buildings lies a marvel of modern technology: 285,000 processor cores packed into a giant supercomputer, powered by solar panels and cooled by industrial fans. The machines never sleep: every second of every day, they churn through countless calculations, using state-of-the-art artificial intelligence techniques that go by names like “stochastic gradient descent” and “convolutional neural networks.” The whole system is considered one of the most powerful supercomputers on the planet.

And what, you might wonder, is this computational dynamo doing with all these prodigious resources? Mostly, it is playing a kind of game, over and over again, billions of times per second. And the game is called: Guess the missing word.

The supercomputer complex in Iowa runs a program created by OpenAI, an organization founded in late 2015 by a handful of Silicon Valley luminaries, including Elon Musk; Greg Brockman, who until recently was chief technology officer of the electronic-payment juggernaut Stripe; and Sam Altman, at the time president of the start-up incubator Y Combinator. In its early years, as it built up its programming brain trust, OpenAI’s technical achievements were mostly overshadowed by the star power of its founders. But that changed in the summer of 2020, when OpenAI began offering limited access to a new program called Generative Pre-Trained Transformer 3, colloquially referred to as GPT-3. Although the platform was initially available only to a small handful of developers, examples of GPT-3’s astonishing prowess with language – and at least the illusion of cognition – began circulating on the web and on social media. Siri and Alexa had popularized the experience of conversing with machines, but this was on another level, approaching a fluency that resembled science-fiction creations like HAL 9000 from “2001”: a computer program capable of responding to complex, open-ended questions in perfectly composed sentences.

As a field, AI is currently fragmented across a number of different approaches, targeting different kinds of problems. Some systems are optimized for problems that involve moving through physical space, as in self-driving cars or robotics; others categorize photos for you, identifying familiar faces or pets or vacation activities. Some forms of AI – like AlphaFold, a project of the Alphabet subsidiary DeepMind – are beginning to tackle complex scientific problems, like predicting the structure of proteins, which is central to drug design and discovery. Many of these experiments share an underlying approach known as “deep learning,” in which a neural network loosely modeled on the structure of the human brain learns to identify patterns or solve problems through endless cycles of trial and error, strengthening some neural connections and weakening others through a process known as training. The “depth” of deep learning refers to the multiple layers of artificial neurons in the neural network, layers that correspond to higher and higher levels of abstraction: in a vision-based model, for example, one layer of neurons might detect vertical lines, which would then feed into a layer detecting edges of physical structures, which would in turn report to a layer that identifies houses as opposed to apartment buildings.
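To make that trial-and-error loop concrete, here is a minimal sketch in Python using the PyTorch library. The layer sizes, the house-versus-apartment labels and the random stand-in data are illustrative assumptions, not details of any real system described in this article:

```python
import torch
import torch.nn as nn

# A minimal "deep" network: stacked layers of artificial neurons,
# each feeding the next, at higher and higher levels of abstraction.
model = nn.Sequential(
    nn.Linear(784, 256),  # raw pixel inputs -> low-level features (e.g., lines)
    nn.ReLU(),
    nn.Linear(256, 64),   # low-level features -> higher abstractions (e.g., edges)
    nn.ReLU(),
    nn.Linear(64, 2),     # final layer: hypothetical "house" vs. "apartment building"
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # stochastic gradient descent

# One cycle of trial and error on a batch of stand-in "images" and labels:
inputs = torch.randn(32, 784)        # random placeholder for real image data
labels = torch.randint(0, 2, (32,))  # random placeholder for real labels

predictions = model(inputs)          # trial: run the current network
loss = loss_fn(predictions, labels)  # error: how wrong was it?
optimizer.zero_grad()
loss.backward()                      # work out which connections to blame
optimizer.step()                     # strengthen some weights, weaken others
```

Each pass through this loop is one round of trial and error; repeated over millions of examples, the tiny weight adjustments accumulate into the layered abstractions described above.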

GPT-3 belongs to a category of deep learning known as a large language model, a complex neural network that has been trained on a titanic data set of text: in GPT-3’s case, roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented by a large collection of text from digitized books. GPT-3 is the most famous of the large language models, and the most publicly accessible, but Google, Meta (formerly known as Facebook) and DeepMind have all developed their own LLMs in recent years. Advances in computing power – and new mathematical techniques – have enabled LLMs of GPT-3’s vintage to ingest far larger data sets than their predecessors, and to employ much deeper layers of artificial neurons in their training.
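What does “guess the missing word” look like as a training objective? The toy sketch below is plain Python, splitting on whitespace rather than the sub-word tokens real models use; it shows only how every position in a text becomes a training example pairing a context with the word that follows:

```python
# A toy illustration of the "guess the missing word" objective behind
# large language models. Real models operate on sub-word tokens drawn
# from hundreds of gigabytes of text; this is a deliberate simplification.
text = "the author has deliberately omitted the very last word of the first paragraph"
words = text.split()

training_pairs = [
    (words[:i], words[i])  # (context so far, word the model must guess)
    for i in range(1, len(words))
]

for context, target in training_pairs[-3:]:
    print(" ".join(context), "->", target)
```

The final pair produced here – the full opening context, with “paragraph” as the word to guess – is precisely the game the article’s first paragraph invites you to play.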

Chances are you have already interacted with a large language model if you have ever used an app – like Gmail – with an autocomplete feature, gently prompting you with the word “attend” after you type the phrase “Unfortunately, I won’t be able to…” But autocomplete is only the most rudimentary expression of what software like GPT-3 is capable of. It turns out that with enough training data and sufficiently deep neural networks, large language models can display remarkable skill if you ask them not just to fill in a missing word, but to go on writing entire paragraphs in the style of the initial prompt. For example, when I gave GPT-3 the prompt:

In 1958, Frank Rosenblatt introduced the Perceptron, an early version of a neural network that could only detect the location of a square on a printed card. Fifty years later, neural networks are capable of . . .

The software dutifully completed the thought, then kept going, picking up on the historical framing of the initial text:

. . . detect and classify faces, find patterns in data, and mimic the human brain. As the neural network has matured in academia, it has also found its way into the tech industry. In the late 1980s and early 1990s, neural networks were used in speech recognition and character recognition applications. The growth in computing power and the ability to collect larger data sets has allowed other neural network applications to emerge.
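Completions like this one were produced through OpenAI’s web API. As a rough sketch – the engine name and sampling parameters below are plausible choices for that era, not a record of the exact settings used for the passage above – a request with the openai Python package of the time looked roughly like this:

```python
import openai  # the OpenAI Python client as it existed in the GPT-3 era

openai.api_key = "YOUR_API_KEY"  # placeholder; real keys came with API access

# Ask the model to continue a prompt. "davinci" was the largest GPT-3
# engine; max_tokens caps the length of the continuation, and temperature
# controls how adventurous the word choices are.
response = openai.Completion.create(
    engine="davinci",
    prompt=(
        "In 1958, Frank Rosenblatt introduced the Perceptron, an early "
        "version of a neural network that could only detect the location "
        "of a square on a printed card. Fifty years later, neural "
        "networks are capable of"
    ),
    max_tokens=100,
    temperature=0.7,
)

print(response["choices"][0]["text"])
```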

Since the release of GPT-3, the internet has been flooded with examples of the software’s uncanny facility with language, as well as its blind spots, weaknesses and other, more sinister tendencies. GPT-3 has been prompted to write Hollywood scripts and to compose nonfiction in the style of Gay Talese’s New Journalism classic “Frank Sinatra Has a Cold.” You can use GPT-3 as a simulated dungeon master, conducting elaborate text adventures through worlds invented on the fly by the neural network. Others have fed the software prompts that generate patently offensive or delusional responses, highlighting the limitations of the model and its potential for harm if widely adopted in its current state.
