Google’s deep learning model has 540 billion parameters. From automatic text generation in natural language to the production of application code, the applications of PaLM are numerous.
What is PaLM?
Developed by Google, Pathways Language Model (PaLM) is an artificial neural network geared toward natural language processing: a Transformer-type AI with 540 billion parameters.
PaLM follows in the footsteps of GPT-3, OpenAI’s language model. With its 175 billion parameters, this deep learning model, also of the Transformer type, was trained on hundreds of billions of words. Like GPT-3, PaLM has many use cases, from automatic text generation to translation to computer code generation.
Like GPT-3, PaLM uses the few-shot learning technique. How does it work? In the case of image recognition, this type of learning needs only a few photos of the subject to be identified (a face, for example) in order to recognize it later. Instead of training a classifier on large series of examples, few-shot learning uses a few reference patterns from which it computes a similarity score.
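The similarity-score idea described above can be sketched in a few lines. This is a minimal illustration, not PaLM’s actual mechanism: it assumes each image has already been turned into a feature vector, and the vectors below are invented for the example.

```python
import math

def cosine_similarity(a, b):
    # Similarity score between two feature vectors (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def classify_few_shot(query, references):
    # references: {label: [a handful of example vectors]} -- the "few shots".
    # Score the query against each class's examples and keep the best match.
    best_label, best_score = None, -1.0
    for label, examples in references.items():
        score = max(cosine_similarity(query, ex) for ex in examples)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical embeddings: two "face" photos vs. two "landscape" photos.
references = {
    "face": [[0.9, 0.1, 0.2], [0.85, 0.15, 0.1]],
    "landscape": [[0.1, 0.8, 0.6], [0.2, 0.9, 0.5]],
}
print(classify_few_shot([0.88, 0.12, 0.15], references))  # prints "face"
```

The point is that no retraining happens: a new subject is identified purely by comparing it against a few stored references.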
Several mega neural networks have since been inspired by GPT-3 and the few-shot learning method in order to further improve the performance obtained. This is the case of GLaM, LaMDA and Gopher, all three also created by Google, or of Megatron-Turing NLG, which was developed by Microsoft and Nvidia.
How is PaLM performing?
Google passed PaLM through the mill of its BIG-bench benchmark (for Beyond the Imitation Game Benchmark), an open source framework that sifts through 150 language modeling tasks. “Result: PaLM overwhelmingly outperforms Gopher and Chinchilla on a set of 58 common tasks”, observe Sharan Narang and Aakanksha Chowdhery at Google Research (see charts below).
Among the results put forward by Google, PaLM is particularly successful at the automatic handling of application code, and in particular at code generation from requests formulated in natural language. “In this area, its performance in few-shot learning is comparable to that of Codex (a variant of GPT-3 centered on the same types of tasks, editor’s note) while its training dataset contains 50 times less Python language content,” point out Sharan Narang and Aakanksha Chowdhery, noting that its transfer of “learning from other languages […] is better.”
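Few-shot code generation of the kind described here works by showing the model a handful of (request, code) pairs before the new request. The sketch below only builds such a prompt as a string; the example tasks are invented, and the exact prompts Google used are not public.

```python
# Invented (description, code) pairs used as the "few shots".
EXAMPLES = [
    ("Return the square of a number.",
     "def square(n):\n    return n * n"),
    ("Check whether a string is a palindrome.",
     "def is_palindrome(s):\n    return s == s[::-1]"),
]

def build_prompt(request):
    # Show the model a few solved tasks, then append the new request
    # so it continues the pattern with fresh code.
    parts = []
    for description, code in EXAMPLES:
        parts.append(f"# Task: {description}\n{code}\n")
    parts.append(f"# Task: {request}\n")
    return "\n".join(parts)

print(build_prompt("Sum a list of integers."))
```

The resulting text would then be sent to the model, which completes it with a function matching the final task description.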
The two Google software engineers add: “PaLM also demonstrates impressive natural language understanding and generation capabilities. In particular, it can provide explanations for scenarios that require a complex combination of multi-step logical inference, knowledge of the world and language. For example, it is able to explain new jokes.” (see gif below)
PaLM was trained on Google’s Tensor Processing Unit (TPU) infrastructure. Composed of 6,144 chips, it is based on the latest generation of American Cloud TPU Pods, with data parallelization processes implemented within each pod.
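Data parallelization means each accelerator sees a different slice of the batch and the resulting gradients are averaged. The toy sketch below simulates this for a one-parameter linear model with plain Python; it is an illustration of the general principle, not Google’s Pathways implementation, and the "pods" are just list slices.

```python
# Illustrative data-parallel training step: each simulated "pod" gets a
# shard of the batch, computes a local gradient for y = w * x, and the
# gradients are averaged (the role an all-reduce plays on real hardware).

def local_gradient(w, shard):
    # Mean-squared-error gradient dL/dw over this shard's (x, y) pairs.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, num_pods, lr=0.01):
    shard_size = len(batch) // num_pods
    shards = [batch[i * shard_size:(i + 1) * shard_size] for i in range(num_pods)]
    grads = [local_gradient(w, s) for s in shards]  # computed on each pod
    avg_grad = sum(grads) / num_pods                # averaged across pods
    return w - lr * avg_grad

batch = [(x, 3.0 * x) for x in range(1, 9)]  # toy data with true w = 3
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, batch, num_pods=4)
print(round(w, 2))  # converges to 3.0
```

Because every shard is the same size, averaging the per-pod gradients gives exactly the gradient over the full batch, which is why this scheme scales training across thousands of chips without changing the mathematics.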
The learning was carried out on a series of multilingual datasets combining documents and books available online, conversations, Wikipedia content and source code available on GitHub.