On May 3, Meta (formerly Facebook) announced that it was making its "Open Pretrained Transformer" language model, which has 175 billion parameters, available to all researchers. Meta's aim is to encourage open research on artificial intelligence and to fuel debate about its use, which can lead to abuses.
This is the very first time that Meta's AI laboratory has made a very large language model available to any researcher who wishes, provided, however, that the company's approval is requested to access the system. "The release includes both the pretrained models and the code needed to train and use them," Meta said in a blog post.
A new language model modeled on GPT-3
Called the Open Pretrained Transformer (OPT-175B), the new language model is modeled on GPT-3, the neural network from OpenAI, the AI research and development company. GPT-3 is a third-generation language model, considered the most advanced of its kind in the world, with 175 billion parameters. OPT contains just as many, and its open-access release also allows researchers to analyze the flaws found in the OpenAI version.
During a three-month training process (running non-stop from October 2021 to January 2022), team members maintained a detailed system logbook. Over more than 100 pages, it records daily updates on the training data: how and when it was added to the model, what worked, and what did not.
In the field of AI, language models are statistical models whose purpose is, in particular, to generate words that continue an already-written sequence, to imitate a human conversation, to answer questions (language understanding), and so on. With billions of parameters, these algorithms are trained on massive and varied volumes of text. But since the machine is imperfect, these models can also reproduce erroneous information, inappropriate remarks, or prejudices.
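The idea of a statistical model that "generates words following an already-written sequence" can be illustrated at a toy scale. The sketch below is not OPT or GPT-3 (which are large neural networks); it is a minimal bigram model, with a made-up miniature corpus standing in for the massive text volumes the article describes, showing only the basic principle: count which words follow which, then sample the next word in proportion to those counts.

```python
import random
from collections import Counter, defaultdict

# Tiny stand-in corpus (hypothetical example data); real models are
# trained on massive and varied volumes of text.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Bigram statistics: for each word, how often each next word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word, rng=random):
    """Sample a continuation word in proportion to observed bigram counts."""
    counts = follows[word]
    if not counts:  # word was never seen with a successor: stop
        return None
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Continue an already-written sequence, as a language model does.
text = ["the"]
for _ in range(5):
    nxt = next_word(text[-1])
    if nxt is None:
        break
    text.append(nxt)
print(" ".join(text))
```

Because the model only reflects the statistics of its training text, it will also happily reproduce whatever errors or biases that text contains, which is the imperfection the article points to.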
Goal: "Maintain integrity and prevent misuse"
Meta wishes to promote collaboration in scientific research, in complete transparency: "A much larger segment of the AI community needs access to these models in order to conduct reproducible research and collectively advance the field." Many people remain concerned about possible abuses of AI and believe that a plurality of open models is good news.
The company said it was opening up the system to researchers in order to "maintain integrity and prevent misuse" of such systems, something the company has itself regularly been accused of with its own AI systems. Meta now wants to change the way AI is perceived and judged. "With the release of OPT-175B and smaller-scale baseline models, we hope to increase the diversity of voices defining the ethical considerations of such technologies," it added.
However, some believe that transparency does not eliminate the risk of spreading false information or racist and misogynistic language. Indeed, distributing this model worldwide to a wide audience, one likely to use it or be affected by its output, entails significant responsibilities.
Source: arXiv