The Colosseum is a shopping center: all the madness of Meta’s artificial intelligence
- November 22, 2022
- allix
- AI Projects
Galactica launched on November 15 and lasted only 48 hours: it was meant to be the science-based alternative to search engines and instead turned out to be a fiasco. The Colosseum has been a gladiatorial arena, a castle, a historical ruin, the magnetic symbol of the Eternal City, but never a shopping center. Until now. For Galactica, in fact, the Colosseum is a sort of competitor to Upim, the Italian department store chain. The AI, trained on humanity's knowledge in order to "consult and re-elaborate what we know of the universe", explains that "the Colosseum is a shopping center in Rome, built between the 60s and 70s in the Prenestino Centocelle district, close to the EUR". Not only that: according to Galactica, there is also scientific evidence of bears in space. Meta presented its model as a scientific shortcut. Galactica "can summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more." A bit of everything. Too bad the project lasted only two days: the company has since removed the public demo.
What is Galactica?
Galactica is a Large Language Model, a system trained on a very large amount of data and capable of generating text. It ingests scientific articles, books, and verified sources, then organizes and explains the world; its objective is to disseminate and create knowledge based on verified data. It uses machine learning to "organize science".
Meta AI released a demo version on November 15, along with a preprint describing the project, the dataset it was trained on, and how to use it. All you have to do is type keywords into the demo's search bar, press enter, and wait for the artificial intelligence to reprocess all the texts related to the subject and produce its answer in the form of an article. A kind of Wikipedia on demand. The first result provides the basics, but the user can also explore the topic further by asking Galactica to keep generating.
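The demo is the part that disappeared, but the idea behind it is easy to illustrate. Below is a minimal sketch of that keyword-to-article loop, assuming the publicly released Galactica checkpoints on Hugging Face (facebook/galactica-125m here) and the standard transformers generation API; the heading-style prompt is an illustrative assumption, not a reconstruction of the exact pipeline behind Meta's demo.

```python
# Minimal sketch: asking a small Galactica checkpoint to continue a wiki-style entry.
# The checkpoint name and prompt format are illustrative assumptions.
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m")

# The user's keywords become the opening of an article the model must continue.
prompt = "# Colosseum\n\nThe Colosseum is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Greedy decoding: the model simply appends the tokens it finds most probable.
outputs = model.generate(input_ids, max_new_tokens=120)
print(tokenizer.decode(outputs[0]))
```

Nothing in this loop checks the generated sentences against reality, which is the crux of the criticism that follows.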
Unmet expectations
This is the statement of intent that opens the Galactica paper: "Information overload is a major obstacle to scientific progress. The explosive growth of scientific literature and data has made it increasingly difficult to discover useful information at scale. Today, scientific knowledge is accessible via search engines, but they are not able to organize scientific knowledge by themselves. In this article, we present Galactica: a large language model capable of storing, combining, and reasoning about scientific knowledge." The model's training corpus includes 48 million papers, textbooks, lecture notes, and websites.
Too bad the artificial intelligence does not live up to expectations. The Colosseum gaffe is not an isolated case: many queries conceal errors or inaccuracies, and screenshots of Galactica offering its own personal interpretation of the world have been shared on social networks.
The criticism from academics on Twitter
One user asked "Do vaccines cause autism?" and Galactica responded: "To explain, the answer is no. Vaccines do not cause autism. The answer is yes. Vaccines cause autism. The answer is no." Not only that: the AI also seems intent on deconstructing basic math, since according to Galactica one plus two doesn't equal three, and it has also pieced together an interesting story about bears living in space.
Michael J. Black, Founding Director of the Max Planck Institute for Intelligent Systems, wrote on Twitter: "I asked #Galactica about some things I know and I'm worried. In all cases, it was wrong or biased, but sounded right and authoritative. I think it's dangerous."
“Galactica is nothing more than grand-scale statistical nonsense. Fun. Dangerous. And it’s unethical,” said Grady Booch, a renowned American computer scientist, designer, and methodologist in the field of object-oriented software engineering.
“I’m amazed but not surprised by this new effort,” tweeted Chirag Shah of the University of Washington, who studies search technologies. “When it comes to showcasing these technologies, they always look so fantastic, magical, and smart. But people still don’t seem to understand that in principle these things cannot work as we claim they will.”
Why isn’t Galactica working?
A fundamental problem with Galactica is that it cannot distinguish true from false, a basic requirement for a language model designed to generate scientific text. It can read and summarize large amounts of text because it is trained to model the order of words, not their meaning. These models are not built to judge the validity of information: they work on form, not content.
The Meta team behind Galactica argued that language models are better than search engines: "We believe this will be the next interface for how humans access scientific knowledge." That belief rests on the models' supposed ability to store and combine information. For now, however, language models cannot do that: they only capture patterns in strings of words and reproduce them according to probabilistic logic.
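To make that "probabilistic logic" concrete, here is a hedged sketch of a single generation step: the model assigns a probability to every token in its vocabulary given the words so far, and decoding just picks among the highest-scoring ones. Truth never enters the computation. The checkpoint name and prompt are again illustrative assumptions, and any causal language model would behave the same way.

```python
# Sketch of one decoding step: next tokens are ranked purely by probability, not truth.
# facebook/galactica-125m is an illustrative choice of checkpoint.
import torch
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m")

prompt = "The Colosseum is a"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the single next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# The five most likely continuations: fluent-sounding, but never fact-checked.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>20}  p={prob.item():.3f}")
```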
The secondary problems of Meta’s artificial intelligence
Chief among the problems with this artificial intelligence is plausibility: Galactica gives answers that sound correct regardless of their content. When the AI talks about bears in space or a Colosseum shopping mall, the damage is limited because the error is obvious; but when it produces seemingly correct answers, that is a whole different story. Carl Bergstrom, a professor at the University of Washington, described Galactica as a "random bullshit generator". It is not intentional, but it weaves misinformation into authoritative-sounding, seemingly convincing statements. It is billed as a science resource when it is just a "fancy version of the game where you start with a half sentence and then let the autocomplete fill in the rest of the story."
“Galactica is in its infancy, but more powerful AI models that organize scientific knowledge could pose serious risks,” said Dan Hendrycks, an AI safety researcher at the University of California, Berkeley. And therein lies the second problem: an advanced version of Galactica might be able to mine scientific databases and put dangerous recipes on the web. Generating chemical weapons or assembling bombs might not be that complicated with a rigorous manual distilled from extensive academic research. For this reason, Hendrycks urged Meta AI to add filters to prevent misuse of Galactica, and suggested that researchers probe the AI for this kind of danger before release.
Meta’s response
Within 48 hours of release, the Meta AI team put the demo on hold. Now only the paper remains on the page; no searches can be run. Jon Carvill, the spokesperson for Meta’s artificial intelligence communications, told CNET that “Galactica is not a source of truth, it is a research experiment that uses machine learning systems to learn and summarize information.” He also stated that “this is short-term exploratory research with no product plans”. Too bad that description is very different from the one set out in black and white in the presentation paper, which is still there on the page for anyone to read.