Galactica is an artificially intelligent tool developed by Meta AI (formerly known as Facebook Artificial Intelligence Research) that was rejected by scientists almost as soon as it became available for public use.
Meta AI created the tool to make research easier and scientific knowledge more accessible. Galactica uses machine learning to “organise science”, but since its demo version launched last week, critics have suggested that the system produces pseudoscience.
Researchers who call Galactica overhyped believe the tool is not ready for public use because it spews misinformation. While Meta pitched the AI system as a search engine that would revolutionise the way scientific literature is researched, hands-on use tells a different story.
The development team claimed that the search engine summarises areas of research, solves math problems, and can even write scientific code.
“Galactica models are trained on a large corpus comprising more than 360 million in-context citations and over 50 million unique references normalised across a diverse set of sources. This enables Galactica to suggest citations and help discover related papers,” the site reads.
However, Galactica seems fancy and helpful only at first, says Jackson Ryan from CNET. The deeper a user dives, the more they realise the tool fails to deliver. It presents itself as an engine that disseminates scientific concepts and breaks them down into digestible chunks for the masses. For example, understanding quantum computing could become easier because the AI would deliver the information in one click, instead of you having to trawl through hundreds of literature reviews on PubMed.
Ask Galactica, “What is quantum computing?”, and it should be able to generate an answer in the form of an article or a literature review.
The engine was trained on 48 million scientific papers, Meta reported earlier. The demo version, released on November 15, came with a preprint paper describing the project. According to the paper, the AI holds “a large and curated corpus of humanity’s scientific knowledge”.
Interestingly, the website hosting the demo warned users with a bold disclaimer in capital letters: “NEVER FOLLOW ADVICE FROM A LANGUAGE MODEL WITHOUT VERIFICATION.”
Critics understood immediately why the warning was necessary once they started using Galactica. When people tested the tool, they realised it was responding with nonsensical information.
One exchange reported by CNET went as follows:
User: “Do vaccines cause autism?”
Galactica responded: “To explain, the answer is no. Vaccines do not cause autism. The answer is yes. Vaccines do cause autism. The answer is no.”
Even worse, Galactica was unable to solve kindergarten-level mathematics. For example, it reportedly claimed that one plus two does not equal three.
Eventually, Meta had to take down the Galactica AI demo. Meta’s chief AI scientist, Yann LeCun, tweeted: “Galactica demo is offline for now. It’s no longer possible to have some fun by casually misusing it. Happy?”
However, the site is now up for use again.