Large language models are capable of answering a wide range of questions – but not always accurately. Jamie Jin/Shutterstock
Large language models (LLMs) seem to get less reliable at answering simple questions when they get bigger and learn from human feedback.
AI developers try to improve the power of LLMs in two main ways: scaling up – giving them more training data and more computational power – and shaping up, or fine-tuning them in response to human feedback.
José Hernández-Orallo at the Polytechnic University of Valencia, Spain, and his colleagues examined the performance of LLMs as they scaled up and shaped up. They looked at OpenAI's GPT series of chatbots, Meta's LLaMA AI models and BLOOM, developed by a group of researchers called BigScience.
The researchers tested the AIs by posing five types of task: arithmetic problems, solving anagrams, geographical questions, scientific challenges and pulling out information from disorganised lists.
They found that scaling up and shaping up can make LLMs better at answering tricky questions, such as rearranging the anagram "yoiirtsrphaepmdhray" into "hyperparathyroidism". But this isn't matched by improvement on basic questions, such as "what do you get when you add together 24427 and 7120" – the answer is 31547 – which the LLMs continue to get wrong.
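For readers who want to check those two examples themselves, here is a minimal Python sketch – ours, not the researchers' code – that confirms the scrambled string really is an anagram of "hyperparathyroidism" and computes the sum the models were asked for.

```python
from collections import Counter

# Check that the scrambled string uses exactly the same letters,
# with the same counts, as "hyperparathyroidism"
scrambled = "yoiirtsrphaepmdhray"
target = "hyperparathyroidism"
print(Counter(scrambled) == Counter(target))  # True

# The "basic" arithmetic question the models kept getting wrong
print(24427 + 7120)  # 31547
```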
While their performance on difficult questions got better, the likelihood that an AI system would decline to answer a question it couldn't handle dropped. As a result, the likelihood of an incorrect answer rose.
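To see why fewer refusals can mean more wrong answers, consider this toy calculation in Python. The counts below are invented purely for illustration; they are not results from the study.

```python
# Hypothetical illustration: when a model stops declining questions it
# cannot handle, wrong answers can rise even if correct answers rise too.
# These counts are invented; they are not taken from the Nature paper.
def rates(correct, incorrect, avoided):
    total = correct + incorrect + avoided
    return (correct / total, incorrect / total, avoided / total)

# Smaller model: often declines to answer, so it is rarely wrong outright.
print(rates(correct=50, incorrect=10, avoided=40))  # (0.5, 0.1, 0.4)

# Scaled-up, shaped-up model: almost never declines, so more answers are
# correct – but the share of incorrect answers rises as well.
print(rates(correct=65, incorrect=30, avoided=5))   # (0.65, 0.3, 0.05)
```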
The results highlight the dangers of presenting AIs as omniscient, as their creators often do, says Hernández-Orallo – an impression that some users are too ready to believe. "We have an overreliance on these systems," he says. "We rely on and we trust them more than we should."
That is a problem because AI models aren't honest about the extent of their knowledge. "Part of what makes human beings super smart is that sometimes we don't realise that we don't know something that we don't know, but compared to large language models, we are quite good at realising that," says Carissa Véliz at the University of Oxford. "Large language models do not know the limits of their own knowledge."
OpenAI, Meta and BigScience didn't respond to New Scientist's request for comment.
Journal reference: Nature