New Scientist

Technology

AIs get worse at answering simple questions as they get bigger

Using more training data and computational power is meant to make AIs more reliable, but tests suggest large language models actually get less reliable as they grow

By Chris Stokel-Walker

25 September 2024

Large language models are capable of answering a wide range of questions – but not always accurately

Jamie Jin/Shutterstock

Large language models (LLMs) seem to get less reliable at answering simple questions when they get bigger and learn from human feedback.

AI developers try to improve the power of LLMs in two main ways: scaling up, which means giving them more training data and more computational power, and shaping up, or fine-tuning them in response to human feedback.

José Hernández-Orallo at the Polytechnic University of Valencia, Spain, and his colleagues examined the performance of LLMs as they scaled up and shaped up. They looked at OpenAI's GPT series of chatbots, Meta's LLaMA AI models and BLOOM, developed by a group of researchers called BigScience.

The researchers tested the AIs by posing five types of task: arithmetic problems, solving anagrams, geographical questions, scientific challenges and pulling out information from disorganised lists.

They found that scaling up and shaping up can make LLMs better at answering tricky questions, such as rearranging the anagram "yoiirtsrphaepmdhray" into "hyperparathyroidism". But this isn't matched by improvement on basic questions, such as "what do you get when you add together 24427 and 7120", which the LLMs continue to get wrong.
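Both examples are easy to check outside an LLM; here is a minimal Python sketch (mine, not the researchers' code) that confirms the anagram and computes the sum:

```python
# Minimal sketch (not from the study) verifying the two example questions above.

def is_anagram(scrambled: str, word: str) -> bool:
    """Two strings are anagrams if their sorted letters match."""
    return sorted(scrambled.lower()) == sorted(word.lower())

# The "tricky" question: unscrambling an anagram of a medical term.
print(is_anagram("yoiirtsrphaepmdhray", "hyperparathyroidism"))  # True

# The "basic" question that the LLMs continue to get wrong.
print(24427 + 7120)  # 31547
```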

While their performance on difficult questions got better, the likelihood that an AI system would avoid answering a question it couldn't handle dropped. As a result, the likelihood of an incorrect answer rose.
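The trade-off follows from simple bookkeeping: every response is either correct, incorrect or avoided, so if avoidance falls while accuracy on easy questions stays flat, the share of wrong answers must rise. A toy illustration, with invented numbers rather than figures from the paper:

```python
# Toy illustration with invented numbers (not figures from the paper):
# responses to a fixed question set split into correct, incorrect and avoided.

def incorrect_share(correct: float, avoided: float) -> float:
    """Whatever is neither correct nor avoided is an incorrect answer."""
    return 1.0 - correct - avoided

# Earlier, smaller model: often declines when it can't answer.
print(round(incorrect_share(correct=0.60, avoided=0.30), 2))  # 0.1

# Scaled-up, fine-tuned model: accuracy barely moves, but it rarely declines,
# so a larger share of its responses are wrong.
print(round(incorrect_share(correct=0.62, avoided=0.05), 2))  # 0.33
```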

The results highlight the dangers of presenting AIs as omniscient, as their creators often do, says Hernández-Orallo, a portrayal that some users are too ready to believe. "We have an overreliance on these systems," he says. "We rely on and we trust them more than we should."

That is a problem because AI models aren't honest about the extent of their knowledge. "Part of what makes human beings super smart is that sometimes we don't realise that we don't know something that we don't know, but compared to large language models, we are quite good at realising that," says Carissa Véliz at the University of Oxford. "Large language models do not know the limits of their own knowledge."

OpenAI, Meta and BigScience didn't respond to New Scientist's request for comment.

Journal reference:

Nature
