Live Science

Large language models not fit for real-world use, scientists warn — even slight changes cause their world models to collapse


Researchers at MIT, Harvard and Cornell found that large language models fail to form accurate internal models of the real world.

When tasked with providing turn-by-turn driving directions in New York City, for example, LLMs delivered them with near-100% accuracy.

But when the scientists extracted the underlying maps the models had generated, they found them full of non-existent streets and routes.

It isn't clear what approaches the models could be using instead, but the finding highlights the fragility of transformer-based LLMs when faced with dynamic environments.

"I hope we can convince people that this is a question to think very carefully about," said Rambachan.

"We don’t have to rely on our own intuitions to answer it," he said.