ScienceDaily
Despite its impressive output, generative AI doesn't have a coherent understanding of the world

Summary
Nutrition label

80% Informative

Large language models can achieve incredible performance on some tasks without having internalized a coherent model of the world or the rules that govern it.

This means these models are likely to fail unexpectedly when deployed in situations where the environment or task changes even slightly.

MIT researchers found that a popular type of generative AI model can provide turn-by-turn driving directions in New York City with near-perfect accuracy.

The researchers demonstrated the implications of this by adding detours to the map of New York City, which caused all of the navigation models to fail.

If scientists want to build LLMs that can capture accurate world models, they need to take a different approach.

Going forward, the researchers want to tackle a more diverse set of problems, such as those where some rules are only partially known.
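The detour experiment described above can be sketched as a toy comparison between a navigator that has merely memorized turn-by-turn directions and one that plans on an actual map. This is an illustrative sketch only, not the researchers' code or data; the street grid, node names, and both navigators are hypothetical.

```python
from collections import deque

# Toy street grid: nodes are intersections, edges are open streets.
graph = {
    "A": {"B", "D"},
    "B": {"A", "C"},
    "C": {"B", "F"},
    "D": {"A", "E"},
    "E": {"D", "F"},
    "F": {"C", "E"},
}

# Navigator 1: memorized turn-by-turn directions (no world model).
memorized_routes = {("A", "F"): ["A", "B", "C", "F"]}

def follow_memorized(start, goal):
    route = memorized_routes.get((start, goal))
    if route is None:
        return None
    # Fails if any memorized step is no longer a valid street.
    for u, v in zip(route, route[1:]):
        if v not in graph[u]:
            return None
    return route

# Navigator 2: plans on the actual map (a coherent world model).
def bfs_route(start, goal):
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

print(follow_memorized("A", "F"))  # ['A', 'B', 'C', 'F']
print(bfs_route("A", "F"))

# Add a "detour": close the street between B and C.
graph["B"].discard("C")
graph["C"].discard("B")

print(follow_memorized("A", "F"))  # None -- memorized directions break
print(bfs_route("A", "F"))         # ['A', 'D', 'E', 'F'] -- replans around it
```

The point of the sketch: both navigators give correct directions on the unchanged map, but only the one that consults the map itself survives the detour, mirroring how models without a coherent world model fail when the environment shifts.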

VR Score: 91
Informative language: 96
Neutral language: 71
Article tone: semi-formal
Language: English
Language complexity: 61
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
External references: no external sources
Source diversity: no sources
Affiliate links: no affiliate links