Generative AI's Incoherent Understanding
ScienceDaily
Despite its impressive output, generative AI doesn't have a coherent understanding of the world.
Large language models can achieve impressive performance on some tasks without having internalized a coherent model of the world or the rules that govern it.
As a result, these models are likely to fail unexpectedly when deployed in situations where the environment or task changes slightly.
MIT researchers found that a popular type of generative AI model can provide turn-by-turn driving directions in New York City with near-perfect accuracy.
The researchers demonstrated how fragile this ability is by adding detours to the map of New York City, which caused all the navigation models to fail.
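To make the failure mode concrete, here is a minimal sketch of this kind of detour test, assuming a toy grid graph in place of the real street network and a hypothetical model interface (the names model_route and valid_route, and the use of a shortest-path oracle as a stand-in for the trained model, are illustrative and not the paper's actual setup):

import networkx as nx

# Toy street grid standing in for the New York City road network.
G = nx.grid_2d_graph(5, 5)

def valid_route(graph, route):
    """A route is valid if every consecutive pair of stops is a real street."""
    return all(graph.has_edge(a, b) for a, b in zip(route, route[1:]))

def model_route(graph, start, goal):
    # Stand-in for querying the trained navigation model; a shortest-path
    # oracle plays the model's role in this sketch.
    return nx.shortest_path(graph, start, goal)

start, goal = (0, 0), (4, 4)
route = model_route(G, start, goal)
print(valid_route(G, route))  # True: near-perfect on the unchanged map

# Perturb the environment: close a street the memorized route relies on.
G_detoured = G.copy()
G_detoured.remove_edge(route[0], route[1])

# A model that memorized turn sequences rather than the underlying map
# keeps emitting the old route, which is now invalid after the detour.
print(valid_route(G_detoured, route))  # False

On the intact grid the memorized route checks out, but a single closed street exposes whether a system actually holds a coherent map or merely a table of turns.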
If scientists want to build LLMs that capture accurate world models, they need to take a different approach.
Next, the researchers want to tackle a more diverse set of problems, such as those where some of the rules are only partially known.