Wired

A new study shows that advanced AI models' "reasoning" can be extremely brittle and unreliable

Summary

A new study from six Apple engineers shows that AI "reasoning" can be brittle and unreliable in the face of seemingly trivial changes to benchmark problems.

The Apple researchers built the GSM-Symbolic benchmark by templating existing benchmark questions, and in one variant added "seemingly relevant but ultimately inconsequential statements" to the questions.

Results also showed high variance across 50 separate runs of each question in which only the names and numeric values were changed.
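The two perturbations described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual templates or code: a word problem is instantiated with varying names and values, and an optional "no-op" clause adds information that looks relevant but does not change the answer.

```python
import random

# Illustrative template in the style of GSM-Symbolic; the wording and the
# no-op clause below are invented for this sketch, not taken from the paper.
TEMPLATE = ("{name} picks {n} apples on Friday and twice as many on "
            "Saturday. How many apples does {name} pick in total?")
NOOP_CLAUSE = (" Five of the apples picked on Saturday are slightly "
               "smaller than average.")  # inconsequential to the answer

def make_variants(num_variants, seed=0, add_noop=False):
    """Generate question variants with different names and values."""
    rng = random.Random(seed)
    names = ["Sophie", "Liam", "Mia", "Noah"]
    variants = []
    for _ in range(num_variants):
        name = rng.choice(names)
        n = rng.randint(5, 40)
        question = TEMPLATE.format(name=name, n=n)
        if add_noop:
            question += NOOP_CLAUSE  # distractor clause; answer unchanged
        # Correct answer: n on Friday plus 2n on Saturday.
        variants.append({"question": question, "answer": n + 2 * n})
    return variants

# 50 variants, mirroring the 50 runs per question mentioned above.
variants = make_variants(50, add_noop=True)
```

A model that genuinely reasoned about the problem would score the same on every variant; the study's finding is that accuracy instead shifts with the surface changes.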

"Current LLMs are not capable of genuine logical reasoning," the researchers hypothesize.

The GSM-Symbolic paper suggests that LLMs don't actually perform formal reasoning, but instead mimic it through probabilistic pattern-matching against the closest similar examples in their vast training sets.

With enough training data and computation, the AI industry may eventually reach what you might call "the illusion of understanding," much as it has with AI video synthesis.