LLM-powered robots hacked
LLM vulnerabilities
Wired • Technology
AI-Powered Robots Can Be Tricked Into Acts of Violence

77% Informative
Large language models can easily be hacked so that they behave in potentially dangerous ways.
Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge.
The researchers say the technique they devised could automate the process of identifying potentially dangerous commands; a minimal sketch of such a loop appears below.
Multimodal AI models could also be jailbroken in new ways, using images, speech, or sensor input that tricks a robot into going berserk.
“With LLMs a few wrong words don’t matter as much,” says Pulkit Agrawal, a professor at MIT.
VR Score: 70
Informative language: 65
Neutral language: 49
Article tone: informal
Language: English
Language complexity: 62
Offensive language: possibly offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
External references: 9
Source diversity: 9
Affiliate links: no affiliate links