This is an AI news story, published by MIT Technology Review, that relates primarily to Gemma Scope news.
MIT Technology Review
79% Informative
Google's Gemma Scope is a tool to help researchers understand what is happening when an AI model is generating an output.
The hope is that if we have a better understanding of what's happening inside an AI model, we’ll be able to control its outputs more effectively, leading to better AI systems in the future.
The tool was developed by the Google DeepMind team that studies mechanistic interpretability.
Neuronpedia partnered with DeepMind to build a demo of Gemma Scope that you can play around with right now.
You can test out different prompts and see how the model breaks up your prompt and what activations your prompt lights up.
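To make the "which activations your prompt lights up" step concrete, here is a minimal sketch of the underlying idea: run a prompt through a language model, take a hidden (residual-stream) activation, and pass it through a sparse autoencoder's encoder so that only a few "features" fire per token. This is not the official Gemma Scope or Neuronpedia code; the model name, layer index, SAE dimensions, threshold, and the randomly initialized encoder weights are all illustrative assumptions (the real Gemma Scope release ships trained SAE weights).

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "google/gemma-2-2b"  # placeholder: any causal LM that exposes hidden states
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)

prompt = "Cheese is made from"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Residual-stream activations after a middle layer, shape [1, seq_len, d_model]
layer = 12  # hypothetical choice of layer
resid = outputs.hidden_states[layer]

# Hypothetical SAE parameters; real Gemma Scope SAEs come with trained weights.
d_model, d_sae = resid.shape[-1], 16384
W_enc = torch.randn(d_model, d_sae) * 0.01
b_enc = torch.zeros(d_sae)
threshold = torch.full((d_sae,), 0.1)

# JumpReLU-style encoder: pre-activations below a threshold are zeroed,
# so only a handful of features "light up" for each token.
pre = resid @ W_enc + b_enc
feature_acts = torch.where(pre > threshold, pre, torch.zeros_like(pre))

# Show the top features activated by the last token of the prompt.
top = feature_acts[0, -1].topk(5)
for idx, val in zip(top.indices.tolist(), top.values.tolist()):
    print(f"feature {idx}: activation {val:.3f}")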
Some features are proving easier to track than others.
Mechanistic interpretability research can also give us insights into why AI makes errors.
The knowledge of “bomb making” isn’t just a simple on-and-off switch in an AI model.
It is most likely woven into multiple parts of the model, and turning it off would probably involve hampering the AI's knowledge of chemistry.
Any tinkering may have benefits but also significant trade-offs.
VR Score: 78
Informative language: 76
Neutral language: 53
Article tone: informal
Language: English
Language complexity: 45
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
External references: 6
Affiliate links: no affiliate links