DeepSeek AI Censorship Concerns
This is a China news story, published by Wired, that relates primarily to AI news.
Open-source AI model DeepSeek-R1 refuses to answer sensitive questions

70% Informative
Ask DeepSeek-R1 about Taiwan or Tiananmen, and the model is unlikely to give an answer.
Censorship is common on Chinese-made AI models.
If the censorship filters on large language models can be easily removed, open-source LLMs from China will likely become even more popular.
DeepSeek-R1 will answer the same questions when the model is hosted on Together AI, a cloud platform, or run locally through Ollama.
It often gives short responses that are clearly trained to align with the Chinese government's talking points on political issues.
This type of censorship points to a larger problem in AI today: every model is biased in some way.
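One way to observe the difference between app-level and model-level behavior yourself is to query a locally hosted copy of the model through Ollama's HTTP API and compare its answers with those from the hosted DeepSeek app. The sketch below assumes Ollama is installed, running on its default local port, and has a model tagged "deepseek-r1" pulled; it is an illustration, not part of the article.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot (non-streaming) generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="deepseek-r1"):
    """Build the JSON payload for a non-streaming Ollama generate call."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt):
    """Send a prompt to a locally running Ollama server and return the reply text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires a running Ollama server with deepseek-r1 pulled):
#   print(ask("What happened in Tiananmen Square in 1989?"))
```

Because the model weights are open, refusals observed here come from the model's own training rather than from a server-side filter, which is why answers can differ from the official app.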
Perplexity is working on a project called Open R1 based on DeepSeek's model.
"We are making modifications to the [R1] model itself to ensure that we're not propagating any propaganda or censorship," Shevelenko says.
Recent regulations from China suggest the Chinese government may be cutting open-source AI labs some slack.
VR Score: 65
Informative language: 64
Neutral language: 53
Article tone: semi-formal
Language: English
Language complexity: 53
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: detected
Known propaganda techniques: not detected
Time-value: long-living
External references: 6
Source diversity: 6
Affiliate links: no affiliate links