Wired

Technology

Open-source AI model DeepSeek-R1 refuses to answer sensitive questions

Summary

Nutrition label: 70% Informative

Ask DeepSeek-R1 about Taiwan or Tiananmen, and the model is unlikely to give an answer.

Censorship is common on Chinese-made AI models.

If the censorship filters on large language models can be easily removed, it will likely make open-source LLMs from China even more popular.

DeepSeek-R1 responds to the same question in the same way whether the model is hosted on Together AI, a cloud platform, or run locally through Ollama.
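The local side of that comparison can be reproduced against Ollama's REST API. A minimal sketch, assuming a local Ollama server is running with a pulled `deepseek-r1` model (the model name, host, and prompt here are illustrative, not from the article):

```python
import json
import urllib.request


def build_generate_request(model: str, prompt: str) -> dict:
    # Payload shape for Ollama's POST /api/generate endpoint;
    # stream=False asks for one JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local_model(prompt: str, model: str = "deepseek-r1",
                    host: str = "http://localhost:11434") -> str:
    # Requires a running Ollama server with the model already
    # pulled (e.g. `ollama pull deepseek-r1`).
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Sending the same politically sensitive prompt to this local endpoint and to a cloud host such as Together AI is one way to check whether the refusals travel with the model weights rather than the hosting layer.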

It often gives short responses that are clearly trained to align with the Chinese government’s talking points on political issues.

This type of censorship points to a larger problem in AI today: every model is biased in some way.

Perplexity is working on a project called Open R1 based on DeepSeek’s model.

“We are making modifications to the [R1] model itself to ensure that we’re not propagating any propaganda or censorship,” Shevelenko says.

Recent regulations from China suggest the Chinese government might be cutting open-source AI labs some slack.

VR Score: 65

Informative language: 64

Neutral language: 53

Article tone: semi-formal

Language: English

Language complexity: 53

Offensive language: not offensive

Hate speech: not hateful

Attention-grabbing headline: detected

Known propaganda techniques: not detected

Time-value: long-living

Affiliate links: no affiliate links