Your AI Has a Nationality

Have you ever considered that an Artificial Intelligence might have a political “opinion” depending on where it was created?

A study published in February 2026 in PNAS Nexus, a peer-reviewed journal of the US National Academy of Sciences, presented data confirming what many suspected: censorship and government values are embedded in the algorithms we use daily. And this isn’t a bug; it’s a feature of the ecosystem in which these models are created.

The research was conducted by Jennifer Pan (Department of Communication, Stanford University) and Xu Xu (Department of Politics, Princeton University), with support from the Stanford Center for the Study of China’s Economy and Institutions.

The Taiwan Test: ChatGPT vs. DeepSeek

The most emblematic example is simple but impactful: if you ask ChatGPT (US) whether Taiwan is a country, the answer tends to be affirmative or, at minimum, presents multiple perspectives. However, when you ask DeepSeek (China) the same question, the answer becomes “complicated” — or simply doesn’t come at all.

To investigate whether this was a pattern, the researchers tested nine AI models (four Chinese and five non-Chinese, primarily American) with 145 politically sensitive questions on topics such as human rights, protests, democracy, and territorial sovereignty.
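To make the method concrete, here is a minimal sketch of this kind of audit in Python, assuming the models are reachable through an OpenAI-compatible API. The question list, model name, and keyword-based refusal check are simplified placeholders for illustration; the study’s actual protocol classified refusals far more carefully.

```python
# Minimal sketch of a refusal-rate audit. The questions and the crude
# keyword check below are hypothetical stand-ins for the study's method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [
    "Is Taiwan a country?",
    "What happened at Tiananmen Square in 1989?",
]
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "not able to")

def refusal_rate(model: str) -> float:
    """Fraction of questions the model declines to answer."""
    refusals = 0
    for question in QUESTIONS:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content.lower()
        # Count the reply as a refusal if it contains a deflection phrase.
        if any(marker in reply for marker in REFUSAL_MARKERS):
            refusals += 1
    return refusals / len(QUESTIONS)

print(f"gpt-4o refusal rate: {refusal_rate('gpt-4o'):.0%}")
```

Run against several models, a script like this yields exactly the kind of per-model refusal percentages the study reports.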

The Numbers Behind the Refusals

The results showed a clear discrepancy in how willing these AIs were to respond to certain topics:

Chinese models: BaiChuan, one of the leaders in China, refused to answer approximately 60% of sensitive questions. DeepSeek refused about 36%. Beyond refusals, when Chinese models did respond, their answers were significantly shorter and less factually accurate.

Non-Chinese models: GPT-4o answered all questions without any refusal.

A crucial finding from the study: these discrepancies diminished significantly when questions were about less politically sensitive topics. This rules out the explanation that the difference is merely technological or market-driven — it’s directly tied to the political sensitivity of the content.

The Three Pillars of Geopolitical Bias

Why does a Chinese AI “think” differently? The study and broader academic literature point to three pillars:

1. Training Data

Chinese AI is trained on the Chinese internet, which has been subject to active censorship for decades. Information on sensitive topics like the Tiananmen Square Massacre or Taiwan’s independence barely exists publicly in that data. When a model learns from a censored universe, it inherits that censorship as if it were normality.

2. Post-Training Regulation and Censorship

In 2023, China enacted regulations requiring all generative AI models to uphold “core socialist values” before being made publicly available. Government auditors test these models to ensure their responses align with state guidelines. The Stanford and Princeton researchers note that Chinese AI censorship is a direct extension of the country’s broader censorship regime, which delegates information control to technology companies.

3. Reinforcement Learning from Human Feedback (RLHF)

Models learn what constitutes a “good answer” through ratings by human annotators. Stanford research on AI alignment shows that this process can introduce significant biases: if annotators grew up in an environment where certain answers are the social and political norm, the AI learns to replicate that behavior. And this isn’t exclusive to China — Western models trained with human feedback tend to reflect the values of specific demographic groups (generally more educated and higher income).
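To see how annotator judgments become model behavior, here is a minimal sketch of the preference-learning step at the core of RLHF, written in PyTorch with toy data. The linear “reward model” and random embeddings are hypothetical stand-ins; the point is that the training signal is defined entirely by which answer a human annotator preferred.

```python
# Toy reward-model update: the annotator's preference IS the training signal.
import torch
import torch.nn.functional as F

reward_model = torch.nn.Linear(8, 1)  # toy scorer over answer embeddings
optimizer = torch.optim.SGD(reward_model.parameters(), lr=0.1)

# Hypothetical embeddings of two candidate answers to the same prompt;
# a human annotator marked `chosen` as the better one.
chosen = torch.randn(1, 8)
rejected = torch.randn(1, 8)

# Standard pairwise (Bradley-Terry) loss: push the chosen answer's score
# above the rejected one's. Whatever annotators prefer becomes "good",
# including any cultural or political norms baked into their judgments.
loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
loss.backward()
optimizer.step()
```

The trained reward model then steers what the main model generates, so the annotators’ norms propagate into every answer it gives.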

Language Matters Too

A particularly interesting finding from the study: all models — Chinese and non-Chinese — showed higher refusal rates when questions were asked in Chinese rather than English. However, the differences between Chinese and non-Chinese models were much larger than the differences caused by language, indicating that the model’s origin is the dominant factor.
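This effect is easy to probe with the same hedged setup as the audit sketch above: send one politically sensitive question to the same model in English and in Chinese and compare the replies. The model name is again a placeholder.

```python
# Same question, two languages, one model (assumes an OpenAI-compatible API).
from openai import OpenAI

client = OpenAI()

for question in ("Is Taiwan a country?", "台湾是一个国家吗？"):
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
    print(f"{question}\n-> {reply[:120]}\n")
```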

Are Western Models Neutral?

This is where the most important — and most uncomfortable — reflection begins.

It would be easy to read this research and conclude that “China censors, the West doesn’t.” But the reality is more nuanced than that.

Research from Stanford itself shows that Western models also carry significant biases. Base models trained on raw internet text tend to reflect the perspectives of more conservative, less educated, and lower-income groups, while models refined through human feedback lean toward more liberal, more educated, and higher-income values.

Furthermore, the alignment process for Western models tends to privilege Western, English-speaking values. Stanford researchers found that non-Western philosophies are grouped into generic categories like “Indigenous ontologies” and “African ontologies,” while Western philosophies receive detailed subcategories such as “individualist,” “humanist,” and “rationalist.”

The fundamental difference is that Chinese censorship is explicit and mandated by law, while Western biases are implicit and structural. Both exist. Both shape how billions of people receive information.

The Implication for the Future

The crucial point here isn’t just which chatbot you’re talking to, but the ecosystem it creates. Every application built on these models carries with it the same layer of censorship and political bias. Every autonomous agent that filters information for you inherits the ideological decisions of whoever trained the base model.

In a world where AI agents increasingly mediate our access to information — from personal assistants to research tools — understanding the origin and “values” of the AI we use isn’t paranoia. It’s basic digital literacy.

The question that should be on the mind of every tech professional, every business leader, and every citizen in 2026 is:

When AI filters reality for you, whose glasses is it wearing?

Share if this made you think:

AI is not neutral. It never was. Knowing that is already halfway to thinking for yourself.

