Fissardi, C. (2026). "Sorry, that's beyond my current scope". Semiotic considerations on DeepSeek's content moderation practices as a silencing strategy. In M. Giacomazzi, M. Grinello, A.M. Lorusso, F. Mazzucchelli (Eds.), Forme del silenzio (pp. 527-543). Milano: Mimesis.
"Sorry, that's beyond my current scope". Semiotic considerations on DeepSeek's content moderation practices as a silencing strategy
Carla Fissardi
2026-04-15
Abstract
This paper examines DeepSeek's content moderation practices through a semiotic lens, arguing that they function as a systematic silencing strategy. DeepSeek, the first Chinese AI chatbot to achieve global success, employs two distinct moderation mechanisms: outright censorship – blocking or deleting responses that contain politically sensitive trigger words – and propaganda, whereby silence is replaced by pro-CCP rhetorical discourse. Drawing on a dataset of 220 responses to 110 taboo-oriented prompts collected in March 2025, the study analyzes how the chatbot's two models (V3 and R1) respond differently: R1 predominantly censors, while V3 more often substitutes responses with propagandistic content. The theoretical framework integrates Greimas's semiotic square of veridiction and Eco's reflections on silence and noise to map these behaviors as complementary poles of an ideological control mechanism. The analysis shows that DeepSeek's moderation is not merely a safety tool but a politically motivated instrument that shapes what can be said, known, and asked – ultimately revealing, through the very act of silencing, the morphology of power underlying its design.
Full monograph PDF (open access): 9791222328850_compressed.pdf, published version, 3.83 MB, Adobe PDF.