The Algorithmic Erasure of Atrocity: AI, Politics, and the Struggle for Human Rights

Abstract: This article examines the intersection of artificial intelligence, political influence, and human rights, focusing on how large language models (LLMs) respond to allegations of mass atrocities such as the Uyghur genocide in China and the situation in Gaza. Drawing on documented instances, including the suspension of the Grok chatbot from X after it cited credible human rights sources, the analysis reveals how AI platforms selectively avoid or dilute politically sensitive claims. This editorial decision-making is shaped not by the absence of evidence, but by the economic, legal, and geopolitical interests of the institutions that design and govern AI systems. The article argues that such selective silencing undermines the internationally recognised right to truth, shields perpetrators from accountability, and perpetuates the marginalisation of communities excluded from AI training datasets. As AI becomes an increasingly dominant channel of information, its political constraints risk curating the historical record to favour the powerful, thereby entrenching systemic inequities in human rights advocacy and justice.

Keywords: artificial intelligence, human rights, right to truth, genocide, AI censorship, algorithmic bias, marginalised voices, accountability, content moderation, digital erasure

Artificial intelligence is often presented as a revolutionary tool for truth-finding and justice. In reality, AI systems are not neutral; they are programmed, moderated, and politically constrained by the individuals and institutions that control them. When those decision-makers determine that certain truths are too controversial or politically risky to acknowledge, the consequences are not merely technical. They are a direct threat to the protection and promotion of human rights.

The Genocide Questions AI Avoids

[…]