South Africa’s Department of Home Affairs (DHA) recently learned the hard way that AI “hallucinations” at the state level can quickly escalate into a disciplinary catastrophe. As reported by Cape Town Etc, a major scandal erupted over the White Paper on Citizenship and Immigration—a strategic document approved by the cabinet. Unfortunately for its authors, the text was peppered with fictional legal sources.

According to the department, these fake citations crept into the final appendix after the core drafting was complete. The mechanics of this “legal suicide” were simple: in an effort to lend weight to policy reforms, officials outsourced the bibliography to a large language model, which proceeded to fill the document with non-existent references.

For those who chose to sacrifice critical thinking for time-saving, the fallout was swift. The DHA confirmed the suspension of the Chief Director overseeing the sector, followed shortly by the director responsible for the actual drafting. This precedent is significant not just for the suspensions, but for the depth of the subsequent audit. To assess the scale of the “creative writing,” the ministry has hired two independent law firms to review every policy document produced since November 30, 2022—the day ChatGPT was released to the public. Effectively, the department’s entire paper trail since ChatGPT’s debut is now under suspicion.

Despite the reputational collapse, the DHA is attempting to save face by insisting the core reforms remain valid. Representatives claim the primary content underwent public consultation and that the quality control failure was merely a regrettable error in deploying a “disruptive tool.” The controversial list of references has been officially retracted, leaving officials to explain why a national strategy was backed by algorithmic fantasies.

While the department has hurried to label the incident an “opportunity for growth,” hiring external law firms for damage control suggests a state of panic. For any executive, this case is a vivid lesson: deploying AI without a culture of rigorous fact-checking inevitably erodes legitimacy. When leadership signs off on hallucinations, the cost of “efficiency” is no longer measured in hours saved, but in careers ended.
