AI Hallucinations Force Supreme Court Apology: A Wake-Up Call for Training and Ethics in the Legal Profession

HARARE, Zimbabwe – A recent incident involving a respected legal professional in Zimbabwe serves as a stark reminder of the critical importance of AI training, verification, and ethical use, particularly in high-stakes fields like law. Professor Welshman Ncube, a prominent lawyer, was compelled to issue a formal apology to the Supreme Court after submitting legal arguments that contained fabricated and incorrectly attributed case law – errors traced directly to unverified research produced with artificial-intelligence tools.

The embarrassing misstep occurred in Pulserate Investments (Pvt) Ltd v Andrew Zuze and Others [SC202/25], in which Professor Ncube represented the appellant. In a letter dated July 3 addressed to the Registrar of the Supreme Court, Professor Ncube took full responsibility for presenting twelve case citations that were either non-existent or irrelevant to the matter at hand.

“I wish to express my profound regret and apology to the Court for the citation of defective and non-existent cases in the heads of argument I prepared and caused to be filed,” Professor Ncube stated in his apology. He explained that the erroneous citations originated with a researcher working under his supervision, who had relied on artificial-intelligence tools to compile legal authorities without adequately verifying the generated material.

Professor Ncube candidly admitted his own failure to check the references meticulously before submission, acknowledging it as a serious oversight. “There is no excuse that can justify such an error,” he wrote. The apology extended not only to the Supreme Court but also to opposing counsel, whose vigilant submissions had brought the incorrect references to light.

This incident, while undoubtedly a humbling experience for Professor Ncube, offers a powerful lesson for the broader professional community, especially as AI tools become more integrated into daily workflows.

Key Takeaways for Professionals and Organizations:

  • The Imperative of AI Training: This case underscores the urgent need for comprehensive training for anyone utilizing AI in professional contexts. Users must understand AI’s capabilities, limitations, and potential for “hallucination” – generating plausible but false information.
  • The Non-Negotiable Role of Verification: AI should be treated as an assistive tool, not a replacement for critical thinking and rigorous verification. Every piece of AI-generated information, particularly in sensitive areas like legal research, must be cross-referenced against reliable primary sources – in this case, the law reports and court records themselves.
  • Establishing Ethical AI Guidelines: Organizations must develop clear ethical guidelines for AI use, emphasizing accountability and responsibility. Who is ultimately responsible when AI makes an error? This incident highlights that the human user remains accountable for the output.
  • Continuous Learning and Adaptation: The rapid evolution of AI necessitates a commitment to continuous learning. Professionals must stay abreast of advancements, best practices, and the evolving risks associated with AI technologies.
  • The Enduring Value of Human Expertise: While AI can augment human capabilities, it cannot replace the nuanced judgment, critical analysis, and ethical reasoning that human professionals bring to their work.

Professor Ncube’s candid apology demonstrates integrity and a commitment to professional standards. However, this incident serves as a stark warning: the convenience offered by AI must be balanced with a heightened sense of diligence and an unwavering commitment to accuracy and ethical practice. As AI continues to reshape our professional landscapes, ensuring proper training, robust verification protocols, and a strong ethical framework will be paramount to harnessing its power responsibly.