A recent decision of the Supreme Court highlights an increasingly common issue: the use of generative artificial intelligence (AI) to draft court submissions, and the serious risks that can follow if it is used carelessly.
In Jones v Family Court at Whangārei [2026] NZSC 1, the Supreme Court noted that one of the parties, a lay litigant, had cited a number of authorities that appeared to have been “hallucinated” by a generative AI application.
The Court described the misuse of AI in legal proceedings as having “serious implications for the administration of justice and public confidence in the justice system”. It went on to emphasise that anyone filing submissions must ensure that all authorities cited are genuine and correctly referenced, and that failure to do so, even by a lay litigant, could in serious cases amount to obstruction of justice or contempt of court.
RISKS WITH USE OF GENERATIVE AI
Generative AI tools like ChatGPT and Claude can produce fluent, persuasive legal writing in seconds. However, these systems do not “know” the law. They predict text based on patterns in their training data. As a result, they can confidently generate:
- cases that do not exist;
- real cases with incorrect citations;
- plausible-sounding but inaccurate statements of law; and
- fabricated quotations.
The Supreme Court’s comments signal that New Zealand courts are alert to the risks posed by uncritical reliance on generative AI. As use of these tools becomes more widespread, scrutiny is likely to increase rather than diminish.
The risks are real. Citing non-existent or incorrect law or authorities can have professional, strategic and reputational consequences, and reliance on inaccurate AI output can damage credibility and weaken an otherwise arguable case.
USING AI IN A LEGAL CONTEXT
AI can assist with structure, clarity and efficiency, but it cannot replace legal expertise.
As qualified lawyers, we use AI tools to assist us in delivering efficient and effective advice. It is the combination of legal training, experience and professional judgment that enables us to use those tools appropriately and critically. Our training helps us to ask the right questions and to test the answers. Even then, we have seen examples of AI hallucinations and incorrect legal commentary or case references. AI is not, by itself, the answer.
Effective conflict resolution also requires strategic judgment, experience, and an understanding of the human factors that technology cannot replicate. An expert lawyer, assisted by AI, offers the best prospect of achieving resolution — whether by negotiated settlement or through the Courts.
Our team is experienced in using technology responsibly to support high-quality legal advice. If you would like assistance, please contact us.