Courts will have to grapple more and more with AI fakes and it might take law changes to keep them out of trials, the government's chief legal advisers say.
Crown Law's long-term insights briefing to a parliamentary select committee on Thursday morning turned quickly to questions around the reliability of evidence in the age of deep fakes.
The ability of generative artificial intelligence (such as large language models that generate text, or image-generating AI) "to facilitate the production of fake evidence will increase and could challenge evidential integrity in the justice system", the briefing said.
It was a growing global problem, it said.
"Is it what the Crown or the Defence say it is? Does it have the truth that the particular photo or text purports to have, or is it fake?" Deputy Solicitor-General Madeleine Laracy told the select committee.
Deputy Solicitor-General Madeleine Laracy, right. Photo: RNZ / Angus Dreaver
"These create really tangible problems during trials" that they only had the normal tools of admissability to try to deal with.
The briefing suggested two ways to tackle it, but both had big implications. For instance, lawmakers could bring in a new "admissibility threshold", but if that meant all digital evidence was checked for reliability, it would "impose a significant additional burden" on both sides in criminal trials - and this in an already log-jammed system.
MPs asked: "Have we seen fake evidence from AI in courts today?"
Laracy noted one case she was familiar with, where the defence challenged the metadata behind Crown evidence. That came back to asking what other "human evidence" there was to show the evidence was reliable.
When RNZ asked Crown Law for more details, it said the case was still before the courts, which had ordered broad suppression.
The briefing said there were numerous examples overseas where counsel and self-represented defendants had been reprimanded for using cases that had been "hallucinated" (made up) by AI.
It referred to a 2025 case in London, which cited a New Zealand commercial case in which a draft referring to "apparently non-existent cases" led to a challenge.
Solicitor-General Una Jagose KC. Photo: Reece Baker/RNZ
Solicitor-General Una Jagose KC said the fake in a case could presumably be anything - "it could look like an email ... It could look like a recording of a person who makes an admission".
Crown Law's 31-page briefing said current cases suggested this was not widespread but Crown prosecutors told them about the "early signs ... [that] signal that authenticity challenges will become more common as technology advances".
"In one case there was an allegation during cross-examination of a Crown witness that Crown evidence was doctored in some way. In another, a Crown prosecutor was questioned (without basis) about using GenAI to write submissions.
"Media reports also indicate a self-represented defendant in a murder trial claimed that CCTV footage relied on by the Crown was fake.
"The Crown challenged the evidence given by the accused and he in turn alleged the Crown had produced false CCTV and other evidence."
The question became how to adapt - prosecutors, for instance, would have to become adept at recognising which defence evidence to challenge, and at responding to defence AI challenges, the briefing said.
"If the problem of fake evidence becomes widespread, it could become standard police procedure to analyse any evidence that will be relied on by a Crown witness, to enable assurances to be made to a future jury of its authenticity," said the briefing.
It was also anticipated they would need more experts who could testify about the integrity of metadata, said Jagose.
"The real challenge" was around defence evidence because it did not have to give the Crown a heads-up on it to allow time to check it, Laracy said.
"Verification procedures could delay trials which would not be desirable," said the briefing.
The courts are already backlogged. The committee discussed whether that might require law changes to set notification periods for evidence that raises questions about AI origins.
The briefing discussed that option, and a second "high level strategic" response, the "admissibility threshold".
Labour MP Vanushi Walters asked about the reliability of the advice that prosecutors might be getting from AI.
The Solicitor-General imposes a two-part test that must be met before a prosecution can go ahead: whether the evidence is sufficient, and whether prosecution is in the public interest.
Jagose said so far there were no guidelines on that, and there might come a time when AI made those decisions more efficient.
"I suspect that, well, I'm the Solicitor-General till next Friday, but I suspect that the Solicitor-General will always be anxious that criminal prosecution decisions are being made by a human because of the judgment and all the requirements and all the balancing of the public interest that needs to go into it.
"Maybe machines will be able to do that in the future but that's a very long way away I'd say," said Jagose.