27 May 2019

Govt algorithm use growing, regulation needed: study

From Nine To Noon, 9:09 am on 27 May 2019


The increasing use of artificial intelligence by government agencies has researchers calling for more oversight. 

A study by the University of Otago and the New Zealand Law Foundation, Government Use of Artificial Intelligence in New Zealand, proposes an independent regulator to ensure algorithms can be investigated.

The AI study is the first to be funded under a $2m New Zealand Law Foundation fund aiming to understand how our law and policy can keep up with technological change.  

There are 32 documented algorithms used by 14 state agencies in New Zealand, including ACC, Corrections, the Ministry of Social Development and Police.

The report focused mainly on predictive algorithms, including the RoC*RoI algorithm, which estimates the risk of re-conviction and re-imprisonment and has been used in pre-sentence reports and parole board decisions.

ACC introduced an automated claims system in July 2018 to process the roughly 90 percent of claims it regards as straightforward, although the system can only accept or refer claims, not decline them.
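As a rough illustration of that kind of "accept or refer" rule - using made-up claim fields and thresholds, not ACC's actual criteria - a short Python sketch might look like this:

from dataclasses import dataclass

@dataclass
class Claim:
    injury_code: str       # type of injury reported (hypothetical field)
    treatment_cost: float  # estimated cost in NZ dollars (hypothetical field)
    prior_claims: int      # number of earlier claims on file (hypothetical field)

# Injury codes the hypothetical system treats as straightforward (illustrative only).
STRAIGHTFORWARD_CODES = {"sprain", "minor_cut", "bruise"}

def triage(claim: Claim) -> str:
    """Return 'accept' or 'refer' - the automated step never declines a claim."""
    if (claim.injury_code in STRAIGHTFORWARD_CODES
            and claim.treatment_cost < 500
            and claim.prior_claims < 3):
        return "accept"
    # Anything not clearly straightforward is referred to a human assessor.
    return "refer"

print(triage(Claim("sprain", 120.0, 0)))        # accept
print(triage(Claim("head_injury", 9000.0, 1)))  # refer

The key design point is that the automated path is one-way: it can only approve or pass a claim on, so declining always involves a person.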

Colin Gavaghan of the University of Otago. Photo: Supplied

The study's co-author Colin Gavaghan told Nine To Noon that while ACC applications might be easy to automate, more complex decisions would still require human evaluation. 

"We're quite open to the possibility that some kinds of decisions by their nature they're quite simple. They're not very high risk and there's not a massive problem perhaps with those being automated, but the further you move up that spectrum - decisions about who gets to be a citizen, who gets to stay in the country, who gets to stay in prison, who gets to keep their kids - these are some of the most important decisions the government can ever make about you and that's where we think a closer bit of monitoring and scrutiny is called for." 

While adding a "human in the loop" is often seen as a way to make automated systems more reliable, the report said human oversight could itself put accuracy at risk.

"We're just not very good at, it would seem, operating alongside these systems and retaining some kind of decisional autonomy," Mr Gavaghan said. 

"We've warned that there's a real danger here where we become overreliant that keeping "a human in the loop" will some how guard us against all of the concerns here but in actual fact, we have to be very careful of that."

Another major concern with using predictive algorithms in the public sector was bias, and ensuring that the AI did not discriminate against particular groups.

The study examines the particular problems with using algorithms in the criminal justice system - predictive policing in America has been criticised for creating a feedback loop in which the predictions over-represent areas already known to police, leading officers to increasingly patrol those same areas and find criminal activity that confirms what the algorithm says.
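As a rough illustration of that loop - with made-up numbers, not any real policing data or the systems the report examined - a short Python sketch shows how patrols that simply follow recorded crime can keep confirming the original prediction:

import random

random.seed(0)

# Two areas with the same underlying offending rate; area A simply starts
# with a few more recorded incidents. All numbers are invented.
true_rate = {"A": 0.10, "B": 0.10}
recorded = {"A": 12, "B": 10}
patrols_per_round = 20

for _ in range(15):
    # The "prediction" is simply the area with the most recorded crime,
    # and all patrols follow the prediction.
    target = max(recorded, key=recorded.get)
    # Offending is only recorded where officers are present to observe it.
    detected = sum(random.random() < true_rate[target]
                   for _ in range(patrols_per_round))
    recorded[target] += detected

print(recorded)
# Area A keeps accumulating recorded incidents while area B stays flat,
# even though the true rates are identical - the algorithm's output
# ends up confirming itself.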

"When you're talking about sections of the population that are already stigmatised quite substantially, that can be a real issue," Mr Gavaghan said. 

Mr Gavaghan and his research team have proposed a regulatory body that would maintain a register of the algorithms the government uses and produce an annual public report on how they were used.