
Measures to safeguard the NZ government’s use of AI

The University of Otago’s Artificial Intelligence and Law in New Zealand Project (AILNZP) has recently released a report on government agencies’ use of AI algorithms, according to a recent press release.

The report concludes that New Zealand is a world leader in government algorithm use, but that measures are needed to guard against the dangers these algorithms pose.

Issues with AI use

Although AI can enhance the accuracy, efficiency and fairness of decisions affecting New Zealanders, there are also concerns about accuracy, transparency, control and bias.

People may think that a computer programme cannot be prejudiced. However, if the data it learns from is based on human decisions, its outputs can be tainted by historic human biases.

Add to that the danger posed by innocent-looking factors such as postcode, which can serve as proxies for attributes like race.
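The proxy problem can be illustrated with a small simulation. This is a hypothetical sketch, not anything from the report: the group labels, postcodes and approval rates below are invented for illustration. A trivial model is trained only on postcode, never sees the protected attribute, yet reproduces the bias in the historic decisions because postcode correlates with group membership.

```python
import random

random.seed(0)

# Hypothetical simulated history: group membership correlates with
# postcode, and past human decisions were biased against group B.
def make_record():
    group = random.choice(["A", "B"])
    postcode = "9001" if random.random() < (0.9 if group == "A" else 0.1) else "9002"
    approved = random.random() < (0.8 if group == "A" else 0.4)  # biased history
    return group, postcode, approved

data = [make_record() for _ in range(10_000)]

# "Train" a trivial model on postcode alone: approve whenever the
# historic approval rate for that postcode exceeds 50%.
counts = {}
for _, postcode, approved in data:
    n, k = counts.get(postcode, (0, 0))
    counts[postcode] = (n + 1, k + approved)
model = {pc: k / n > 0.5 for pc, (n, k) in counts.items()}

# The model never saw `group`, yet its approval rates differ by group,
# because postcode acts as a proxy for group membership.
by_group = {"A": [], "B": []}
for group, postcode, _ in data:
    by_group[group].append(model[postcode])
for g in ("A", "B"):
    print(g, round(sum(by_group[g]) / len(by_group[g]), 2))
```

With these invented numbers, the postcode-only model ends up approving almost all of group A and almost none of group B, even though the protected attribute was never an input.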

Transparency is in knowing

Checking algorithmic decisions for these sorts of problems requires that the decision-making be transparent: a decision can only be checked or corrected if it is possible to see how it was made.

Unlike some countries that use commercial AI products, New Zealand tends to build its government AI tools in-house, which means agencies know how their algorithms work.

The study strongly recommends that the government continue this practice.

Guarding against unintended algorithmic bias, though, involves more than being able to see how the code works. Even with the best of intentions, problems can sneak back in if care is not taken.

Establishing effective supervision

To address this, the report recommends that New Zealand establish a new, independent regulator to oversee the use of algorithms in government.

The report also warns against “regulatory placebos”, which are measures that make people feel protected without actually making them any safer.

One example is keeping a “human in the loop”: a guarantee that no decision is made by the algorithm alone, without a person signing it off.

However, there is good evidence that humans tend to become over-trusting and uncritical of automated systems, particularly when those systems get it right most of the time.

There is a real danger of offering false reassurance by adding a human “in the loop”.

These are powerful tools, and they are making recommendations that can affect some of the most important parts of people’s lives.

Effective supervision is needed to check accuracy, avoid discriminatory outcomes, and promote transparency.


The report recommends that predictive algorithms used by government, whether developed commercially or in-house, must:

  1. feature in a public register;
  2. be publicly ‘inspectable’; and
  3. be supplemented with explanation systems that allow laypeople to understand how they reach their decisions.

Moreover, their accuracy should be regularly assessed. These assessments should then be made publicly available.

The research was funded by a New Zealand Foundation known for funding quality legal research in the country.

The Foundation’s Director said that, at NZ$432,217, this project received the lion’s share of funding distributed under its Information Law and Policy Project.

She added that the research was undertaken because artificial intelligence and its impacts should be well understood in New Zealand.

The release of the Phase 1 report provides the first significant, independent and multi-disciplinary analysis of the use of algorithms by New Zealand government agencies.

The information from this work will better inform the development of stronger policy and regulation.
