The government begins rolling out new facial recognition technology next week despite an "untested risk" around racial bias.
No tests have been done for bias on New Zealand's specific population mix, even though the Department of Internal Affairs (DIA) has been developing the technology for four years.
US testing raises more questions.
Identity Check uses facial recognition tech from Irish company Daon to match a live image you take on a phone with your driver's licence or passport photo in government databases.
The Ministry of Social Development will be the first to go live with it on 20 November. Beneficiaries may choose to use it, or stick with existing verification systems.
But a recent MSD report calls the level of racial bias in the technology "unknown", "unconfirmed" and "untested".
"Before MSD can include within scope the use of Identity Check information from DIA for fraud investigation purposes, the current unknown level of bias in the facial recognition algorithm DIA use will first need to be better understood," according to a report in September, released under the Official Information Act (OIA).
"This is because if there is any bias in their technology, any clients who are affected, which may include Māori and Pasifika clients who make up a large proportion of MSD clients, may also go on to become the subject of a fraud investigation. This will likely have a significant impact on those Māori and Pasifika clients."
But DIA told RNZ that racial bias was not an issue because in recent tests the tool was 90 percent accurate. The tests covered 250 people.
"While we have not specifically tested the algorithms against different ethnic groups, the numbers ... suggest that the technology we are using is very successful in the New Zealand context," it said.
"It works for the vast majority of people, in the vast majority of instances it's used."
The MSD report said US government tests in July suggested the Daon AI algorithm worked less well "on people of darker skin tones than on people of lighter skin tones".
This has been a long-standing problem with many facial recognition systems.
That might result in "unjustified discrimination" and "emotional harm" for Māori and Pasifika unable to use Identity Check as easily as white people do, the report said.
But MSD's assessment concluded that the risk was acceptable, because ethnic groups were being consulted, and because efforts were under way to improve the tech.
"DIA will use identity check samples (training/performance data) to re-train the algorithm so its performance among the NZ population is improved. This is an ad hoc, bespoke process where DIA contracts an external consultant to carry out."
MSD also cited the 90 percent test result. This was achieved when people had up to three goes to get a facial match, and did not explicitly test for ethnic differences.
US tests on the algorithm, and Internal Affairs' own limited testing of it, showed a false non-match rate of 10 percent, MSD said.
No racial bias tests would be done here because Internal Affairs planned to swap in 2026 to a different Daon algorithm used for years with New Zealand passports, it added. DIA did not confirm this, saying only that it was "always looking at options" to ensure the most secure, effective and cost-efficient approach.
Joy Liddicoat, an AI researcher at the Centre for Law and Policy in Emerging Technologies at Otago University, said the development of a system the government had big plans for was flawed.
"When the dangers of the use of these kinds of metrics are so well known, it's simply not good enough, over the life of a four-year project, not to have nailed this down before launching."
She said the overall aim was good - to make accessing benefits easier - and it might sometimes be OK to launch first and learn as you go - but not when a system might red-flag a person without them even knowing.
Users - beneficiaries to start with - get five goes before the system locks them out for three days.
Fraud investigation
Identity Check can also be used to detect and investigate fraud.
"MSD will not use DIA data for fraud investigation purposes in phase 1," according to the MSD report on security, privacy and ethics.
"This reduces several risks for this initiative, including an unconfirmed human rights and ethics risk of racial bias in DIA's facial recognition technology."
Officials planned to communicate "an acknowledgement of the untested risk relating to racial bias" after the system went live, the report said.
MSD had discussed this with its Pasifika reference group, and had been due to do the same with its Māori group. Internal Affairs, meanwhile, had promised to engage with Māori and incorporate their views, but "it is yet to occur due to resourcing constraints. DIA hopes to begin that process in the next quarter", the MSD report said.
That engagement is coming four years after DIA embarked on Identity Check.
MSD Pasifika reference group member Danny Mareko confirmed officials told them about the tech, though he could not recall any discussion about racial bias. The ministry was good at engaging with them, he said.
The chair of the Māori reference group did not provide comment.
In a one-year trial run by Internal Affairs up to September 2023, the facial recognition failed 45 percent of the time. This was not a racial bias problem, but a problem with the AI used to verify that an image was a live capture of the person, and not, say, a still photo of someone else.
An upgraded version has since scored 90 percent.
The government wants Identity Check to become the go-to technology for proving everyone's identity online, to make accessing hundreds of services, and doing e-business, easier and more secure.
Daon has been approached for comment.