Cabinet Minister Judith Collins wants the government to expand the use of artificial intelligence (AI), starting with the health and education sectors where it could be used to assess mammogram results and provide AI tutors for children.
Collins, whose 'digitising government' portfolio includes responsibility for AI policy, says the technology could also be used for government productivity gains, including processing Official Information Act requests.
Collins told RNZ she already uses ChatGPT to write drafts of her speeches.
AI could benefit the education sector, she said, including by marking students' work.
"In some cases, if it's maths, for instance, yes. It's just helping those teachers get past that so they can spend more time on teaching."
Collins said AI tuition could lead to more equitable outcomes.
"So you have your own AI tutor. So instead of having to be wealthy enough to employ a tutor to help the children with the maths or science questions, or something else that the parent doesn't know much about maybe, is to enable that child to have their own (AI) tutor," she said.
"It doesn't do the work for them. It says some things like 'go back, rethink that one, look at that number,' those sorts of things. What an exciting way to do your homework if you're a child."
Collins is also eager for AI to be used in the health system and said processing mammogram results was one example.
"That is the sort of data that is collected all the time and if that can be turned into an AI solution that could instantly tell women whether or not they have something that they need to be concerned about and whether or not they need to go to the next stage of seeing a specialist - instantly, not in a few weeks, not when someone's available, but instantly."
'High risk' areas for AI use
Deploying AI in education and health would be considered high-risk uses under new legislation passed by the European Union to regulate AI.
"Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes," according to an EU press release when the law was passed in March.
AI used in those settings in EU countries must meet high standards of transparency, accuracy and human oversight.
The EU regulation bans certain applications outright, including biometric categorisation systems, the scraping of internet or CCTV footage to create facial recognition databases, emotion recognition in the workplace and schools, and AI that manipulates human behaviour or exploits vulnerabilities.
But New Zealand has no specific AI regulation and Collins is keen to get productivity gains from extending its use across government, including using it to process Official Information Act requests.
"It's a perfect example of how we in government could use AI because the rules around Official Information Act requests are very clear. The information or data that government agencies have access to - that can be used to actually provide OIA requests that are not held up any longer than they need to be."
An OIA request by RNZ for a government Cabinet paper on AI was turned down (by a human) on the grounds that the policy is under live consideration.
Collins said AI policy was "under very active consideration" now. "I'm trying to see if we can better use AI in our government services," she said.
"The amount of data that government agencies collect on people is enormous. But do we have any mechanism to make sure that that data is shared sufficiently so that we provide better services for people?"
She would not pursue a "big bang" approach but wanted to trial the technology in government agencies.
"I think the health sector and the education sector are both up for it. So they may have to vie for who wants to be first."
Public, expert concern
But Collins has some work to do in shifting attitudes to AI in Aotearoa.
New Zealand has the second-highest level of concern among 32 countries surveyed by market research company Ipsos.
Two-thirds of New Zealanders said AI makes them nervous, behind only Ireland on 67 percent and compared with a global average of 50 percent.
In the survey, released last month, 69 percent of New Zealanders said they had a good understanding of AI and 64 percent thought it would profoundly change their lives in the next three to five years.
Just over half of New Zealanders think AI will make the spread of false information worse - again the second-highest level of concern, behind Sweden at 55 percent and compared with a global average of 37 percent.
There has been concern about how fast AI is developing - even from its creators.
In March 2023 more than 1000 tech leaders, including Elon Musk, signed an open letter calling for a pause in AI development, warning of profound risks to humanity from an arms race to develop powerful digital minds not even their creators could understand or control.
In his 2021 book Scary Smart, Mo Gawdat, formerly chief business officer of Google X, predicts that by 2049 AI will be a billion times smarter than the smartest human. The gap in intelligence would be similar to that between a fly and Einstein, leading Gawdat to ask: how do you convince the superbeing not to squash the fly?
One morbid metric sweeping Silicon Valley is P-Doom: the probability of doom, where zero means humanity will be fine and 100 means we're all dead. A recent survey found the average AI engineer had a P-Doom of 40 - meaning a 40 percent chance AI will wipe us out.
But Collins described herself as an AI optimist. New Zealand could not stop AI so had to embrace it.
"It's like trying to stop the wind blowing. It's already here and it's going to keep going," she said. "I think we just need to stop being so frightened and actually understand that there are enormous benefits. But we have to also be aware that some people might misuse AI and we have to be ready for that too."
She gave voice cloning used in scams, and disinformation and deepfakes used to influence opinion, as examples of such misuse.
Collins said New Zealand already had privacy and digital identity laws that could be used to counter those uses of AI and the government was "always willing to regulate" if that was in the public interest.