'If this technology goes wrong, it can go quite wrong ... we want to work with the government to prevent that' - Sam Altman
Warnings from those behind artificial intelligence (AI) technology have made lawmakers around the world take notice.
The tech sector itself is engaged with the government and is considering six cornerstone principles as the basis of a digital strategy, but there appears to be little urgency from ministers - and no sign of concrete action before October's general election.
As AI technology starts seeping into politics in New Zealand, its rapid rise is prompting calls around the world for greater regulation - notably, even from its creators.
OpenAI is the company behind text generator ChatGPT, and chief executive Sam Altman's submission to the US Senate in May calling for government intervention caught global attention.
"The US government might consider a combination of licencing and testing requirements for development and release of AI models above a threshold of capabilities," he said. "It's one of my areas of greatest concern, the more general ability of these models to manipulate, the persuade, to provide sort of one on one interactions with this information."
Australian IT expert Toby Walsh, a Laureate Fellow and Scientia Professor of Artificial Intelligence at the University of New South Wales and author of Machines Behaving Badly, says the benefit of AI is that it can think faster than we can - but it lacks key human characteristics.
"Whether it be hiring people or making decisions in the judiciary - they don't have our empathy, they don't have our social intelligence, they don't have our adaptability, they don't have our common sense, our creativity."
US Correspondent Toni Waterman says lawmakers there are drawing comparisons with the atomic bomb as a technology with the potential to upend economies, democracies, value systems and security.
"They are calling for regulation, they want to put regulation in place, I think one of the challenges is that this is an emerging technology: the lawmakers aren't experts in a very technical field."
The Biden administration has asked officials to look at possible accountability measures; China has brought in rules for the kinds of content generative products like ChatGPT can create; and the EU is progressing laws which would see the riskiest AI programs banned by default.
In New Zealand, there are some measures in place already - brought in under the Privacy Act 2020 - around collection, storage, access and use of personal information.
Privacy Commissioner Michael Webster released a list of expectations for companies and agencies last week. He says the onus is on the creators of new products to show they're compliant with New Zealand's laws, and transparency is critical.
"We have a strong expectation here that when agencies - whether public or private sector - are thinking about using new technology that they undertake what we call a privacy impact assessment, and that should allow them to examine all the potential areas of risk that using that technology might have for people's privacy.
"If the risks are too high then my expectation would be that they won't proceed with that proposal.
"If we see evidence of suspected non-compliance with the Privacy Act through the use that businesses and government agencies make when they use AI, we will absolutely follow that up."
National has been dampening down concerns about its use of AI technology in image creation for its election campaign. Leader Christopher Luxon says there's little practical difference between images generated that way and using stock images with actors, or using Photoshop - and says the party is using a disclaimer to make it clear when AI is being used.
"But we need to embrace AI, we've seen the government doing it itself, it has a health programme I think for young Māori that it actually is using chatbots and AI to be able to reach out and educate them ... there is a massive opportunity for New Zealand to embrace AI, to embrace RPA (Robotic Process Automation), embrace a whole lot of automation and technology in order for us to become more productive as a country."
His party's spokesperson on research, science, AI and technology Judith Collins says the Privacy Act principles are reasonably good, but she's "particularly keen on making sure that government - any government - doesn't abuse it, and use it actually for the public good".
"We've also got to understand that if we try and regulate the use of AI it's a little bit like regulating physics: we've got to be really careful how we do that and make sure we don't have unintended consequences."
She set up a cross-party group looking into the matter, which met for the first time recently. One of those involved is Green MP Chloe Swarbrick, who says people are giving up personal information for free to multimillion-dollar companies to feed into machine learning, without a real understanding of where that information ends up.
"In the EU there's been quite substantive moves - which are quite different to the political consensus that's formed in the United States - but all signs point to New Zealand being quite far behind," she says.
Digital Economy and Communications Minister Ginny Andersen says it's important to get a good ethical framework in place, and has asked the Department of Internal Affairs to provide advice on how to make use of AI while being aware of the risks.
"We want to make sure that there's some frameworks in place, that it's being used in an ethical way."
Andersen - who was not invited to the group, but is keen to take part - denies the government is dragging its feet. She says it's important to continue having conversations with partner countries, "because now, with the internet, everything is connected. So what happens in other countries is directly relevant for here in New Zealand".
Despite these good intentions, there's no sign anything will actually be done before the general election in October.
National's hackles are up after the AI-ads controversy, but any potential government will need to step back and seriously consider the quickly advancing technology.
The risk is that politicians will have missed the opportunity to set up safeguards - and meet the clear expectations of the industry - before AI becomes embedded in society and communities.
In this week's Focus on Politics, Political Editor Jane Patterson explores the world of Artificial Intelligence and looks at whether its use should be subject to regulation.
Listen to the full podcast
Listen free to Focus on Politics on Apple Podcasts, on Spotify, on iHeart Radio or wherever you get your podcasts.