A New Zealand Herald editorial written with the help of artificial intelligence has raised questions about how our media should use the technology for journalism - and how much news consumers should be told. Also: does anyone know what audiences want or expect from media using AI?
A reflection on the All Blacks' dilemma at centre led the Weekend Herald's editorial page on July 20. It also featured in other weekend papers published by the Herald's owner NZME, including the Bay of Plenty Times and the Gisborne Herald.
"The All Blacks find themselves at a crucial juncture," it began. The English defence supposedly exposed chinks in the All Blacks "attacking armour."
And there was repetition: the All Blacks also found themselves at a crossroads, as well as at that "crucial juncture".
Tell-tale signs of AI-generated text?
AI detection tools returned positive results.
The editorial also drew heavily on a piece the Herald published three days earlier written by Herald sportswriter Gregor Paul.
When Mediawatch put that to the Weekend Herald's publisher NZME, editor-in-chief Murray Kirkness said AI had been used in a way that wasn't up to scratch.
"Any piece of content that uses AI is reviewed, edited and has journalistic oversight. In this particular case, we accept more journalistic rigour would have been beneficial, and we will communicate this to our team," Kirkness said in a statement.
The Herald added an editor's note to the July 20 editorial on the website: "AI was used in the original production of this column. It was edited on July 31 to provide more journalistic oversight."
A supplementary editorial underneath, about MMA star Israel Adesanya, also echoed another Herald piece published three days earlier.
That prompted questions about other Herald analysis pieces and editorials. Putting some of them through four online detection tools yielded similar results.
Three of the four detection tools thought AI had been used in the July 10 editorial - Eden Park under siege as England seek redemption over All Blacks.
Three of four pointed to AI-generated copy in the July 24 editorial - Warriors' NRL playoff dreams teeter after narrow loss to Canberra Raiders.
All four thought AI had been used in the May 29 editorial Rugby's future in New Zealand at stake, and in an opinion piece by a Herald sports editor headlined Scott Robertson's tough call to combine tradition with transformation at lock.
Mediawatch asked NZME if any of these stories had been written with the help of AI, but it said it had "nothing more to add."
AI use in the spotlight
Detection programmes are not foolproof, and their results cannot show how big a role AI played in the creation of any given article.
The articles in question may still comply with NZME's policy, which allows for AI to help create content provided journalists oversee its use.
However, all NZME editorial staff have been encouraged to attend an "all hands meeting" about the company's use of AI next week.
Media consultant Peter Bale has worked for Reuters, the Financial Times and CNN, and wrote editorial policy for some of them.
"News organisations around the world are using generative AI that creates content, as opposed to just making recommendations. We're all learning very fast in this," Peter Bale told Mediawatch.
What is best practice?
"One of the best uses that I have seen is analysing a really large report from (medical journal) The Lancet or something like for that. It can be extremely good in extracting the bullet points or helping you understand the key points," said Bale, who was also a leader for an International News Media Association (INMA) newsroom project promoting innovation and good practice in publishing.
NZME's main rival Stuff says readers are told in every story how AI has been used - and even which tools have been deployed.
"Probably the best practice is disclosure on each story. But if you're using [AI] extensively, that can become hard," said Bale.
"Using AI to suggest search engine optimisation terms? I don't think [disclosure] is required. Or where you have an editor in the way."
The Herald's July 20 editorials both appear to be based on articles by the Herald's own writers.
Editorials that paraphrase and amplify the reporting of a paper's own staff are not uncommon. Some New Zealand newsrooms also use AI to shrink stories down to size.
So should using AI to frame an editorial come as no surprise?
"I haven't seen examples using generative AI to produce editorials, which are normally the voice of the paper. But it is supposed to be a kind of masthead statement by the paper, so you would hope it at least reads well and has context and isn't clunky," Bale told Mediawatch.
"But we're not talking here about plagiarism or deliberately or accidentally inserting errors. The Herald's disclosure statement is pretty comprehensive - and I'm aware the Herald is using this as a learning opportunity." "We should be glad that they're doing that. They haven't made some sort of egregious error. They've just produced something that isn't quite as good as they would have wished."
What do we expect from our media using AI?
The biggest annual survey of news and media is the Reuters Institute Digital News Report, and this year, for the first time, it asked people in 28 countries how they felt about the integration of AI into news and journalism and their awareness of it.
The survey found that fewer than half of respondents said they knew much about AI in the first place.
"Across all of the countries, only around 1/3 of the respondents say that they feel comfortable using news made mostly by humans with the help of AI. So 36 percent and the proportion is even smaller, so just around one in five when it comes to news that's made mostly by AI with some human oversight," said lead researcher Dr Amy Ross Aguedes.
She also said media companies have to be careful when alerting people to their use of AI, because it draws attention to something many find confusing - even alarming.
"Going overboard with labeling or using language that's really vague carries the risk of scaring off people who already tend to have low levels of trust in news and lower levels of knowledge about AI. They're going to tend to default to more negative assumptions," she said.
"But at the same time, failing to provide audiences with information that they may want to decide what news they want to use - and what news they trust - could also prove damaging. Publishers are going to have to thread this needle very carefully," she said.
Image problems
Concern about AI's capacity to manipulate and generate images is also intensifying.
"We've been through this kind of problem with the use of Adobe Photoshop and other photo editing tools. Reuters news agency, where I worked for many years, has extremely strict rules on the use of enhancement tools," Bale told Mediawatch.
"The risks are much higher now because you can create such exceptional images. It's really important . . . to be on guard with this and as transparent as possible."
"The news media industry have enough trouble with trust already."
It's not just an issue for big publishers of news.
"We meet thousands of tradies every month, so we know what they're thinking and we know what they need," says Tradie magazine.
"We're their best mate on site, the funniest, toughest, best, informed foreman any hard working hero could ask for," says the magazine.
The front page of the June edition was hotly debated on the online forum Reddit after one user discovered a stock library selling the same image, slugged 'happy builder wearing a helmet and reflective vest'.
A note attached to the stock image says: "Generated with AI. Editorial use must not be misleading or deceptive."
But there was no such declaration for readers of the June issue of Tradie.
Kaleb Francis, a digital strategist at the Auckland-based marketing agency Marque, argued the media are risking their reputation by using AI images in this way.
"The news media in particular has got a reputation to uphold. They've got revenue issues, but...they need to deliver what they say they're going to. So if it's images, or if it's editorial, (AI) needs to be disclosed," he told Mediawatch.
"How do we trust that what is being shared is actually true?"
"Step by step, [AI] is going to get more and more convincing. People won't know what is real and what is not. Getty Images that says almost 90 percent of consumers globally want to know whether an image has been created using AI. Another study from the University of Waterloo said only 61 per cent of participants could tell the difference between AI- generated people and real ones."
"Tradie magazine often uses real images of people and now all of a sudden they're not. Why are they doing that?"
"Tradespeople are thinking: 'Is AI going to take my job? How am I gonna be displaced? And they see AI content that is actually not representative of them. It'll chip away over time and flow through in society's understanding of what AI is and what it can do."
Francis says the Herald should declare AI use too - especially in editorials.
"If this is the voice of the paper, but it's actually being generated by AI... who's the voice? Where's it coming from? I am paying for it - but for AI to write it? That just doesn't seem like the value exchange."
"Why wouldn't I want to know? The media inform people and keep them up to date and let them know what is real. So if they expect it, but the media is not delivering it... then how would you feel?"