Start honing your critical thinking skills unless you want to swallow some pretty random information from generative artificial intelligence.
Is AI one big step towards a more productive life, or one giant leap into the realm of misinformation?
This incredible development is being forced on us by the tech world, with plenty of potholes along the way.
Cats on the moon, anyone? Establishing a daily rock-eating habit? Cooking with gasoline? You can sauté your onions and garlic in it for a 'spicy' pasta dish. Or how about a recipe for mustard gas to clean your washing machine?
These are some of the Google answers that generative AI has had a hand in.
Misinformation is a huge concern, says Amanda Williamson, the AI lead at Deloitte and senior lecturer at Waikato University.
"There are concerns around deepfakes, and fraud," she says. "In terms of misinformation, beyond people being able to impersonate others really well, the idea of misinformation more broadly is absolutely a concern right now.
"Because the ability for content to be created, and being shipped up as knowledge that's just as trustworthy as a normally Google link, is really hard to discern."
There's some good news.
"I don't think it's a forever thing. Right now we have to be critical and we can't trust everything. But keep in mind that the technology is as bad as it will ever be.
"This is the worst AI you will ever use. It's only going to get better," she says. "Right now we can barely imagine how good it can become."
The big danger at the moment is in not being able to fact check where information comes from.
"If we can use AI that provides links, and we can click on the link and go and see where the information was retrieved from, then we're able to use our own sense of critical thinking to determine if it makes sense and it's from a reputable outfit," says Williamson.
"But if we're using AI tools that don't reference where they've got information from and we do not have the track record to trust them, then we need to take everything with a pinch of salt."
Williamson says companies like Google are having a tough time right now because they have to change their whole model of delivering information on the internet.
"They've seen a new way of doing it, which is through the use of generative AI.
"But generative AI is incredibly creative, as cats on the moon would suggest, and so they have to contend with moving with the times, and doing so in a trustworthy manner.
"And I think that right now not everyone has found the right balance.
"It really shows we can not blindly trust anything particularly that generative AI is producing right now. We have to switch on our brains and be critical thinkers."
In The Detail today, Williamson talks about the difference between 'black box' and 'white' or 'transparent box' AI: the former arrives at a conclusion without leaving a trail, while the latter can be taught to explain its thinking.
She says if you're using it for work you should be treating AI as a co-worker - and a very junior co-worker at that.
"You need to be able to fact check everything it does. We don't ever give the AI the final word on things."