Artificial intelligence software is being developed that could detect and diagnose brain injuries or the onset of disease, but we're being assured it won't take the place of medical professionals.
Two words uttered during a phone call home inspired an AI expert to create an app that can detect brain injuries.
"I said, 'hi mum' and she said, ' okay, what's wrong?'. It was just two words, how did my mum realise that I'm not in my best mood?" asks Sam Madanian.
Madanian is a senior lecturer in computer science and software engineering at AUT, and has spent the last six years researching how AI and machine learning can be used to analyse people's speech.
"I started thinking about how the way that we talk, the way we communicate, has another dimension on top of the context... for example, if you are happy, the signal of the speech with the same context is different from when you are sad or when you are angry," she explains.
Eventually, Madanian's goal is for this tool to be used to screen for mental health conditions and detect more serious brain disorders such as dementia or Parkinson's disease in their early stages.
But she says this is a sensitive field to research, and right now the tech is being developed to detect the severity of a brain injury.
"In mental health we have some research limitations especially when we're working with vulnerable populations," she says.
Madanian says the tool could be especially helpful for newly graduated doctors and nurses, who don't have the years of experience to identify what can be very subtle red flags.
"This mobile application is designed to record speech plus some other context and background, and it can come up with suggestions," she says.
The current model being tested is 90 percent accurate at reading human emotions, but there's still some work to be done on detecting mild traumatic brain injuries, where the success rate ranges from 68 to 70 percent.
Once improvements have been made to bring the success rate up, the next step is engaging with health experts about how the app can be used safely in medical settings.
Madanian says it's a long journey, but they don't want to rush it.
"Technology is a grey area, it comes with some limitations, and this is the case with AI... we cannot push these technologies into healthcare until we are completely aware of all the downsides and its limitations."
"I wouldn't say this technology makes any decisions... this is what I really try to advocate in terms of the usage of AI in healthcare, especially because it's very vulnerable, it's a matter of life and death. That's why I never label any of these concepts as diagnostic tools. I always call them a decision support tool, and there would be no intention of removing people or replacing them with these systems," she says.
The Detail also speaks to Reza Shahamiri, a senior lecturer in software engineering at the University of Auckland, who describes AI as an umbrella term for several technologies and thinks it has the potential to save New Zealand's crumbling health system.
"AI could provide a lot of automation, it could make it a lot easier for doctors to provide care, to make diagnoses. That is my goal, to make healthcare more accessible, faster, and more effective, not replacing doctors and nurses, just giving them more tools to be more productive and make their lives easier," he says.
But Shahamiri echoes Madanian's reassurance that AI won't replace real-life medical staff.
"What we need to emphasise is that AI is just another tool. It's a tool that doctors and nurses could utilise along with all the other tools that they have to provide better care. The AI itself is not an independent entity, it's a software, but in the end of the day it's the medical doctors and the nurses who will take action, who will make decisions, who will provide patient care," he says.
And at the end of the day, Shahamiri's message for the AI sceptics is simple: trust your doctor.
"If your doctor has trust in using a tool, that means that it has vigorously evaluated and been regulated," he says.
"AI is just another tool, a type of software, yes software could be dangerous, could do bad, could do harm, but it could do good as well. It's generally the people using the technology who are behind whether the technology is being used for good or bad."