
Health AI: An Artificial Future for Natural Care?

Artificial intelligence is all the rage these days, rapidly transforming the landscape for many industries and pushing the boundaries of what we can do with computers and the internet.

Like its applications in finance, customer service and education, AI has the potential to radically transform how we practice medicine and deliver healthcare.

Healthcare systems around the world are feeling the pressure, with ageing populations, the increasing burden of chronic disease and rising healthcare costs all putting a strain on patients, clinicians, governments and regulators, particularly in the wake of the COVID-19 pandemic, which has exposed serious workforce shortages and inequitable barriers to access.

The gold-standard goals for modern healthcare systems are to improve the health of a population, improve the experience of patients and caregivers, and reduce the rising costs associated with care. This is where AI may prove able not only to alleviate the burden on our healthcare systems, but also to help us achieve those goals.

When we think about healthcare, there is a wide variety of conditions, treatments, protocols and diagnostic tests, alongside a plethora of specialty staff and procedures, from reception and logistics all the way to surgeons and theatre nurses. That is to say, healthcare is a collection of many disciplines unified in achieving positive patient outcomes, which means artificial intelligence in healthcare is not just one thing. It is not a single specialised algorithm or software model, but rather a variety of different tools built to augment current healthcare systems.

Currently, AI systems are being adopted by healthcare services to automate high-volume, tedious, time-consuming tasks. This is because AI models are not reasoning engines: they cannot apply common sense, draw on experience, or provide clinical judgement the way a human doctor can. What they can do is extract patterns from large collections of data, and to this end there has been notable progress in using AI for precision diagnostics.

Some of the potential uses of AI in healthcare, which we will likely see over the next decade, include virtual assistants, personalised mental health support, precision imaging, customised healthcare robotics, and AI-driven drug discovery. We don't know whether the adoption of such technologies will be incremental or exponential, but the changes they bring will undoubtedly require existing healthcare organisations to consider how they are going to adapt to this evolving digital landscape.

While all this is exciting, one cannot talk about AI in a contemporary setting without a conversation about everyone's favourite kind of AI: the chatbot. AI chatbots are already widely used by patients trying to identify symptoms and search for treatment recommendations, and we are now seeing AI companies create chatbots specialised for healthcare.

An AI chatbot specialised for healthcare seems like a good idea. It could help with triage and perhaps reduce the pressure on hospital wait times if it can indeed assist in non-urgent medical settings. It does, however, raise some legitimate concerns for users of such platforms. When you log in and share your information, where does that data go? How safe is it really? One AI provider claims its new health chatbot is private and secure, which may make it safe from hackers, but does it protect that data from being sold to advertisers or handed to government bodies? At present there are no formal regulatory processes: commercially available AI programs are not marketed as health services, meaning they sit outside the remit of regulators like the TGA. Specialised health AI software would fall within that remit, but even then, there is not enough data to confirm how safe such programs are.

You may be wondering what difference there is between using an AI platform to get medical advice and doing a standard web search with something like Google. When you search the internet, you may find a number of pages with relevant information, but it is up to you to read through them, judge their legitimacy and derive an answer. AI, on the other hand, synthesises all those webpages and documents into a single, easy-to-understand answer. That seems extremely helpful, but it also exposes an inherent weakness: these services exist in an artificial setting and can answer very specific questions, yet cannot engage any further to actually problem-solve.

While clearly not a replacement for professional care, AI could perhaps offer patients a better understanding of their symptoms and conditions, so that when they do present to their GP, they have more meaningful questions to ask.

In conclusion, the integration of artificial intelligence into healthcare presents a complicated yet promising future (much like any relationship). The challenges, particularly around data privacy, security, and the lack of comprehensive regulation, must be addressed proactively to ensure patient safety and build trust in the system. We need to remember that AI is never going to replace the crucial human elements of compassionate healthcare. Instead, it serves as a powerful resource with the potential to dramatically ease the strain on global healthcare systems and improve patient understanding and access.