This blog is contributed by guest author Dr. Malcolm Thatcher.
My previous blog, When Every Second Counts: A Healthcare Perspective, discussed the importance of optimising the clinician digital experience to improve the efficiency and quality of care. Continuing this theme of efficiency and quality in healthcare, I would like to discuss the emergence of Artificial Intelligence (AI). Like many other sectors, healthcare is set to be revolutionised by AI, offering the promise of more efficient care delivery and significantly improved patient outcomes. AI excels at pattern recognition, making it a powerful tool for diagnosing diseases, identifying optimal treatment pathways and personalising care.
Imagine a future where medical imaging is analysed by AI, detecting subtle signs of disease that might be missed by the human eye. This is already becoming a reality. Furthermore, advancements in personalised medicine, fuelled by AI, will enable tailored treatments based on an individual’s unique genetic makeup, medical history and physiological characteristics. More prosaically, AI can also be leveraged in IT Operations to automate and remediate incidents – even before humans are aware of any problem that may affect the delivery of healthcare services.
However, this promising future comes with challenges. The rapid advancement of AI technology often outpaces regulatory frameworks designed to govern its use. Data privacy and the potential for bias in AI algorithms are critical concerns that demand ongoing attention.
The importance of data governance
Data governance plays a pivotal role in ensuring the safe and responsible use of AI in healthcare. Clear guidelines are needed on how AI algorithms utilise patient data while simultaneously safeguarding sensitive information. This presents a delicate balancing act, as access to large volumes of high-quality data is crucial for training effective AI models.
The use of general-purpose Large Language Models (LLMs) in healthcare introduces unique privacy concerns. These models often lack the necessary safeguards to handle sensitive health information. To mitigate these risks, the healthcare sector should consider developing dedicated health-data LLMs. These specialised models would operate on de-identified data and be accessible only to certified healthcare organisations, ensuring both the advancement of AI-driven solutions and the protection of patient privacy.
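To illustrate the kind of pre-processing such a model would depend on, here is a minimal sketch that strips direct identifier fields from a patient record and masks identifiers embedded in free text before anything reaches an LLM. The field names and patterns are illustrative assumptions only; real de-identification must also handle dates, rare conditions and re-identification risk.

```python
import re

# Fields treated as direct identifiers (illustrative assumption, not a standard schema).
DIRECT_IDENTIFIERS = {"name", "medicare_number", "address", "phone", "email", "date_of_birth"}

# Simple patterns for identifiers embedded in free text (illustrative only).
PATTERNS = {
    "PHONE": re.compile(r"\b(?:\+?61|0)4\d{2}\s?\d{3}\s?\d{3}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifier fields removed
    and free-text identifiers masked."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    for key, value in clean.items():
        if isinstance(value, str):
            for label, pattern in PATTERNS.items():
                value = pattern.sub(f"[{label}]", value)
            clean[key] = value
    return clean

record = {
    "name": "Jane Citizen",
    "medicare_number": "2953 11111 1",
    "clinical_notes": "Patient reports chest pain. Contact on 0412 345 678.",
}
print(de_identify(record))
# {'clinical_notes': 'Patient reports chest pain. Contact on [PHONE].'}
```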
Regulatory developments in Australia
The Australian government is actively working to establish a regulatory framework for AI in healthcare. In 2024, the Department of Health and Aged Care initiated the “Safe and Responsible Artificial Intelligence in Health Care Legislation and Regulation Review,” calling for public submissions. However, it will take time before regulatory changes are implemented.
The Australian Health Practitioner Regulation Agency (AHPRA) has published guidelines for health practitioners on the ethical and responsible use of AI, emphasising principles like accountability, transparency and informed consent. Additionally, the Therapeutic Goods Administration (TGA) oversees AI tools classified as medical devices, ensuring they meet safety and efficacy standards. However, many AI applications in healthcare fall outside the TGA’s scope, highlighting the need for a more comprehensive regulatory approach.
Furthermore, the Australian government has introduced a voluntary standard for AI with 10 key guardrails. Nine of these guardrails are expected to become mandatory under proposed legislation for high-risk AI applications, a category that includes many AI applications in healthcare.
Key guardrails for AI in healthcare
Some of the most relevant AI guardrails for healthcare include:
- Protecting AI systems by implementing data governance measures to manage data quality and provenance.
- Testing AI models to evaluate model performance and continuously monitor AI systems post-deployment (a minimal monitoring sketch follows this list).
- Ensuring human oversight to enable meaningful human control or intervention across the AI system’s lifecycle.
- Enhancing transparency to inform end-users about AI-enabled decisions, AI interactions and AI-generated content.
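To make the testing and monitoring guardrail concrete, the sketch below shows one way post-deployment monitoring might work: it tracks accuracy over a rolling window of live, labelled outcomes and flags drift against a validation baseline. The class name, window size, tolerance and alert channel are all assumptions for illustration, not a prescribed implementation.

```python
from collections import deque

class ModelMonitor:
    """Minimal sketch of post-deployment monitoring: compares live
    accuracy over a rolling window against a validation baseline.
    Window size and tolerance are illustrative assumptions."""

    def __init__(self, baseline_accuracy: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.results = deque(maxlen=window)   # rolling window of hit/miss outcomes
        self.tolerance = tolerance

    def record(self, prediction, outcome) -> None:
        # Store whether the model's prediction matched the observed outcome.
        self.results.append(prediction == outcome)

    def check(self) -> bool:
        """Return True (and alert) if accuracy has drifted below tolerance."""
        if len(self.results) < self.results.maxlen:
            return False  # not enough post-deployment data yet
        live_accuracy = sum(self.results) / len(self.results)
        if self.baseline - live_accuracy > self.tolerance:
            # In production this would notify a clinical safety team via an
            # alerting channel (assumed); print stands in for that hook.
            print(f"ALERT: live accuracy {live_accuracy:.2%} vs baseline {self.baseline:.2%}")
            return True
        return False
```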
Another critical guardrail addresses the AI technology supply chain: organisations must be transparent about data, models and systems across the AI/technology supply chain. Doing so effectively requires real-time observability of technology systems. This observability supports:
- Risk mitigation: Identifying and addressing risks associated with data sharing and model development.
- Trust and collaboration: Fostering transparency to build trust and facilitate cooperation among AI stakeholders.
- Regulatory compliance: Ensuring adherence to relevant regulations and ethical guidelines.
This real-time observability of AI should be embedded in technology operations so the organisation can monitor AI use of corporate data. For example, an alert could be configured to fire whenever an LLM is called with certain datasets, as sketched below.
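As one sketch of how such an alert might be wired up, the snippet below wraps each LLM invocation in an audit function that inspects the dataset tags attached to the request and raises an alert when a sensitive tag is present. The tag names, the llm_client placeholder and the logging hook are assumptions for illustration; in practice the alert would route to the organisation's observability or SIEM platform.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_observability")

# Dataset tags that should trigger a real-time alert (illustrative assumption).
SENSITIVE_TAGS = {"patient_identifiable", "genomic", "mental_health"}

def audited_llm_call(prompt: str, dataset_tags: set[str], llm_client) -> str:
    """Wrap an LLM invocation so that use of sensitive corporate datasets
    is logged and alerted on in real time."""
    flagged = dataset_tags & SENSITIVE_TAGS
    if flagged:
        # Assumed hook: in production this would route to the SIEM /
        # observability platform rather than the application log.
        log.warning("LLM called with sensitive datasets: %s", sorted(flagged))
    log.info("LLM call audited; tags=%s", sorted(dataset_tags))
    return llm_client(prompt)  # llm_client is a placeholder callable

# Example usage with a stub client (assumption):
response = audited_llm_call(
    "Summarise this week's discharge summaries",
    dataset_tags={"patient_identifiable", "ward_metrics"},
    llm_client=lambda p: "summary...",
)
```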
Proactive management of AI is critical to maintaining stakeholder trust. It requires organisations to think carefully about AI and data governance, including principles, policies and accountabilities.
Conclusion
AI holds immense potential to transform healthcare, but its successful implementation requires a multi-faceted approach. By prioritising data privacy, ensuring human oversight, and fostering a transparent and collaborative ecosystem, healthcare can harness the power of AI to improve patient care while safeguarding patient safety and trust.
Dr. Thatcher is CEO and Founder of Strategance Group, a firm specialising in digital strategy, risk and governance services to assist organisations with their digital investments. Dr. Thatcher is a published author and has held senior executive roles in large public and private sector organisations. Notable roles included Chief Technology Officer of the Australian Digital Health Agency; Chief Health Information Officer for Queensland Health; CEO of eHealth Queensland; and Chief Information Officer and Executive Director Facilities for the Mater Health Group. Dr. Thatcher was also formerly a Professor of Digital Practice in the QUT Graduate School of Business.