elevenM Director Melanie Marks shares key tips for AI governance in healthcare from her recent presentation at the Digital Health Festival 2025 in Melbourne.
This week I had the opportunity to speak with healthcare leaders at the Digital Health Festival 2025. One theme kept emerging: how do we prepare for AI without being overwhelmed by the complexity or hype?
The truth is—AI governance in healthcare isn’t about inventing something new. The foundations already exist. If your organisation is serious about adopting AI safely and responsibly, the data governance frameworks you should already have in place give you a head start.

Here are 10 essential data governance actions that align with the RANZCR’s Ethical Principles for AI, the RACGP’s Practice Standards, Australia’s Voluntary AI Safety Standard, and the Australian Privacy Principles (APPs):
1. Data quality and integrity
AI is only as good as the data it learns from. Prioritise accurate, complete, and clinically relevant data, as per RACGP C6.4 and APP 10.
2. Clear data ownership and stewardship
Identify who is accountable for datasets and how they're used, disclosed and secured. This is a cornerstone of data governance and critical for AI auditability.
3. Privacy by Design
Embed privacy principles from the outset: design systems and processes that protect patient privacy, rather than retrofitting them later.
4. Informed consent and transparency
AI systems must respect patient autonomy. Be clear with patients about how their data will be used, particularly if it is used to train models, and always when AI informs clinical decision-making.
5. Bias monitoring and fairness
RANZCR's ethical principles emphasise fairness. Routinely assess datasets and models for representational bias.
6. Security safeguards
Apply robust security controls (APP 11), especially when integrating AI into clinical systems.
7. Model explainability and accountability
Healthcare providers must be able to justify AI-driven decisions. Use explainable models and document reasoning. Remember that you, the clinician, are making the decisions, not the AI.
8. Vendor due diligence
Assess third-party AI tools not just for functionality, but for compliance with AI safety standards and ethical use.
9. Incident response and escalation paths
Have your playbook ready before the game starts. Establish clear processes for when AI fails, malfunctions or causes harm, just as you should already have a data breach response plan.
10. Regular audits and continuous improvement
Watch not only the efficacy of your own systems and processes but also the evolving landscape. Things will look very different one, two and three years from now.
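To make the bias-monitoring point concrete, here is a minimal, purely illustrative sketch of a representational-bias check: it compares the share of each patient group in a dataset against a reference population and flags large gaps. The record fields, reference shares, and tolerance threshold are all hypothetical; in practice your governance team would choose the attributes to audit and the acceptable deviation.

```python
from collections import Counter

# Hypothetical patient records; in a real audit these would come from
# your clinical dataset (field names here are illustrative only).
records = [
    {"sex": "F", "age_band": "65+"},
    {"sex": "M", "age_band": "40-64"},
    {"sex": "M", "age_band": "40-64"},
    {"sex": "M", "age_band": "18-39"},
]

# Reference shares, e.g. from census or service-population data.
reference = {"sex": {"F": 0.51, "M": 0.49}}

def representation_gaps(records, reference, tolerance=0.10):
    """Flag attribute values whose share in the dataset deviates from
    the reference population by more than `tolerance`."""
    gaps = []
    total = len(records)
    for attr, expected in reference.items():
        counts = Counter(r[attr] for r in records)
        for value, expected_share in expected.items():
            actual_share = counts.get(value, 0) / total
            if abs(actual_share - expected_share) > tolerance:
                gaps.append((attr, value, round(actual_share, 2), expected_share))
    return gaps

# Here, female patients make up 25% of the dataset against a 51%
# reference share, so the check flags the gap for review.
print(representation_gaps(records, reference))
```

A check like this is a starting point, not a fairness guarantee: it only surfaces gaps in representation, which then need clinical and ethical judgement to interpret.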
We don’t need to reinvent the wheel. These are familiar practices rooted in privacy, safety, and ethics. If you’re already doing these well, you’re a good way down the path to implementing safe and responsible AI.
Contact us
If you’re interested in learning more about Data Governance or AI Governance, please contact us.