Use Of Artificial Intelligence Policy
1. Purpose
This policy outlines how RediMed integrates and governs the use of Artificial Intelligence (AI) technologies to support clinical care, administration, and patient engagement. It ensures compliance with professional obligations under the Privacy Act 1988, the Australian Privacy Principles, and guidance from AHPRA and the National Boards.
2. Scope
This policy applies to all staff, contractors, and systems using AI tools for:
- Clinical decision support
- Documentation (e.g. AI scribes)
- Patient communication
- Administrative automation
- Data analysis and reporting
3. Principles of Safe and Ethical AI Use
- Clinical Oversight: The use of AI tools will not replace clinical judgment. RediMed practitioners remain responsible for all decisions affecting patient care and the content of any reports generated with the assistance of AI.
- Transparency: Patients are informed when AI tools will be used in their care and are given the opportunity to ask questions or to opt out where feasible.
- Privacy and Security: AI systems in use must comply with the Australian Privacy Principles. Data used by AI is de-identified or securely stored and processed.
- Bias and Fairness: AI tools are evaluated for bias and fairness. Any tool that risks discriminatory outcomes will be reviewed or discontinued.
- Evidence-Based Use: AI tools must be supported by peer-reviewed evidence or regulatory approval. Unvalidated tools are not permitted in RediMed clinical workflows.
4. Implementation and Governance
- Approval Process: All AI tools are reviewed by a RediMed executive and, if clinical in nature, by the Clinical Director before deployment.
- Vendor Due Diligence: AI vendors must demonstrate compliance with Australian healthcare regulations and provide documentation on data handling and model accuracy.
- Training: RediMed staff using AI tools must complete training on responsible use, limitations, and escalation protocols.
- Data Storage: Data handled by AI platforms is stored in Australia and encrypted in transit and at rest.
- ICT Policies: AI tools must be assessed against, and comply with, existing RediMed ICT and information security policies.
5. Monitoring and Review
- Audit Trail: AI-assisted decisions and documentation must be traceable and auditable.
- Incident Reporting: Any adverse event or malfunction involving AI will be reported to the Quality and Compliance Manager and relevant regulatory bodies.
- Policy Review: This policy will be reviewed annually or in response to regulatory changes or new AI deployments.
6. Patient Rights
- Patients may request access to records generated or influenced by AI.
- Patients may decline AI-assisted services where alternatives are available.
For more information on this policy, please reach out to our Quality team via quality@redimed.com.au.