Responsible AI in healthcare

In June 2023, the Department of Industry, Science and Resources announced a consultation on the steps Australia can take to mitigate the potential risks of AI.

Addressing the questions posed in the Discussion Paper, Research Australia’s submission endorses a risk-based approach to regulating the development of AI in healthcare and expresses support for the Therapeutic Goods Administration’s existing role in relation to AI in medical devices. It also calls for dedicated research to understand where AI poses risks in healthcare and how those risks can best be mitigated.

Making better use of GP data and enhancing GP decision making

Most GPs now use a practice management system (PMS), software that helps capture and manage patient information. Many of these systems also include AI-enabled Clinical Decision Support (eCDS) software, which supports clinical decision making by suggesting possible diagnoses and treatments.

The Australian Government is investigating options for making greater use of the data held in PMSs, as well as options for overseeing the quality of eCDS. Research Australia’s submission provides examples of the research outcomes that can be achieved with GP data and suggests how research could support the validation of eCDS and build confidence in its effectiveness.

Research Australia’s submission is available here.

Regulating AI and ADM in healthcare and HMR

The scope for the use of Artificial Intelligence (AI) and Automated Decision Making (ADM) is limited only by our imagination. In responding to the Government’s Issues Paper on the regulation of AI and ADM, Research Australia has expressed the view that AI and ADM in healthcare and health and medical research should be subject to regulation that can cover potential future applications and adapt as AI and ADM evolve, without requiring constant revision of the framework.

Research Australia believes the existing regulators and responsible agencies are best placed to regulate the use of AI and ADM in healthcare and in health and medical research and innovation. While a robust national safety framework with common principles is required to guide regulators and promote consistency, existing regulatory bodies should be appropriately resourced so that they have the capacity to regulate effectively and to support the implementation of AI and ADM, now and into the future, within their own areas of responsibility.

Read Research Australia’s submission