In June 2023 the Department of Industry, Science and Resources announced a consultation on steps Australia can take to mitigate the potential risks of AI.
Addressing the questions posed in the Discussion Paper, Research Australia’s submission endorses a risk-based approach to the regulation of the development of AI in healthcare, and expresses support for the existing role of the Therapeutic Goods Administration in respect of AI in medical devices. It also calls for dedicated research to understand where AI poses risks in healthcare and how those risks can best be mitigated.
The scope for the use of Artificial Intelligence (AI) and Automated Decision Making (ADM) is limited only by our imagination. In responding to the Government’s Discussion Paper on the regulation of AI and ADM, Research Australia has expressed the view that AI and ADM in healthcare and in health and medical research need to be subject to regulation that can cover potential future applications and adapt as AI and ADM evolve, without requiring constant revision of the framework.
Research Australia believes the existing regulators and responsible agencies are best placed to regulate the use of AI and ADM in healthcare and in health and medical research and innovation. While a robust national safety framework with common principles is required to guide regulators and promote consistency, existing regulatory bodies should be appropriately resourced so they have the capacity to effectively regulate and support the implementation of AI and ADM, now and into the future, within their own areas of responsibility.
Read Research Australia’s submission