The federal government has been urged to regulate “high-risk” uses of artificial intelligence (AI) as part of a series of recommendations from a Senate inquiry on AI adoption in Australia. The inquiry, conducted by the Select Committee on Adopting Artificial Intelligence (AI), delved into the potential opportunities and impacts of AI technologies on the Australian economy. In its report, the committee recommended a comprehensive legislative approach to address AI’s high-risk applications, particularly in sectors like government services, where automated decision-making (ADM) could have significant implications for citizens’ rights and privacy.
One of the key recommendations was the creation of new, whole-of-economy legislation designed to regulate high-risk uses of AI. This legislation would be supported by a non-exhaustive list of specific AI applications deemed high-risk, including general-purpose AI models such as the large language models (LLMs) that underpin tools like ChatGPT. These models, which are trained on vast amounts of data, could have profound societal implications, making them a priority for regulation. The committee emphasized the importance of ensuring that these AI systems are developed and deployed in ways that are transparent, accountable, and fair.
A particularly significant recommendation from the committee relates to the use of copyrighted content in AI training datasets. The report calls for AI developers to be transparent about the use of copyrighted materials, to ensure that such works are appropriately licensed, and to compensate their creators. This measure aims to protect intellectual property rights and ensure that creators are fairly remunerated for their contributions to AI development. The inquiry recognized the need to strike a balance between fostering innovation in AI and safeguarding the rights of content creators, stressing the importance of ethical practices in AI development.
In line with these recommendations, the committee also urged the government to implement changes recommended in the 2023 review of the Privacy Act. Specifically, the committee emphasized the right of individuals to request meaningful information about how automated decisions are made, particularly where AI is used to influence or determine outcomes in areas such as employment, finance, and government services. This transparency would empower individuals to better understand and challenge decisions made by AI systems, ensuring that their rights are not infringed by opaque or biased algorithms.
Furthermore, the committee highlighted the need for a federal legal framework governing the use of automated decision-making in government services. This framework should be informed by ongoing consultations led by the Attorney-General’s Department and take into account the 38 recommendations made in the previous consultation on ADM. The goal is to create clear guidelines for how AI and automation can be used in public services while safeguarding citizens’ rights and ensuring fairness and accountability.
Finally, the committee stressed the importance of adopting a coordinated, holistic approach to managing the growth of AI infrastructure in Australia. As AI technologies continue to evolve and expand across various sectors, it is essential for the government to provide guidance and regulation that supports innovation while addressing risks and potential harms. The inquiry’s call for a balanced approach reflects the broader global conversation about how to manage the rapid development and deployment of AI technologies in a way that benefits society as a whole.
The inquiry, which began in March 2024, initially set its reporting deadline for September but was extended to November 26 to allow the committee to assess the impact of generative AI on the US federal election held on November 5, 2024. This extension underscores the urgency for governments around the world to consider the broader implications of AI, including its potential to influence elections, manipulate public opinion, and affect democratic processes. The Senate inquiry's findings and recommendations mark an important step toward creating a regulatory framework that addresses the challenges and opportunities posed by AI in Australia.