Artificial intelligence (AI) is advancing rapidly. However, Australia does not have an AI-specific regulatory framework to govern its use. Whilst existing laws apply to AI, expanding risks may trigger new safeguards in 2024.
Using AI in a trustworthy manner requires knowledge of the current regulatory landscape and foresight of possible changes. If you are building a framework for your organisation, it’s vital to be one step ahead. Here’s what lawyers should know…
Does the government regulate AI?
Yes, AI is regulated in Australia, but not with AI-specific legislation. Instead, it is governed by existing legislation (e.g. consumer, data protection, competition, copyright and anti-discrimination law).
The government encourages the use of voluntary ethical frameworks, such as Australia’s AI Ethics Framework, which guides the responsible design, development and implementation of AI.
However, momentum is building for Australia to enhance existing regulation. The government is seeking to implement backstops against AI’s potential risks. This may include introducing targeted regulation and governance.
Is AI regulation on the 2024 roadmap?
Potentially. Here are four signs that point to forthcoming AI regulation:
- In September 2023, the federal government agreed to introduce amendments to the Privacy Act as part of its ‘broader work to regulate AI’. The proposed amendments would give individuals the right to greater transparency over how their personal information may be used.
- In November 2023, the National AI Centre launched AI Month. It brings government departments, community groups, and organisations together to share knowledge.
Speaking about the initiative, the Minister for Industry and Science, Ed Husic, stressed the need to safeguard emerging technology.
“AI presents a huge opportunity to boost productivity, but we must help make sure the design, development and deployment of AI technologies is safe and responsible,” he said.
- The government is under pressure to prevent AI from causing harm. In November 2023, business groups (including BCA, Ai Group, ACCI, COSBOA, ACS and the TCA) united ‘to offer the Government the opportunity to work hand-in-hand with those at the forefront of the AI revolution, to effectively manage its benefits and risks.’
- In June 2023, the Department of Industry, Science and Resources released a discussion paper on ‘Safe and responsible AI in Australia’. It explored potential regulatory approaches and invited industry feedback on the adequacy of Australia’s current regulatory frameworks.
How is AI regulated?
Proposed legislation and initiatives are paving the way for the responsible use of AI. These seek to introduce guardrails for AI-powered automation and the use of large data sets.
New developments suggest that governments and organisations will regulate AI in different ways. These include upcoming targeted and amending legislation, governance frameworks, international and national standards, and the implementation of policy within organisations for the use and development of AI.
How is AI governance evolving?
- International legislation: The proposed EU AI Act (provisional agreement on its text was reached on 8 December 2023) is the world’s first comprehensive AI law. It will employ a sliding-scale framework that classifies AI applications according to the level of risk they pose to users.
- Global treaties, including ‘The Bletchley Declaration’: In November 2023, the Australian Government, alongside the EU and 27 countries including the UK, U.S., and China, signed the Bletchley Declaration, an international commitment to ensuring that AI is ‘designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy and responsible’.
- Adoption of ethical frameworks: The OECD’s AI Principles provide guidance to governments globally, and Australia’s AI Ethics Framework is aligned with them. Many organisations, from Microsoft to Thomson Reuters, also have their own AI principles.
Ethical AI is on the rise. But what is it?
Ethical AI is a growing field. It seeks to ensure that AI is being developed, implemented and used responsibly. In 2021, UNESCO released its Recommendation on the Ethics of Artificial Intelligence (since adopted by 193 member states).
The Recommendation aims to protect human rights and dignity. It is ‘based on the advancement of fundamental principles such as transparency and fairness, always remembering the importance of human oversight of AI systems.’
The Recommendation includes guidance for applying the ethical recommendations to ‘policy action areas’ such as data governance, social wellbeing, and the environment.
Much of the focus of ethical AI is on generative AI (made famous by ChatGPT’s capabilities). Generative AI is a powerful tool that uses large language models, trained on vast amounts of data, to generate almost instantaneous responses to queries.
AI is liberating time for professionals so they can focus on value-adding activities. But alongside the benefits, core governance risks include privacy, data security, copyright infringement, bias and hallucinations.
The question of ethics extends to ensuring that AI remains a tool that assists humans – not the other way around.
Thomson Reuters’ 2023 Tech & the Law Report showed that lawyers have mixed feelings about AI. An overwhelming 69% of private practice professionals believed generative AI would boost workflow efficiency. Meanwhile, 51% predicted it would threaten jobs in certain areas of the law.
“AI has the growing capacity to augment human intelligence, not replace it,” said Carl Olson, vice president of proposition at Thomson Reuters. “It can help lawyers work more efficiently, and alongside human expertise, AI has the capacity to deliver unprecedented accuracy.
“As the technology continues to evolve, law firms and legal departments must harness AI’s potential whilst educating their clients about the risks involved with navigating large language models.”
What is the Australian government’s policy on AI?
The Australian Government supports AI adoption. It is committed to ‘implementing Responsible AI in industry, the military and the public sector.’
In the 2023/2024 budget, the government pledged $41.2 million to ‘support the responsible deployment of AI’. The pledge includes strengthening the Responsible AI Network and launching the Responsible AI Adopt Program to help SMEs adopt AI.
In November 2023, the government announced a collaboration with Microsoft, forming part of the tech giant’s plan to invest $5 billion in Australia. The six-month trial of Microsoft Copilot in the public service will make Australia among the first governments to use the tool.
What are Australia’s AI ethics principles?
Australia released a set of eight AI ethics principles (modelled on the OECD’s principles) in 2019. The voluntary framework promotes using AI in a trustworthy and inclusive manner. It also seeks to mitigate risks for all.
The Australian Government’s Department of Industry, Science and Resources states that the principles are intended to:
- achieve safer, more reliable, and fairer outcomes for all Australians
- reduce the risk of negative impact on those affected by AI applications
- encourage businesses and governments to practise the highest ethical standards when designing, developing and implementing AI.
The Australian Government’s Artificial Intelligence Taskforce
In July 2023, the Australian Government launched the Artificial Intelligence (AI) in Government Taskforce, which focuses on the safe and responsible use of AI by the Australian Public Service (APS). The taskforce is jointly led by the Digital Transformation Agency (DTA) and the Department of Industry, Science and Resources.
Its guidance is intended to be iterative, keeping pace with the ever-changing AI landscape. The Taskforce recently updated its interim guidance on agency use of generative AI, which was first released in July 2023. This first update focuses on ‘refreshed human-oriented principles’ and ‘golden rules’ that the public sector can employ.
The Government has also explored the risks associated with using AI in the APS. In late October 2023, it released a Long-term Insights Briefing report that includes a framework for the trustworthy use of AI in public service delivery.
What are the emerging harms of AI in Australia?
The Australian Human Rights Commission is particularly concerned about four ‘emerging harms’ of AI: privacy, algorithmic discrimination, automation bias, and misinformation and disinformation.
The body is calling on the federal government to identify and address gaps in Australia’s existing legislation to specifically safeguard against the emerging harms and novel risks arising from AI usage.
The body also supports the appointment of an AI Commissioner. As an independent statutory office, it would provide guidance to the government and the private sector on compliance with the existing regulatory framework as it applies to AI.
How can organisations prepare for AI regulation in Australia?
The Australian Government continues to support the safe and responsible deployment and adoption of AI across the government and private sector. However, Australia’s overall national AI strategy remains unclear. This includes the extent of the government’s proposed regulatory response to mitigate AI’s potential harms.
The government is encouraging organisations to follow Australia’s voluntary AI Ethics Framework, but states that the framework is intended to complement ‘existing AI regulations and practices.’
Organisations using or adopting AI should do so with AI governance frameworks in place. A responsible approach will reinforce trust among clients and professionals alike. Importantly, it will prepare them for the future of AI regulation in Australia.