Is light-touch regulation of AI in Australia feasible?


Existing laws will need to be strengthened to take account of AI, similar to how laws had to take account of e-commerce in the 1990s.

By Professor Duncan Bentley

The influence of Artificial Intelligence (AI) continues to grow. Its presence across a wide range of sectors means that many of us interact with it daily, often without even knowing it.

As AI becomes ever more integrated into the way we live and work, the big question is how this technology should be regulated to protect citizens, particularly as the Government responds to low public trust in AI.

Australia takes the first steps towards regulation

On 17 January, the Government gave its interim response to a discussion paper on Safe and responsible AI. Much commentary has focused on Australia’s light-touch approach to regulation compared with the European Union (EU). The argument is that this will give Australia a competitive edge and not stifle innovation.

Much of this misreads the Government’s response. Existing laws will need to be strengthened to take account of AI, similar to how laws had to take account of e-commerce in the 1990s. However, the Government leaves open how regulatory guardrails will address high-risk AI deployment.

It notes the difficulty Australian businesses face in navigating ‘a plethora of responsible principles, guidelines and frameworks'. Some immediate action will accompany further consultation, expert advisory groups and a balanced, proportionate, transparent and community-first approach.

Australia is a trusted international partner, and this will shape our regulation: it will need to measure up to that of our partners, given how dependent our economy is on global trade and engagement.

This means that as Australia defers decisions on any overarching regulation, it can see how regulation in the US, the EU and elsewhere is working. It is an international test environment on public display. However, Australian businesses may not be able to wait. The EU has spoken.

The EU AI Act - why should we care in Australia?

The EU took the initiative on AI regulation by announcing it would introduce a comprehensive legal framework applicable to Artificial Intelligence – the AI Act.

Although developed by the EU, the AI Act will apply to anyone who places an AI system on the EU market or, more importantly, has an AI system that affects EU citizens.

Given that the EU is Australia’s third-largest trading partner, the EU AI Act will inevitably influence how AI is used in Australian businesses and government agencies.

The rules have been developed to reflect the diverse views of the EU member states and other major players, including the global AI giants.

A risk-based approach to AI

The AI Act takes a risk-based approach, which is also foreshadowed in Australia. This makes it far simpler for the average business or organisation to determine whether and how the Act applies to it. The critical issue is that anyone using or deploying an AI system must consider its risk. Getting it wrong could result in business-ending fines and even criminal penalties.

Most AI systems are minimal risk and the requirements will not apply to them; this is also to be the case in Australia, to ensure we do not stifle innovation. However, many of the rules that apply to higher-risk systems are common sense and will help future-proof systems and the way they are used.

What is deemed high-risk?

Some high-risk systems are banned because they violate basic human rights. The EU has strong human rights laws that are used effectively against cyber-criminals, money launderers and others operating on the fringes of the law. Australia’s rules are spread across different laws but are broadly similar.

AI systems that use social scoring, subliminal techniques to exploit personal vulnerabilities, biometric identification tools, emotion recognition or untargeted scraping of the internet or CCTV footage are prohibited.

Healthcare services, recruitment, financial services and credit assessment are all obvious high-risk areas. So too, for Government, are law enforcement, border control, administration of justice and social services (think Robodebt), and even the evaluation of emergency calls.

When is the change coming?

The regulations will come into force across the EU over the next two years. However, the EU is not waiting for the law to take effect: in the coming months it will launch an AI Pact, inviting AI developers from Europe and around the world to commit to and implement the key obligations of the AI Act. It is also working closely with multinational and regional organisations and its close economic partners across the world to make this a global standard.

AI is likely already transforming your business or organisation. The EU has provided us with a way to do this safely and a much-needed measure of reassurance for citizens, customers, employees and the most vulnerable in society. Australia’s approach, when implemented, will have to be similar.

Professor Duncan Bentley is Vice-Chancellor and President of Federation University Australia

