The Albanese Government is considering a ban on high-risk uses of AI.

The government is weighing a potential ban on “high-risk” applications of artificial intelligence (AI) and automated decision-making, following concerns about the detrimental effects of these technologies, including the rise of deepfakes and algorithmic bias.

Industry and Science Minister Ed Husic is set to release a report by the National Science and Technology Council, along with a discussion paper on achieving “safe and responsible” AI. 

The document reportedly highlights the growing adoption of generative AI: systems that produce new content such as text, images, audio, and code.

While educational institutions grapple with AI-assisted student cheating, the industry department's discussion paper warns of a range of potentially harmful applications of AI.

These include creating deepfakes for deceptive purposes, spreading misinformation and disinformation, and even encouraging self-harm.

Algorithmic bias is also highlighted as a significant risk of AI, with the potential to prioritise certain candidates in recruitment or target minority racial groups unfairly.

The paper acknowledges the positive applications of AI in fields like medical image analysis, enhancing building safety, and providing cost savings in legal services. 

However, it does not cover the implications of AI for the labour market, national security, or intellectual property.

The report from the National Science and Technology Council reportedly cautions that the concentration of generative AI resources in a handful of large multinational technology companies, primarily based in the US, poses risks to Australia.

Although Australia has strengths in computer vision and robotics, its capacity in large language models and related areas is relatively weak due to limited access.

The paper will outline various global responses to AI governance, ranging from voluntary approaches in Singapore to more stringent regulations in the EU and Canada. It says there is an emerging international trend toward a risk-based approach to AI governance.

The government aims to implement appropriate safeguards, especially for high-risk AI applications and automated decision-making. 

Stakeholders will be invited to participate in an eight-week consultation, which seeks input on whether certain high-risk AI applications should be banned outright and on the criteria for such bans. At the same time, the paper acknowledges the need to align Australia's governance with that of major trading partners to take advantage of AI-enabled systems globally and foster AI growth domestically.

Minister Husic says he recognises the balancing act of using AI safely and responsibly, stating that building trust and public confidence in these critical technologies is crucial. 

The federal government has allocated $41 million in the budget for the National AI Centre and a Responsible AI Adopt program for small and medium enterprises.

The paper notes that Australia's existing laws, which are “technology neutral”, already regulate AI to some extent, encompassing consumer protection, online safety, privacy, and criminal laws. 

Previous cases, such as the penalties imposed on Trivago for misleading consumers through algorithmic decision-making, highlight the current regulatory framework's relevance.

Concerns regarding AI's potential harm extend beyond Australia. Industry leaders, including CEOs of major AI companies, recently issued a statement through the nonprofit Center for AI Safety, emphasising the need to mitigate AI risks to prevent existential threats. 

While some sceptics argue that fears of AI-induced catastrophe are premature, industry insiders urge attention to nearer-term issues such as algorithmic bias and disinformation.

The departure of Geoffrey Hinton, a prominent figure in AI research, from Google's AI research team further reflects concerns about the pace of deploying AI in the public domain.