AI experts urge creation of National AI Safety Institute

In recent testimony before the Senate Select Committee on Adopting Artificial Intelligence, experts have called for Australia to establish a dedicated AI Safety Institute to address growing concerns about the risks posed by artificial intelligence.

Polling by the Lowy Institute reveals that over half of Australians believe the risks of AI outweigh its benefits. 

This sentiment underscores the urgent need for robust safety measures in the rapidly evolving field of AI, Australians for AI Safety said in a news release.

Greg Sadler, spokesperson for Australians for AI Safety, emphasised the necessity of government action: “The Government will fail to achieve its economic ambitions from AI unless it can satisfy Australians that it’s working to make AI safe.”

The establishment of AI safety institutes is already underway in several leading nations, including the US, UK, Canada, Japan, South Korea, and Singapore.

These institutes are advancing technical efforts to ensure the safety of next-generation AI models. 

Under the Seoul Declaration on AI Safety, which Australia signed on 24 May 2024, signatory countries committed to creating or expanding AI safety institutes.

However, Minister for Industry and Science Ed Husic has yet to outline Australia’s approach to this issue.

Senator David Pocock has expressed concern that while temporary expert advisory bodies have been created, there has been no move towards establishing a permanent AI Safety Institute.

Reflecting on the substantial funding provided by Canada and the UK to their safety institutes, Senator Pocock remarked, “That seems very doable to me.”

Microsoft has warned that Australia risks falling behind its global counterparts if it does not establish its own AI safety institute. 

Lee Hickin, AI Technology and Policy Lead for Microsoft Asia, highlighted the global trend: “What I see developing globally is the establishment of AI Safety Institutes.” 

“The opportunity exists for Australia to also participate in that safety institute network which has a very clear focus of investing in learning, development, and skills.”

Soroush Pour, CEO of Harmony Intelligence, also testified, warning of the risks posed by next-generation AI models.

“The next generation of AI models could pose grave risks to public safety. Australian businesses and researchers have world-leading skills but receive far too little support from the Australian government. If Australia urgently created an AI Safety Institute, it would help create a powerful new export industry and make Australia relevant on the global stage.”

In a joint submission to the Inquiry, more than 40 Australian AI experts supported the call for an AI Safety Institute. 

Their submission from Australians for AI Safety states, “Australia has yet to position itself to learn from and contribute to growing global efforts. To achieve the economic and social benefits that AI promises, we need to be active in global action to ensure the safety of AI systems that approach or surpass human-level capabilities.”

The full letter is available at AustraliansForAISafety.com.au.