Reality check: Aussie factories’ journey to responsible AI implementation

Artificial intelligence has become more than a buzzword, promising to revolutionise nearly every industry in the world, including manufacturing. In Australia in particular, the hype surrounding AI and automation has reached fever pitch, with many business leaders pouring resources into a race towards adoption and implementation.

The promise extends beyond individual businesses: AI is expected to create up to 200,000 jobs in Australia by 2030 and contribute between $170 billion and $600 billion to the country’s gross domestic product. With forecasts like these, it is little wonder that AI has captured the attention of industry leaders and policymakers.

However, beneath all this enthusiasm lies a stark reality check. The National AI Centre’s (NAIC) Responsible AI Index 2024 has revealed a significant disparity between perception and practice among Australian businesses. According to the report, 78% of businesses in the country believe they are implementing AI safely and responsibly, when in reality only 29% have the practices in place to do so.

This gap between perception and reality is especially significant in the manufacturing sector, where the implementation of AI can have far-reaching consequences for productivity, worker safety and product quality.

Main roadblocks to safe AI implementation in Australia

NAIC’s report on responsible AI adoption surveyed 413 executive decision-makers responsible for AI development in their organisations, spanning financial services, government, health, education, telecommunications, retail, hospitality, utilities and transportation. The study found four significant disparities between how Australian organisations perceive their AI practices and the reality of their implementations.

  1. Transparency and accountability: Only 25% of companies have actively involved their top management in discussions about responsible AI practices. Similarly, 23% have established specific rules to oversee and control their AI systems.
  2. Fairness and bias: Over 69% of organisations said they are confident in their ability to prevent unfair treatment through AI systems, but only 35% have fairness metrics aligned with their desired outcomes.
  3. Safety and security: Approximately 84% of organisations said their AI systems adhere to privacy and security regulations. However, only 37% have actually conducted safety risk assessments.
  4. Explainability and contestability: The gap is particularly wide when it comes to explaining how AI algorithms work. About 76% of companies claim they can explain their AI algorithms’ workings, while only 39% have developed concrete materials explaining their AI systems’ inputs and processes.

These findings underscore the significant work that needs to be done to align perceptions with actual practices in AI implementation.

Transforming vision into reality

On 5 September 2024, the Australian government unveiled a new Voluntary AI Safety Standard. The standard provides practical guidance for businesses deploying AI in high-risk settings, enabling them to proactively adopt best practices in AI usage. By pairing voluntary guidance with proposed mandatory rules for high-risk AI, the government seeks to create a balanced approach that promotes responsible AI adoption while fostering innovation in the Australian business landscape.

For Australian manufacturers, the journey towards safe and effective AI implementation is not just about gaining a technological edge in an ever-evolving global landscape. It is also about fostering a culture of responsibility and transparency as the industry stands at the cusp of an AI-driven revolution. For more information on the current state of responsible AI in Australia, explore the report’s findings at fifthquadrant.com.au.