Governing AI Risk: Understanding the Emerging Regulatory Landscape

Cyber Security Strategy

Published by Dimitri Vedeneev, Executive Director, Secure AI Lead, and Henry Ma, Technical Director, Strategy & Consulting, on 5 March 2026

 

With artificial intelligence (AI) technology rapidly advancing and new use cases emerging every day, policymakers around the globe face a complicated dilemma: how to regulate AI optimally. Concerns have seesawed between opening Pandora’s box (under-regulating and unleashing harm) and extinguishing the flickering spark (over-regulating and stifling innovation), resulting in a diverse range of approaches.

On one end of the regulatory spectrum there is a compliance-heavy, prescriptive approach; on the other, a flexible, risk-based approach that emphasises governance frameworks over statutory mandates. Between these poles, some countries – including Australia – are building out pragmatic middle paths. 

The way governments choose to govern AI will influence compliance requirements, impact competitive landscapes, and shape investment decisions.  

 

Two regulatory poles 

These regulatory poles signal distinct views about the role of regulation in managing emerging technology. For multinational organisations, this means designing risk and compliance strategies that are adaptable across jurisdictions with very different expectations for controls and assurance processes.  

 

Regulatory recalibration  

When governments choose the compliance-first route, there is a risk that well-intended safeguards could hamper innovation and competitiveness. In the EU’s case, industry groups, policymakers and business leaders have voiced concern that rigid requirements, particularly for high-risk systems, could ‘throw the baby out with the bathwater’. There is now discussion among EU institutions about simplifying compliance pathways, reducing administrative burden and delaying implementation timeframes.

This recalibration reflects a broader regulatory learning curve. Jurisdictions experimenting with heavy mandates are observing real-world implementation challenges and exploring options to strike a better balance between safety outcomes and market competitiveness.  

 

Approaches in Asia 

Beyond the EU and US, several Asian countries are pursuing more moderate regulatory paths that incentivise investment and innovation, favouring flexible, risk-based oversight over strict, prescriptive mandates.

 

Australia’s model  

Where does Australia’s stance fall within this global divergence?  

In December 2025, the Federal Government released the National AI Plan, signalling a distinct “middle path” for Australian AI governance that sits between the EU’s compliance-heavy model and the US’s risk-framework approach.

Australia had previously explored making its Voluntary AI Safety Standard mandatory for high-risk AI systems. That direction softened over time following a Productivity Commission report, culminating in the December 2025 announcement of the National AI Plan and the establishment of an AI Safety Institute (AISI). Rather than imposing a new, standalone AI law, the National AI Plan emphasises tech-neutral regulation and relies on existing legal and regulatory frameworks to address AI-related harms.

Under this approach, government agencies and sector regulators are tasked with identifying and mitigating AI risk within their policy domains. The AISI will provide independent advice, horizon scanning, and risk insights to government and regulators, helping ensure that AI use remains compliant with Australian laws, standards and public expectations.  

For Australian businesses, this strategy places emphasis on governance, controls, and demonstrable accountability. Organisations are expected to manage AI risk within existing mechanisms, such as privacy law, cyber security obligations, consumer protection rules, and industry-specific standards, reinforced by AISI’s guidance.  

 

Don’t reinvent the wheel  

Despite different regulatory trajectories, AI risk is real and must be managed now.  

In our previous blogs on AI risk management and mitigation strategies, we explored pragmatic ways to manage AI risk by evolving and uplifting existing controls. With countries taking diverse approaches to AI regulation, this incremental approach lets organisations evaluate and tailor risk mitigation strategies that best fit their AI adoption, without making fundamental changes.

Organisations that operate in multiple jurisdictions must reconcile multiple legislative requirements, which CyberCX can help deconflict. It all starts with a Secure AI strategy aligned to business goals, drivers and regulatory requirements.

 

Contributing authors: Umang Barot, Brianna Street, and Katherine Walsh 
