Governing AI Risk: Understanding the Emerging Regulatory Landscape
Published by Dimitri Vedeneev, Executive Director, Secure AI Lead, and Henry Ma, Technical Director, Strategy & Consulting, on 5 March 2026
With artificial intelligence (AI) technology rapidly advancing and new use cases emerging every day, policymakers around the globe face a complicated dilemma: how best to regulate AI. Concerns have seesawed between opening Pandora’s Box and extinguishing the flickering spark, resulting in a diverse range of approaches.
On one end of the regulatory spectrum there is a compliance-heavy, prescriptive approach; on the other, a flexible, risk-based approach that emphasises governance frameworks over statutory mandates. Between these poles, some countries – including Australia – are building out pragmatic middle paths.
The way governments choose to govern AI will influence compliance requirements, impact competitive landscapes, and shape investment decisions.
Two regulatory poles
- The European Union (EU): Through the EU AI Act, the EU has adopted a risk categorisation model that classifies AI systems into four levels – unacceptable, high, limited, and minimal risk – and attaches compliance obligations accordingly. High-risk systems are subject to stringent requirements spanning data quality, transparency, documentation, and cyber security. The EU’s approach reflects a desire for legal certainty and robust public safeguards, not dissimilar to its approach to privacy with the General Data Protection Regulation (GDPR) about ten years ago.
- The United States (US): In contrast, the US has embraced a more flexible, innovation-first orientation. The NIST AI Risk Management Framework (AI RMF) exemplifies this approach: it is voluntary, principles-based, and focused on empowering organisations to govern, map, measure, and manage AI systems within their operations. Federal enforcement mechanisms are less pronounced, with US strategy emphasising standards development, sectoral guidance, and ecosystem incentives rather than top-down legal mandates.
These regulatory poles signal distinct views about the role of regulation in managing emerging technology. For multinational organisations, this means designing risk and compliance strategies that are adaptable across jurisdictions with very different expectations for controls and assurance processes.
Regulatory recalibration
When governments choose the compliance-first route, there is a risk that well-intended safeguards could hamper innovation and competitiveness. In the EU’s case, industry groups, policymakers and business leaders have voiced concern that rigid requirements, particularly for high-risk systems, could ‘throw the baby out with the bathwater’. There is now discussion among EU institutions about simplifying compliance pathways, reducing administrative burden and delaying implementation timeframes.
This recalibration reflects a broader regulatory learning curve. Jurisdictions experimenting with heavy mandates are observing real-world implementation challenges and exploring options to strike a better balance between safety outcomes and market competitiveness.
Approaches in Asia
Beyond the EU and US, several Asian countries are pursuing more moderate regulatory paths that incentivise investment and innovation, favouring flexible, risk-based governance over strict prescription.
- Singapore: The Singapore government has taken a more US-like approach, leveraging industry frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework to provide guardrails for organisations while incentivising foreign investment in Singapore. Rather than enacting a comprehensive AI-specific law, it has strategically built upon existing regulatory regimes, like the Personal Data Protection Act, and issued guidance notes (e.g. the Companion Guide on Securing AI Systems) to regulate AI.
- South Korea: On the other hand, South Korea has followed the EU’s lead with the promulgation of the AI Basic Act in January, consolidating AI-related provisions from 19 bills into one unified framework. Unlike the EU AI Act, however, it takes an innovation-first approach, promoting AI development through startup support, talent programs and industry clustering, and favouring transparency over prohibition: organisations must disclose where AI is used. It has also set a relatively light maximum fine for non-compliance of 30 million KRW (approx. AUD $29,000).
- Japan: Japan’s AI Promotion Act takes an agile governance approach built on an assumption that AI policy can evolve alongside the technology itself. Rather than setting fixed compliance rules, the Act emphasises ongoing government coordination and best-effort responsibilities for organisations, allowing guidance and oversight to adapt as new risks and use cases emerge. Japan’s approach distinctively frames AI governance as an enabler of national capability and competitiveness, with risk management designed to sit inside that broader promotion agenda, rather than lead it.
Australia’s model
Where does Australia’s stance fall within this global divergence?
In December 2025, the Federal Government released the National AI Plan, signalling a distinct “middle path” for Australian AI governance that sits between the EU’s compliance-heavy model and the US risk-framework approach.
Australia had previously explored making its Voluntary AI Safety Standard mandatory for high-risk AI systems. However, that direction softened over time following a Productivity Commission report, culminating in the December 2025 announcement of the National AI Plan and the establishment of an AI Safety Institute (AISI). Rather than imposing a new, standalone AI law, the National AI Plan emphasises tech-neutral regulation and relies on existing legal and regulatory frameworks to address AI-related harms.
Under this approach, government agencies and sector regulators are tasked with identifying and mitigating AI risk within their policy domains. The AISI will provide independent advice, horizon scanning, and risk insights to government and regulators, helping ensure that AI use remains compliant with Australian laws, standards and public expectations.
For Australian businesses, this strategy places emphasis on governance, controls, and demonstrable accountability. Organisations are expected to manage AI risk within existing mechanisms, such as privacy law, cyber security obligations, consumer protection rules, and industry-specific standards, reinforced by AISI’s guidance.
Don’t reinvent the wheel
Despite different regulatory trajectories, AI risk is real and must be managed now.
In our previous blogs on AI risk management and mitigation strategies, we explored pragmatic ways to manage AI risk by evolving and uplifting existing controls. With countries taking diverse approaches to AI regulation, this incremental approach gives organisations an opportunity to evaluate and tailor risk mitigation strategies that best fit their AI adoption strategy, without making fundamental changes.
Organisations that operate in multiple jurisdictions must reconcile multiple sets of legislative requirements, and CyberCX can assist in deconflicting them. It all starts with a Secure AI strategy aligned to business goals, drivers and regulatory requirements.
Contributing authors: Umang Barot, Brianna Street, and Katherine Walsh