AI Regulations Australia 2026: Laws, Safety & Compliance


AI regulations in Australia in 2026 are built on a “regulation by evolution” approach: rather than a single AI Act, the government is relying on existing technology‑neutral laws, national AI plans, and detailed guidance and assurance frameworks, especially for the public sector.

For businesses, that means AI is already regulated through consumer, privacy, anti‑discrimination, online‑safety and sector‑specific rules, with AI‑specific standards and policies tightening expectations around governance, transparency, and testing.

Big Picture: How AI Regulation Works in Australia

Australia does not have a single, EU‑style AI Act that regulates all AI systems. Instead, AI is governed through a mix of:

  • Existing technology‑neutral laws (privacy, consumer protection, discrimination, online safety, workplace, financial services, medical devices, etc.).
  • National strategies and guidance, especially the National AI Plan and the Guidance for AI Adoption.
  • Public‑sector rules like the Policy for the Responsible Use of AI in Government and the National Framework for Assurance of Artificial Intelligence in Government.

Reuters’ overview “Australia rolls out AI roadmap, steps back from tougher rules” notes that the National AI Plan confirms the government will build on existing legal frameworks rather than introduce economy‑wide mandatory guardrails, at least for now. The IAPP’s Global AI Governance – Australia piece describes this as a shift from an EU‑style risk‑based regime to a more flexible, standards‑led approach focused on productivity and innovation.

National AI Plan: Strategy and Regulatory Direction

The National AI Plan, released in December 2025, is the federal government’s main roadmap for AI. It sets three overarching goals:

  1. Capture AI’s economic opportunities – building infrastructure, skills and investment.
  2. Spread benefits and build resilience – supporting workers, communities, and SMEs.
  3. Keep Australians safe – using robust legal, regulatory and ethical frameworks.

Bird & Bird’s analysis, “A New Era for AI Governance in Australia: What the National AI Plan Means for Organisations”, stresses that AI is now treated as a “critical national capability” and that regulators will increasingly ask not just whether you use AI, but how it is governed.

The Plan confirms a few key regulatory choices:

  • No immediate economy‑wide AI law; instead, uplift and clarify existing technology‑neutral laws, such as the Privacy Act, consumer law, discrimination law, copyright and national‑security legislation.
  • Release more practical guidance to promote responsible AI practices, including the updated Guidance for AI Adoption and public‑sector AI policies.
  • Establish an AI Safety Institute (AISI), funded with about A$29.9 million to launch in early 2026, to test models, advise regulators, and coordinate responses to major AI incidents.

White & Case’s alert, “Australia’s National AI Plan: big ambitions, but light on details”, describes this as “regulation by evolution”—relying on existing frameworks and targeted tweaks rather than a whole new AI code. The Association of Corporate Counsel’s commentary “Australia has a National AI Plan. Now What?” notes that the government has abandoned plans for mandatory guardrails for ‘high‑risk AI’ and instead adopted a two‑pronged model: uplift existing laws and issue more guidance, backed by the new AI Safety Institute.

Even without an AI‑specific Act, many Australian laws already apply to AI systems:

  • Privacy law – The Privacy Act 1988 governs personal‑information handling, including AI‑driven profiling, automated decisions and data‑sharing.
  • Consumer law – The Australian Consumer Law covers misleading or deceptive conduct, unfair practices and product safety, including AI‑enabled services.
  • Anti‑discrimination law – Federal and state laws prohibit discrimination based on protected attributes; AI‑driven decisions that unfairly disadvantage groups can still lead to liability.
  • Online Safety Act and eSafety – Regulates harmful content and some algorithmically amplified harms.
  • Sector‑specific laws – For example, financial‑services laws (ASIC, APRA) for AI in lending and advice, TGA and health laws for AI‑based medical devices, and workplace‑safety and employment laws for AI at work.

The IAPP’s AI regulation state‑of‑play article for Australia has described this as an “AI‑agnostic but AI‑relevant” framework: AI is regulated indirectly through how its outputs affect people and markets.

Practically, that means: if an AI system mishandles personal data, misleads consumers, embeds unlawful discrimination, or causes harm, existing regulators can already act, even without AI‑specific law.

Guidance for AI Adoption (NAIC) – Soft Law With Hard Expectations

In October 2025, the National AI Centre (NAIC) released Guidance for AI Adoption, replacing the earlier Voluntary AI Safety Standard (VAISS).

The IAPP explains that this Guidance for AI Adoption:

  • Consolidates VAISS's 10 guardrails into six responsible‑AI practices: governance and accountability, impact assessment, risk management, transparency and documentation, testing and monitoring, and human oversight.
  • Provides a practical, nationally consistent blueprint for organisations seeking to govern AI responsibly.
  • Encourages proportionate, risk‑based governance and emphasises transparency and human accountability.

White & Case’s “Australia launches new AI guidance” notes that while this guidance is non‑binding, it will influence how regulators and courts interpret “reasonable steps” when assessing whether AI‑related harms breach existing laws.

For businesses, that means this soft law is likely to become de facto baseline practice: organisations that ignore it may struggle to justify their governance if something goes wrong.

Public‑Sector AI: Policy and Assurance Frameworks

AI regulation is currently strongest and most explicit in government, where dedicated policies and assurance frameworks now apply.

Policy for the Responsible Use of AI in Government

In November 2025, the Digital Transformation Agency updated the Policy for the Responsible Use of AI in Government (effective 15 December 2025).

The DTA’s update article, “AI Policy Update: Strengthening responsible use across government”, explains that agencies must now:

  • Develop an AI adoption strategy and internal governance approach.
  • Embed a minimum set of responsible‑AI practices aligned with national guidance.
  • Conduct AI impact assessments for in‑scope systems before deployment.
  • Maintain an internal register of AI use cases.
  • Ensure transparency, contestability, and accountability, including naming responsible officers.

This policy applies across the Australian Public Service and effectively mandates AI governance structures for federal agencies, making the public sector a leading example of AI assurance and documentation in Australia.

National Framework for Assurance of AI in Government

A joint federal–state National Framework for the Assurance of Artificial Intelligence in Government was agreed by Data and Digital Ministers in June 2024.

The Finance Department describes this framework as establishing the “cornerstones and practices of AI assurance” for government use of AI, including:

  • Principles and methods for testing, validating and monitoring AI systems.
  • Aligning AI use with Australia’s AI Ethics Principles.
  • Guidance on documentation, transparency, risk assessment, and evaluation.

The IAPP notes that this is supported by related instruments such as an AI Technical Standard (design and testing requirements) and AI Data‑Security Guidance dealing with provenance, supply‑chain integrity and attacks like data poisoning.

APS AI Plan 2025

Alongside the National AI Plan, the DTA has published an AI Plan for the Australian Public Service 2025. This plan sets out how the APS will use AI to deliver better services while respecting rights, and reinforces that agencies must comply with the Responsible AI Policy and assurance framework when deploying AI.

Australia’s AI Ethics Principles – The Values Baseline

Australia was one of the first countries to adopt national AI ethics principles.

The Department of Industry’s Australia’s AI Ethics Principles outline eight voluntary principles: human, social and environmental wellbeing; human‑centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.

The IAPP’s governance overview notes that these principles:

  • Have informed subsequent policy, procurement guidance, and research funding.
  • Underpin the Responsible AI Policy for the APS and the AI assurance framework.
  • Align with global norms around fairness, transparency and human rights.

While not enforceable law on their own, they now form part of the “ethical expectations” baseline regulators and the AI Safety Institute are likely to use when assessing whether AI systems are being run responsibly.

AI Safety Institute (AISI) and Future Enforcement

A key piece of the regulatory architecture is the AI Safety Institute (AISI), scheduled to launch in early 2026.

The National AI Plan and IAPP’s Australia AI policy roadmap article explain that the AISI will:

  • Assess upstream risks—capabilities, datasets, system design—for advanced AI models.
  • Analyse downstream harms and support sector regulators (e.g., privacy, financial services, health).
  • Develop reference methods for testing, red‑teaming, and documenting AI systems.
  • Coordinate major incident responses for significant AI‑related events.

Bird & Bird suggests that AISI is likely to become a practical reference for “what good looks like” in AI testing and documentation, influencing both regulators and the courts.

For companies, this means that over time, AISI guidance and reports are likely to shape expectations of reasonable AI governance—even before any AI‑specific legislation is introduced.

What This Means for Businesses Using AI in Australia

From a practical point of view, “AI regulations in Australia” today mean:

  • There is no single AI Act, but AI is regulated through existing laws plus evolving guidance.
  • Regulators and courts will ask whether your AI use complies with privacy, consumer, discrimination, safety and sector‑specific laws, just as if a human made the decision.
  • Soft‑law instruments like the Guidance for AI Adoption and APS policies are creating a de facto standard of care for responsible AI governance.

Key expectations emerging from guidance and policy include:

  • Governance and accountability – clear ownership of AI systems, internal policies, and risk registers.
  • AI impact assessments – early assessment of risks to individuals, communities, and rights before deployment.
  • Transparency and documentation – explaining when and how AI is used, and keeping records of design, data and testing.
  • Testing and monitoring – robust pre‑deployment testing and ongoing monitoring for drift, bias and performance.
  • Human oversight and contestability – humans in the loop for high‑stakes decisions, plus mechanisms for people to challenge AI‑driven outcomes.
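The expectations above can be treated as a pre‑deployment readiness checklist. The sketch below is one illustrative way to do that in code; the category identifiers paraphrase the bullet points and are not an official taxonomy.

```python
# Illustrative readiness checklist over the governance expectations
# listed above. Category names are paraphrased assumptions, not an
# official schema from Australian guidance.
EXPECTATIONS = [
    "governance_and_accountability",
    "impact_assessment",
    "transparency_and_documentation",
    "testing_and_monitoring",
    "human_oversight_and_contestability",
]

def readiness_gaps(evidence: dict[str, bool]) -> list[str]:
    """Return the expectations with no supporting evidence recorded."""
    return [e for e in EXPECTATIONS if not evidence.get(e, False)]

evidence = {
    "governance_and_accountability": True,
    "impact_assessment": True,
    "transparency_and_documentation": True,
    "testing_and_monitoring": False,   # monitoring plan still missing
    "human_oversight_and_contestability": True,
}

print(readiness_gaps(evidence))  # ['testing_and_monitoring']
```

A gap list like this maps directly onto the "reasonable steps" question regulators are likely to ask: for each expectation, can the organisation point to documented evidence?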

The ACC commentary “Australia has a National AI Plan. Now What?” summarises the new reality: businesses are expected to adhere to existing regulatory regimes while also following emerging guidance from the National AI Plan, the AI Safety Institute, and the National AI Centre.