Policy
AI policy milestones across Australian federal and state jurisdictions: implemented frameworks, active consultations, and proposed legislation.
53 tracked policy events: 43 in force (up), 9 in active development (up), 42 federal jurisdiction (stable)
Policy Timeline
2026
Australia and the EU signed an AI cooperation agreement covering AI safety research, regulatory alignment, and talent exchange, aligned with the EU AI Act framework.
Signals alignment with the EU's risk-based regulatory approach. May influence the design of the mandatory guardrails.
2025
SA Government released an AI roadmap focused on health, agtech, and defence industry applications of AI, with Adelaide positioned as a defence AI hub.
Leverages SA's strengths in defence, health, and wine/agtech industries.
Government proposed a mandatory reporting framework requiring organisations using high-risk AI to report material AI incidents to the AAISI within 72 hours.
Would mirror NIS2-style incident reporting in the EU. Industry consultation underway.
DISR released an updated AI Action Plan incorporating lessons from the voluntary safety standard, the mandatory guardrails consultation, and the Senate Select Committee recommendations. Sets out a 3-horizon approach to AI regulation with increasing mandatory requirements for high-risk applications.
Replaces the 2021 Action Plan as the primary strategic document. First Australian AI plan with a regulatory roadmap and defined timelines for mandatory guardrails.
DISR commissioned a comprehensive review of Australia's AI capabilities across industry, research, and government, to inform the next phase of AI strategy.
Will inform AI investment priorities and identify critical capability gaps.
WA Government digital strategy included significant AI components, particularly for resources sector, health, and government services.
Resources-focused AI investment. Partnership with Rio Tinto and BHP on AI R&D.
2024
Queensland Government released an AI strategy focused on economic development, government service improvement, and responsible AI adoption by Queensland businesses.
Committed $20M to AI-related programs. Focus on resources and agtech AI.
Final report of the Senate Select Committee on Adopting AI, with 53 recommendations on AI legislation, governance, sector-specific regulation, workforce transition, and Australia's international AI engagement strategy.
53 recommendations including establishment of an AI Commissioner, mandatory guardrails for high-risk AI, and a National AI Strategy with 10-year targets. Government response pending.
Department of Defence released its AI Strategy, committing to responsible AI adoption across defence, with priorities in autonomous systems, intelligence, and logistics.
Sets framework for billions in defence AI investment over the decade.
DFAT began a review of export controls as they apply to AI technology, particularly dual-use AI systems with military applications, in line with the Wassenaar Arrangement.
Could restrict export of Australian AI defence technology. Aligns with US EAR approach.
APS released mandatory guidance for government agencies procuring AI systems, including risk assessment requirements, vendor due diligence, and transparency obligations.
All federal agencies must comply. Major signal for vendors. IRAP assessment increasingly expected.
Tasmanian Government digital strategy with AI components focused on tourism, agriculture, and marine industries.
Smallest state strategy. Partnership with UTAS AI research group.
DTA published the Responsible AI Framework providing binding obligations for all APS agencies using AI, including mandatory human oversight for high-stakes decisions, explainability requirements, and regular algorithmic impact assessments.
Binding on all APS agencies. Requires agencies to publish an AI register of all AI systems in use. First mandatory AI transparency requirement for the public sector.
Privacy Act reform bill introduced to Parliament including provisions related to automated decision-making, requiring entities to disclose when AI makes decisions affecting individuals.
Major reform to Australia's data privacy law. AI transparency provisions a significant addition.
NT Government digital transformation strategy included first AI chapter, focused on remote service delivery, Indigenous community applications, and health.
Smallest jurisdiction AI strategy. Emphasis on equity and remote access.
Government established the Australian AI Safety Institute (AAISI) to evaluate risks from frontier AI models, conduct research, and engage with international AI safety bodies.
Aligned with UK and US AI Safety Institutes. Signals growing focus on frontier model risks.
ASIC released guidance requiring superannuation funds that use AI in investment decisions to disclose this to members, as part of a broader AI transparency initiative.
Affects hundreds of billions in superannuation assets. First financial sector AI disclosure rule.
ASD published guidelines for secure AI system design, deployment, and operation, including guidance on adversarial attacks, model poisoning, and supply chain risks.
First dedicated AI cybersecurity guidance from ASD. Joint publication with Five Eyes partners.
Victoria released a comprehensive AI strategy focused on economic growth, responsible use in government, and Victoria as an AI talent hub.
Committed $50M+ to AI initiatives in Victorian budget.
The Albanese Government's $22.7 billion Future Made in Australia package, including significant investment in AI capabilities, critical technology manufacturing, clean energy, and sovereign industrial capacity. The National Interest Account and the AI industry fund within FMIA represent the largest single commitment to AI-adjacent sovereign capability in Australian history.
Largest ever Australian government investment in sovereign industrial and technology capability. Includes AI hardware, clean energy tech, and critical minerals processing. Sets framework for AI industry policy through 2030.
ACT Government released AI policy for the territory covering government use of AI, with particular focus on protecting public sector workers and ensuring human oversight.
Model for other jurisdictions. Strong union consultation included.
The Senate Select Committee on Adopting Artificial Intelligence released its interim report with 10 recommendations covering AI governance, workforce impacts, public sector adoption, and the need for a national AI strategy with legislated targets.
First parliamentary committee specifically on AI adoption. Recommended a legislated National AI Strategy, an AI Commissioner, and mandatory AI incident reporting.
Australia, the US, and the UK established a dedicated AI and Autonomy Taskforce under AUKUS Pillar II, with specific workstreams on autonomous undersea vehicles, AI-enabled ISR, autonomous systems for littoral operations, and joint AI safety testing standards.
Operationalises AUKUS AI commitments. Significant for Australian sovereign AI capability in defence — may require Australian-hosted AI infrastructure for classified workloads.
JSA published a major report on AI's impact on the Australian workforce, projecting job displacement and creation, and recommending reskilling investments.
Influenced $200M+ in AI skills training programs. Referenced in budget measures.
eSafety Commissioner received expanded powers to address AI-generated harmful content, including deepfakes, CSAM, and non-consensual intimate imagery.
First targeted AI content regulation in Australia. Builds on Online Safety Act.
National AI Centre launched AI Connect, providing $5M to help small and medium enterprises adopt AI through grants, mentoring, and technical assistance.
Direct industry support. Hundreds of SMEs engaged.
Government released interim response to the Safe and Responsible AI consultation, indicating preference for risk-based approach and enhanced voluntary measures before mandatory regulation.
Confirmed regulatory direction. Voluntary measures to be strengthened, mandatory rules for high-risk AI being developed.
2023
Australia, the UK, and the US agreed to cooperate on AI and autonomy as part of AUKUS Pillar II advanced capabilities, including autonomous systems and AI-enabled defence capabilities.
Major commitment to AI sovereignty through allied cooperation rather than full independence.
DISR consulted on proposed mandatory guardrails for high-risk AI applications, modelled partly on EU AI Act risk categorisation approach.
If implemented, would be Australia's first legally binding AI regulation.
Therapeutic Goods Administration released guidance on Software as a Medical Device (SaMD) incorporating AI/ML, aligned with FDA and international approaches.
Critical for Australian health AI companies like Harrison.ai seeking regulatory approval.
Australia's first national quantum strategy released, with the quantum-AI intersection as a key focus area. Committed $1B+ over 10 years.
Positions the quantum-AI intersection as a strategic priority. PsiQ and other quantum companies among the beneficiaries.
Department of Defence committed $15.5M to AI procurement uplift, including establishing AI evaluation capability and skilling acquisition workforce.
Critical for defence AI supply chain development. Opens pathway for Australian AI vendors.
Australian Government released a voluntary AI safety standard aligned with international frameworks, providing practical guidance for responsible AI use by organisations.
Adopted by hundreds of organisations. Precursor to potential mandatory requirements.
Australian Signals Directorate expanded IRAP framework to explicitly include AI services and cloud-hosted AI tools, requiring assessment for government use.
Significant for cloud AI providers. Creates compliance pathway for AI services in government.
DISR released a progress report on Australia's AI Action Plan, documenting implementation of commitments and consulting on priorities for a refreshed plan to 2030.
Updated implementation status of 2021 AI Action Plan commitments. Identified gaps in compute infrastructure and model development as priority areas.
ACCC investigation into digital platforms expanded to consider AI-driven algorithmic harms, market concentration, and consumer protection issues.
Could result in AI-specific competition and consumer protection rules.
Government released a Critical Technologies Statement listing AI as a critical technology requiring investment, workforce development, and protection of sovereign capability.
AI formally recognised as critical technology alongside quantum, semiconductors, and robotics.
NSW Government released its AI Strategy, committing to responsible AI adoption across NSW Government services with a focus on trust, transparency, and accountability.
First major state AI strategy. Set a template for other states.
2022
Department of Industry released a consultation paper on safe and responsible AI use in Australia, examining whether existing regulations are sufficient or new legislation is needed.
Major step toward potential AI regulation. Received 500+ submissions.
PM&C's National Science and Technology Council identified AI as a critical technology priority, triggering whole-of-government coordination mechanisms.
Elevated AI to cabinet-level attention. Led to coordinated AI investment across agencies.
Whole-of-government strategy for data and digital capability, including AI as a core technology for service delivery improvement.
Embedded AI in whole-of-government digital transformation agenda.
CSIRO launched a major AI research initiative with $50M over 5 years, focusing on AI for agriculture, health, environment, and resources sector.
Significant research investment. Multiple commercial spinouts expected.
DISR released a discussion paper exploring regulatory and non-regulatory options for responsible AI, seeking feedback on risks, opportunities, and governance approaches.
Informed subsequent Safe and Responsible AI consultation and policy development.
The National AI Centre at CSIRO Data61 published its strategic plan, establishing the AI Adopt program for SMEs, the Responsible AI Program, and the AI for Government initiative. The Centre has engaged 1,500+ organisations since establishment.
Operationalised the $33.7M National AI Centre. Key programs include: AI Adopt (SME support), Responsible AI (ethics/safety), Trusted AI Initiative.
2021
PM&C published Australia's first statement on critical technologies, listing 63 critical technologies grouped into 9 fields. AI and machine learning ranked first, creating the policy basis for export controls, investment screening, and R&D prioritisation.
Formal identification of AI as Australia's #1 critical technology. Basis for Foreign Investment Review Board scrutiny of AI company acquisitions and export control reviews.
Security of Critical Infrastructure Act amended to include data storage or processing, communications, and financial market infrastructure as critical assets requiring risk management programs.
AI systems used in critical sectors now subject to mandatory security obligations.
The 2021 Digital Economy Strategy set a target for Australia to be a top 10 digital economy and society by 2030, with AI identified as a key enabling technology. Committed $1.2B over 5 years including the Digital Economy Package.
Set the overarching framework within which AI policy sits. The $1.2B package funded digital skills, cyber security, and AI adoption programs.
$33.7M investment to establish the National AI Centre (NAIC) at CSIRO, focused on building AI capabilities for Australian industry.
Created a central hub for AI adoption support, particularly for SMEs. NAIC now supports hundreds of companies.
Comprehensive AI Action Plan outlining government strategy to position Australia as a global AI leader, with focus on responsible AI, economic opportunities, and international collaboration.
Set the overall strategic direction for federal AI policy through to 2025.
Australian Research Council funded a Centre of Excellence examining automated decision-making in law, social services, and government.
Major research investment. Produced foundational work on AI accountability.
2020
Standards Australia published a roadmap for developing AI standards aligned with ISO/IEC JTC1 SC42, identifying 48 areas where Australian standards are needed including AI governance, risk management, and trustworthiness.
Foundation for Australia's participation in international AI standards. Australia co-leading several ISO/IEC AI standards workstreams.
2019
Eight voluntary AI ethics principles released by the federal government, covering: human, social and environmental wellbeing; human-centred values; fairness; privacy; reliability; transparency; contestability; and accountability.
Australia among first nations with a formal AI ethics framework. Voluntary, not legally binding.
2018
Government published a discussion paper on an ethics framework for AI, seeking public input on responsible AI development and deployment.
Established the foundation for Australia's approach to AI governance.