If you are a prime contractor, or a significant subcontractor, delivering into defence, energy, critical infrastructure or government-funded projects, the practical question is simple: what will the next tender ask you to prove?
The answer is not one regulation. It is four streams moving at the same time. Some are legal obligations. Some are government policy. Some are assurance frameworks. Some are already being pushed through contract clauses. Together, they create a new baseline for how AI is governed on Australian projects.
The contractor version: you need to know what AI your business and subcontractors are using, what data it touches, who is accountable for it, and whether you can prove the risk has been assessed before the client asks.
## The four streams to put on your register

### Privacy Act automated decision-making
The Privacy and Other Legislation Amendment Act 2024 introduces new transparency obligations for automated decision-making involving personal information. For contractors, this matters where AI is used in fitness-for-work, fatigue monitoring, HR shortlisting, safety risk flagging, worker surveillance or any process that influences a decision about an identifiable person.
What to have ready: a list of AI-assisted decisions using personal information, the data used, the decision affected, the plain-English explanation, and the pathway for human review.
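That "what to have ready" list is easier to keep current as structured records than as a document. A minimal sketch of one register entry, with illustrative field names and an invented example (nothing here comes from the legislation itself):

```python
from dataclasses import dataclass

@dataclass
class AdmRegisterEntry:
    """One AI-assisted decision involving personal information.

    Field names are illustrative only; adapt them to your own
    register template and privacy advice.
    """
    decision: str              # the decision the AI output influences
    ai_tool: str               # system or vendor product used
    personal_info_used: list   # categories of personal information ingested
    plain_english_note: str    # explanation suitable for a privacy disclosure
    human_review_path: str     # how an affected person gets human review

# Hypothetical example entry for a fatigue-monitoring use case.
entry = AdmRegisterEntry(
    decision="Fatigue-based fitness-for-work flag",
    ai_tool="Wearable fatigue-monitoring platform (example)",
    personal_info_used=["sleep data", "shift history"],
    plain_english_note="An automated score flags possible fatigue; "
                       "a supervisor makes the final call.",
    human_review_path="Worker can ask the site HSE manager to review the flag.",
)
```

A flat structure like this exports cleanly to the spreadsheet most tender responses will actually ask for.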
### DTA Policy v2.0 for government AI use
The Australian Government's responsible AI policy applies to Commonwealth agencies, but the practical effect flows into suppliers through procurement and contract management. If you are delivering services to a Commonwealth customer and AI is used in delivery, expect to be asked how the use case is registered, assessed, governed and incident-managed.
What to have ready: an AI use case register, accountable owners, a process for AI impact assessment, and a simple incident pathway that project teams can actually use.
### National AI assurance across government
The National Framework for the Assurance of AI in Government creates a common direction across Commonwealth, state and territory government. For contractors, the important point is convergence. You do not want one AI governance answer for Commonwealth work, another for state infrastructure work, and a third for a government-owned corporation.
What to have ready: one governance framework that can travel across jurisdictions: use case register, risk assessment method, human accountability, records of review, and evidence that the controls are being followed.
### SOCI Act and critical infrastructure risk management
For defined critical infrastructure assets, AI is now part of the risk conversation. If AI is used on or near operational technology, asset monitoring, cyber/information systems, personnel controls or supply chain management, it should be visible in the Critical Infrastructure Risk Management Program.
What to have ready: an AI risk view that maps to the CIRMP: the asset affected, the AI tool or system used, the data touched, the hazard vector, the mitigation, the owner and the review cycle.
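The CIRMP mapping described above is, in practice, a table with one row per AI risk. A hedged sketch of that row as a record (field names and the example values are assumptions, not CIRMP-mandated fields):

```python
from dataclasses import dataclass

@dataclass
class CirmpAiRisk:
    """One AI risk mapped into the CIRMP view. Illustrative fields only."""
    asset: str                # defined critical infrastructure asset affected
    ai_system: str            # AI tool or system in scope
    data_touched: str         # data the system ingests or emits
    hazard_vector: str        # e.g. cyber, personnel, supply chain, physical
    mitigation: str           # control in place
    owner: str                # accountable person
    review_cycle_months: int  # how often the row is re-reviewed

# Hypothetical example row.
row = CirmpAiRisk(
    asset="Asset monitoring network (example)",
    ai_system="Anomaly-detection model on asset telemetry",
    data_touched="OT sensor data",
    hazard_vector="cyber",
    mitigation="Model runs read-only; alerts are reviewed by the control room",
    owner="OT security lead",
    review_cycle_months=6,
)
```

Keeping the owner and review cycle on the same row as the risk makes the "who and when" question answerable without a second document.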
## How this flows down the supply chain
The obligations do not stop at the prime's front gate. In most projects, the client will hold the prime accountable for what happens across the delivery chain. That means subcontractor AI use becomes a prime contractor governance problem.
For primes, the practical control is not a long policy. It is a supply-chain process that captures AI use at onboarding, asks the right data questions, restricts risky tools, and gives subcontractors a simple way to declare changes during the project.
For subcontractors, the message is just as practical. You do not need enterprise theatre. You do need to know which tools your team is using, what project data goes into them, and how you will answer when a head contractor asks for evidence.
## The working timeline
| Stream | Timing | Practical action |
|---|---|---|
| Privacy Act automated decision-making | 10 December 2026 | Document AI-assisted decisions using personal information and update privacy disclosures. |
| DTA Policy v2.0 | Applies to new AI use from 15 December 2025; existing use cases to be brought into line by 30 April 2027 | Maintain use case register, accountable owners, AI impact assessment and incident pathway. |
| National AI assurance framework | Released June 2024, now shaping public-sector assurance expectations | Build one assurance approach that works across Commonwealth, state and territory government work. |
| SOCI Act CIRMP | Already live for responsible entities | Integrate AI risks into critical infrastructure risk management where defined assets are affected. |
## What to do before the next tender asks
- Build the register first: list every AI use case in your business and, for primes, the material use cases in your supply chain.
- Classify the data: personal information, sensitive information, security classified information, operational technology data and project commercial data should not be treated the same way.
- Name the accountable owner: tender evaluators want to see responsibility, not just policy language.
- Write the subcontractor question set: approved tools, data residency, human review, incident notification and change notification.
- Keep the evidence pack short: a register, risk assessment, approval record and tender response language will do more work than a 60-page AI policy nobody uses.
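The subcontractor question set in the checklist above can be enforced as a simple onboarding gate. A minimal sketch (the question keys are assumptions drawn from the checklist, not a standard form):

```python
# Questions every subcontractor answers at onboarding.
# Keys are illustrative; align them to your own flow-down clauses.
REQUIRED_ANSWERS = [
    "approved_tools",         # which AI tools the subcontractor will use
    "data_residency",         # where project data is stored and processed
    "human_review",           # how AI outputs are checked by a person
    "incident_notification",  # how and when incidents reach the prime
    "change_notification",    # how new tools or uses are declared mid-project
]

def onboarding_gaps(declaration: dict) -> list:
    """Return the questions a subcontractor has not yet answered."""
    return [q for q in REQUIRED_ANSWERS if not declaration.get(q)]

# A partial declaration leaves three open questions.
gaps = onboarding_gaps({
    "approved_tools": "Copilot (example)",
    "data_residency": "Australian region only",
})
```

A gate this small is easy to run at onboarding and again whenever a subcontractor declares a change during the project.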
## Reference points to check
- Privacy and Other Legislation Amendment Act 2024
- OAIC APP 1 guidance
- Policy for the responsible use of AI in government
- National framework for the assurance of AI in government
- Security of Critical Infrastructure Act 2018
- Critical Infrastructure Risk Management Program factsheet
- ACSC guidance on AI in operational technology environments
## Need a contractor-ready AI governance pack?
I help Australian primes and subcontractors turn AI obligations into registers, tender language, subcontractor flow-down questions and evidence packs that can be used on real bids.
Talk through your next tender