This is already your problem — why AI is a board issue, not a technology issue

Let's set aside the marketing language for a moment. When boards talk about AI, they tend to frame it as an operational decision — something management handles. A new software system. A pilot programme. An efficiency initiative to be reported on quarterly. That framing is wrong, and acting on it is becoming a governance liability.

AI is not a technology decision. It is a decision that creates legal exposure across at least 13 distinct regulatory domains, several of which attach personal liability to you as a director. The Corporations Act doesn't ask whether your CTO understood the technology. It asks whether you took reasonable steps to oversee the risks your business was running.

This matters now because Australian heavy industry businesses are deploying AI at pace. Predictive maintenance systems. Workforce scheduling tools. Automated procurement scoring. AI-assisted document generation in tender responses. Each deployment creates exposure — to workers, to privacy obligations, to critical infrastructure rules, to contract law — and that exposure sits, ultimately, at the board table.

The question is not whether these obligations exist. They do. The question is whether your board can demonstrate a defensible position when a regulator, an insurer, or a counterparty asks it to.

51%: companies now assessing AI at board level
$600,000: maximum personal liability per officer under the Corporations and WHS Acts
10 December 2026: National AI Plan grace period ends

The 13 obligation domains that apply right now

The regulatory framework for AI governance in Australian heavy industry is not a single AI Act — it is an intersection of existing legislation that already applies to your operations. AI has not created new laws; it has created new ways to breach the ones already on the books.

The 13 legal domains that generate director AI liability in heavy industry are:

  1. Corporations Act 2001 — directors' duties of care, diligence, and oversight of material business risks
  2. Work Health and Safety Act 2011 — officers' positive duty to exercise due diligence over technology that affects worker safety
  3. Privacy Act 1988 — data handling obligations triggered wherever AI processes personal information
  4. Security of Critical Infrastructure Act 2018 (SOCI) — mandatory board-level oversight for AI systems touching critical infrastructure assets
  5. ASX Corporate Governance Principles — for listed entities, disclosure obligations around material technology and AI risks
  6. Competition and Consumer Act 2010 — prohibitions on misleading conduct that can be triggered by AI-generated outputs in bids, marketing, or advice
  7. Environmental Protection legislation — liability where AI monitoring or reporting systems produce inaccurate compliance data
  8. Fair Work Act 2009 — obligations around AI use in workforce management, performance monitoring, and automated decision-making affecting employees
  9. Anti-Money Laundering and Counter-Terrorism Financing Act 2006 — where AI performs customer screening or transaction monitoring functions
  10. Defence Industry Security Program (DISP) — AI governance requirements for businesses holding defence security obligations
  11. Modern Slavery Act 2018 — expanded scope to cover AI use in supply chain and workforce management (discussed below)
  12. Contract Law — liability for representations made using AI-generated content in tenders, proposals, and contracts
  13. Emerging AI-specific regulation — the National AI Plan governance framework, with its grace period ending 10 December 2026, and anticipated mandatory guardrails for high-risk AI use cases

Each of these is a live obligation, not a forward-looking risk. Most boards have governance frameworks that address several of these domains individually. What they typically lack is a view of how AI creates exposure across all of them simultaneously — and who is accountable for that cross-domain risk.

AI has not created new laws. It has created new ways to breach the ones that already apply to your business — and several of them attach personal liability to directors, not just the company.
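
To make the cross-domain point concrete, the sketch below models how a single deployment can engage several of the thirteen domains at once. This is a minimal illustration, not a scoping methodology: the deployment attributes, the engaged_domains function, and the trigger logic are all hypothetical assumptions, and real scoping is a legal judgment rather than a boolean check.

    # A minimal sketch of cross-domain exposure mapping. The deployment
    # attributes and domain triggers below are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class AIDeployment:
        name: str
        processes_personal_info: bool = False
        affects_worker_safety: bool = False
        touches_critical_infrastructure: bool = False
        used_in_workforce_management: bool = False
        generates_external_content: bool = False  # tenders, marketing, advice

    def engaged_domains(d: AIDeployment) -> list[str]:
        """Return the legal domains a deployment plausibly engages."""
        domains = ["Corporations Act 2001 (directors' duties)"]  # always in scope
        if d.processes_personal_info:
            domains.append("Privacy Act 1988")
        if d.affects_worker_safety:
            domains.append("Work Health and Safety Act 2011")
        if d.touches_critical_infrastructure:
            domains.append("Security of Critical Infrastructure Act 2018")
        if d.used_in_workforce_management:
            domains += ["Fair Work Act 2009", "Modern Slavery Act 2018"]
        if d.generates_external_content:
            domains += ["Competition and Consumer Act 2010", "Contract law"]
        return domains

    # A single workforce scheduling tool engages five domains at once.
    scheduler = AIDeployment(
        name="Workforce scheduling tool",
        processes_personal_info=True,
        affects_worker_safety=True,
        used_in_workforce_management=True,
    )
    print(engaged_domains(scheduler))

Even this toy mapping makes the governance point: the exposure is a property of the deployment as a whole, and no single functional owner sees all of it.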

What ASIC's "reasonable steps" test means for AI

ASIC has previously pursued directors under section 180 of the Corporations Act — the duty of care and diligence — for failures to understand and oversee technology risks within their businesses. The precedent is established. Technology governance is not a safe harbour for boards who delegate it entirely to management.

The "reasonable steps" test asks a straightforward question: could a director in your position, with access to information that was or should have been available to you, have understood the material risks created by this technology and taken steps to address them?

This does not require you to be a data scientist. It requires you to be able to demonstrate that your board knew what AI systems the business was running, understood the material risks they created, assigned accountability for managing them, and documented its consideration and any action it directed.

Delegating to management is not a defence. ASIC's position, consistent with its broader approach to governance failures, is that directors are responsible for ensuring adequate systems exist — and for understanding enough about those systems to know whether they are adequate. Personal liability of up to $600,000 per officer applies under both the Corporations Act and the WHS Act for governance failures that meet this threshold.

In practice, most boards in heavy industry have no documented evidence that they have ever considered their AI governance position. That is an exposure, and it is one that is straightforward to address.

The SOCI Act and Critical Infrastructure: the highest-risk boards

If your organisation operates, or provides services to, assets designated as critical infrastructure under the Security of Critical Infrastructure Act 2018, your AI governance obligations are materially elevated — and the board's role is explicit, not implied.

The SOCI Act requires that Critical Infrastructure Risk Management Programs (CIRMPs) address all material risks to the asset, including technology risks. The Australian Signals Directorate has made clear that AI systems that affect the operation, management, or security of a critical infrastructure asset are within scope. If your organisation uses AI for predictive maintenance, operational scheduling, access control, or any function that touches the asset's availability or integrity, it belongs in your CIRMP.

Board-level oversight of the CIRMP is mandatory — not a best practice. The responsible entity must have its board, or equivalent governing body, take ownership of the program. This means your board cannot delegate SOCI compliance to an operations manager and consider the matter closed.

The sectors with the highest concentration of SOCI-designated assets in the heavy industry space include energy generation and distribution, water and wastewater, port and maritime infrastructure, and defence industry facilities. If you are a prime contractor or asset owner in any of these sectors and AI is touching your operational environment, your CIRMP needs to reflect that — and your board needs to be able to demonstrate it does.

For SOCI-obligated entities: board-level oversight of the CIRMP is mandatory by statute. AI systems touching your critical infrastructure asset belong in that program — full stop.

Modern Slavery and AI — the obligation that's catching boards off guard

The intersection of AI and modern slavery obligations is the area where we are seeing the most significant disconnect between board awareness and actual legal exposure.

Under the Modern Slavery Act 2018, entities with annual consolidated revenue above $100 million must report on the risks of modern slavery in their operations and supply chains — and, critically, on the actions taken to address those risks. Most boards are aware of this. What many have not yet connected is the obligation's reach into AI systems used in workforce management.

NSW Procurement Policy Direction PBD-2025-05, mandatory for construction and infrastructure tenders from 1 July 2026, requires modern slavery due diligence clauses in contracts. Those clauses now expressly encompass AI use in workforce management — specifically, AI tools used for labour sourcing, contractor vetting, performance monitoring, and scheduling where those tools may obscure visibility into the human rights conditions of the people performing the work.

In plain terms: if your business or any contractor in your supply chain uses AI to manage labour — and that AI reduces your ability to see where your workers are coming from, how they were recruited, and under what conditions they are working — you have a modern slavery exposure. And if you are bidding for NSW government construction work from 1 July 2026, your contract will require you to warrant that you have addressed it.

This is not a compliance exercise most boards have contemplated. The combination of the Modern Slavery Act's reporting obligations, the NSW PBD-2025-05 contract requirements, and the Fair Work Act's provisions around AI in workforce management creates a layered obligation that sits directly at the board level.

What a defensible board position looks like

Defensibility is a practical concept. It does not require your board to have solved every AI governance question. It requires you to be able to show — to a regulator, an insurer, a counterparty, or a court — that your board applied its mind to the question in a structured and documented way.

A defensible board position has four components:

1. A governance register. A documented inventory of every AI system in use or under evaluation, the legal domains it engages, the risks it creates, and the controls in place. This is the foundation. Without it, you cannot demonstrate that your board understands what it is overseeing. (A minimal sketch of a register entry follows this list.)

2. Assigned accountability. A named executive or committee with specific responsibility for AI governance, reporting to the board on a defined cycle. The board should receive regular, structured briefings — not ad hoc updates when something goes wrong.

3. Board-level literacy. Directors do not need to understand how a large language model works. They do need to understand what it is being used for, what its failure modes are, and whether the controls in place are proportionate to the risk. A single board briefing session — properly structured — is usually sufficient to establish this baseline.

4. Documented board consideration. Board minutes, committee reports, and resolution records that demonstrate the board considered its AI governance obligations, received management assurances, and directed action where gaps were identified. This is the audit trail that matters if a governance failure is ever examined.
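
As a concrete illustration of the first two components, a register entry can be a simple structured record per system. The field names and the example record below are hypothetical, a minimal sketch of what such a register might capture rather than a prescribed format.

    # A minimal sketch of one governance register entry. Field names and the
    # example record are illustrative assumptions, not a prescribed format.
    from dataclasses import dataclass

    @dataclass
    class RegisterEntry:
        system: str                # the AI system in use or under evaluation
        business_use: str          # what the business uses it for
        legal_domains: list[str]   # which of the 13 domains it engages
        material_risks: list[str]  # the risks it creates
        controls: list[str]        # controls in place, proportionate to risk
        accountable_owner: str     # named executive or committee
        board_briefing_cycle: str  # defined reporting cadence to the board
        board_minute_ref: str      # audit trail of board consideration

    entry = RegisterEntry(
        system="Predictive maintenance model, crusher fleet",
        business_use="Schedules maintenance interventions from sensor data",
        legal_domains=["WHS Act 2011", "SOCI Act 2018", "Corporations Act 2001"],
        material_risks=["Missed failure prediction affecting worker safety",
                        "Inaccurate availability data for a critical asset"],
        controls=["Human sign-off on all maintenance deferrals",
                  "Quarterly model accuracy review"],
        accountable_owner="Chief Risk Officer",
        board_briefing_cycle="Quarterly",
        board_minute_ref="Board minute 2025-03, item 7",
    )

Kept current and reviewed on the defined cycle, a register in this shape also supplies the evidence trail that the third and fourth components depend on.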

The National AI Plan grace period ends 10 December 2026. That date is less a compliance cliff than a planning horizon: governance frameworks do not spring into existence the day before a deadline. Building a defensible position takes time, and the window to do it in an orderly way is now, not in the fourth quarter.

The three questions your chair should be asking this quarter

Board governance is ultimately the chair's responsibility to drive. These are the three questions that, in our experience, most quickly surface whether a board has a defensible AI governance position — or whether it has meaningful work to do.

Question one: Can management provide the board with a complete inventory of AI systems currently in use across the business — including those deployed at the project or site level — along with the legal frameworks each one engages?

Most management teams cannot answer this question fully on first asking. That is not a criticism — AI deployment in large contractors often happens faster than governance frameworks can track. But if your management team cannot produce this inventory, you do not have the foundation for a defensible governance position.

Question two: Who in this organisation is accountable for AI governance, and when did the board last receive a structured briefing on our AI risk position?

If the answer is "the CTO" or "IT," the accountability model is likely too narrow. AI governance in heavy industry touches WHS, procurement, legal, HR, and operations. It requires cross-functional ownership with direct board visibility — not a single technology lane.

Question three: If ASIC asked us tomorrow to demonstrate that this board has taken reasonable steps to understand and oversee the technology risks our business is running, what evidence could we provide?

This is the question that matters. Not whether you have a policy document somewhere, but whether you have a documented record of board-level consideration, informed oversight, and directed action. If the honest answer is "we would struggle," that is the starting point for the work that needs to happen now.

The obligation is personal. The liability is real. And the pathway to a defensible position is well-defined for boards willing to act on it.

Is your board's AI governance position defensible?

James works directly with boards and leadership teams to map director obligations across all 13 legal domains, identify gaps, and build a governance register that holds up under ASIC scrutiny. Fixed scope, board-ready output.

Review Your Board's Obligations