What shadow AI is, and why heavy industry is particularly exposed
Shadow AI is not a new concept. It follows the same pattern as shadow IT — staff using tools that solve a real problem, without waiting for sign-off from the people responsible for data, contracts, or governance. The difference is scale and speed. In the past two years, AI tools have moved from novelty to daily workflow faster than any previous technology cycle.
Ask your estimating team what they use to draft tender responses. Ask your project managers how they summarise meeting notes. Ask your bid writers where they start when they need a specification outline. In most heavy industry businesses — construction, energy, infrastructure, defence — the honest answer includes ChatGPT, Microsoft Copilot, Google Gemini, or one of dozens of niche scheduling and document tools with AI built in.
This is not a failure of discipline. It is a rational response to genuine productivity pressure. A senior estimator who can produce a first-pass bill of quantities in two hours instead of two days will use the tool that makes that possible. The problem is not the behaviour. The problem is the complete absence of governance around it.
Heavy industry is particularly exposed for a specific structural reason: your data environments are not clean. Construction SMEs operate across an average of 11 separate data environments — project management systems, ERP platforms, document control tools, spreadsheet libraries, email archives, and now cloud-based AI services sitting entirely outside your network perimeter. When a staff member pastes a client specification, a subcontractor schedule, or a tendered rate sheet into an AI tool, that data moves to an overseas server under the terms of a consumer-grade privacy policy, not your contract with the client who owns that information.
Eighty-one per cent of employees globally have shared confidential data with free AI tools. In a heavy industry context, that confidential data is not abstract — it is your client's sovereign project data, your subcontractor's commercial-in-confidence rates, your own proprietary methodology, and potentially personally identifiable information about your workforce. Fifty-four per cent of Australian contractors already flag data security as a major AI concern. What most boards do not yet understand is that concern without governance is not protection.
The three liability gaps
Shadow AI creates three distinct liability exposures. They are not hypothetical. Each one maps to a real and current regulatory or contractual obligation.
1. Data sovereignty breach. Major government-adjacent and defence-related projects in Australia now carry explicit sovereign AI mandates — requirements that project data be processed on Australian-hosted infrastructure. The Digital Transformation Agency's policy framework and the emerging National AI Plan both reflect this direction. If your team is running project documents through ChatGPT — hosted on OpenAI's US infrastructure — or through Microsoft Copilot in a tenancy configuration that does not meet Australian data residency requirements, you may already be in breach of your tender obligations. That is not a privacy compliance footnote. That is a contract performance issue.
2. Contractual non-disclosure. Most project contracts in heavy industry contain data handling clauses, confidentiality provisions, and intellectual property terms that were written before AI tools existed. They were not written to permit third-party processing of client data by overseas AI services. When your staff paste confidential project information into a free AI tool, they are almost certainly breaching the confidentiality provisions of your head contract — and they do not know it, because no one has told them. The liability sits with the company, not the individual.
3. Governance audit failure. The Australian Privacy Act 1988 and the Australian Privacy Principles require that organisations processing personal information through third-party services have appropriate data handling agreements in place. Processing personal information — including employee data, subcontractor details, or any data that touches individuals — through a consumer AI service without a Data Processing Agreement is a compliance exposure. Beyond the Privacy Act, the National AI Plan and the DTA's Policy for the responsible use of AI in government require a documented "human oversight" position for AI tools used in regulated activities. If you cannot produce that documentation, you have an audit failure before the auditor has asked a single question.
The three gaps are not independent. A single staff member pasting a project schedule into ChatGPT can simultaneously breach a sovereign data mandate, a client confidentiality clause, and your Privacy Act obligations. One action. Three exposures.
Why boards are personally exposed — not just the company
Directors need to understand that AI governance is not a technology issue to be delegated to the IT manager. Under the Corporations Act 2001, directors have a duty to exercise reasonable care and diligence. The standard for what counts as "reasonable" has shifted. AI governance is now a known risk category — it has been the subject of ASIC guidance, DTA policy, and extensive industry commentary. A director who cannot demonstrate that they were aware of shadow AI risks, and that they took reasonable steps to address them, stands in a materially weaker position today than two years ago.
The practical exposure is straightforward. If a significant contract is lost, terminated, or disputed — and the cause is traced to a data handling breach created by an undisclosed AI tool — the question will be asked: what did the board know, and when? If the answer is "nothing, because no one told us," that is not a defence. The board's obligation is to create the structures that surface this kind of risk, not to wait for it to be handed to them.
Personal liability for directors in data breach contexts is not hypothetical in Australia. The Office of the Australian Information Commissioner has demonstrated a willingness to name individuals and pursue civil penalties in serious breach cases. Amendments to the Privacy Act passed in 2022 increased maximum penalties significantly, and the 2024 reforms broadened the regulator's enforcement options. Shadow AI, operating entirely outside organisational governance, is exactly the kind of systemic failure that prompts regulators, after an incident, to ask: was this reasonably preventable?
The answer, if you have not yet done a shadow AI audit, is yes.
What a basic shadow AI audit looks like
A shadow AI audit does not require a six-month engagement or an external security firm. It requires two things: the right questions and the willingness to hear honest answers.
The starting point is a structured inventory of AI tool use across the business. This is not a technology audit — it is a human conversation. You are asking team leaders, project managers, estimators, and bid writers what tools they actually use, not what tools they are supposed to use. The gap between those two answers is your risk exposure.
A basic audit covers four areas:
- Tool inventory: What AI tools are in use, by which teams, and for what purposes? Common findings in heavy industry include ChatGPT for estimating and tender writing, Microsoft Copilot for document drafting and email, Midjourney or DALL-E for presentation graphics, Google Gemini for meeting summaries, and various AI-assisted scheduling tools integrated into project management platforms.
- Data exposure mapping: What categories of data are being processed through each tool? Distinguish between low-risk use (drafting generic text, summarising public information) and high-risk use (processing client data, project schedules, personnel records, or commercially sensitive rates).
- Contractual alignment check: Does the use of each identified tool comply with your current head contracts, confidentiality agreements, and data handling obligations? This is a legal review step, not a technology step.
- Governance gap assessment: For each tool in use, do you have a Data Processing Agreement? Does the tool meet your sovereignty obligations? Is there a documented human oversight position? Is the use disclosed to relevant clients?
The output of this audit is an AI register — a live document that records what tools are in use, under what conditions, and what governance controls apply. This is not bureaucracy for its own sake. It is the document that sits between you and a regulator, a client, or a court when the question is asked.
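To make the register concrete, here is a minimal sketch of what a single entry might capture, expressed as a Python data structure. The field names are illustrative assumptions mapped onto the four audit areas above, not a prescribed schema; in practice most businesses will hold the same columns in a shared spreadsheet, and the code is just a compact way of showing the shape.

```python
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    """One row of an AI register, mirroring the four audit areas."""
    # Tool inventory
    tool_name: str                       # e.g. "ChatGPT"
    teams_using: list[str]               # e.g. ["estimating", "bid writing"]
    purposes: list[str]                  # what the tool is actually used for
    # Data exposure mapping
    data_categories: list[str]           # e.g. ["client specs", "tendered rates"]
    # Contractual alignment check
    head_contract_compliant: bool        # outcome of the legal review step
    # Governance gap assessment
    dpa_in_place: bool                   # Data Processing Agreement signed?
    meets_sovereignty_obligations: bool  # e.g. Australian-hosted processing
    human_oversight_documented: bool     # the documented oversight position
    disclosed_to_clients: bool           # have relevant clients been told?
    last_reviewed: str                   # a register is live, or it is useless
```

Whatever format the register takes, the test is the same: every field should be answerable for every tool in use, and a blank field is itself a finding.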
The audit is not about catching staff doing the wrong thing. It is about understanding what the business is actually doing, so you can govern it properly. Most of what you find will be reasonable, useful, and easily formalised. Some of it will need to stop.
Turning the audit into a governance asset
The businesses that manage this well do not ban AI tools. They govern them. The distinction matters: get it right and your governance programme actually reduces risk; get it wrong and it simply drives the behaviour further underground.
Once you have your AI register, you have the foundation for a proportionate governance framework. In practice, this means three things.
Approved tool tiers. Not all AI tools carry the same risk. A tiered approach — green (approved for general use), amber (approved with conditions), red (not approved for use with sensitive data) — gives staff a clear decision framework without requiring them to escalate every use case. This works better than blanket prohibition and is far more defensible in an audit.
Data classification integration. Governance only works if staff understand what data categories apply to the work they are doing. A simple internal classification — for example, public, internal, confidential, and project-restricted — gives people the reference point they need to apply the right tool tier to the right data. Most heavy industry businesses have some version of this already in their contract management processes. The task is to connect it to AI tool use.
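To show how the tiers and the data classification connect, here is a minimal sketch of the decision rule, again in Python. The tier labels come from the examples above; the permitted combinations are illustrative assumptions that each business would set for itself, in line with its contracts.

```python
from enum import Enum

class Tier(Enum):
    GREEN = "approved for general use"
    AMBER = "approved with conditions"
    RED = "not approved for sensitive data"

class DataClass(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    PROJECT_RESTRICTED = "project-restricted"

# Illustrative policy: which tool tiers may touch each data classification.
# Every business will draw these lines differently.
PERMITTED = {
    DataClass.PUBLIC:             {Tier.GREEN, Tier.AMBER, Tier.RED},
    DataClass.INTERNAL:           {Tier.GREEN, Tier.AMBER},
    DataClass.CONFIDENTIAL:       {Tier.GREEN},
    DataClass.PROJECT_RESTRICTED: set(),  # nothing without explicit sign-off
}

def use_is_permitted(tool_tier: Tier, data: DataClass) -> bool:
    """The question every staff member should be able to answer in seconds."""
    return tool_tier in PERMITTED[data]

# A red-tier tool must never see confidential project data:
assert not use_is_permitted(Tier.RED, DataClass.CONFIDENTIAL)
```

The value of forcing the rule into this shape is that it exposes ambiguity early: if you cannot fill in the table, your staff cannot apply the policy.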
Board visibility. The AI register should be a standing agenda item at board level — not monthly, but quarterly, and any time a significant new tool is adopted or a new contract introduces data sovereignty requirements. Directors need to see it, sign off on it, and be able to demonstrate that it is actively maintained. That is what transforms the audit from a compliance exercise into a governance asset.
The businesses that will be best positioned in 24 months are not the ones that moved fastest to adopt AI, and not the ones that held out longest. They are the ones that built the governance infrastructure while the regulatory environment is still being defined. Right now, the DTA's Responsible AI policy and the emerging AI Act-adjacent frameworks give you latitude to shape your own governance approach. That latitude closes as regulation tightens.
Do the audit. Build the register. Put it in front of your board. That is the whole job.
Want to know what AI your business is actually using?
A shadow AI audit is the starting point for every governance engagement James runs. It takes one focused session to map what's in use, where the risk sits, and what needs to be documented.
Talk to James about your shadow AI exposure