AI is now showing up on both sides of the OT cybersecurity conversation. Defenders are using ML for ICS anomaly detection, asset discovery, and detection-engineering automation. Adversaries are using AI to accelerate target reconnaissance, exploit-path discovery, and OT-specific social engineering. Both directions are real; both are early.

What AI is genuinely doing for the defender

Three categories where AI is operationally useful in OT today:

ML-based anomaly detection inside ICS networks. Passive monitoring tools have used statistical baselines for a decade. The newer ML approaches handle more variables, hold longer baselines, and surface anomalies with fewer false positives. The category has moved from research to baseline expectation.
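A minimal sketch of the statistical-baseline idea these tools build on: a rolling baseline over one process variable, flagging z-score outliers. This is illustrative only (real ICS detectors model many correlated variables and longer baselines), and every name and threshold here is a hypothetical choice, not any vendor's implementation.

```python
from collections import deque
import math

class SensorBaseline:
    """Rolling baseline over a single process variable.

    Illustrative sketch of statistical baselining -- production ICS
    anomaly detectors correlate many tags, not one in isolation.
    """

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.window = deque(maxlen=window)  # recent readings
        self.threshold = threshold          # z-score cutoff

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous vs. the current baseline."""
        anomalous = False
        if len(self.window) >= 30:  # require some history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9  # guard flat signals
            anomalous = abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomalous
```

The ML-based products differ from this sketch mainly in scale: more variables, multivariate baselines, and learned seasonality, which is what drives the lower false-positive rates.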

Asset discovery and inventory. OT inventory is famously hard. Modern AI-augmented tools can correlate network traffic, vendor protocols, and firmware fingerprints to produce inventory at a fidelity that was operationally infeasible five years ago.

Detection engineering acceleration. Translating threat intelligence into detection rules is the kind of pattern-recognition work where modern LLMs add real productivity. Most SOCs covering OT environments are in early experimentation here. Some are already in production.
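The shape of the task is roughly this: map a structured threat-intel indicator onto a detection-rule skeleton. In the tools described above an LLM drafts that mapping and an engineer reviews it; the template sketch below stands in for the drafting step, and all field names are hypothetical rather than any particular rule format.

```python
def indicator_to_rule(indicator: dict) -> dict:
    """Render a threat-intel indicator as a Sigma-style rule skeleton.

    Hypothetical sketch: in practice an LLM proposes this mapping from
    less structured intel, and a detection engineer reviews the output
    before it ships.
    """
    return {
        "title": f"Suspicious {indicator['protocol']} write from untrusted host",
        "logsource": {"category": "network", "product": "ot-monitor"},
        "detection": {
            "selection": {
                "dst_port": indicator["port"],
                "function_code": indicator["function_code"],
            },
            "condition": "selection",
        },
        "level": "high",
    }
```

The productivity gain is in the translation, not the deployment: a reviewed rule still goes through the same testing and tuning pipeline as a hand-written one.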

What AI is genuinely doing for the adversary

Three categories where AI is changing the offensive picture against OT:

Reconnaissance acceleration. Public-facing infrastructure imagery, satellite data, regulatory filings, vendor case studies, and operator press releases are now reconnaissance inputs processed at machine speed. The adversary is mapping plant footprints faster than defenders are updating their attack-surface inventories.

OT-specific exploit-path discovery. Generative reasoning over published vendor advisories, vulnerability databases, and protocol documentation produces credible exploit-path hypotheses faster than manual analysis ever did. Defensive vulnerability management has not yet caught up with this compression of attacker timelines.

OT-targeted social engineering. Voice-clone phishing of named operators is now in the wild. The credibility of the social-engineering attack against OT staff is materially higher than it was 24 months ago, and the targeting is more precise.

The governance questions this surfaces

Three governance questions that are now in front of operators:

  1. How do we use AI in OT defense without expanding the attack surface? AI tooling has its own supply-chain exposure, model integrity exposure, and privileged-access exposure.
  2. How do we evaluate vendor AI claims? The OT cybersecurity vendor landscape has moved aggressively into AI marketing. Distinguishing real ML capability from marketing varnish is a procurement skill that does not yet exist at most operators.
  3. How do we adapt our threat model for AI-augmented adversaries? The traditional MITRE ICS ATT&CK matrix does not yet fully account for AI-augmented reconnaissance and social engineering. Threat models that depend on it lag the adversary.

For a deeper treatment of the broader AI-and-OT topic, see the AI in OT security speaker page.

The architecture decisions that follow

Four architectural choices that operators should be making now:

  1. Treat AI tooling as supply chain. The same vendor-management discipline that applies to any other OT vendor applies to AI vendors. Model integrity, supply-chain attestation, and termination authority are non-optional.
  2. Keep AI on the defensive side of the boundary. Most OT operators benefit from running ML on the IT side of the IT-OT boundary, not inside the OT zone, with carefully governed read-only access.
  3. Document the model versions. Detection-engineering AI tools change behavior with model updates. Treat that the same way you would treat a firmware change to a controller.
  4. Update the threat model. Add reconnaissance acceleration, exploit-path discovery, and AI-augmented social engineering to the threat model explicitly. Do not let the threat model lag the adversary by three years.
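The third choice above, documenting model versions like firmware changes, can be as simple as a change record that captures version, content hash, approver, and timestamp. The field names below are illustrative, not a standard; the point is that a model update leaves the same audit trail a controller firmware update would.

```python
import datetime
import hashlib

def record_model_change(model_name: str, version: str, weights: bytes,
                        approver: str) -> dict:
    """Log a detection-model update the way a controller firmware
    change would be logged. Hypothetical record shape, illustrating
    the audit trail, not prescribing a schema.
    """
    return {
        "model": model_name,
        "version": version,
        # content hash lets you later verify which weights actually ran
        "sha256": hashlib.sha256(weights).hexdigest(),
        "approved_by": approver,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```

Whatever form the record takes, the hash matters most: "version 2.4.1" from a vendor is a label, while a content hash is evidence of what was deployed.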

Where this is heading

The next 24 months will see ML-based anomaly detection move from differentiator to baseline; AI-augmented reconnaissance move from advanced-actor capability to commodity capability; and the operator’s ability to govern AI in OT become a measurable program element rather than a footnote.

The operators who get ahead of this are the ones who treat AI in OT as a governance and architecture problem now, while they have the leverage. The ones who treat it as a vendor-evaluation problem after the fact will spend the next three years catching up.