There's no shortage of content explaining what ISO 42001 is. Almost none of it tells you what it actually feels like to implement it in a real organization where AI systems are already in production, stakeholders have competing priorities, and no one has a clean inventory of what "AI" even means internally.

This is that post.

The inventory problem is the real first problem

ISO 42001 starts with understanding your AI systems — their purpose, how they make decisions, who is affected, and what level of risk they introduce. Simple in theory. In practice, when I started the ISO 42001 assessment work at Splunk, the first month was almost entirely spent just answering: what counts as AI here?

Machine learning models embedded in products? Yes. Internally used LLM tooling? Arguably. Rules-based alerting systems with heuristic logic? The standard doesn't give you a clean answer. You have to make a principled decision, document it, and hold the line.

The most important early decision in any ISO 42001 program is scope — and the standard leaves that almost entirely to you.

My recommendation: start narrow and expand deliberately. Don't try to scope in every system that touches data. Identify the AI systems that make or significantly influence consequential decisions — for customers, employees, or operations — and start there.
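
To make that concrete, here's a minimal sketch of what a documented scope decision can look like. It's Python for illustration only; the record structure, field names, and example systems are my assumptions, not anything the standard prescribes:

    from dataclasses import dataclass
    from enum import Enum

    class ScopeDecision(Enum):
        IN_SCOPE = "in_scope"
        OUT_OF_SCOPE = "out_of_scope"
        DEFERRED = "deferred"          # park it, with a date to revisit

    @dataclass
    class AISystemRecord:
        # Illustrative fields, not terms from the standard.
        name: str
        purpose: str
        decision_influence: str        # "determines", "recommends", or "informs"
        affected_parties: list[str]    # customers, employees, operations
        scope: ScopeDecision
        rationale: str                 # the documented "why" behind the scope call

    inventory = [
        AISystemRecord(
            name="fraud-scoring-model",
            purpose="flags transactions for manual review",
            decision_influence="recommends",
            affected_parties=["customers"],
            scope=ScopeDecision.IN_SCOPE,
            rationale="ML model that significantly influences a consequential customer decision",
        ),
        AISystemRecord(
            name="static-alert-rules",
            purpose="threshold-based operational alerting",
            decision_influence="informs",
            affected_parties=["operations"],
            scope=ScopeDecision.OUT_OF_SCOPE,
            rationale="deterministic rules, no learned component; revisit if heuristics change",
        ),
    ]

The code itself is beside the point. What matters is that every system gets an explicit scope decision and a written rationale that an auditor, and your future self, can read.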

Clause 6: Risk assessment for AI is genuinely different

ISO 42001's approach to risk assessment builds on ISO 27001 — and if you already have an ISMS, that's helpful scaffolding. But AI risk has characteristics that break standard risk models:

  • Model drift — a system that was low-risk at deployment may not be low-risk six months later
  • Explainability gaps — you often cannot fully articulate why a system produces a specific output
  • Training data provenance — risks embedded in the data don't show up in traditional asset inventories
  • Third-party model risk — when you use a foundation model from a vendor, the risk doesn't disappear because you didn't build the model

These aren't hypothetical concerns. They show up in real assessments. Build a risk taxonomy for AI specifically, rather than forcing AI systems into your existing IT risk categories.
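
Here's what that can look like: a small taxonomy whose categories mirror the risks above, plus a register entry that treats every rating as time-bound. The category names, rating scale, and reassess_by field are illustrative assumptions, not terms from the standard:

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class AIRiskCategory(Enum):
        # Categories mirror the AI-specific risks listed above.
        MODEL_DRIFT = "model_drift"
        EXPLAINABILITY_GAP = "explainability_gap"
        DATA_PROVENANCE = "data_provenance"
        THIRD_PARTY_MODEL = "third_party_model"

    @dataclass
    class AIRiskEntry:
        system: str
        category: AIRiskCategory
        rating: str         # e.g. "low" / "medium" / "high"
        reassess_by: date   # AI risk ratings expire; drift makes them time-bound

    # Illustrative register entries, not real assessment results.
    register = [
        AIRiskEntry("fraud-scoring-model", AIRiskCategory.MODEL_DRIFT, "medium", date(2026, 1, 15)),
        AIRiskEntry("support-chat-llm", AIRiskCategory.THIRD_PARTY_MODEL, "high", date(2025, 9, 1)),
    ]

The reassess_by field is the point: it encodes model drift directly into the register instead of treating risk ratings as permanent.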

Controls that actually map to AI realities

Annex A of ISO 42001 provides 38 controls. The challenge is that many of them — governance structure, accountability, human oversight — are organizational rather than technical. That means implementation depends heavily on whether leadership is genuinely committed or just compliance-motivated.

The controls I found most implementable and highest-value in a real program:

  • A.5.2 – AI system impact assessment: Build this into your product development lifecycle, not as a standalone audit step
  • A.6.1 – AI system documentation: Model cards are the right format. Push engineering teams to own them
  • A.8.4 – Monitoring of AI systems: This is where CSPM tooling and Splunk dashboards become compliance evidence (see the sketch after this list)
  • A.9.3 – Human oversight mechanisms: Define exactly what "override" means for each system. "A human reviews outputs" is not a control — it's a description
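
On the monitoring control, here's a minimal sketch of what "dashboards as compliance evidence" reduces to in practice: a scheduled check that emits a structured, timestamped record. The metric (population stability index), the threshold, and the log format are my assumptions; use whatever your risk assessment actually agreed on:

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("ai_monitoring")

    DRIFT_THRESHOLD = 0.15  # illustrative; agree on this per system during risk assessment

    def check_drift(system: str, psi_score: float) -> dict:
        """Compare a population stability index score against the agreed
        threshold and emit a structured record usable as audit evidence."""
        record = {
            "system": system,
            "metric": "psi",
            "value": psi_score,
            "threshold": DRIFT_THRESHOLD,
            "breached": psi_score > DRIFT_THRESHOLD,
            "checked_at": datetime.now(timezone.utc).isoformat(),
        }
        # Structured log lines like this are operational evidence an
        # auditor can actually verify.
        logger.info(json.dumps(record))
        return record

Route these records into whatever dashboarding you already run (Splunk, in my case) and conformance evidence accumulates as a by-product of normal operations.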

The documentation trap

ISO standards invite over-documentation. I've seen programs where teams spend 70% of their effort producing policies that no one reads, to satisfy controls that are really about operational practice.

Fight this tendency. Your ISO 42001 evidence should be mostly operational: logs, review records, training records, incident tickets, approved design documents. Policies matter — but a three-paragraph AI use policy that's actually followed is worth more than a thirty-page document that sits in Confluence.

What certification actually requires

If certification is the goal, you'll need a Stage 1 and Stage 2 audit by an accredited certification body. The gap between "we have implemented controls" and "we can demonstrate conformance to an auditor" is non-trivial.

What auditors look for that teams often underestimate:

  • Evidence of management review — not just that it happened, but that outputs were acted on
  • Documented decisions about scope exclusions and why they were made
  • Objective evidence that training and awareness programs reached the right people
  • Records showing that AI incidents were reviewed and improvements were made

Start collecting evidence from day one. Don't reconstruct it later.
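
One way to honor that from day one is an append-only evidence log, sketched below. The schema, file location, and example values are hypothetical:

    import json
    from datetime import datetime, timezone
    from pathlib import Path

    EVIDENCE_LOG = Path("evidence/ai_governance.jsonl")  # hypothetical location

    def record_evidence(control: str, artifact: str, actor: str, note: str = "") -> None:
        """Append one timestamped record per governance event: a review held,
        training completed, an incident closed, a scope decision made."""
        EVIDENCE_LOG.parent.mkdir(parents=True, exist_ok=True)
        entry = {
            "control": control,       # e.g. "management_review"
            "artifact": artifact,     # link or path to the underlying record
            "actor": actor,
            "note": note,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        }
        with EVIDENCE_LOG.open("a") as f:
            f.write(json.dumps(entry) + "\n")

    # Log the management review the day it happens, not during audit prep.
    record_evidence(
        control="management_review",
        artifact="https://wiki.example.com/ai-msr-q1",  # hypothetical link
        actor="ai-governance-lead",
        note="Q1 review; two actions assigned with owners and due dates",
    )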

The honest summary

ISO 42001 is a well-constructed standard, and implementing it properly does make your AI governance program meaningfully stronger. But it requires organizational buy-in that you cannot manufacture through GRC effort alone. The technical controls are the tractable part. The cultural change — getting engineering, legal, and product teams to treat AI risk as a first-class concern — is the long game.

If you're starting this journey, I'd be glad to compare notes. Find me on LinkedIn.