With ISO/IEC 42001:2023, there is for the first time an international standard that describes how an organisation manages its use of Artificial Intelligence systematically — not by securing a single model, but by placing obligations on the organisation as a whole.
The standard is young: published in December 2023, with the first certifications issued from 2024. It sits alongside ISO 9001 (quality) and ISO 27001 (information security) — deliberately structured along the same lines, so that organisations can extend existing management systems rather than build a new one next to them.
This page sets out what the standard actually requires, where it overlaps with the EU AI Act and where it does not, and how organisations looking to implement it should go about it.
What the standard is
ISO/IEC 42001:2023 defines an AI Management System (AIMS) — a structured framework in which an organisation plans, develops, deploys, operates, monitors and retires AI systems. The standard is risk-based and requires every organisation to tailor the AIMS to its specific Context of the Organization.
Its addressees are not only technology-oriented companies. The standard is aimed at any organisation that uses AI — irrespective of size, sector or depth of AI deployment. For a mid-sized company running a single AI tool, the AIMS looks different from the one needed by a corporation with its own AI department. The standard explicitly provides for this.
Structure and core requirements
ISO/IEC 42001 follows the familiar Annex SL structure (Clauses 4–10) and supplements it with AI-specific annexes.
- Clause 4 — Context of the Organization. What does the organisation do, which AI systems are concerned, which Interested Parties are relevant?
- Clause 5 — Leadership. Top Management carries responsibility — with a documented AI Policy and assigned Roles, Responsibilities and Authorities.
- Clause 6 — Planning. Risks and opportunities captured systematically, AI objectives defined, changes planned rather than improvised.
- Clause 7 — Support. Resources, Competence (this is where Art. 4 of the EU AI Act meets the standard), Communication, Documented Information.
- Clause 8 — Operation. The lifecycle of AI systems from inception through to retirement, including data management, testing and supplier control.
- Clause 9 — Performance Evaluation. Monitoring, Measurement, Analysis and Evaluation; Internal Audit; Management Review.
- Clause 10 — Improvement. Handling of Nonconformities and Continual Improvement of the AIMS.
These are complemented by normative annexes setting out AI-specific controls (Annex A) and implementation guidance (Annex B) — covering such matters as data quality, model lifecycle, impacts on affected individuals, and transparency towards users.
Relationship with the EU AI Act
The standard and the regulation are not identical, but they overlap to a significant extent. Three dimensions of that relationship:
Legal standing
The EU AI Act is binding European law. ISO 42001 is a voluntary standard. Anyone who ignores the AI Act risks fines of up to €35 million or 7% of global annual turnover, whichever is higher. Anyone not applying ISO 42001 initially risks nothing — beyond competitive disadvantage in an increasingly regulated environment.
Overlap in substance
Risk management, roles, documentation, human oversight, continual improvement — these are central elements of both frameworks. An organisation that implements ISO 42001 properly will meet a large part of its deployer obligations under the AI Act almost as a by-product.
What only the AI Act governs
Prohibited practices (Art. 5), the concrete high-risk lists (Annex III), the Fundamental Rights Impact Assessment (Art. 27), and the obligations for GPAI models (Art. 53). These detailed provisions do not appear as such in ISO 42001.
The robust strategy for many organisations: adopt ISO 42001 as the structuring foundation, and layer the AI Act-specific detail requirements on top of it. This avoids duplicated effort and produces a consistent system.
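The layering idea can be made concrete as data. A minimal sketch in Python, encoding only the overlaps and gaps named above — the set names and string labels are illustrative, not terminology from either framework:

```python
# Illustrative only: which obligations a well-implemented ISO 42001 AIMS
# already covers, and which AI Act provisions must be layered on top.
AIMS_COVERED = {
    "risk management", "roles", "documentation",
    "human oversight", "continual improvement",
}

AI_ACT_ONLY = {
    "prohibited practices (Art. 5)",
    "high-risk lists (Annex III)",
    "fundamental rights impact assessment (Art. 27)",
    "GPAI model obligations (Art. 53)",
}

def remaining_work(implemented: set[str]) -> set[str]:
    """Requirements still open after implementing the given AIMS elements."""
    return (AIMS_COVERED - implemented) | AI_ACT_ONLY

# Even an organisation with a complete AIMS still owes the AI Act-specific layer:
print(remaining_work(AIMS_COVERED))
```

The point of the sketch: the AIMS shrinks the first set to zero, but the second set never shrinks from ISO 42001 alone — it has to be addressed explicitly.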
Implementation in practice
A realistic implementation runs in four phases. Typical duration: six to twelve months to reach certification readiness, depending on the size of the organisation and the management systems already in place.
- Phase 1 — Stocktake
- Which AI systems are already in use (including shadow AI)? Which roles exist? What is already in place in terms of policy, documentation and processes? The output is a Gap Analysis.
- Phase 2 — AIMS design
- AI Policy, roles, Risk Management Methodology, documentation structure. This is where it is decided how the AIMS fits the organisation — not the other way round.
- Phase 3 — Implementation and first iterations
Take the AIMS live, run the first Internal Audits, correct nonconformities, raise the maturity level. Without this phase, external certification makes little sense.
- Phase 4 — Certification (optional)
- Through an accredited Certification Body. CAIE does not certify itself — that is the role of independent auditors. We prepare, we accompany, and we remain the point of contact throughout.
What the standard does not deliver
ISO 42001 is a management system standard. It describes how an organisation structures its use of AI. It does not describe whether a given use of AI is ethically defensible, whether it serves the public good, or whether it fits certain business models. Those questions remain — and they are precisely why CAIE treats compliance and ethics as one question.
A company can apply ISO 42001 to perfection and still deploy AI that harms people. Certification says: the processes are in place. It does not say: what those processes produce is good.
How CAIE approaches this
Within the CAIE Board, responsibility for ISO 42001 sits with David Mirga (alongside the EU AI Act). This is complemented by the IT transformation experience of Jeremy James Wilhelm — 25 years of enterprise transformation for adidas, Lindt & Sprüngli, METRO and SPAR, certified AI trainer at WIFI Vienna, KMU.Digital adviser. That combination matters, because an AIMS does not work on paper — it works inside the organisation, where change actually happens.
For organisations, that means: we support the build-out of an AIMS to ISO 42001 as an advisory partner, not as a certifier. We produce Gap Analyses, draft policies, structure documentation, train the relevant roles — all the while keeping the AI Act requirements in view. The organisation is then certified by an accredited body of its own choice.
Frequently asked questions
01 What is ISO/IEC 42001:2023?
ISO/IEC 42001:2023 is the first international standard for an AI Management System (AIMS). It sets out how an organisation plans, controls, monitors and improves the use of Artificial Intelligence in a systematic way — comparable with ISO 9001 for quality management or ISO 27001 for information security. Published in December 2023.
02 Do you have to implement ISO 42001 in order to be EU AI Act compliant?
No. The standard is voluntary. But it is the world's first management system for AI and covers a large share of what the AI Act implicitly demands — risk management, roles, documentation, continual improvement. For organisations that will have to become AI Act compliant in any case, ISO 42001 is often the most pragmatic path there.
03 Is certification worth it, or is applying the standard enough?
That depends on the market. In heavily regulated sectors (finance, health, public sector) and in major tenders, certification is increasingly required or viewed favourably. For many organisations, clean application is enough to start with — with the option of certifying later on. CAIE does not certify itself; we prepare organisations for it.
04 How does ISO 42001 differ from ISO 27001 or ISO 9001?
The structure is deliberately related (Annex SL), so that organisations can extend existing management systems. The difference lies in the subject matter: 42001 addresses AI-specific questions — model lifecycle, data quality, bias, transparency, impact on affected individuals — that do not feature systematically in 27001 and 9001.