TL;DR
- ISO 42001 is the world's first certifiable AI management standard — published December 2023, adoption accelerating fast
- EU AI Act high-risk obligations take full effect on 2 August 2026, and the Act applies to any business worldwide that sells AI-powered services to EU customers
- Australian businesses aren't exempt — the extraterritorial reach mirrors GDPR; if you touch EU data or EU users, you're in scope
- The consulting gold rush is real: ISO 42001 gap assessments fetch $8–10K AUD, with day rates running 1.5–2× what comparable ISO 27001 work fetches
- VerifyWise is the only free, open-source platform that maps to both ISO 42001 and the EU AI Act — it's your secret weapon as a small consultancy
What Is ISO 42001, in Plain English?
Imagine you hired a robot assistant for your business — it schedules your appointments, replies to customer emails, and flags suspicious transactions. That robot is useful. But how do you know it's making good decisions? How do you know it's not quietly discriminating against some customers? How do you prove to your clients (and regulators) that you've thought this through?
That's exactly the problem ISO/IEC 42001:2023 was built to solve [1].
Published in December 2023 by the International Organization for Standardization, ISO 42001 is the world's first internationally recognised, certifiable standard for Artificial Intelligence Management Systems (AIMS). Think of it like ISO 27001 (the gold standard for information security), but specifically for AI. It gives organisations a structured, auditable way to demonstrate that their AI systems are:
- Governed — someone is accountable, policies exist, objectives are documented
- Risk-assessed — AI-specific risks (bias, hallucinations, data quality failures) are identified and treated
- Transparent — decision-making processes are documented and explainable
- Continuously improving — measured, reviewed, corrected over time
It covers the full lifecycle of AI: from the moment you decide to build or buy an AI system, through training, deployment, and ongoing monitoring. If you're using AI tools in your business — even third-party ones like ChatGPT, Copilot, or automated recruitment software — ISO 42001 gives you a framework to govern them responsibly [1].
ISO 42001 vs ISO 27001 — What's the Difference?
ISO 27001 covers information security — protecting data from breaches, leaks, and unauthorised access. ISO 42001 covers AI governance — ensuring the AI systems processing that data are themselves trustworthy. They complement each other and share architectural DNA: the same Plan-Do-Check-Act management cycle, overlapping controls around access management, incident response, and risk assessment [2].
The practical implication? If your organisation already has ISO 27001, you have a running start. The Protecht Group estimates the journey from ISO 27001 to ISO 42001 certification is achievable within a 12-month window when you build on an existing ISMS [2]. If you're starting from scratch, ISO 27001 first is the recommended path — it builds the management system foundation that ISO 42001 extends.
Key ISO 42001 requirements include [1]:
- AI policy and objectives — a documented commitment to responsible AI
- AI risk assessment — identification and treatment of AI-specific risks
- Data governance — controls for training data quality, bias, and privacy
- Transparency and explainability — documentation of how AI systems make decisions
- Human oversight — defined roles for monitoring AI behaviour
- Continuous improvement — metrics, audits, and corrective action processes
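To make the six requirement areas above concrete, here is a minimal sketch of how a consultant might track them as a self-assessment record. This is purely illustrative: the class and field names are our own shorthand, not official ISO 42001 clause identifiers, and real audits demand far more than one piece of evidence per area.

```python
from dataclasses import dataclass, field

# Hypothetical self-assessment record for the six ISO 42001 requirement
# areas listed above. Names are illustrative shorthand, not clause IDs.

@dataclass
class Requirement:
    name: str
    evidence: list[str] = field(default_factory=list)

    @property
    def satisfied(self) -> bool:
        # Treat an area as "covered" once at least one documented piece
        # of evidence exists -- a deliberate oversimplification.
        return len(self.evidence) > 0

AIMS_REQUIREMENTS = [
    Requirement("AI policy and objectives"),
    Requirement("AI risk assessment"),
    Requirement("Data governance"),
    Requirement("Transparency and explainability"),
    Requirement("Human oversight"),
    Requirement("Continuous improvement"),
]

# Example: one policy document exists, nothing else yet.
AIMS_REQUIREMENTS[0].evidence.append("responsible-ai-policy-v1.pdf")
gaps = [r.name for r in AIMS_REQUIREMENTS if not r.satisfied]
print(f"{len(gaps)} of {len(AIMS_REQUIREMENTS)} areas still need evidence")
# prints "5 of 6 areas still need evidence"
```

Even this toy structure captures the core audit question: for each requirement, where is the documented evidence?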
Why the EU AI Act Matters — Even If You're in Melbourne
Here's the thing most Australian businesses get wrong: they read "EU AI Act" and assume it's someone else's problem.
It isn't.
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence, passed by the European Union in 2024. Like GDPR before it, it has extraterritorial reach — meaning it applies to organisations outside the EU if they place AI products or services on the EU market, or if outputs from their AI systems are used within the EU [3].
Jackson McDonald Lawyers, writing in December 2025, put it plainly: "The AI Act is not a distant piece of foreign legislation. Its extraterritorial scope means compliance obligations may arise if AI products or services are marketed to EU users, or if outputs are used within the EU." [3]
The Risk Ladder
The EU AI Act classifies AI into four risk tiers [4]:
| Risk Level | Examples | Status from Aug 2026 |
|---|---|---|
| Unacceptable | Social scoring, emotion recognition in workplaces | Banned (since Feb 2025) |
| High-Risk | CV-sorting software, credit scoring, medical AI, exam marking | Strict obligations from Aug 2026 |
| Limited-Risk | Chatbots, deepfakes | Transparency requirements |
| Minimal-Risk | Spam filters, AI in video games | No obligations |
If you build, sell, or deploy AI tools that fall into the high-risk category — and many common business tools do — you face mandatory requirements from 2 August 2026 [4]:
- Adequate risk assessment and mitigation systems
- High-quality training datasets that minimise discriminatory outcomes
- Logging and traceability of AI system activity
- Clear human oversight mechanisms
- Registration in an EU public database
The Fines That Will Get Attention
Non-compliance with the EU AI Act isn't a slap on the wrist. Fines reach up to €35 million or 7% of global annual turnover for violations involving prohibited practices, and up to €15 million or 3% of turnover for high-risk system violations [5]. For context, GDPR's maximum is 4% of global turnover — this goes further.
But I Don't Sell to Europe...
Even if you don't intentionally sell to Europe, consider:
- Your SaaS product has EU customers signing up organically? In scope.
- Your AI recruitment tool is used by a multinational client with EU offices? Potentially in scope.
- Your clients are Australian businesses that themselves operate in the EU? Your AI tools are part of their supply chain — and their compliance requirements may flow down to you.
The ripple effects of EU regulation have a long history of reaching Australian shores. GDPR reshaped Australian privacy practices. The EU AI Act will do the same for AI governance.
What Australian Businesses Need to Do Right Now
The good news: you don't need to understand every article of the EU AI Act. You need a practical starting point.
Step 1: Map your AI systems. List every AI tool your business uses or develops. Include third-party tools (ChatGPT, Copilot, automated workflows, AI hiring tools). If it makes decisions that affect people, it's in scope.
Step 2: Classify by risk. For each system, ask: does this affect employment, credit, health, education, or safety? If yes — high-risk. If it's customer-facing and passes itself off as human — limited-risk with transparency obligations.
Step 3: Run a gap assessment. Compare your current practices against ISO 42001's Annex A controls and the EU AI Act's obligations. This is where a consultant (or a tool like VerifyWise) becomes invaluable.
Step 4: Build your AIMS. Implement the policies, procedures, and oversight mechanisms ISO 42001 requires. This doesn't have to be perfect on day one — regulators reward genuine, documented effort.
Step 5: Consider certification. Full ISO 42001 certification from an accredited body (like DNV, BSI, or Bureau Veritas) is the gold standard — but even a documented, auditable AIMS without formal certification demonstrates intent and due diligence [6].
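Steps 1 and 2 above can be sketched as a small inventory-and-classification pass. This is a hypothetical illustration, not a legal determination: the domain list mirrors the questions in Step 2, and every name and field below is invented for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of Steps 1-2: inventory AI systems, classify each
# by risk, and flag the ones to prioritise for a gap assessment (Step 3).
# The domain list mirrors Step 2's questions; all names are illustrative.

HIGH_RISK_DOMAINS = {"employment", "credit", "health", "education", "safety"}

@dataclass
class AISystem:
    name: str
    domain: str            # what the system's decisions affect
    customer_facing: bool  # does it interact with people as if human?

def risk_tier(system: AISystem) -> str:
    if system.domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    if system.customer_facing:
        return "limited-risk"
    return "minimal-risk"

inventory = [
    AISystem("Resume screener", "employment", customer_facing=False),
    AISystem("Support chatbot", "customer service", customer_facing=True),
    AISystem("Spam filter", "email", customer_facing=False),
]

# High-risk systems go to the front of the gap-assessment queue.
priority = [s.name for s in inventory if risk_tier(s) == "high-risk"]
print(priority)  # ['Resume screener']
```

Even on a spreadsheet rather than in code, this is the shape of the exercise: one row per system, one risk tier per row, high-risk rows first.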
The Consulting Opportunity Is Real — And It's Opening Now
Here's where this gets interesting for anyone who consults, advises, or builds technology for other businesses.
ISO 42001 is a brand-new standard in a gold-rush moment.
Enterprise adoption is accelerating — Cornerstone OnDemand announced ISO 42001 certification in December 2025, signalling mainstream enterprise uptake. Deloitte's State of Generative AI survey found that while 87% of executives claim to have AI governance frameworks, fewer than 25% have fully operationalised them [7]. That gap between intention and implementation is where consultants live.
The market dynamics right now:
- Supply of qualified ISO 42001 consultants is thin. The standard is barely two years old. There aren't many people with practical implementation experience yet.
- Demand is accelerating. August 2026 creates a regulatory forcing function. Businesses are waking up.
- Day rates command a premium. ISO 42001 gap assessments are currently pricing at $8,000–$10,000 AUD for a standard engagement, with day rates running 1.5–2× what ISO 27001 work fetches. The premium is justified: the expertise pool is smaller, the standard is newer, and the regulatory stakes are higher.
What a Gap Assessment Actually Delivers
A well-run ISO 42001 gap assessment gives a client:
- An AI system inventory — what they have, where it lives, who owns it
- A risk classification — which systems are high-risk under EU AI Act
- A control gap report — where their current practices fall short of ISO 42001 Annex A
- A remediation roadmap — prioritised actions with effort estimates
- Audit-ready documentation — evidence the work was done properly
For a small-to-medium business with 5–20 AI systems in play, this is a 2–3 week engagement. At $8–10K, it's accessible. For clients with EU exposure, it's non-negotiable.
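The remediation roadmap deliverable above is essentially a prioritised worklist. Here is a minimal sketch of one plausible prioritisation rule; the scoring scheme and every gap item are invented for illustration and are not part of ISO 42001.

```python
# Hypothetical prioritisation of control gaps for a remediation roadmap.
# Risk scores and effort estimates are invented for the example; the
# ordering rule (highest risk first, then quickest wins) is one common
# heuristic, not a requirement of the standard.

gaps = [
    {"control": "Human oversight roles",      "risk": 3, "effort_days": 5},
    {"control": "Training data bias review",  "risk": 3, "effort_days": 15},
    {"control": "AI policy document",         "risk": 2, "effort_days": 3},
    {"control": "Incident logging",           "risk": 1, "effort_days": 8},
]

# Highest risk first; among equal risk, smallest effort first.
roadmap = sorted(gaps, key=lambda g: (-g["risk"], g["effort_days"]))
for i, g in enumerate(roadmap, 1):
    print(f"{i}. {g['control']} (risk {g['risk']}, ~{g['effort_days']}d)")
```

The point of the sketch is the sort key: clients want the highest-risk gaps closed first, and among equally risky gaps, the cheap wins build momentum.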
The Dual Certification Advantage
Here's the long game: organisations that achieve both ISO 27001 and ISO 42001 certification are exceptionally rare in Australia right now. The overlap in management systems means the incremental effort isn't that large — but the market differentiation is enormous. Government procurement increasingly requires standards-based evidence. Enterprise clients with EU exposure will start requiring supplier compliance.
If you're a small consultancy, the roadmap is:
- Help clients get ISO 27001 certified (established market)
- Extend to ISO 42001 (new premium market)
- Build your own dual certification (credibility multiplier)
- Offer EU AI Act readiness audits as a packaged service
VerifyWise: Your Open-Source Starting Point
Most consultancies think they need expensive enterprise GRC platforms to do this work. They don't.
VerifyWise (GitHub: bluewave-labs/verifywise) is the only open-source AI governance platform that maps specifically to ISO 42001, ISO 27001, NIST AI RMF, and the EU AI Act — all in one tool [8].
What it does:
- Translates compliance requirements into actionable task checklists
- Generates audit-ready documentation automatically
- Maps your AI systems against multiple frameworks simultaneously
- Tracks compliance status across an organisation's AI portfolio
- Supports LLM evaluation (for organisations running their own AI models)
For a small consultancy, VerifyWise is a force multiplier. Instead of manually building gap assessment templates from scratch, you start with a structured framework that's already mapped to the standards you're working against. The platform replaces what enterprise competitors like Vanta or Drata charge thousands per month for — at zero licensing cost [8].
You host it yourself (Docker-based deployment), which means client data never touches a third-party SaaS. That's a genuine security selling point when clients are nervous about putting AI governance data in someone else's cloud.
The NIST AI Risk Management Framework (AI RMF), published by the US National Institute of Standards and Technology, also maps well to ISO 42001 and is freely available [9]. Using both — NIST AI RMF as the thinking framework, ISO 42001 as the certification target, VerifyWise as the implementation platform — gives you a professional, rigorous methodology that scales.
How ISO 42001 and the NIST AI RMF Fit Together
The NIST AI RMF organises AI risk management into four core functions: Govern, Map, Measure, Manage [9]. ISO 42001 maps closely:
| NIST AI RMF Function | ISO 42001 Equivalent |
|---|---|
| Govern — organisational context, roles, culture | Clause 4, 5, 6 (context, leadership, planning) |
| Map — categorise AI risks and impacts | Clause 6.1 + Annex A (risk assessment controls) |
| Measure — analyse, benchmark, track | Clause 9 (performance evaluation) |
| Manage — respond, recover, improve | Clause 10 + Annex A incident controls |
Organisations that document their work against both frameworks simultaneously satisfy international client expectations (ISO 42001) and US government/enterprise expectations (NIST AI RMF) from a single implementation effort [9] [10].
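The single-implementation idea above reduces to a coverage check: record which clauses your documentation actually evidences, then test each NIST function against the mapping table. The dictionary below is taken from the table; the `documented` set and the checking logic are our own illustration.

```python
# Illustrative mapping of NIST AI RMF functions to ISO 42001 clauses,
# taken from the table above. The "documented" set and coverage check
# are a hypothetical sketch of how a dual-framework audit might work.

RMF_TO_ISO = {
    "Govern":  ["Clause 4", "Clause 5", "Clause 6"],
    "Map":     ["Clause 6.1", "Annex A"],
    "Measure": ["Clause 9"],
    "Manage":  ["Clause 10", "Annex A"],
}

# Clauses for which audit-ready evidence exists (example state).
documented = {"Clause 4", "Clause 5", "Clause 6",
              "Clause 6.1", "Clause 9", "Annex A"}

# A NIST function is covered only when every mapped clause has evidence.
uncovered = {
    fn for fn, clauses in RMF_TO_ISO.items()
    if not all(c in documented for c in clauses)
}
print(uncovered)  # {'Manage'}: Clause 10 evidence is missing
```

One pass over the mapping shows exactly which NIST functions fall out of your ISO 42001 evidence for free, and which still need work.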
The Australian Privacy Act Connection
One more reason this matters locally: the Australian Privacy Act 1988 (Cth) is undergoing reform, with stronger AI-specific provisions expected. The Australian Information Commissioner has already signalled that automated decision-making using AI will face increased scrutiny [10]. ISO 42001's data governance controls — covering training data quality, bias assessment, and privacy impact — align directly with Privacy Act obligations.
Building an ISO 42001-aligned AIMS now means you're ahead of both EU and Australian regulatory curves simultaneously.
Action Plan: Start This Week
You don't need to boil the ocean. Here's a concrete starting point:
If you're a business owner using AI tools:
- List every AI tool you use (including embedded AI in existing software)
- Ask your vendor: "Do you have ISO 42001 certification or EU AI Act compliance documentation?"
- Book a gap assessment — this is the single most valuable thing you can spend $10K on in 2026
If you're a consultant or IT provider:
- Download the ISO 42001 standard summary from ISO.org [1]
- Clone the VerifyWise repository and run a local instance [8]
- Build a 2–3 week gap assessment methodology using the framework
- Offer it. The demand is there. The competition is thin.
The August 2026 deadline is five months away. Businesses that move now get structured, well-documented implementations. Businesses that wait get rushed, expensive, box-ticking exercises.
Don't be the latter.
Want to understand exactly where your business stands on ISO 42001 and EU AI Act readiness? lil.business offers structured AI governance gap assessments for Australian SMBs and consultancies. Book a consultation and walk away with a clear roadmap, not a sales pitch.
→ Book your AI governance gap assessment at consult.lil.business
FAQ
What is ISO 42001?
ISO 42001 is an international standard that tells organisations how to manage AI systems responsibly. It's similar to ISO 27001 (information security) but focused specifically on AI — covering governance, risk assessment, data quality, human oversight, and transparency. Published in December 2023, it's the first certifiable AI management standard in the world [1].
Does the EU AI Act apply to Australian businesses?
Yes — if your AI products or services reach EU users, or if your AI outputs are used within the EU, the EU AI Act may apply to you regardless of where your business is registered [3]. The regulation has explicit extraterritorial reach, mirroring the GDPR approach. Australian businesses with even indirect EU exposure should assess their obligations before August 2026.
When do the EU AI Act's obligations take effect?
The prohibited practices (unacceptable-risk AI) became enforceable in February 2025. High-risk AI system obligations come into full effect on 2 August 2026 [4]. General Purpose AI (GPAI) provider obligations have applied since August 2025. The Commission can investigate and fine from August 2026 onwards.
How much does an ISO 42001 gap assessment cost?
A standard ISO 42001 gap assessment for a small-to-medium organisation typically ranges from $8,000–$10,000 AUD, covering AI system inventory, risk classification, control gap analysis, and a remediation roadmap. This is 1.5–2× the cost of a comparable ISO 27001 assessment, reflecting the scarcity of qualified practitioners and the complexity of AI-specific risk assessment.
What is VerifyWise?
VerifyWise is an open-source AI governance platform (GitHub: bluewave-labs/verifywise) that maps compliance requirements from ISO 42001, ISO 27001, NIST AI RMF, and the EU AI Act into actionable tasks and audit-ready documentation [8]. It is completely free to self-host. It replaces expensive enterprise GRC platforms and is particularly useful for consultants running gap assessments or organisations building their own AI management systems.
Can I implement ISO 42001 without formal certification?
Yes. Formal certification from an accredited body is the gold standard and carries maximum weight with clients and regulators, but implementing an ISO 42001-aligned AI Management System without external certification still demonstrates documented intent and due diligence. For most small businesses, a documented AIMS is the right first step — certification follows naturally once the system is mature [6].
How do ISO 42001 and the NIST AI RMF fit together?
The NIST AI Risk Management Framework (AI RMF) and ISO 42001 are complementary, not competing. NIST AI RMF is a US-developed voluntary framework organised around Govern, Map, Measure, and Manage functions [9]. ISO 42001 is the international certifiable standard. Organisations can satisfy both frameworks from a single implementation effort by carefully mapping controls — VerifyWise supports both simultaneously [8].
References
[1] ISO/IEC, "ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system," International Organization for Standardization, Dec. 2023. [Online]. Available: https://www.iso.org/standard/42001
[2] Protecht Group, "AI governance: Why ISO 42001 is the natural next certification step," Protecht Group Blog, Oct. 2, 2025. [Online]. Available: https://www.protechtgroup.com/en-us/blog/ai-governance-iso-42001-certification
[3] Jackson McDonald Lawyers, "The EU AI Act: what Australian businesses need to know," Jackson McDonald Insights, Dec. 22, 2025. [Online]. Available: https://www.jacmac.com.au/insights/the-eu-ai-act-what-australian-businesses-need-to-know/
[4] European Commission, "AI Act — A risk-based approach," Shaping Europe's Digital Future, 2024. [Online]. Available: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
[5] European Parliament and Council, "Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)," Official Journal of the European Union, Jun. 2024. [Online]. Available: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
[6] DNV, "ISO 42001 Certification — AI Management System," DNV Services, 2025. [Online]. Available: https://www.dnv.us/services/iso-42001---service/
[7] Deloitte, "ISO 42001 Standard for AI Governance and Risk Management," Deloitte Insights, 2025. [Online]. Available: https://www.deloitte.com/us/en/services/consulting/articles/iso-42001-standard-ai-governance-risk-management.html
[8] Bluewave Labs, "VerifyWise — Complete AI governance and LLM Evals platform with support for EU AI Act, ISO 42001, ISO 27001 and NIST AI RMF," GitHub, 2025. [Online]. Available: https://github.com/bluewave-labs/verifywise
[9] National Institute of Standards and Technology, "AI Risk Management Framework (AI RMF 1.0)," NIST AI Resource Center, Jan. 2023. [Online]. Available: https://airc.nist.gov/
[10] Office of the Australian Information Commissioner, "Artificial intelligence and privacy," OAIC, 2025. [Online]. Available: https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/artificial-intelligence
[11] ANAB, "Artificial Intelligence Management Systems — ISO/IEC 42001 Accreditation," ANSI National Accreditation Board, 2025. [Online]. Available: https://anab.ansi.org/accreditation/iso-iec-42001-artificial-intelligence-management-systems/
[12] LegalNodes, "EU AI Act 2026 Updates: Compliance Requirements and Business Risks," LegalNodes, Feb. 2026. [Online]. Available: https://www.legalnodes.com/article/eu-ai-act-2026-updates-compliance-requirements-and-business-risks
[13] Elevate Consult, "EU AI Code of Practice: 2025 Guide + ISO 42001 Map," Elevate Consult Insights, Nov. 4, 2025. [Online]. Available: https://elevateconsult.com/insights/eu-ai-code-of-practice-iso-42001/
Work With Us
Ready to strengthen your security posture?
lilMONSTER assesses your risks, builds the tools, and stays with you after the engagement ends. No clipboard-and-leave consulting.
Book a Free Consultation →