Building Responsible AI with ISO 42001 and the EU AI Act
Held on 27 January 2026
About the Webinar
2026 is a defining year for AI governance. With EU AI Act obligations for high-risk AI systems becoming effective and ISO/IEC 42001 emerging as the global standard for AI management systems, organizations are being pushed to formalize how AI is governed, documented, and monitored. Regulatory scrutiny is rising, customer expectations are changing, and ad-hoc AI controls are no longer enough.
In this webinar, we break down what these changes actually mean for security, risk, compliance, and product teams. We will cover why ISO/IEC 42001 adoption is accelerating, how it is being used as proof of responsible AI practices, and what auditors are really looking for during certification. You will also get a clear view of the EU AI Act timelines, risk categories, and compliance obligations, including what applies immediately to general-purpose AI models.
The session also looks at how modern GRC automation can reduce the operational load of managing ISO 42001 and EU AI Act requirements. From policy alignment and continuous monitoring to vendor risk and supply chain readiness, we will share practical guidance on building a scalable AI governance program that holds up as regulations evolve.
What you will learn:
- Why ISO/IEC 42001 is becoming a must-have in 2026
- Core governance, technical, and monitoring requirements for certification
- Key EU AI Act milestones, risk categories, and enforcement expectations
- Common audit pitfalls and lessons from real certification programs
- How ISO 42001 and the EU AI Act align in practice
- How automation can simplify compliance, monitoring, and reporting
The webinar wraps up with a live Q&A focused on certification readiness, audit challenges, automation, and next steps teams should prioritize now.
Ideal for security leaders, compliance teams, GRC professionals, product owners, and anyone responsible for AI risk and governance.