Building Trustworthy AI: How to Detect Hidden Threats in AI Models Before They Strike

22nd April 2026
Online
General Security
Webinar · Company Webinar Operations

About the Security Event

AI models and their supporting platforms are becoming a critical attack surface, with threats often embedded directly within model files and artifacts. This webinar explores how malicious code can be hidden in serialized model formats and how attackers exploit these vectors to establish backdoors or exfiltrate data. The session focuses on detecting threats within AI models before they are deployed or distributed.
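The serialization risk described above is easiest to see with Python's pickle format, which several common model checkpoint formats build on. The sketch below is illustrative only (the `Payload` class and its harmless `print` payload are invented for this example, not taken from the webinar): an object's `__reduce__` method can make deserialization execute an arbitrary callable.

```python
import pickle

# Any object whose __reduce__ returns (callable, args) gets that callable
# invoked at load time: the classic pickle code-execution vector.
class Payload:
    def __reduce__(self):
        # Harmless stand-in; real attacks substitute os.system, eval, etc.
        return (print, ("side effect ran during unpickling",))

blob = pickle.dumps(Payload())

# Deserializing executes the embedded callable instead of rebuilding Payload.
pickle.loads(blob)
```

Because the payload fires inside `pickle.loads`, simply inspecting a model file by loading it is itself dangerous, which is why pre-deployment scanning matters.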

Attendees will learn how advanced analysis techniques can identify unsafe function calls and hidden malicious behavior without executing code. The discussion also covers protecting model-hosting platforms from real-world attacks, including prompt injection and malicious inputs across different file formats. The webinar provides practical guidance for securing AI development pipelines and ensuring the integrity of models in production environments.
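The "without executing code" idea can be sketched with Python's standard `pickletools` module, which disassembles a pickle stream statically. The `SUSPICIOUS` denylist and `scan_pickle` helper below are hypothetical names for a simplified heuristic, not the specific techniques covered in the session:

```python
import pickle
import pickletools

# Illustrative denylist of callables frequently abused in malicious pickles.
SUSPICIOUS = {("builtins", "eval"), ("builtins", "exec"),
              ("os", "system"), ("subprocess", "Popen")}

def scan_pickle(blob: bytes) -> list[tuple[str, str]]:
    """Statically list (module, name) imports referenced by a pickle stream.

    pickletools.genops disassembles opcodes without executing them,
    so any embedded payload never runs during analysis.
    """
    refs, strings = [], []
    for opcode, arg, _pos in pickletools.genops(blob):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)           # may feed a later STACK_GLOBAL
        if opcode.name == "GLOBAL":       # protocols 0-1: "module name" arg
            module, name = arg.split(" ", 1)
            refs.append((module, name))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            refs.append((strings[-2], strings[-1]))  # protocols 2+
    return refs

class Evil:
    def __reduce__(self):
        # Benign expression standing in for an attacker's payload.
        return (eval, ("1 + 1",))

hits = [ref for ref in scan_pickle(pickle.dumps(Evil())) if ref in SUSPICIOUS]
print(hits)
```

A scanner like this flags the `builtins.eval` reference before the file is ever loaded; production tools go further, covering more opcodes, formats, and obfuscation patterns.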