Thursday
Room 4
13:40 - 14:40
(UTC+02)
Talk (60 min)
Applying the OWASP Top 10 for Agentic Applications to Your AI Agents
The OWASP Top 10 for Agentic Applications (2026) identifies the most critical security risks facing AI agents, from goal hijacking and tool misuse to identity abuse, memory poisoning, and cascading failures. But the guidance stops at "what," not "how."
AI agents make autonomous decisions: which tools to call, what parameters to use, how to interpret results. That autonomy is both the value proposition and the attack surface.
This talk bridges the gap between framework and implementation. Using my personal health agent as an example, I'll walk through every applicable OWASP Agentic risk and show the exact code, infrastructure, and architecture decisions that mitigate each one.
You'll see how input validation on tool parameters stops prompt injection before it reaches the LLM, and how Entra Agent ID gives agents their own least-privilege identity, separate from the host application. You'll also see how circuit breakers and structured error responses turn a $2,000 cascading retry loop into a $0.01 graceful degradation, and why your system prompt belongs in Azure App Configuration, not your source code.
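As a taste of the circuit-breaker pattern the talk covers, here is a minimal sketch (not code from the talk; class and field names are illustrative): after a run of consecutive tool failures the breaker opens, and further calls fail fast with a structured error the agent can reason about instead of triggering another expensive retry.

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker for agent tool calls (illustrative sketch).

    After `max_failures` consecutive failures the circuit opens; while open,
    calls return a structured error immediately instead of invoking the tool.
    """

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after  # seconds before a trial call is allowed
        self.failures = 0
        self.opened_at = None

    def call(self, tool, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Fail fast: a structured, non-retryable error for the agent,
                # rather than another LLM-driven retry against a dead tool.
                return {"ok": False, "error": "circuit_open", "retryable": False}
            # Half-open: allow one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = tool(*args, **kwargs)
        except Exception as exc:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return {
                "ok": False,
                "error": str(exc),
                "retryable": self.failures < self.max_failures,
            }
        self.failures = 0  # success closes the circuit
        return {"ok": True, "result": result}
```

Returning a structured `{"ok": ..., "retryable": ...}` envelope, instead of raising, is what lets the agent degrade gracefully: it can report the failure or pick another tool rather than looping on retries.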
Whether you're building agents, securing them, or deciding whether to deploy them, you'll leave with a practical, layered security model you can apply to your own agentic systems, and a healthy skepticism of any agent that doesn't treat LLM output as untrusted input.
