AI Red Team: Continuous Testing. Explainable Results, Proven Resilience.
AI systems evolve faster than traditional security testing can keep pace with. Join F5 experts to learn how AI Red Team accelerates continuous adversarial testing across models, apps, and agents, using an extensive attack database, multi-turn Agentic Resistance campaigns, and operational stress tests to surface vulnerabilities before they're exploited. We'll demo how severity- and risk-scored results and Agentic Fingerprints produce audit-ready, explainable reports, and show how findings can be operationalized into runtime protections via F5 AI Guardrails.
Key Takeaways:
- Explain the evolving runtime-layer threat landscape and why traditional pen-testing is insufficient
- Demonstrate how to configure and run continuous red-team campaigns (signature and agentic tests) and interpret CASI/ARS risk scores
- Read and act on Agentic Fingerprints and audit-ready vulnerability reports to prioritize remediation and support GRC initiatives
Speaker Details
Allan Healy, Senior Solutions Engineer, F5
Jessica Brennan, Senior Product Marketing Manager, F5
Event Topic
Artificial Intelligence, Machine Learning, Management
Relevant Audiences
All State and Local Government, All Federal Government
Other Agency
Other Federal Agencies
Event Type
Virtual / Online
Event Subtype
Webinar / Webcast
When
Thu, Feb 05, 2026 | 11:00 am ET
Registration Cost
Complimentary
Organizer