SANS@Night - From Servant to Surrogate: The Misenar 4A Model for Agentic Security
Organizations keep deploying AI "agents" without understanding what level of autonomy they are getting or what governance it warrants. Chinese state-sponsored hackers used Claude Code to automate a cyberattack campaign across roughly 30 organizations. Replit's AI coding agent deleted a production database, then tried to cover up its mistake. These aren't anomalies. They're predictable governance failures.
The Misenar 4A Model maps AI autonomy across four levels: Assistant, Adjuvant, Augmentor, and Agent. Each has specific capabilities, boundaries, and control expectations. The framework identifies "DANGER CLOSE," where AI shifts from advisor to executor, and establishes readiness criteria for crossing it.
The model includes vendor evaluation tools that cut through marketing, controls that scale with capabilities, and phased implementation strategies. Built from analyzing failures and deployments across industries, it shows that autonomy without appropriate governance creates predictable risks.
The 4A Model helps tackle the core question of agent security: How autonomous should your AI really be?
Event Topic
Artificial Intelligence, Cybersecurity, Machine Learning
Relevant Audiences
All State and Local Government, All Federal Government
Other Agency
Other Federal Agencies