The CMMC practitioner community is asking the right question: what about the risks in using AI?
With AI-enabled lateral movement now a credible threat vector — not just a theoretical one — defense contractors face a challenge that a "don't paste CUI into ChatGPT" policy memo cannot solve.
That kind of policy is easy to write and easy to train employees on, but it completely misses the real risk: what a threat actor’s AI can do inside your network once it has gained a foothold.
This week, Anthropic’s Claude Mythos Preview system card made explicit what many security practitioners have been anticipating: autonomous AI agents capable of sandbox escape and sophisticated privilege-escalation chains.
Whether Mythos itself becomes an immediate operational threat or remains primarily a research demonstration is still debatable. What is no longer debatable is the direction of travel. The capability for AI to move laterally across a network toward Controlled Unclassified Information (CUI) — without a human operator making every decision — has arrived.
The Foundational Answer Hasn’t Changed
For defense contractors operating under CMMC Level 2, the core solution remains the same: defense in depth, properly implemented.
Well-executed defense in depth raises the cost of attack regardless of the adversary’s tooling. Key controls that matter more than ever include:
- Network segmentation — to limit lateral movement
- Least privilege — to restrict what a compromised identity can access
- Zero-trust architecture — to eliminate the flat network terrain that autonomous agents love to exploit
- Strong boundary protection — at both the network and SaaS layers, to reduce initial foothold opportunities
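The first two controls above amount to deny-by-default flow policy: nothing reaches the CUI enclave unless an explicit rule permits it. As a minimal sketch (zone names, ports, and the brokered jump-host pattern are illustrative assumptions, not a real ruleset), the idea looks like this:

```python
# Deny-by-default segmentation policy check (illustrative sketch).
# Zone names and allowed flows below are hypothetical examples.

ALLOWED_FLOWS = {
    # (source_zone, destination_zone): allowed destination ports
    ("user_lan", "cui_enclave"): set(),    # no direct path to CUI
    ("user_lan", "jump_host"): {443},      # brokered access only
    ("jump_host", "cui_enclave"): {3389},  # audited RDP from the jump host
}

def flow_permitted(src_zone: str, dst_zone: str, port: int) -> bool:
    """Deny by default: a flow is allowed only if explicitly listed."""
    return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())
```

The point of the pattern is that lateral movement from a compromised workstation (`user_lan`) to the enclave fails closed; an automated adversary has to compromise the broker first, which is exactly where your monitoring should be densest.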
None of this is new doctrine. What has changed is the speed and persistence with which these controls will now be probed and tested by automated adversaries.
That reality makes a strong case for tight enclave architecture. A minimal, well-bounded CUI enclave is no longer just a cost-management strategy — it is a meaningful reduction in attack surface. The less your CUI environment overlaps with general business systems, the less value lateral movement actually delivers to an adversary.
The Productivity Reality Check
Contractors are going to use AI tools whether security teams officially sanction them or not.
The right answer — especially for organizations operating in GCC High — is not to prohibit AI, but to deploy it safely inside the boundary.
Microsoft's Azure OpenAI Service and Copilot — both available within the GCC High environment — allow organizations to run powerful AI capabilities against their own data without that data ever leaving the government cloud. This is a practical, architectural solution to the commercial AI exposure problem that keeps productivity gains inside a defensible perimeter.
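One way to keep that boundary enforceable in code is a guard that refuses to configure an AI client against anything outside the government cloud. The sketch below assumes Azure Government endpoints carry the `.azure.us` domain suffix (verify against your tenant's actual service URLs before relying on it); the resource name is hypothetical:

```python
from urllib.parse import urlparse

# Assumed Azure Government domain suffix -- confirm for your environment.
GOV_SUFFIXES = (".azure.us",)

def assert_gov_endpoint(endpoint: str) -> str:
    """Raise if the endpoint host is outside the government cloud boundary."""
    host = urlparse(endpoint).hostname or ""
    if not host.endswith(GOV_SUFFIXES):
        raise ValueError(f"Endpoint {host!r} is outside the government cloud")
    return endpoint

# Usage sketch (requires the `openai` package and real credentials):
# from openai import AzureOpenAI
# client = AzureOpenAI(
#     azure_endpoint=assert_gov_endpoint("https://contoso.openai.azure.us"),
#     api_key="...",
#     api_version="...",
# )
```

A check like this belongs in shared configuration code so that no individual developer can accidentally point a workload at a commercial endpoint.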
Detective Controls and Preparedness
Preventive controls are essential, but detective controls deserve equal attention.
Modern endpoint detection, identity threat detection and response (ITDR), SIEM correlation, and SOAR automation can dramatically compress the window between initial access and detection — the critical window where real damage occurs.
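The kind of SIEM correlation that compresses that window can be surprisingly simple in principle. A toy illustration (thresholds and event shape are invented for the example; a production rule would live in your SIEM's query language): flag any account that authenticates to an unusual number of distinct hosts in a short window, a crude but classic lateral-movement indicator.

```python
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=5)   # illustrative threshold values
HOST_THRESHOLD = 3

def flag_lateral_movement(events):
    """events: iterable of (timestamp, account, host), sorted by timestamp.
    Returns accounts seen on more than HOST_THRESHOLD hosts within WINDOW."""
    recent = defaultdict(list)  # account -> [(timestamp, host), ...]
    flagged = set()
    for ts, account, host in events:
        # Keep only this account's events still inside the sliding window.
        hits = recent[account] = [
            (t, h) for t, h in recent[account] if ts - t <= WINDOW
        ]
        hits.append((ts, host))
        if len({h for _, h in hits}) > HOST_THRESHOLD:
            flagged.add(account)
    return flagged
```

The defensive value against automated adversaries is speed: an AI agent pivoting through four hosts in ninety seconds looks wildly anomalous against this kind of baseline, where a patient human operator might stay under it.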
Organizations should also update their preparedness efforts:
- Include AI-assisted attack scenarios in tabletop exercises and incident-response rehearsals.
- Run purple-team exercises that incorporate AI-assisted offensive tooling. This gives defenders a far more realistic picture of their actual exposure.
If AI can be used offensively to find vulnerabilities and escalate privileges faster, it can (and should) be used defensively to identify those same weaknesses before an adversary does.
The Maturing Discipline of AI Security
The AI security discipline is maturing rapidly. ISACA’s new Advanced AI Security Management (AAISM™) certification — which ResponseForce1’s CCO is currently pursuing — reflects the growing recognition that AI risk governance is becoming a core competency for security leadership.
Bottom Line for CMMC Contractors
The CMMC framework, when properly implemented, already addresses most of the foundational controls needed to limit AI-enabled lateral movement risk.
For most small and mid-sized contractors, the gap is not regulatory — it is execution.
Controls that exist only on paper do not slow down an automated adversary. That execution gap is exactly what CMMC assessors look for… and what threat actors exploit.
Ed Minyard, CISM, CCP (pending), CMMC-RP, CBCP
This post reflects the evolving intersection of artificial intelligence and cybersecurity compliance in the defense industrial base. Defense contractors should treat AI not as a future risk, but as a present-day operational reality.
