Artificial Intelligence (AI) can greatly improve human efficiency and deliver insights that drive decision-making. However, for all of its benefits, AI also introduces security risks both for the organizations that are using it and for the nation at large. The National Security Memorandum (NSM) on Artificial Intelligence, released in the fall of 2024, details national security strategy and policy toward AI. While this particular guidance is aimed at agencies directly involved in national security, its three high-level policy objectives should be part of every agency's AI strategy.
- Maintain U.S. leadership in the development of advanced AI systems
A key focus is not just on using AI but on driving responsible AI development. To do so, the U.S. needs an AI-capable workforce. This means having the experts who develop the technology as well as training the operational and tactical workforce in how best to use it.
The U.S. also needs the infrastructure to support the widespread use of AI; this includes using clean power to build energy infrastructure that can support the processing demands of the machines running AI. Some estimates suggest that AI could account for as much as 25 percent of total U.S. electricity consumption. Building a secure energy infrastructure that supports both AI and general electricity needs is critical.
Finally, as the U.S. builds new AI technology, that tech has to be protected as critical intellectual property. Counterintelligence efforts need to be expanded to prevent adversaries' theft, espionage, and disruption of AI developments and systems.
- Accelerate adoption of AI systems across U.S. national security agencies
This objective directs national security agencies to use "frontier" AI, such as generative AI, as part of their strategies going forward. This includes reforming acquisition policies and procedures to make it easier and faster to acquire AI technologies. It also encourages a fresh look at cybersecurity policies to ensure that security concerns do not become an excuse for not implementing AI.
- Develop robust governance frameworks to support U.S. national security
AI is only as good as the data used to train it. The reliance on data means that AI systems need strong governance frameworks to prevent data leaks and to ensure that AI trained on private information doesn't reveal it.
The "human-in-the-loop" framework is a critical component of the responsible use of AI. This means ensuring that people review AI outputs before those outputs are used to trigger an action or inform decision-making.
To learn how the U.S. is balancing AI innovation with national security, check out these resources:
- Energizing the Mission with AI (December 12, 2024; webcast) - This event brings together top AI and technology executives from federal agencies to share their strategic vision for AI integration. Panelists will explore the successes and obstacles they've encountered in implementing AI, providing real-world insights into how AI is transforming operations, improving decision-making, and shaping the future of government services.
- Introduction to Responsible AI in Practice (December 20, 2024; webcast) - This is a high-level exploration of recommended best practices for responsible AI usage across different areas of focus: fairness, interpretability, privacy, and safety.
- Revolutionizing Federal IT: The Power of AI-Assisted Software Development (January 29, 2025; webcast) - Harnessing AI is a useful way to advance modernization goals, but AI governance (including ethical considerations, data security, and compliance with federal regulations) must remain a top priority. Increased AI implementation demands that organizations rethink how they staff, develop, and run their day-to-day operations.
- AI Acquisition Forum 2025 (July 23, 2025; McLean, VA) - This forum is designed to update the audience on new AI procurement guidance, how the government is addressing generative AI-produced responses to procurements, and how procurement staff are handling requirements and proposal responses.
- Engaging with Artificial Intelligence (AI) (white paper) - This paper summarizes important threats related to AI systems. It prompts organizations to consider steps they can take to engage with AI while managing risk. It also provides mitigations to assist organizations that use both self-hosted and third-party-hosted AI systems.
- Enabling Principles for Artificial Intelligence Governance (white paper) - The question of how to govern artificial intelligence (AI) is rightfully top of mind for U.S. lawmakers and policymakers alike. Strides in the development of high-powered large language models have demonstrated the potentially transformative impact that AI could have on society, replete with opportunities and risks.
- A Plan for Global Engagement on Artificial Intelligence Standards (white paper) - This document establishes a plan for global engagement on promoting and developing AI standards. The plan calls for outreach to and engagement with international stakeholders and standards-developing organizations to help drive the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.
For more on national security implications of AI, search for additional events and resources on GovEvents and GovWhitePapers.