Just as cloud computing upended how government buys technology, agencies must now adapt to acquire fast-evolving artificial intelligence (AI) technology. AI is proving to be a key tool in helping government improve workforce efficiency and collaboration and deliver better service to citizens, but the promise of this new technology comes with risks. To keep AI solutions secure and ethically designed, agencies are implementing a number of guardrails for the safe and effective use of this powerful technology.
How to Use AI
The Office of Management and Budget (OMB) developed a policy document to harness the benefits and mitigate the risks of AI for Federal agencies. This guidance provides details on how to use AI securely and effectively with a focus on five key areas: risk management, transparency, responsible innovation, workforce, and governance.
At a high level, the document outlines key requirements agencies must address in rolling out AI-powered solutions. These include:
- Government agencies must verify that AI tools do not endanger the rights and safety of the American people.
- Agencies must publish a list of their AI systems, along with assessments of the risks those systems might pose, and details of how those risks are being managed.
- All federal agencies must designate a chief AI officer with the experience, expertise, and authority to oversee AI technologies used by that agency.
Coordinating AI Use Across Government
The director of the Department of Homeland Security's (DHS) AI Corps has called for a centralized way to manage AI risks in line with OMB guidance. There is a model for doing so in FedRAMP, the government-wide approach to security assessment, authorization, and continuous monitoring for cloud products. By using the FedRAMP model, the process for evaluating and clearing AI technologies for use in government could be sped up to better keep pace with the evolution of AI.
To run such a coordinated effort, however, government needs a large team of AI experts. There has been a concerted effort recently to fill AI-specific roles across government, with a talent surge bringing in over 200 new AI-focused employees.
Staying Above the AI Hype
With transparency a key focus of AI policy guidance, agencies are looking for ways to ensure they understand the technology they are buying, as well as the full impact it can have on their systems and users. Doing so requires tighter coordination between buyers and sellers during the acquisition process to ensure purchases align with AI directives.
This level of diligence may soon be mandated. A Senate bill aims to formalize the assessment of risks that accompany AI technologies. It would establish pilot programs to implement "more flexible, competitive purchasing practices" and also require that confirmation of data ownership, civil rights, civil liberties, and privacy be included in government contracts for AI.
To stay up to date with acquisition policies and processes for AI, check out these resources from GovEvents and GovWhitePapers:
- AI Governance: Ensuring Responsible AI Use in Federal Institutions (October 8, 2024; webcast) - Federal institutions must implement robust governance frameworks to mitigate risks, safeguard privacy, and foster equity. During this webinar, attendees will learn more about responsible AI use within Federal institutions.
- Introduction to Responsible AI in Practice (October 23, 2024; virtual) - A high-level exploration of recommended best practices for responsible AI usage across the following areas of focus: fairness, interpretability, privacy, and safety.
- GovAI Summit 2024 (October 28-30, 2024; Arlington, VA) - Speakers from across .mil, .gov, .edu, and .org share practical applications and opportunities of AI in the public sector. This conference includes real use cases and explores both the art of the possible and the challenges of getting there.
- Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (white paper) - Learn more about the AI RMF which is intended for voluntary use to improve the ability of organizations to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
- Engaging with Artificial Intelligence (AI) (white paper) - This publication provides organizations with guidance on how to use AI systems securely. The paper summarizes some important threats related to AI systems and prompts organizations to consider steps they can take to engage with AI while managing risk.
- Enabling Principles for Artificial Intelligence Governance (white paper) - The question of how to govern artificial intelligence (AI) is rightfully top of mind for U.S. lawmakers and policymakers alike. Strides in the development of high-powered large language models have demonstrated the potentially transformative impact that AI could have on society, replete with opportunities and risks. This paper provides three principles for U.S. policymakers to follow in order for future AI governance efforts to be effective.
For more information on AI use in government, visit GovEvents and GovWhitePapers.