Improving equity for citizens is a key goal of the Biden Administration. At the same time, agencies across government are adopting Artificial Intelligence (AI) solutions to better use data for a variety of tasks and decision making. Seeing the increasing role of AI in day-to-day operations, the government is looking for ways to ensure that the technology is used fairly and safely without stifling the innovation that AI adoption is bringing to government.
AI as an Administration Focus
The White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights to provide guidelines for the development and use of AI in government. It details the roles various agencies need to play to ensure that AI tools align with privacy rights and civil liberties.
Additionally, the Executive Order on Racial Equity included a provision on AI that states, "When designing, developing, acquiring, and using artificial intelligence and automated systems in the federal government, agencies shall do so, consistent with applicable law, in a manner that advances equity." It also suggests that agencies should loop their civil rights offices into decisions about the "design, development, acquisition, and use of artificial intelligence and automated systems."
Ensuring AI Respects Civil Rights
AI is being used to automate many processes in government, from hiring to housing and financial assistance to criminal investigations. A letter co-signed by the Federal Trade Commission, Consumer Financial Protection Bureau, Justice Department, and the Equal Employment Opportunity Commission (EEOC) detailed their agencies' continued enforcement efforts against biases in AI systems.
This focus is sorely needed, as a study from the Stanford Institute for Economic Policy Research found that Black taxpayers are audited by the IRS at disproportionately high rates because of IRS algorithms. Ensuring this type of bias is not present in AI solutions is critical across all government use cases. Doing so requires more model transparency to identify unrepresentative training datasets that can lead AI algorithms to make biased decisions.
The EEOC released technical assistance to help prevent discrimination via software systems, connecting Title VII of the Civil Rights Act, which protects employees and job applicants from employment discrimination based on race, color, religion, sex, and national origin, to employers' use of automated systems. The guidance shows how to apply adverse impact measures, an existing civil rights concept, to monitor algorithmic decision-making tools.
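As a rough illustration (not the EEOC's official methodology), the adverse impact concept is often operationalized with the long-standing "four-fifths rule": compare the selection rates of two applicant groups, and treat a ratio below 0.8 as a signal that the selection tool, including an automated one, needs review. The function name and example numbers below are hypothetical.

```python
def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Ratio of the lower group selection rate to the higher one.

    Under the conventional "four-fifths rule," a ratio below 0.8
    is commonly treated as evidence of adverse impact.
    """
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical example: an automated resume screener advances
# 48 of 80 applicants from group A but only 12 of 40 from group B.
ratio = adverse_impact_ratio(48, 80, 12, 40)
print(f"adverse impact ratio: {ratio:.2f}")
print("flag for review" if ratio < 0.8 else "within four-fifths threshold")
```

The same check can be run periodically against an algorithm's live decisions, which is the kind of ongoing monitoring the guidance encourages.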
Ensuring AI Respects Privacy
AI requires using personal data in new ways that were not envisioned when the data was initially collected. The Justice Department is developing a draft policy concerning privacy and the use of artificial intelligence. This policy will focus on governing how the technology is used and ensuring it is in line with existing privacy regulations and practices. This extends to governing what data can be used to train AI, how it is stored, and who has access to it. Agencies need to develop AI data use policies for their vendors, adding to their current data protection policies.
GovEvents and GovWhitePapers feature the needed resources to stay on top of quickly evolving AI policy and use.
- 2023 AI Solutions Forum (September 1, 2023; webcast) - This event will look at how AI can be harnessed and used cautiously. Information security experts bring their ideas, theories, and case studies of how AI will impact security for years to come.
- Trusted AI and Autonomy Forum (September 12, 2023; Falls Church, VA) - AI is being harnessed across government, industry, and military branches alike to support and advance missions in a myriad of applications. But as a still relatively nascent technology, AI also comes with inherent risks, threats, and vulnerabilities. Experts at this forum will address the question, "Can we trust and rely on AI in increasingly critical missions?"
- Winning in the New AI Landscape (September 21, 2023; virtual) - During this conference, you will hear about AI trends and how organizations are using AI to drive value while planning for and managing the risks, regulations, and ethical considerations.
- Generating Harms: Generative AI's Impact & Paths Forward (white paper) - This paper explains the typology of harms that generative AI can produce. Each section first explains relevant background information and potential risks imposed by generative AI, then highlights specific harms and interventions that scholars and regulators have pursued to remedy each harm.
- There's Little Evidence for Today's AI Alarmism (white paper) - Recent high-profile statements warning of the supposed existential risk of artificial intelligence are unconvincing. Many AI fears are speculative, and many others seem manageable. This paper argues why AI innovation should proceed and be allowed to proliferate.
- A Matrix for Selecting Responsible AI Frameworks (white paper) - Process frameworks provide a blueprint for organizations implementing responsible artificial intelligence (AI), but the sheer number of frameworks, along with their loosely specified audiences, can make it difficult for organizations to select ones that meet their needs. This report presents a matrix that organizes approximately 40 public process frameworks according to their areas of focus and the teams that can use them. Ultimately, the matrix helps organizations select the right resources for implementing responsible AI.
- Artificial Intelligence Index Report 2023 (white paper) - This report measures and evaluates the rapid rate of AI advancement from research and development to technical performance and ethics, the economy and education, AI policy and governance, diversity, public opinion and more.
For more information on the intersection of AI and equity check out additional resources on GovEvents and GovWhitePapers.