With many of us using our faces to "open" our phones, biometric technology has become an everyday consumer technology. Capitalizing on the comfort and ease of use of facial recognition, government agencies are looking to incorporate it (and other biometric methods) into their modern cybersecurity plans, but are realizing that implementation in a government setting raises a host of complications.
Interest in facial recognition is strong
The U.S. Government Accountability Office (GAO) released a report in August 2021 detailing current and planned use of facial recognition technology by federal agencies. In a survey of 24 departments and agencies, it found that 18 reported using the technology and 10 reported plans to expand their use of it.
Facial recognition is currently being used by Customs and Border Protection (CBP) at a border crossing in Maine to automate manual document checks. The biometric facial comparison technology compares a traveler's photo to an existing passport or visa image, and a CBP officer also reviews both to verify that the photos are a match. The process takes only seconds and is said to be more than 98% accurate.
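CBP has not published the internals of its matcher, but most modern one-to-one (1:1) face verification works by converting each image into a numeric embedding and comparing the two vectors against a similarity threshold. The sketch below illustrates that general idea; the embedding size, the cosine-similarity measure, and the 0.6 threshold are illustrative assumptions, not a description of CBP's actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_embedding: np.ndarray, document_embedding: np.ndarray,
           threshold: float = 0.6) -> bool:
    """1:1 verification: does the live capture match the passport/visa photo?"""
    return cosine_similarity(live_embedding, document_embedding) >= threshold

# Placeholder embeddings; a real system would produce these by running a
# face-recognition model on the live capture and the document photo.
rng = np.random.default_rng(0)
document_photo = rng.normal(size=512)
live_capture = document_photo + rng.normal(scale=0.1, size=512)  # simulate the same face
print(verify(live_capture, document_photo))  # True for this simulated pair
```

In a deployment like the one described above, a score above the threshold would simply clear the traveler for the officer's final visual check rather than replace it.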
The IRS was among the agencies looking to introduce facial recognition to allow taxpayers to access their online accounts. The IRS had contracted with a company to implement the solution but was pressured by Congress to back out of those plans until more policy and structure could be put in place governing facial recognition and the AI that powers it.
Consequences of failure
The pushback against facial recognition becomes clearer when you look at what happens when the technology gets it wrong. If your phone cannot recognize your face, it defaults to the passcode, a minor inconvenience. But if facial recognition identifies you as a person who has committed a crime or allows someone else to access your Social Security benefits, the stakes are much higher. Other biometric identifiers, such as fingerprints or DNA traces, have gone through formal evaluation based on industry standards. No such standards have been set for facial recognition.
Two modes of failure
Facial recognition can fail because of prejudicial or technical bias. A number of studies have found that algorithms perform differently for different ethnic groups. One NIST study found that the majority of the systems tested showed clear differences in their ability to match two images of the same person when one ethnic group was compared with another. Another study found that the algorithms are more accurate for lighter-skinned males than for darker-skinned females.
Another bias is related to the technology itself. When face recognition technologies were first developed, image lighting, pose angle, and pixel count greatly impacted results, creating technical bias. There is also the reality that a facial recognition algorithm trained for a longer period of time on more data will perform more accurately than a newer solution, introducing operational bias.
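These demographic performance gaps are usually quantified by computing error rates separately for each group, for example the false non-match rate (how often two images of the same person are wrongly rejected). The sketch below shows that calculation on a small, made-up set of comparison scores; the group labels, scores, and threshold are all hypothetical.

```python
from collections import defaultdict

# Hypothetical genuine-pair comparisons: (demographic_group, match_score).
# A genuine pair is two images of the same person; a score below the
# decision threshold counts as a false non-match.
comparisons = [
    ("group_a", 0.82), ("group_a", 0.55), ("group_a", 0.91),
    ("group_b", 0.48), ("group_b", 0.73), ("group_b", 0.39),
]
THRESHOLD = 0.6

totals, errors = defaultdict(int), defaultdict(int)
for group, score in comparisons:
    totals[group] += 1
    if score < THRESHOLD:  # same person, but the system rejected the match
        errors[group] += 1

for group in totals:
    fnmr = errors[group] / totals[group]
    print(f"{group}: false non-match rate = {fnmr:.2f}")
```

A system whose false non-match rate differs sharply between groups is exhibiting exactly the kind of disparity the NIST and academic studies documented.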
The future of facial recognition
Following the IRS decision to back off the use of facial recognition, the General Services Administration (GSA) put its facial recognition ID plans on hold as well. Officials have cited the need for rigorous review to ensure that solutions provide equitable service without causing harm to vulnerable populations. Similarly, the Department of Labor is being urged to ensure state unemployment programs are secured without the use of biometric software.
To support the ongoing development of facial recognition technology, NIST is launching a new version of its Face Recognition Vendor Test aimed at technology that fights biometric spoof attacks. The test will provide independent testing of software-based presentation attack detection (PAD) technologies submitted by vendors and other developers, helping developers and prospective end users understand the performance of specific algorithms under specified conditions.
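PAD performance is commonly summarized with two error rates: the share of spoof attempts that slip through and the share of genuine presentations wrongly flagged (known as APCER and BPCER in the ISO/IEC 30107-3 vocabulary). The sketch below computes both from a small, made-up set of classifier decisions; it illustrates the general scoring idea, not NIST's actual test harness.

```python
# Hypothetical PAD outputs: each sample is (is_attack, classified_as_attack).
#   APCER - proportion of attack presentations wrongly accepted as bona fide
#   BPCER - proportion of bona fide presentations wrongly rejected as attacks
samples = [
    (True, True), (True, False), (True, True),     # spoof attempts
    (False, False), (False, False), (False, True),  # genuine presentations
]

attacks = [s for s in samples if s[0]]
bona_fide = [s for s in samples if not s[0]]

apcer = sum(1 for is_attack, flagged in attacks if not flagged) / len(attacks)
bpcer = sum(1 for is_attack, flagged in bona_fide if flagged) / len(bona_fide)

print(f"APCER: {apcer:.2f}")  # missed spoofs
print(f"BPCER: {bpcer:.2f}")  # genuine users wrongly flagged
```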
The transparency needed to build trust in facial recognition also means creating policy and standards around the collection and use of the personal information that underlies the training of facial recognition systems. Today, facial recognition technologies can be trained on images posted to TikTok and Facebook (really any social media site), along with information that can be scraped from sites as personal as Venmo.
GovEvents and GovWhitePapers have a host of resources to help you stay up to date on the technology and policy impacting the use of facial recognition and other biometrics in government.
- Identity & Cybersecurity: Exploring New Trends, Expectations and Use Cases for Identity and Access Management (on-demand webinar) - Understanding, verifying, and protecting a user's identity is at the center of any modern cybersecurity strategy. This event is focused on the role of identity and access management in modern cybersecurity. Conversations with multiple state and local security experts will address how governments can adopt an identity-centered approach for cybersecurity, keeping their networks, employees, and constituents safer in the years ahead.
- AFCEA Bethesda Law Enforcement and Public Safety (LEAPS) Technology Forum (May 10, 2022; Washington, DC) - Government and industry will share information about lessons learned and current challenges. Panel discussions and roundtables provide a forum for engaging and interactive dialog. A key focus will be the use of biometrics and emerging trends that include integration with mobile devices, multi-factor authentication, and surveillance.
- Gartner Identity and Access Management Summit (August 22-24, 2022; Las Vegas, NV) - Digital transformation and the increasing reliance on remote business continue to accelerate the adoption of new identity and access management (IAM) approaches and technologies. Gain valuable insights and get a comprehensive update on privileged access management, IAM programs and strategy, cloud identity, multi-factor authentication, passwordless methods, and more.
- Identity Week America (October 4-5, 2022; Washington, DC) - This event brings together the brightest minds in the identity sector to promote innovation, new thinking, and more effective identity solutions. Key areas of focus include secure physical credentials, digital identity, and advanced authentication technologies, such as biometrics.
- Face Recognition Vendor Test (white paper) - This publication discusses how facial recognition systems can fail. The study also covers how errors might be estimated, the relevant standards, and the consequences if errors remain unaddressed.
- Stanford HAI Artificial Intelligence Bill of Rights (white paper) - This paper supports the work of the White House Office of Science and Technology Policy to develop an AI Bill of Rights. Stanford HAI recommends six principles to guide public and private uses of biometric and broader artificial intelligence technologies: ensuring those technologies support fundamental democratic values, safeguarding fairness and rights to nondiscrimination, ensuring transparency and explainability, strengthening the participation of civil society organizations, embedding accountability measures into system design, and enhancing citizen education concerning AI and its impacts.
For more resources on biometrics in government check out GovEvents and GovWhitePapers.