CREATE Submits RFI Response on Disability Bias in Biometrics

CREATE, in collaboration with representatives of the Trace Center with backgrounds in computer science, policy, and disability studies, submitted a response to the Office of Science and Technology Policy’s request for “Information on Public and Private Sector Uses of Biometric Technologies”. See our full response on arXiv. We summarize the request for information, and our response, below.

What are biometric technologies?

Biometric technologies are computer programs that try to guess something about you. Sensors today can capture your heart rate, fingerprint, and face. They can watch you walk, and can even find identifying information in your typing style and speed. Biometrics were initially designed to identify people; their use has since expanded to guessing emotion, intent, disability status, and much more.
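To make the typing example concrete, here is a minimal sketch, written by us for illustration rather than taken from the RFI or our response. It shows how the gaps between key presses can be turned into a small “keystroke dynamics” signature; the function name and timings are invented.

```python
# Our illustrative sketch: summarizing typing rhythm as identifying features.
from statistics import mean, stdev

def keystroke_features(press_times_ms: list[float]) -> dict[str, float]:
    """Summarize the gaps between consecutive key presses."""
    gaps = [b - a for a, b in zip(press_times_ms, press_times_ms[1:])]
    return {"mean_gap_ms": mean(gaps), "gap_stdev_ms": stdev(gaps)}

# Two hypothetical typists producing the same text look quite different:
print(keystroke_features([0, 110, 230, 335, 455]))   # fast, steady typist
print(keystroke_features([0, 180, 520, 640, 1100]))  # slower, burstier typist
```

Even two simple numbers like these can distinguish typists, which is why typing style counts as a biometric.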

What does the Office of Science and Technology Policy (OSTP) want to know?

The OSTP wants to hear from many different types of people who may have very different wishes for, and worries about, biometric technology. For example, their Request for Information mentions industry, advocacy groups, tribal governments, academic researchers, and the general public.

They are interested in how biometrics may be used and what their benefits are. But they also want to know how to use them well. For example, they ask how biometrics can be made more likely to correctly identify a person, or their emotion, intent, and so on. They also ask about security concerns, such as whether one person using a biometric system could pretend to be someone they are not. And they ask about other possible harms, such as whether biometrics work equally well for all people, or whether they might be used for surveillance, harassment, or other worrisome activities.

Finally, they ask about appropriate governance. Governance refers to rules that might increase the value and safety of biometrics: who should be included in ethical decisions about biometric use, which uses are acceptable, and how problematic uses can be prevented. Transparency, and whether biometric evidence can be used in court, are also mentioned.

What did CREATE have to say about this?

CREATE led a discussion of disability bias and risk in the use of biometric technology.

Ableist assumptions

The benefits of such technologies are similar for people with and without impairments; however, equitable use depends on equitable access. Ableist assumptions built into an application can make it inaccessible even if it meets legal standards. For example, an automatic door may close too fast for some users, or a voice menu may time out before a caller can respond. These inaccessibilities are avoidable if systems are designed with disabled users in mind, as the sketch below suggests.
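As a toy illustration of the timeout example (our sketch, not anything from the response; the window lengths are invented), a hard-coded response window excludes slower responders, while a user-adjustable one need not:

```python
# Our illustrative sketch: a fixed voice-menu timeout vs. an adjustable one.
DEFAULT_RESPONSE_WINDOW_S = 5.0  # assumed default, for illustration only

def accepts_response(response_delay_s: float,
                     window_s: float = DEFAULT_RESPONSE_WINDOW_S) -> bool:
    """Return True if the caller answered within the allowed window."""
    return response_delay_s <= window_s

print(accepts_response(8.0))                 # False: fixed window excludes this caller
print(accepts_response(8.0, window_s=15.0))  # True: letting users extend it restores access
```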

Biased data

Biometric systems require data: many examples of whatever signal they use to make guesses about people. If that data is biased (for example, it lacks examples from people with disabilities), the guesses are likely to be far less accurate for those populations. A person might have atypical or missing limbs and no fingerprint, or might walk or speak differently than the system expects, and thus be unable to access services tied to fingerprint, gait, or voice recognition. This, too, can make biometrics inaccessible. The CREATE response discusses several examples of how these biases can creep into data sets.
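To see how this plays out, here is a small, self-contained sketch (our illustration, not code from the response) in which one group is under-represented in training data. The groups, features, and numbers are invented assumptions, but the accuracy gap they produce mirrors the problem described above:

```python
# Our illustrative sketch: under-representation in training data lowers
# accuracy for the under-represented group. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simulate a toy match/no-match task whose feature distribution
    differs between groups (e.g., a different gait pattern)."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(2000, shift=0.0)  # well-represented group
Xb, yb = make_group(50, shift=1.5)    # under-represented group

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# On fresh samples, accuracy is typically much lower for the group
# the training data mostly ignored.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 2))
```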

Lack of personal agency and privacy

The risks of biometric failures can also be higher for people with disabilities. For example, if a disability is rare, it can be harder to keep the associated data secure, and more likely that different people with similar impairments are confused with each other. In addition, biometrics might be used to label someone as disabled without their permission, an abuse of personal agency and privacy. Finally, a biometric system may implicitly enforce what it means to be “human” when it fails to recognize a disabled body and denies access to services as a result.

What solutions did CREATE recommend?

These problems are difficult, but not impossible, to solve.

Include people with disabilities in the design

CREATE’s first recommendation is to ensure that people with disabilities are given the chance to help design and assess biometric systems. Participatory design, which includes people with disabilities as important stakeholders in the design process, is a good first step. However, true equity will require that people with disabilities can enter the technology workforce, so that they can directly build and improve such systems. This requires access to higher education programs; access to conferences and events where research and products are discussed, presented, and shared; and accessible tools for programming biometrics. In addition, the disability community needs to be involved in policy and decision making around biometrics.

Set standards for algorithm accessibility

Next, we need new standards for algorithm accessibility, just as we have standards for web page accessibility. These should include expectations about testing with disabled populations and about collecting unbiased data.
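As one hedged sketch of what a testable standard might look like (the tolerance, group labels, and error rates below are all hypothetical, not anything proposed in the response), an audit could require that no group’s error rate stray too far from the best-served group’s:

```python
# Our illustrative sketch of an algorithm-accessibility audit check.
# The tolerance and the numbers in the example report are assumptions.
from dataclasses import dataclass

@dataclass
class GroupResult:
    group: str
    error_rate: float

def meets_parity_standard(results: list[GroupResult],
                          tolerance: float = 0.02) -> bool:
    """Pass only if no group's error rate exceeds the best-served
    group's error rate by more than `tolerance`."""
    best = min(r.error_rate for r in results)
    return all(r.error_rate - best <= tolerance for r in results)

# Hypothetical audit report for a face-recognition system:
audit = [
    GroupResult("non-disabled users", 0.03),
    GroupResult("users with facial differences", 0.11),
]
print(meets_parity_standard(audit))  # False: this system would fail the standard
```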

Ensure transparency, responsiveness and permission

Additionally, there should be rules requiring transparency; the ability to override or replace a biometric system when it fails to correctly interpret a disabled person’s actions or input; and protections against abuse, such as detecting disability without permission.