As the debate surrounding facial recognition intensified in 2019, you could be forgiven for not being able to make sense of the cacophony. Not only is facial recognition itself a subset of the broader privacy discussion, but it brings to bear a diverse assortment of issues whose relevance depends on the technology’s use, objective, and target audience in each instance. Smartphone facial recognition differs from police use of facial recognition, which differs from Department of Motor Vehicles (DMV) use of facial recognition, which differs from retail use of facial recognition; the list could go on. Recently, the discussion has gained even more urgency, as some law enforcement agencies have embraced Clearview AI, a company that uses facial recognition with questionable business practices and potentially nefarious motives.

Heading into 2020, local, state, and federal lawmakers are beginning to proffer legislative responses, even though many correspond only to certain elements of the broader facial recognition ecosystem. Here at ACT | The App Association, we want to help you parse the increasingly large and tangled web that is the facial recognition narrative so you can identify which thread is worth your time to follow.

Before diving too deep, though, it might be prudent to provide some baseline definitions. Facial recognition is typically understood as the identification of individuals through assessment of the “spatial and geometric distribution of facial features,” as captured in photographs or videos. The “recognition” is achieved when those features are digitized into an individualized algorithmic template, which is then stored in a database and compared against one or more other templates. If two templates meet a pre-determined similarity threshold (which users can typically adjust prior to a query), those faces are deemed to be a match.

In practice, facial recognition is used to accomplish one of two goals: facial verification or facial identification. Facial verification confirms an individual’s claimed identity by comparing their face to the record on file for that identity, as in one-to-one passport matching at international borders or when a facial scan authenticates a user on a password-protected device. Facial identification, by contrast, works to identify an individual absent any claimed identity. For instance, police might use a still image of an unknown individual, sourced from a security camera, to search against a mugshot database. This is commonly referred to as a “one-to-many” search and is a form of facial identification.
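The threshold logic behind these two kinds of searches can be illustrated with a toy sketch. Real systems derive templates from deep neural networks and use proprietary scoring, but the decision step looks roughly like this (the vectors, the 0.9 threshold, and the function names below are illustrative assumptions, not any vendor’s actual product):

```python
import math

def cosine_similarity(a, b):
    # Score how alike two facial templates (feature vectors) are, from -1 to 1.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, claimed_template, threshold=0.9):
    # One-to-one (verification): does the probe match the single claimed identity?
    return cosine_similarity(probe, claimed_template) >= threshold

def identify(probe, database, threshold=0.9):
    # One-to-many (identification): return the best-scoring identity above the
    # threshold, or None if no enrolled template is similar enough.
    best_name, best_score = None, threshold
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical enrolled templates and a probe image's template:
database = {"alice": [1.0, 0.0, 0.2], "bob": [0.0, 1.0, 0.1]}
probe = [0.9, 0.1, 0.2]
print(verify(probe, database["alice"]))   # one-to-one check
print(identify(probe, database))          # one-to-many search
```

Note that the threshold embodies a policy choice: lowering it returns more candidate matches (and more false positives), while raising it rejects more genuine matches. That trade-off is exactly what the accuracy studies cited by moratorium advocates probe.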

Legislative Responses

The categories provided above demonstrate the many different permutations a facial recognition query can take. A search can require informed consent, or it can occur without the individual ever knowing that their photograph was collected. It can prevent identification fraud, or it can track you as you make the rounds at your local grocery store. It can occur statically or in real time (albeit currently at a much higher cost). With such a breadth of issues on the table, it’s no wonder that lawmakers have approached facial recognition piece by piece. Even so, one can sort most of the recent legislative responses to facial recognition into one of three buckets: moratoria/bans, regulation, and further study.

Moratoria

Moratoria on facial recognition have easily generated the most buzz and media attention among the different responses thus far. Their structure is simple: no uses of the technology (in most cases, no governmental uses) for a prescribed period of time. Citing civil-libertarian concerns with the growing surveillance state and a number of recent studies finding that certain commercial facial recognition products are less accurate when identifying women and people of color, many privacy advocates increasingly favor delaying the rollout of facial recognition in public spaces until, at minimum, a public conversation on the ethics of the technology can take place.

Last year, local governments in San Francisco, CA, Oakland, CA, and Somerville, MA, enacted moratoria that prevent government uses of facial recognition. Just a few weeks ago, city leaders in Portland, OR, began considering a ban that would go a step further by precluding government agencies and private businesses from using facial recognition. These efforts are beginning to percolate upward, with state legislators now contemplating temporary bans as well. So far this year, state lawmakers have introduced bills to preclude government uses of facial recognition in Washington (S.B. 5528, H.B. 2856, which compete against other measures discussed below), Massachusetts (S.B. 1385/H.B. 1538), New York (S.B. 7572 – permanent ban), and New Hampshire (H.B. 1642 – permanent ban).

In Congress, Senators Booker (D-NJ) and Merkley (D-OR) introduced the first bill to create a facial recognition moratorium just two weeks ago, though in a more limited form than many of the existing moratoria. The bill would prevent federal law enforcement from using facial recognition without a court warrant until Congress implements guidelines for the technology and would prevent states from using federal funds for facial recognition. In the other chamber, the House Oversight Committee held a series of hearings on the topic, with some Committee members expressing sympathy for the idea of a moratorium.

Regulation

In lieu of an outright ban, other lawmakers have attempted to craft guardrails to guide the continued use of facial recognition. Some states are attempting a holistic approach: Washington state, for example, is considering two separate measures, one for government uses (S.B. 6280) and one for private uses (S.B. 6281 – also the vehicle for broader privacy reform). These bills would each require facial recognition users to submit to a number of auditing, reporting, and transparency standards, as well as notice and consent for private uses and restrictions on ongoing or persistent surveillance for government uses. By contrast, Maryland’s S.B. 476 rolls public and private uses of facial recognition into a single bill while instituting similar regulatory mechanisms. A new Indiana proposal (H.B. 1238) would require law enforcement using surveillance technology to prepare a surveillance technology impact and use policy. Already on the books, the Illinois Biometric Information Privacy Act requires private entities to obtain informed consent prior to collecting consumers’ biometric information, which includes facial recognition scans and templates, and is the source of a new class-action lawsuit against Clearview AI.

Other regulation-oriented measures take a smaller bite of the apple. For example, lawmakers in Illinois are looking to supplement their existing biometric law with a new one (S.B. 2269) that would prevent the Secretary of State from providing facial recognition search services to any federal, state, or local law enforcement agency for the purpose of enforcing federal immigration laws. California, New Hampshire, and Oregon last year passed laws that prohibit law enforcement from using facial recognition in police-worn body cameras. Indiana, New Jersey, South Carolina, and Washington state are all pursuing similar bills this session.

Other aspects of facial recognition currently being debated in the states include: defendants’ right to know if facial recognition was used against them (Maryland S.B. 46), consumers’ right to know when retailers use facial recognition in stores (Vermont H.B. 595), and the use of real-time facial recognition (Michigan S.B. 342).

Meanwhile, Congress has been slightly less active than the states, though a few measures to regulate facial recognition are currently active. Late last year, Senators Christopher Coons (D-DE) and Mike Lee (R-UT) introduced a bill requiring federal law enforcement investigators to obtain court orders to use facial recognition for ongoing surveillance (defined as lasting longer than 72 hours). This bill stands alongside Senators Schatz’s (D-HI) and Blunt’s (R-MO) effort to require consent for private uses as one of the few measures to regulate facial recognition at the federal level. As efforts to pass comprehensive privacy legislation evolve, we could also see the inclusion of facial recognition-specific provisions, whether new or adapted from these existing bills.

Further Deliberation

The last tranche of responses comes from lawmakers who contend greater deliberation is required prior to the use of facial recognition. For instance, a recent New Jersey measure (A.B. 1210) would create avenues for citizen engagement by requiring the Attorney General to organize public hearings before law enforcement could use facial recognition in a given jurisdiction. In yet another proposal in Washington state (H.B. 2761), local law enforcement would need to secure explicit permission from each city council before implementing the technology in that jurisdiction.

Other state bills call for further study of facial recognition. Similar to many consumer privacy study bills we saw last year, these bills call for a range of stakeholders, typically including representatives from law enforcement, government, academia, and private industry, to work together over the course of a year and file a report with the legislature. In Virginia, H.J. 59 would require stakeholders to “(i) identify how the technology is currently being deployed; (ii) assess current privacy standards and data management; (iii) identify biases and concerns with the implementation of facial recognition and artificial intelligence; and (iv) provide recommendations for future regulations on the technology.” New York’s S.B. 6623 contemplates a similar approach.

Conclusion

We hope this survey of facial recognition legislation is a helpful starting point as you determine how best to engage with this issue. Clearly, a wide range of responses remain in play, and through the rest of the year, we are likely to see a host of new ones emerge as well. Rest assured that we are committed to helping untangle this knot, even as it is likely to grow in size.