Private Capital Investors Exposed to Regulatory Risk Through Investments in Facial Recognition
November 3, 2022
As communities, public interest groups, and politicians have raised red flags about government use of facial recognition technology, private equity and venture capital investors that own companies in the industry may be exposed to financial and regulatory risk. The following private equity and venture capital-owned companies provide facial recognition technology to local, state, or federal law enforcement agencies in the US and around the world:
| Company | Private Equity Owner | Date Acquired |
| --- | --- | --- |
| Cognitec | Peninsula Capital | 2022 |
| Pangiam | AE Industrial Partners | 2020 |
| FaceFirst | Hollencrest Capital, Innovate Partners, Solis Capital | 2016 |
| Idemia | Advent International | 2011 |
| Company | Venture Capital Investors | Date of Last Investment |
| --- | --- | --- |
| Clearview AI | Khalili Brothers; Flexcap; Andrew Vigneault; Hal Lambert; Kirenaga; Peter Thiel | 2022 |
| Realeyes | Bouygues Telecom Initiatives; NordicNinja VC; BaltCap; Global Brain; NTT Docomo Ventures; Karma Ventures; Molten Ventures; Horizon 2020 SME Instrument; Tera Ventures; The Unilever Foundry; Entrepreneurs Fund; European Commission | 2020 |
| Oosto (AnyVision) | SoftBank Investment Advisers; Talent Resources Ventures; DFJ Growth; Eldridge (Greenwich); Elysian Park Ventures; Lightspeed Venture Partners; O.G. Tech; Qualcomm Ventures; Robert Bosch | 2020 |
| Clarifai | Canada Pension Plan Investment Board; New Enterprise Associates; NextEquity Partners; Numeta Capital; SineWave Ventures; Trousdale Ventures; Menlo Ventures; New York University Endowment; TriplePoint Capital; Winston Venture Capital; Zavain Dar; R/GA Accelerator; Westfield Labs; Corazon Capital; GV; Jan Erik Solem; LDV Capital; Lux Capital; NVIDIA GPU Ventures; NYU Entrepreneurial Institute; NYU Tandon School of Engineering; Osage University Partners; Qualcomm Ventures; Union Square Ventures | 2021 |
| Corsight | Awz Ventures (Avi Libman) | 2020 |
| NTechLab | Da Vinci Capital Management; Mubadala Investment Company; Russian Direct Investment Fund; Typhoon Digital Development; Day One Ventures; RT-Business Development; Vardanyan, Broytman & Partners; Face Recognition Prize Challenge; Aleksandr Provotorov | 2020 |
| Suspect Technologies | Chaak Ventures; Mark Cuban; Swapnil Shinde; Princeton Alumni Entrepreneurs Fund | 2021 |
| VisionLabs | MTS AI (Alexander Khanin) | 2022 |
| iProov | Sumeru Equity Partners; Department of Homeland Security Science and Technology Directorate; Tech Nation; Fintech Valley; Startupbootcamp; Microsoft for Startups; JRJ Group; Innovate UK | 2022 |
Following reports that law enforcement agencies used facial recognition to identify people protesting against police brutality in May and June 2020, some of the world’s largest publicly traded companies – Amazon, IBM, and Microsoft – implemented moratoria on selling facial recognition tools to police departments. While Amazon initially implemented a one-year moratorium, the company extended it indefinitely in May 2021; IBM and Microsoft have committed to ending or further restricting their own use of facial recognition. Companies owned by private equity or venture capital have made no such commitments.
An article from the Electronic Frontier Foundation (EFF), which calls for a total ban on government use of the technology, explains the different ways facial recognition can be implemented:
Today, the most widely deployed class of face recognition is often called “face matching.” This can be used for “face identification,” that is, an attempt to link photographs of unknown people to their real identities. For example, police might take a faceprint from a new image (e.g., taken by a surveillance camera) and compare it against a database of known faceprints (e.g., a government database of ID photos). It can also be used for “face verification,” for example, to determine whether a person may have access to a location or device. Other forms of face matching include “face clustering,” or automatically assembling together all the images of one person, and “face tracking,” or automatically following a person’s movements through physical space.
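The matching step the EFF describes – comparing a probe faceprint against enrolled faceprints – can be sketched in a few lines. The vectors, names, and threshold below are toy illustrations; production systems derive high-dimensional faceprint embeddings from deep neural networks rather than the three-dimensional stand-ins used here.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.8):
    """Face identification: best-matching identity above threshold, else None."""
    best_name, best_score = None, threshold
    for name, faceprint in database.items():
        score = cosine_similarity(probe, faceprint)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

def verify(probe, enrolled, threshold=0.8):
    """Face verification: does the probe match one specific enrolled faceprint?"""
    return cosine_similarity(probe, enrolled) >= threshold

# Toy "faceprints" (illustrative only; real embeddings have hundreds of dimensions).
db = {"alice": [0.9, 0.1, 0.0], "bob": [0.0, 0.8, 0.6]}
probe = [0.88, 0.12, 0.02]
```

The threshold is where the civil liberties concerns discussed above enter: set it too low and the system produces false matches of the kind Reuters documented; raise it and the system fails to match at all, which is why critics argue accuracy tuning cannot resolve the underlying risks.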
With around half of US adults already in law enforcement facial recognition systems, lawmakers and organizations are calling for increased regulation around the technology. As of May 2022, 17 cities across the United States had banned the use of facial recognition technology. Some states have banned specific technologies – New Jersey and Illinois have both placed bans on Clearview AI, a venture capital-backed company found to be selling access to the biometric information collected by its system.
While some companies are focused on selling directly to retailers and other private entities, others rely heavily on government contracts. Idemia, acquired by Advent International in 2011, is the sole source vendor for facial recognition at US airports, having received over $100 million from the US Department of Homeland Security. Idemia also has contracts with agencies at the state level in Louisiana and Montana, where state legislatures and municipalities have considered bans on facial recognition.
FaceFirst is another private equity-owned facial recognition company that has come under scrutiny. A 2020 Reuters investigation found that FaceFirst regularly misidentified people across 200 Rite Aid stores over eight years; the drugstore chain has since stopped using the technology. Rite Aid installed most of the cameras in areas where Black and Latinx shoppers were the largest group, meaning false matches would disproportionately harm people of color if acted upon: “At one store Reuters visited, a security agent scrolled through FaceFirst ‘alerts’ showing a number of cases in which faces were obviously mismatched, including a Black man mixed up with someone who was Asian.” Rite Aid eventually ended its use of facial recognition due to concerns shared with other retailers: “This decision was in part based on a larger industry conversation… other large technology companies seem to be scaling back or rethinking their efforts around facial recognition given increasing uncertainty around the technology’s utility.”
Illinois was the first state with a comprehensive law regulating facial recognition under its Biometric Information Privacy Act (BIPA), which requires companies that collect and use biometric data to obtain consent from individuals, destroy identifying data after a certain amount of time, and securely store biometric data. Concerned Illinois residents filed a class action lawsuit against FaceFirst in 2020 under BIPA. Though this case has been stayed at FaceFirst’s request, another case brought by the ACLU against Clearview AI in 2020 under the same Illinois law had big implications for the tech company: “The central provision of the settlement restricts Clearview from selling its faceprint database not just in Illinois, but across the United States. Clearview is permanently banned, nationwide, from making its faceprint database available to most businesses and other private entities. The company will also cease selling access to its database to any entity in Illinois, including state and local police, for five years.”
FaceFirst, owned by Hollencrest Capital, Innovate Partners, and Solis Capital, does not publicly share its client list, limiting how much the public knows about which entities may be using its tools. When BuzzFeed approached the company in 2018 for a story about how retailers use its technology, “FaceFirst declined to share details about its retail clients, citing nondisclosure agreements with the companies.” In a case study available to members only, FaceFirst shared that California’s Automated Regional Justice Information System (ARJIS) used FaceFirst from 2012 to 2019. Through ARJIS, 30 local, state, and federal law enforcement agencies had access to FaceFirst technologies. While the San Diego Police Department used the system most frequently, other agencies like US Immigration and Customs Enforcement (ICE) and the US Marshals also used the tools. EFF questioned ICE’s use of the data.
Despite concerns about ICE’s use of the technology, and reports like those from Reuters, FaceFirst’s business is expanding – as of 2022, the FaceFirst system “runs more than 12 trillion face comparisons a day for its customers, up from 100 million in 2017.”
Regulatory Risk from US Congressional Action
In February 2020, Senators Jeff Merkley and Cory Booker introduced the Ethical Use of Facial Recognition Act, which would place a moratorium on federal use of the technology while Congress determines how to regulate facial recognition. Merkley urged fellow lawmakers to take concerns about facial recognition seriously:
“Congress has an important responsibility to make sure that the government does not abuse emerging technology in ways that violate Americans’ right to privacy or that disproportionately and wrongfully single out Americans of color. Facial recognition technology is both a powerful and a problematic new frontier. Before this unregulated market becomes too big to tame, Congress needs to put a moratorium on federal use of facial recognition while we develop responsible and ethical guidelines for its use going forward.”
Since Merkley and Booker’s bill, several other pieces of federal legislation restricting facial recognition have been introduced. Representative Don Beyer proposed the Stop Biometric Surveillance by Law Enforcement Act in June 2020, which would prevent federal agencies from using facial recognition software on body camera footage.
In June 2021, Merkley and several members of the US House of Representatives and Senate introduced the Facial Recognition and Biometric Technology Moratorium Act, legislation that would temporarily halt federal use of facial recognition until Congress passes a law regulating the technology. In announcing the bill, members of Congress cited a national study on algorithmic bias in facial recognition and the experiences of Black men wrongly accused of crimes based on faulty matches.
In September 2022, Senators Ed Markey and Ron Wyden sent a letter to Immigration and Customs Enforcement (ICE) urging the agency to end its use of facial recognition technology. The letter referenced a 2022 Georgetown Center for Privacy and Technology report which found that “ICE has used facial recognition technology on the driver’s license photographs of almost one-third (32%) of all adults in the United States, and has access to the driver’s license data of almost three-fourths (74%) of them — in most cases without obtaining a search warrant.” The letter included a series of questions for ICE about how the agency uses facial recognition and which companies it contracts with to collect information. The Senators asked ICE to respond to the letter by October 3, 2022. A response has not yet been made public.
Most recently, the Facial Recognition Act proposed by Congressman Ted Lieu would prohibit the use of facial recognition without a warrant supported by probable cause that a felony has been committed. Furthermore, facial recognition could no longer be used for immigration enforcement or to identify someone at a protest. While it does not go as far as other proposed legislation in banning the use of the technology altogether, the Facial Recognition Act would place strict limits on the largely unregulated use of such tools.
EU Considers Facial Recognition Ban
Under the current EU General Data Protection Regulation (GDPR), adopted in 2016 and in force since 2018, the use of facial recognition technology and other personal data processing systems is only permitted when consent is obtained, when legally obligated, or “for the performance of a task carried out in the public interest.” The public interest carve-out allows biometric data to be used by law enforcement agencies, which has increasingly become a source of tension.
In October 2021, the European Parliament passed a non-binding resolution to ban police from using facial recognition technology. This is stronger than what has been proposed in the European Commission’s AI Act, which as currently written would still allow the use of facial recognition technology to solve certain “serious” crimes. However, the exact exceptions have not been determined. As of the latest legislative negotiations around the act in October 2022, the ban would prohibit “placing or making available on the market, the putting into service or use of remote biometric identification systems that are or may be used in publicly or privately accessible spaces, both offline and online.”
While exceptions are still likely to be made, lawmakers appear to favor a broad ban on the technology. A strict ban would undoubtedly impact returns for investors in facial recognition technology, as many of these firms work with businesses and governments across the European Union.
As the industry expands and those concerned about the right to privacy continue pushing for legislation that bans or restricts facial recognition and other biometric technologies, some of the world’s largest companies, like Google, have reviewed and restricted their facial recognition policies. In 2020, IBM completely ended its use, research, and development of facial recognition technology: “IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” Private capital managers should encourage the companies they invest in to do the same – at minimum, companies should lead in these efforts by cooperating with requests for transparency and committing to comply with regulations as they arise. Investors should also consider exiting the industry completely, as the surveillance technology inherently threatens privacy and other human rights.