THE FACE RACE: PITFALLS OF FACIAL RECOGNITION

With countries utilizing technology well past what would previously have been imagined, it is crucial to understand how this area has expanded and the real concerns that emerge, particularly around privacy. Australia is investing in a facial recognition system called “The Capability”, and China recently unveiled a system called “Skynet” (which, interestingly, shares its name with the antagonist artificial intelligence in the Terminator franchise). As governments increasingly invest in such technology for law enforcement purposes, it becomes crucial to understand the potential for overreach that it is already beginning to reveal. 

The use of this technology has not only brought about a host of privacy and civil liberty issues but is also being viewed as an attempt by governments to introduce mass surveillance. This is increasingly evident as, in many parts of the world, facial recognition is woven into law enforcement and commonly used as an investigative tool. The key issue is that these facial recognition searches are increasingly being conducted by law enforcement officials globally without consent.  

An example of this is America’s Federal Bureau of Investigation (FBI), which, since 2011, has logged more than 390,000 facial recognition searches using federal and state databases made up mainly of driver’s licence photos. These searches are conducted without consent from any licence holders and with minimal transparency.  

This also occurs in the United Kingdom, where the Metropolitan Police used facial recognition CCTV to scan thousands of shoppers entering and exiting popular malls in search of people of interest. Although signs reading “Live Facial Recognition” were present, they were positioned next to the cameras, so those recorded only learned they were being scanned after it had already taken place and had no opportunity to opt out. This erodes a person’s right to a private life by taking the choice away entirely.  

Facial recognition technology also has some very practical shortcomings: despite being widely used, it is still very much a work in progress.  

A US federal study found that Asian and African people are up to 100 times more likely to be misidentified by various facial recognition algorithms than a Caucasian person. The study also found that Native Americans had the highest rate of false positives, and that children and the elderly were significantly more likely to be misidentified than other age groups. These findings illustrate algorithmic bias and how this technology could adversely impact minority groups and individuals by creating baseless police encounters and interrogations due to false matches.  

This occurred with Amara Majeed, who was wrongly accused by Sri Lankan authorities of being involved in last year’s Easter terror attacks. Investigators mistakenly matched Majeed’s photo through the use of facial recognition technology. She was repeatedly harassed by law enforcement and had her photo wrongly distributed through the media. This demonstrates how facial recognition software remains imperfect and can lead to misidentification, particularly of minority groups. 

Currently, there is major concern that US law enforcement may use facial recognition on those protesting the killing of George Floyd.  

In early June, a leaked memo obtained by BuzzFeed News showed the U.S. Justice Department had authorized the Drug Enforcement Administration to conduct covert surveillance and other investigations of those protesting the police killing of George Floyd. More recently, it was reported that the Department of Homeland Security deployed helicopters, airplanes and drones over 15 cities where demonstrators gathered, logging at least 270 hours of surveillance, according to Customs and Border Protection data. 

This is particularly concerning because police have used surveillance and facial recognition to target demonstrators for arrest in the past. In 2015, facial recognition technology was used to track and arrest Baltimore protesters reacting to the death of Freddie Gray, a young Black man who died in police custody from spinal injuries for which no one was held responsible. Protesters with outstanding warrants were identified using facial recognition and arrested. 

In response, major tech companies have called for change.  

Microsoft has stated it will wait for federal regulation before selling facial recognition to US police departments. Amazon has put a one-year halt on police use of its facial recognition software, “Rekognition”. IBM has said that it will no longer offer facial recognition software and that technology should not promote racial injustice.  

In addition to these efforts, most major tech companies have been exercising a high degree of caution when developing facial recognition software. However, these efforts to slow development may prove ineffective as new companies emerge to challenge the status quo, potentially putting an end to privacy. 

One such company is Clearview AI, a three-year-old firm offering facial recognition software unlike any other. Clearview is estimated to hold a database of more than 3 billion images of faces, by far the largest of any facial recognition product. These images were gathered by scraping all publicly available images from social media giants like Facebook, Instagram and YouTube, as well as many other websites, without clear consent. Although Clearview received cease-and-desist letters from Google, Facebook and Twitter, its CEO defended the data scraping, claiming it is his First Amendment right to collect public photos.  

Currently, Clearview has a range of paid and unpaid clients, including law enforcement agencies in 26 countries such as the US, Australia, Brazil, Canada, Spain, Sweden and the United Kingdom. Its clients also include private companies like Walmart and Macy’s (a paying customer that has completed over 6,000 searches) and a sovereign wealth fund in the United Arab Emirates, a country with notable human rights abuses.  

While facial recognition is not new, Clearview has profited by letting users search an enormous database without any consent from the people whose images it stores. Those images could well include yours or mine, and this is why we need to think about regulation now.  

Facial recognition has many benefits, such as identifying persons of interest, helping prevent crime and enhancing security through biometric data. However, as we further integrate this technology into our lives, we must consider the price we are willing to pay. Using facial recognition to target protesters of Freddie Gray’s death, and potentially protesters of George Floyd’s killing, is unacceptable and undermines our right to demonstrate. Without regulation of how facial recognition can be used and how its databases are created, we may very well give up our personal privacy altogether. This regulation needs to happen now: the technology is developing at a rapid rate, and before long it may be too late. 
