Your Face: The Final (Privacy) Frontier

Mar 21, 2022 | tech ethics

Our enthusiastic adoption of Big Tech’s social media platforms created a trove of unique personal data, which has been used to market to and manipulate us. But increasingly, a new type of information is being harvested from us – facial recognition data – and its use could be even more terrifying.

Facebook introduced facial recognition in 2010, allowing users to easily auto-tag people in their photos. However, with the introduction of GDPR, the EU mandated that the feature be changed from an on-by-default setting to opt-in. And, after a raft of lawsuits, Facebook finally shut the feature down in its remaining markets in late 2021.

Facial recognition technology outside social media has allowed all of us to unlock our mobile phones, go through passport control efficiently, or set up an online bank account. But when live facial recognition (LFR) technology is used in public places to identify and categorise us without our knowledge, there are real privacy risks.

I am deeply concerned about the potential for LFR technology to be used inappropriately, excessively or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant.

Elizabeth Denham, UK Information Commissioner

A report from Georgetown Law found that half of all US adults – 117 million people – have their faces in facial recognition networks.

Facial Recognition Tech 101

Facial recognition technology enables the automatic detection and identification of a person by matching two or more faces from images. The roots of facial recognition lie in the 1960s, when Woodrow Wilson Bledsoe developed a system of measurements to classify photos of faces. A new, unknown face could then be compared against the data points of previously entered photos.

Current technologies use artificial intelligence (AI) to scan the pattern of a human face, creating a ‘biometric template’ by detecting and measuring various facial features. The system can then compare this template against features extracted from a single face, or from multiple faces in an existing database. These datasets of human faces are often created by scraping images from social media and other websites and can contain billions of images.
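
To make the idea of a ‘biometric template’ concrete, here is a minimal sketch using the open-source face_recognition Python library, which reduces each detected face to a 128-number encoding and compares encodings by distance. The file names are placeholders and real deployments use far larger databases and proprietary models – treat this as an illustration of the matching step, not a description of any particular vendor’s system.

```python
# Minimal sketch: build 'biometric templates' (encodings) from two photos and
# compare them, using the open-source face_recognition library (a dlib wrapper).
# File names are placeholders; real systems use much larger databases.
import face_recognition

# Load two photos and reduce the first detected face in each to a 128-number
# encoding -- this encoding is, in effect, the 'biometric template' above.
known_image = face_recognition.load_image_file("known_person.jpg")
unknown_image = face_recognition.load_image_file("unknown_person.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Compare the templates: a smaller distance means more similar faces, and
# compare_faces applies a default distance threshold to decide on a match.
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
is_match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]

print(f"distance={distance:.2f}, match={is_match}")
```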


After a face is detected, the further steps of the technology can include:

  • Facial verification – Your face can be compared by a device against a single stored image to determine if it is a match – for example, compared against your stored facial scan to unlock your phone or to board a plane.
  • Facial identification – Your face can be compared against a database of faces – for example, a database of driver’s license photos or mugshots – to see if it’s a match for a potential suspect or another person of interest. (A short code sketch contrasting this 1:N search with the 1:1 verification above follows after this list.)
  • Facial attribute classification – Your face can be analysed in an attempt to guess demographic attributes like your age, gender or ethnicity. This process can also detect accessories and facial hair.
  • Facial affect recognition – Your facial expressions can be analysed in real time or on video in an attempt to label your emotions or other inner qualities, including personality traits, mental health and intelligence. Your expressions can also be analysed in an attempt to label even more complex characteristics like sexuality, political beliefs or potential criminality.

Source: Algorithmic Justice League
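
The difference between verification and identification is essentially the difference between a 1:1 comparison and a 1:N database search. The sketch below illustrates that distinction, assuming each face has already been reduced to a numeric template; the names, vectors and matching threshold are invented purely for illustration.

```python
# Illustrative sketch of 1:1 verification vs 1:N identification, assuming each
# face has already been reduced to a numeric template (embedding). The names,
# vectors and threshold below are made up purely to show the logic.
import numpy as np

THRESHOLD = 0.6  # distance below which two templates count as the same person

def verify(probe, stored_template):
    """1:1 verification, e.g. unlocking a phone against one stored scan."""
    return np.linalg.norm(probe - stored_template) < THRESHOLD

def identify(probe, database):
    """1:N identification: search a whole database for the closest match."""
    best_name, best_distance = None, float("inf")
    for name, template in database.items():
        distance = np.linalg.norm(probe - template)
        if distance < best_distance:
            best_name, best_distance = name, distance
    return (best_name, best_distance) if best_distance < THRESHOLD else (None, best_distance)

# Hypothetical 4-dimensional templates (real systems use 128+ dimensions).
database = {
    "alice": np.array([0.1, 0.9, 0.3, 0.5]),
    "bob":   np.array([0.8, 0.2, 0.7, 0.1]),
}
probe = np.array([0.12, 0.88, 0.31, 0.52])

print(verify(probe, database["alice"]))  # True  -> 1:1 match
print(identify(probe, database))         # ('alice', small distance) -> 1:N hit
```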

Facial recognition technologies are being used in a wide range of private industries and sectors, as well as in government and policing. In sports, football clubs use it in their stadiums to identify individuals who have been banned from attending the club’s matches. (In 2001, law enforcement officials used facial recognition on crowds at Super Bowl XXXV, creating the technology’s first big controversy.) In HR, facial recognition has been used to analyse the facial expressions of job candidates in interviews.

AI-powered facial recognition technologies have been of particular interest to governments and law enforcement agencies, which has sparked intense debate about the technology’s potential impact on civil rights.

Facial Recognition Technology Controversies

Concerns have been raised that facial recognition technology, as it currently exists, has several problematic areas, including:

  • Racial and gender biases built into the systems.
  • Questionable accuracy and lack of public testing of existing systems in use.
  • Privacy or legal violations in the sourcing of the photos for databases.
  • Misuse by some governments and law enforcement agencies.

Racial and gender bias

Dr Joy Buolamwini, a computer scientist at MIT, uncovered skin-type and gender bias in AI facial recognition from companies like Microsoft, IBM and Amazon in her thesis research, now known as Gender Shades. Dr Buolamwini found that because many facial recognition algorithms were initially trained on mostly white, mostly male faces, they have much higher error rates for anyone who is not a white man. As well as fundamental errors in detection (in the Netflix documentary Coded Bias, Dr Buolamwini showed how she had to wear a white mask to even be detected by the tech), this has led to people being wrongfully arrested in the US.
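
The core of a Gender Shades-style audit is simple: instead of reporting a single overall accuracy figure, break the error rate down by demographic subgroup. The toy calculation below shows the idea with invented records; it is not Dr Buolamwini’s actual methodology or data.

```python
# Toy illustration of disaggregated evaluation: compute a model's error rate
# per demographic subgroup rather than one overall number. The records below
# are invented purely to show the calculation.
from collections import defaultdict

# Each record: (subgroup label, whether the model's prediction was correct)
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("darker-skinned female", False), ("darker-skinned female", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for subgroup, correct in results:
    totals[subgroup] += 1
    if not correct:
        errors[subgroup] += 1

for subgroup in totals:
    print(f"{subgroup}: error rate {errors[subgroup] / totals[subgroup]:.0%}")
```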

https://twitter.com/rajiinio/status/1131268513241468929

Mutale Nkonde, fellow of the Digital Civil Society Lab at Stanford and member of the TikTok Content Advisory Council, notes that even if systems are operating perfectly, issues with gender identification remain:

“Labels are typically binary: male, female. There is no way for that type of system to look at non-binary or even somebody who has transitioned.”

Mutale Nkonde

After her work at MIT on racial bias in facial recognition AI, Joy Buolamwini went on to found the Algorithmic Justice League and her TED Featured Talk on algorithmic bias has more than 1.5 million views.

Misuse by governments and law enforcement

China’s large-scale use of facial recognition technology, in combination with widespread surveillance cameras, has raised concerns about potential human rights violations in the country.

According to reporting by The Washington Post, the technology has been used to pick people out of crowds based on their age, sex and ethnicity, and to sound a ‘Uighur alarm’ that alerts police to the presence of people from the mostly Muslim minority.

In the UK, the Metropolitan Police has been deploying live facial recognition tech in London despite warnings from the non-profit civil liberties campaign group Big Brother Watch that the technology suffers from high failure rates and misidentifications; the Met’s own internal testing showed it had particular difficulty identifying women and performed more poorly for people of colour.

Today, several US cities – San Francisco, Oakland and Berkeley in California, and Boston and Somerville in Massachusetts – have completely banned the use of facial recognition tech by government entities. Several large facial recognition vendors, including Amazon, IBM and Microsoft, have also halted sales of their technology to law enforcement.

Privacy Tips

As the use of, and complaints about, facial recognition evolve, it is worth thinking about the small ways we already interact with the technology every day, especially if privacy issues concern you.

  • Google and Apple Photos – You can disable face grouping in Google Photos. You can’t turn the corresponding feature off in Apple’s Photos app, but Apple says the machine learning runs privately on-device, so the recognition data never leaves your own smartphone or computer.
  • Unlocking a phone or computer – As the features work now, face unlocking typically happens only on the device itself, and that data is never uploaded to a server or added to a database.
  • Home security cameras – Only a handful of home security cameras currently include facial recognition. Face detection on Google Nest cameras is off by default. However, in the US privacy advocates are concerned about the potential inclusion of facial recognition in Ring cameras, a system that already shares data with police through its Neighbors app.
  • Face ‘altering’ apps – Be wary of apps like FaceApp, which became hugely popular by allowing people to ‘age’ themselves. Although the company says it doesn’t use the app to train facial recognition software, it’s difficult to predict what might happen in the future with the data that apps of this type collect.

My latest book My Brain Has Too Many Tabs Open is all about how to navigate the tricky business of living with technology and contains plenty of practical tips and strategies.