A colleague of mine at Brunel shared this video about facial recognition. It captures some of the ethical challenges presented by this technology – it is well worth a watch.
Key points covered in this video include:
- Start – How it’s depicted in the media.
- 1:00 – Not just humans.
- 1:57 – Privacy and civil liberty issues. Example – app that can be used for stalking women.
- 3:00 – Widely used in law enforcement in the US. 1 in 2 US citizens has had their face checked against the driving licence database.
- 4:20 – This technology has been used, recently, to identify protesters at Black Lives Matter events.
- 5:15 – There is no framework or set of rules governing how face recognition technology is used – for instance, regarding our rights if we don’t want our faces scanned by face recognition cameras.
- 6:40 – Australia’s system, named “The Capability”.
- 7:00 – How it is being used in China. For instance, faces captured by face recognition cameras can be matched with people’s cars, their relatives, and the people they meet frequently.
- 8:25 – Questions regarding personal freedoms.
- 8:40 – This technology is widely used, despite still being very much an experimental technology with high error rates. E.g., in one test, only 8 out of 42 face matches were correct (i.e., a failure rate of more than 80%).
- 8:58 – The system’s blind spots. Example of Amazon’s face recognition system, which is said to have a high overall accuracy rate… but that is because the rate is based on identifying white male faces. The system struggles with other faces, particularly dark-skinned female faces. Asian and African American people were up to 100x more likely to be misidentified than white men.
- 10:10 – Some law enforcement agencies (LEAs) are using these systems in ways they weren’t designed to be used – e.g., to identify terrorist suspects. This is a problem because LEAs place high faith in the matching results produced by this (untested) technology.
- 12:00 – For years, tech companies approached face recognition with caution. However, this is no longer the case. Focus on Clearview.ai, a company that has been described as ‘a search engine for faces’, using images scraped from the internet – e.g., from social media – including very old pictures and photos of underage children.
- 14:55 – Over 600 LEAs have been using Clearview.ai. Its database includes photos uploaded by other people without our consent, and even photos that were uploaded and later made private.
- 15:45 – The position of Clearview’s CEO is that people have no choice about whether or not they are captured by face recognition cameras.
- 16:45 – Clearview has also been used by employers, white supremacy organisations, and foreign governments.
- 18:55 – San Francisco banned the use of face recognition.
- 19:10 – Illinois passed a law requiring explicit permission for the collection and use of face recognition images.
- 19:30 – Some companies are pulling back from the technology, temporarily.