Exclusive: Changing the narrative around facial recognition

Rob Watts, CEO of Corsight AI, argues that facial recognition technology should be seen as a force for good in society.

The concept of monitoring buildings, assets and even people is not new. CCTV is an established practice: cameras are mounted on the sides of buildings, and screens in reception areas show activity throughout a site – inside and out. We can often look up and immediately see our own face on one of the monitors as we come and go, while others watch.

So why, despite this familiarity and the acceptance that monitoring people’s movements is legitimate, is facial recognition – an integration with video surveillance – so divisive?

Tony Porter, the outgoing Surveillance Camera Commissioner, is in the process of drafting guidelines for its use, and I have to say they cannot come soon enough. If this technology is to have the impact it could, we need to change the narrative that surrounds it. In the United States, for example, we hear of jurisdictions where officials have banned the technology over concerns – without evidence – that it could lead to an over-surveillance state. In the UK, following the Court of Appeal ruling, people read headlines that South Wales Police’s use of live facial recognition technology breached equality law and violated privacy rights. Yet after the judgment, Porter spoke of “sunlit uplands” for the use of this technology. With closer scrutiny of why people are placed on a watchlist and where the equipment can be deployed, the ruling made clear that this technology can be used lawfully. Following the court’s judgment, the development of police guidelines can therefore provide a route map for successful use.

This will help counter the shortage of positive stories demonstrating successful uses of facial recognition. At present there is no balanced narrative from which people can form a view, and a skewed argument tends to breed misunderstanding. That is why we so often hear that the technology erodes civil liberties and drags us towards an Orwellian society or Stalinist Russia, with every move recorded in a central government database. This fundamental misunderstanding needs to be addressed through a comprehensive education program, so that everyone is informed about the technology’s use, application and parameters. Wouldn’t it be better if facial recognition were seen as a hero rather than a villain? But how?

In my opinion, this can be achieved in three steps.

We need to talk about watchlists

Part of the fear of facial recognition stems from misunderstanding who is being watched and why. Part of Tony Porter’s current work with the Home Office is to review watchlists – how they are compiled and how they are audited – and I have to say this is a good move. As technology and security experts and providers, it is only right that we lead in explaining the technology, including how it works and the parameters within which it operates. While usage will differ between the private and public sectors, it is time to let the public know why watchlists exist.

This education program has two prongs. First, who is on watchlists – that is, only those who have committed crimes and need to be monitored. Second, what happens to your data? It is important to emphasise that privacy is built into the best facial recognition systems. When a camera monitors a crowded street, all the operator sees is a sea of blurred faces until a known suspect appears, alongside a score indicating how confident the technology is in the match. The operator can then decide whether or not to unblur the face, based on that confidence level and the displayed reason the person is on the watchlist.

The technology does not track “ordinary” people, it does not have to store their data (our software, for example, discards non-watchlisted faces and deletes their data within 0.6 seconds) and it does not track the people around them.
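The privacy model described above can be sketched in a few lines of code. This is a minimal illustration, not Corsight’s actual implementation: the class names, the confidence threshold and the `review_frame` function are all hypothetical, chosen only to show the decision logic of “blur everyone, alert only on confident watchlist matches, purge the rest”.

```python
# Illustrative sketch of a privacy-preserving watchlist check (hypothetical API).
# Every detected face stays blurred unless it matches a watchlist entry above
# a confidence threshold; all non-matching detections are purged, not stored.

from dataclasses import dataclass

@dataclass
class Detection:
    face_id: str        # identifier produced by a face-matching engine
    confidence: float   # match confidence in [0, 1] against the watchlist
    reason: str         # why the matched person is watchlisted ("" if no match)

def review_frame(detections, threshold=0.9):
    """Return (alerts, purged_count).

    alerts: detections shown to the operator, with confidence and the
            watchlist reason, so a human decides whether to unblur.
    purged_count: bystander detections discarded immediately, so no data
            on passers-by is retained.
    """
    alerts = [d for d in detections if d.reason and d.confidence >= threshold]
    purged = len(detections) - len(alerts)
    return alerts, purged
```

The key design point is that the system never makes the unblurring decision itself: it only surfaces a match score and a watchlist reason, leaving the final judgment to the operator.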

Monitoring applications

Another issue we as an industry need to address is how the technology is applied – why we monitor people in the first place. We have covered criminals, but what about health and safety after COVID-19? Retailers could use the technology to identify those wearing a mask improperly and thereby putting others at risk, helping to improve compliance. Alternatively, it could be used to control entry and exit. We have seen how many retailers currently have to station staff at their doors to control the flow of people. Facial recognition built into automatic doors can do this instead: if someone is wearing a mask, they can enter, freeing staff to add value elsewhere. That raises productivity and helps many retailers through a difficult time.

Facial recognition can be a force for good

As with any technology, an education program is about showing how it can make life better – just as Alexa has made ordering easier. The NHS, for example, could use facial recognition to identify people with Alzheimer’s disease. If someone is alone, apparently lost or in a confused state, the technology could tell local shopkeepers, medical professionals or police officers who they are, what condition they have and where they live. Yes, that is a lot of data, but it could also lead to the best course of action for that person. Today, they would likely be taken to hospital and held there – alone – until identified. If instead they have actively opted into a special, dedicated watchlist, they can be brought home faster, or their loved ones contacted, resolving the situation more quickly and more humanely.

This technology is a game-changer, and its speed and accuracy can be a force for good. What we need now is a sense of digital responsibility. If people are willing to give their biometric data to their phone provider to unlock a device, they should understand that the same technology can be used for good in other areas of society. To earn trust, we need to talk about application and regulation. That would be no bad thing: it would deter potential abuse and help the public have confidence in what we are trying to achieve. If COVID-19 has taught us anything, it is that technology is a help, not a hindrance. Now let us unleash, for good, the power of these integrations to make our society safer.

Rob Watts

This article was published in the December 2020 issue of the International Security Journal.
