There has always been intense debate and strong opinion on the use of facial recognition technology (FRT). As someone who has worked in various applications of computer vision (CV) for over 20 years, I feel I have some level of experience and a balanced perspective on the subject.

But first, to set the scene: before getting into the specifics of FRT and CV, we must recognise that no technology is perfect. All the technology, machinery and automation that we as humans employ has imperfections and problems, and we work hard to iteratively improve it. Look at how transportation has changed since the first version of Henry Ford's motor vehicle. Taking this further, technologies can be applied in different ways; nuclear fission and fusion, for example, have given us both the atomic bomb and nuclear energy. With all things there are benefits and disadvantages, opportunities and challenges. We have to assess the bigger picture with these transformational technologies.

Artificial Intelligence is the same. We can apply it to so many good things, changing our lives for the better in a multitude of ways. Its potential is beautiful, even utopian, but only if we do it right, in a controlled and measured way, without stifling innovation and progress. The counter-argument, of course, is that the technology can be applied badly or inappropriately, and used by bad actors with selfish and evil intentions. My point is that this is true of any innovation or technology, not just Artificial Intelligence.

We also need to remember that Artificial Intelligence is in its infancy. The current advances all use historic data to make pseudo-decisions; none are actually learning, understanding and reasoning the way humans do, or making informed, explainable and transparent decisions. Even as humans we have preconceptions, biases and opinions that influence both our conscious and unconscious decision making. We know our AI algorithms are susceptible to the same influences, which are often built into the data used to train them. However, we know this now, and good data science teams and AI companies work hard to minimise these biases in trained systems. Over time, as AI research focuses on ethics and data bias, the techniques and methods will improve, minimising this issue further and, in time, perhaps removing it altogether. Until these teething problems are resolved, awareness, and for specific industries additional regulatory oversight, is beneficial: it helps ensure the best possible outcomes are delivered, with appropriate checks and balances so that people are not unfairly affected by the use of these new approaches.

Let's now look at Facial Recognition as a specific application of AI techniques.

The first point to acknowledge is that there is a difference between person detection and individual recognition. Person detection, using facial or other methods, does not identify a specific individual; it only determines whether something is a human rather than, say, a car or another object or animal. Individual recognition uses facial recognition to identify and differentiate between specific individuals.

The other key factor relates to what information is kept and for how long, or more simply, data privacy. There are multiple aspects to this: the incoming video feed, the initial machine learning recognition output from that feed, and the various predictions and metadata triggered by those initial outputs. What I mean is: there is the incoming video from the camera, then there is the first level of finding a face in the image (without any individual recognition), and then there may be further algorithms that match the facial metrics to a database of previously seen faces. But not all applications of FRT need that last element to work.
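The staged pipeline described above can be sketched in a few lines of code. This is a minimal, purely illustrative sketch with hypothetical names; a real system would run trained CV models at each stage. The key point it demonstrates is that the identity-matching stage can simply be absent, in which case no identity data is ever created or retained.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A stand-in for one incoming video frame (hypothetical)."""
    frame_id: int

@dataclass
class FaceDetection:
    """Stage 1 output: a face was found, but no individual is identified."""
    frame_id: int
    bounding_box: tuple  # (x, y, width, height)

@dataclass
class Pipeline:
    # Many deployments never enable this flag.
    identify_individuals: bool = False
    # Database of previously seen faces; unused when identification is off.
    known_faces: dict = field(default_factory=dict)

    def detect_faces(self, frame: Frame) -> list:
        # Placeholder detector: a real system would run a CV model here.
        return [FaceDetection(frame.frame_id, (10, 20, 64, 64))]

    def process(self, frame: Frame) -> list:
        detections = self.detect_faces(frame)
        if not self.identify_individuals:
            # Stage 2 (matching faces to a database) is never run,
            # so no identity information exists to be stored or leaked.
            return [("person_present", d.frame_id) for d in detections]
        # Identity matching would go here; omitted in this sketch.
        return []

pipeline = Pipeline(identify_individuals=False)
events = pipeline.process(Frame(frame_id=1))
print(events)  # [('person_present', 1)]
```

The design choice worth noting is that privacy here is structural, not a filter applied afterwards: the identity-matching stage is absent, rather than its output being discarded.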

Many applications of FRT are for human safety, for example. They don't need to know specifically who is in a scene, only that a human is in a location that, at a given moment in time, is dangerous for any human to be in, so that an alert can be raised. The system never needs to know which human was there, just that a human was there. Another example is using this technology to track people's movements and detect abnormal behaviour, suggesting a potential terrorist attack or suicide attempt.
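The danger-zone alert described above can be illustrated with a short, hypothetical sketch: given anonymous person detections (bounding boxes) from a detector, raise an alert if any of them overlaps a restricted area. No identity is ever computed or stored.

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test; boxes are (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def person_in_danger_zone(person_boxes, danger_zone):
    """Return True if any detected (anonymous) person overlaps the zone."""
    return any(boxes_overlap(box, danger_zone) for box in person_boxes)

# Hypothetical example values: a zone around moving machinery,
# and two anonymous detections from the person detector.
danger_zone = (100, 100, 50, 50)
detections = [(30, 40, 20, 50), (110, 120, 20, 50)]

if person_in_danger_zone(detections, danger_zone):
    print("ALERT: person in danger zone")  # prints, as one box overlaps
```

Everything the alert needs is the geometry of the detections; who the person is never enters the computation.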

I fully appreciate people's concerns about privacy and the desire to avoid a police state where our every movement is tracked and recorded; I certainly do not want to live in a world like that. But I do want to live in a world where technology is being used to keep me and others safe from danger.

Let's not throw the baby out with the bathwater.

While I understand the reaction of IBM, Amazon and Microsoft in distancing themselves from this controversial topic, this will only serve to push the development of this type of technology more "underground", into the hands of companies with limited resources and capability that would in all probability do a worse job than the big tech players (who simply cannot afford to get this type of technology wrong).

Let's not end up in a worse situation by putting the development of these new technologies into the hands of inexperienced companies with less reputation at stake than the big players.

In conclusion

I do believe we need some level of governance and regulation over how this type of technology is used, and over how the data it collects and creates is used, stored and shared. But we should not overreach and prevent its use in areas that actually do benefit us all. Please understand the range of what FRT can do, and understand that many applications only work at the level of detecting a person, not identifying an individual. There is a big difference.