The shortcomings of facial recognition in hiring – and how to know if your technology is ethical

The recent announcements by IBM, Microsoft, and Amazon that they would halt or restrict sales of their facial recognition systems amid the Black Lives Matter protests have put the controversial technology under the spotlight once again. From law enforcement to disease detection to cybersecurity, facial analysis systems are developing at an ever-increasing rate – and they are changing the world.

Many critical voices have emerged in response to this rapid spread, calling out the technology’s potential to automate discrimination and amplify bias if it is used in the wrong way. In its 2019 Technology discussion paper, the Australian Human Rights Commission proposed “a moratorium on certain uses of facial recognition technology, until an appropriate legal framework that protects human rights has been established.”

Within the Talent Acquisition and Management space, many have wondered how viable facial analysis truly is as a tool to help recruiters identify their highest-potential candidates. Some companies have deployed the technology to analyse job applicants’ video interviews in the hope that it can help predict a person’s employability. But when hiring decisions disadvantage certain groups and favour others, the consequences can be grave for business profitability and societal prosperity.

So, if we are not confident that facial analysis systems treat all candidates equitably, should we use them in hiring? The short answer is no. Facial recognition technology lacks three characteristics that are key for an ethical assessment system: fairness, job-relatedness, and explainability.


1) Fairness: Facial analysis is biased

Facial analysis has repeatedly been shown to perform inconsistently across demographic groups, meaning employers cannot trust it to evaluate candidates fairly. This problem is no secret. Back in 2015, Google received significant backlash when its automated photo-tagging tool misidentified Black people as gorillas.

More recently, research conducted by Joy Buolamwini at MIT demonstrated that AI facial recognition systems yield less accurate results for women and racial minorities than for white men. In one study of Amazon’s Rekognition technology conducted by the ACLU, the tool falsely matched 28 members of the U.S. Congress with a repository of mugshots. While only 22 percent of federal lawmakers are ethnically Asian, Black, or Hispanic, 40 percent of the false matches involved a person of color.
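Disparities like these are typically surfaced through disaggregated evaluation: computing the error rate separately for each demographic group rather than reporting a single overall accuracy figure. Here is a minimal sketch of that check in Python, using entirely hypothetical data:

```python
import pandas as pd

# Hypothetical face-matching results: one row per match attempt, with the
# subject's demographic group and whether the system produced a false match.
results = pd.DataFrame({
    "group":       ["white", "white", "white", "poc", "poc", "poc"],
    "false_match": [0,        1,       0,       1,     0,     1],
})

# Disaggregated evaluation: error rate per group rather than one overall
# accuracy number, which can hide large gaps between groups.
per_group_error = results.groupby("group")["false_match"].mean()
print(per_group_error)  # poc: 0.67, white: 0.33 -> inconsistent performance
```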

The unreliable performance of an algorithm is problematic in any context, but in the context of employment decisions it becomes highly contentious.

2) Job-relatedness: Facial analysis appears unrelated to job performance   

Fairness aside, there is little reason to believe that facial analysis provides any relevant information about a person’s employability. According to long-standing legal standards, employers should have a rational explanation for how the criteria measured in their hiring process relate to success on the job. To date, criteria based on facial action units (the expression building blocks catalogued by the Facial Action Coding System, or FACS) have fallen short of establishing any kind of job relevance.

Meredith Whittaker, a distinguished research scientist and Co-Director of the AI Now Institute, has even claimed that the application of this technology to hiring is reminiscent of the now-debunked pseudoscience of phrenology, the study of the shape and size of the cranium as a supposed indicator of character and intelligence. She explains that facial analysis, which falls under the broader umbrella of affect recognition, might claim to measure things like personality and “worker engagement”, but such statements are not backed by robust scientific evidence.

With a lack of empirical support, deploying this technology in hiring is both unethical and irresponsible.

3) Explainability: Facial analysis systems are not transparent

From a technological standpoint, image analysis systems are typically built on machine learning algorithms that are “black-box,” meaning their internal decision logic cannot be explained to a human observer. This lack of explainability means that employers may have no idea which data inputs are driving determinations about a candidate’s employability.

Whenever AI is being used to make decisions that affect people’s lives, like hiring outcomes, it is particularly important to use “white-” or “glass-box” systems so as to build human oversight into the process. This practice has been supported by countless groups over the last several years, ranging from Microsoft to the European Commission to the National Science Foundation, because the AI industry knows that explainable models are the best choice for facilitating fair and ethical outcomes.
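To make the distinction concrete, consider a simple glass-box model whose decision logic can be read directly from its coefficients. This is a minimal sketch using scikit-learn and synthetic data – the feature names are illustrative assumptions, not a depiction of any vendor’s actual system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic scores for 500 hypothetical candidates on three job-related tests.
feature_names = ["numeracy_score", "attention_score", "planning_score"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# A linear model is "glass-box": its coefficients state exactly how strongly
# each input drives the recommendation, so a human can inspect and challenge it.
model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# A black-box system (e.g. a deep network over raw video frames) offers no
# comparable per-input account of its decisions for employers to oversee.
```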

Of course, there is a major difference between simply claiming ethical principles and actually practising them. Especially in an application as important as talent selection, companies must embody trustworthiness through concrete actions.

A real commitment to ethics includes updating standards based on new scientific insights, even when doing so is inconvenient. For example, while the problems of facial analysis systems may not have been understood several years ago, we understand them now, so the continued use of this technology in hiring is indefensible.

The Ethical Way Forward

Diversity is not a nice-to-have for today’s businesses but an ethical imperative, and that places enormous social responsibility on everyone involved in the hiring process. As gatekeepers to employment opportunities, companies have a duty to understand whether the AI technology they deploy has been ethically designed. The following principles are a good place to start:

Firstly, users must be empowered, not overpowered, by technology. Candidates should always know what data is collected, for what purpose, and where it is collected from. Employers should hold their technology providers accountable for supporting user data privacy and ownership.

Secondly, companies need to be clear on how algorithms are built and maintained. AI should be trained on data that is as free from bias as possible. This means removing not only demographic variables but also variables that are correlated with demographics, such as postal codes. You should strive to work only with vendors who are fully transparent about the data going into their systems and the resulting outcomes.
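In practice, that preprocessing step can start with something as simple as dropping protected attributes and flagging features that correlate strongly with them. A minimal sketch, assuming numerically encoded candidate features in a pandas DataFrame (the column names and the 0.3 correlation threshold are illustrative assumptions):

```python
import pandas as pd

def strip_demographics(df, protected, threshold=0.3):
    """Drop protected attributes and any feature strongly correlated with them.

    Assumes all columns are numerically encoded; the threshold is illustrative.
    """
    proxies = set()
    for col in df.columns:
        if col in protected:
            continue
        for attr in protected:
            # Flag features whose correlation with a protected attribute
            # exceeds the threshold -- likely proxies (e.g. postal codes).
            if abs(df[col].corr(df[attr])) > threshold:
                proxies.add(col)
    return df.drop(columns=protected + sorted(proxies))

candidates = pd.DataFrame({
    "gender_code": [0, 1, 0, 1, 1, 0],
    "postal_code": [2000, 3000, 2000, 3000, 3000, 2000],  # numeric encoding
    "test_score":  [70, 68, 72, 70, 72, 68],
})
clean = strip_demographics(candidates, protected=["gender_code"])
print(clean.columns.tolist())  # ['test_score'] -- postal_code proxied gender
```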

Thirdly, the AI must be open to audit. An audit serves the same purpose as safety-testing a vehicle to ensure it meets regulations before it goes into production. The first step of an audit can happen internally – better still if the audit methods are open-sourced. Over time, there should also be some form of external validation of that process by a third party. Note that only white-box algorithms can be audited in a way that allows their potentially biased outputs to be corrected.
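One widely used audit check comes from the U.S. EEOC’s “four-fifths” guideline: the selection rate for any demographic group should be at least 80 percent of the most-favoured group’s rate. A minimal sketch of that check, again on hypothetical screening data (real audits go considerably further):

```python
import pandas as pd

# Hypothetical screening outcomes: 1 = candidate passed the AI screen.
outcomes = pd.DataFrame({
    "group":  ["a"] * 100 + ["b"] * 100,
    "passed": [1] * 60 + [0] * 40 + [1] * 42 + [0] * 58,
})

# Selection rate per group, then the adverse-impact ratio: each group's
# rate divided by the most-favoured group's rate.
rates = outcomes.groupby("group")["passed"].mean()
impact_ratios = rates / rates.max()
print(impact_ratios)

# Under the four-fifths guideline, a ratio below 0.8 flags potential adverse
# impact: here group b's 42% vs group a's 60% gives 0.70 -> flagged.
flagged = impact_ratios[impact_ratios < 0.8]
print("Flagged groups:", list(flagged.index))
```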

In the words of pymetrics Head of Product, Priyanka Jain, “As creators of technology, it is our responsibility to build AI that is creating a future that we all want to live in, and if we have a way to help other creators of technology continue to build that future as well, it’s our responsibility to share it.”

We strongly encourage you to understand how assessment providers are taking steps to audit their algorithms and make their processes transparent. We look forward to the day when all technologies are truly helping to establish a level playing field for individuals from all walks of life.

If you are interested in learning more about Ethics in Recruiting AI, do check out this webcast where we speak with Merve Hickok, Founder of AIEthicist.org, about prevalent biases and the movement to create more governance, accountability, and transparency in algorithmic systems. (A recording will be sent to all registrants afterwards, so no worries if you can’t make the live session.)

Cover image: Shutterstock

This article is contributed by pymetrics.
