3 lessons on AI & what we must consider before implementing it into our recruitment process

One of the things I love most about the ATC is that I get to meet interesting people and learn new things. One such person was Dr Tiberio Caetano, Chief Scientist at The Gradient Institute and one of Australia’s foremost experts on Artificial Intelligence (AI).

Tiberio recently presented at our Future of Talent 2019 retreat on the ethics of using AI in Talent Acquisition. His topic felt especially relevant given the recent furore in the US over AI-based video interviewing and psychometric assessments that are not transparent.

According to him, for us to really understand AI and Machine Learning, we first need to distinguish them from traditional computer programming:

  • With traditional programming, we specify the solution and the computer steps through the program to achieve it; whereas
  • With AI, we define the problem to be solved and the computer uses datasets to identify potential solutions.
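
To make the distinction concrete, here is a minimal sketch in Python. The screening rule and the training data are invented for illustration, not taken from Tiberio's talk:

```python
# A minimal sketch, assuming a toy screening task (illustrative only).
# Traditional programming: a human writes the rule explicitly.
def screen_rule_based(years_experience):
    # The solution is fully specified by us.
    return years_experience >= 3

# Machine learning: we state the goal ("reproduce past screening
# decisions") and supply data; the computer derives the rule itself.
from sklearn.linear_model import LogisticRegression

X = [[1], [2], [4], [6]]   # years of experience (hypothetical data)
y = [0, 0, 1, 1]           # past decisions: 0 = screened out, 1 = through

model = LogisticRegression().fit(X, y)

print(screen_rule_based(5))    # True -- the rule we wrote
print(model.predict([[5]]))    # [1]  -- the rule the machine learned
```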

In short, AI is the delegation of cognitive work to machines with humans providing the goal and data. Based on this knowledge, I come to my first two key learnings:

We must be very clear in specifying the precise problem we are solving, because the computer will not question the problem we give it.

and

We must provide the right data.

So, when we implement AI technologies for Talent Acquisition (TA), the information we feed into the system will ultimately determine who:

  • Gets an interview; or
  • Goes on to the next stage of the recruitment process; or
  • Is offered opportunities.

This places a responsibility on us, as recruiters and, more importantly, as human beings, to ensure the AI applications we use work in an unbiased, transparent and evidenced way. We need to continuously check our process in a circular fashion: specify the problem, provide the data, review the outcomes, then adjust and repeat.

Tiberio also set out two possible futures, one positive and one negative, as a result of our use of AI in recruitment.

I fully understand that no organisation sets out to bring about the negative outcomes, but unintended consequences are possible.

For example, if the problem statement you use is biased and the historical data you use is based on selections with a particular skew, then the decisions the system offers you will follow those previous decisions and be biased too.

This brings me to my third learning:

AI is only as good as the data we provide it; if the data exhibits patterns of bias, those patterns will also be present in future decisions.
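
To illustrate, here is a minimal sketch, using hypothetical data, of how a skew in past decisions carries straight into the model's future decisions. The "group" column stands in for any attribute, or proxy for one, that influenced historical selections:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical history: [score, group]. Group 1 was consistently
# selected and group 0 rejected, regardless of score.
X = [[70, 1], [60, 1], [75, 1], [65, 0], [80, 0], [85, 0]]
y = [1, 1, 1, 0, 0, 0]   # past decisions mirror group, not merit

model = LogisticRegression().fit(X, y)

# Two candidates identical in score, differing only by group:
print(model.predict([[75, 1], [75, 0]]))   # likely [1 0]: the bias is learned
```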

Here is a real-world example of AI gone wrong:

Amazon created 500 computer models focused on specific job functions and locations. They taught each to recognise some 50,000 terms that showed up on past candidates’ resumes. The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes, the people said.

Instead, the technology favoured candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured”.

In effect, Amazon’s system taught itself that male candidates were preferable. It penalised resumes that included the word “women’s,” as in “women’s chess club captain.”

Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs. With the technology returning results almost at random, Amazon shut down the project.

— Reuters

This is only one of the many ways the use of AI could go wrong, according to Tiberio. He shared more during his session:

  • Your target is too broad, such that the AI’s responses are meaningless and no conclusions can be drawn;
  • We expect the AI to be correct, so its results are not questioned but assumed to be right.

So what should we do before implementing AI in our recruitment process?

It is essential to review your recruitment process, and one way to do that is to look at it through Kevin Wheeler’s recruitment process model.

In my opinion, the areas of greatest risk are Screening and Assessment, because these are the stages where you gather the data points used to rule candidates in or out.

The risk with an AI-powered Screening tool is that you may exclude desirable candidates, or inadvertently disadvantage a segment of candidates.

The same can be said for Assessment. If the tool combines Screening and Assessment, the risk is naturally increased.

The approach I would use in reviewing this is as follows:

  1. Review the output data of previous screening and assessment, and analyse it for bias to understand whether it represents a fair sample (a sketch of one such check follows this list);
  2. Develop an ideal sample that represents an unbiased and inclusive approach;
  3. Review the output data from the AI against 1 and 2 above to determine whether the sample is representative;
  4. Adjust as required.
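
As an illustration of steps 1 and 3, here is a minimal sketch of one widely used bias check, the “four-fifths” adverse-impact rule, which compares selection rates between groups. The outcome data and group labels are hypothetical, and this is one check among many, not a complete fairness audit:

```python
# Hypothetical screening outcomes: (group, passed_screening)
outcomes = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rates(outcomes):
    """Proportion of each group that passed the screen."""
    rates = {}
    for group in {g for g, _ in outcomes}:
        passed = [p for g, p in outcomes if g == group]
        rates[group] = sum(passed) / len(passed)
    return rates

rates = selection_rates(outcomes)
ratio = min(rates.values()) / max(rates.values())

print(rates)   # e.g. {'A': 0.75, 'B': 0.25}
print(ratio)   # 0.33 -- well below the 0.8 four-fifths guideline
```

If the ratio falls below roughly 0.8, the screened sample deserves closer scrutiny before the tool is trusted.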

I would repeat this simple process until I have some confidence that the AI tool is providing reasonable data points.

To further assist you in fine-tuning your process, have a look at page 8 of Artificial Intelligence: Australia’s Ethics Framework, produced by the Department of Industry, Innovation and Science. It is a toolkit for ethical AI, and I believe these points are also important to consider as part of your implementation process.

AI implementation requires a considered approach, and there is no harm in starting small and building up capability as you go. Do not be afraid to seek expert help, too, to ensure you have a solid AI strategy moving forward.

Cover image: Shutterstock
