A recent survey commissioned by MYOB found that candidates do not trust artificial intelligence (AI) to make the correct recruitment decisions. Perhaps. But they will soon.
We just need it to be a bit better than it is. Every time a piece of robotics gets it right, we trust it a bit more.
In his article, Trevor Vas outlined a set of principles for the use of AI in recruitment. I think this is a great first step to building trust, but let me go a step further by standing on the shoulders of Asimov and boiling the ethical standards down to three laws:
- A recruitment bot may not disadvantage a candidate, or through inaction, allow a candidate to be disadvantaged.
- A recruitment bot must select a candidate except where such selection would conflict with the First Law.
- A recruitment bot must protect its own data and logic as long as such protection does not conflict with the First or Second Law.
Law 1 – A recruitment bot may not disadvantage a candidate, or through inaction, allow a candidate to be disadvantaged.
“Throughout the process, a candidate may not be disadvantaged” sounds simple and sensible, but it contains a lot of important detail that should not be skipped over.
Every candidate needs to be assessed on their own merits against pre-determined and defensible criteria. The criteria need to be the same for every candidate, whether they applied for the role or were put forward for it.
When assessing candidates, it should not matter at all whether they are passive or active, have been sourced or applied, are currently employed or not. All candidates need to go through the same process.
Candidates who are not assessed by the recruitment bot may be disadvantaged, or, more likely, those assessed by humans instead may gain an advantage over the rest. As such, a recruitment bot must not let any candidate fall outside the process.
And the candidate needs to be at the heart of the process. The candidate is the one paying with time and data, so they need to come first in the design of the recruitment bot. Not putting them first would disadvantage candidates and potentially lead to a poor candidate experience.
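To show how simple the First Law can be in practice, here is a minimal sketch of “same criteria, same process” in code. Everything in it is illustrative: the rubric, the field names, and the weights are assumptions, not a real product.

```python
from dataclasses import dataclass

# Hypothetical rubric: pre-determined, defensible, and identical for every
# candidate. Both scores are assumed to be normalised to the range 0..1.
CRITERIA = {"experience_score": 0.5, "skills_score": 0.5}

@dataclass
class Candidate:
    name: str
    source: str  # "applied", "sourced", "referred": must never affect the score
    experience_score: float
    skills_score: float

def assess(candidate: Candidate) -> float:
    """Assess a candidate on their own merits, whatever their source."""
    return sum(weight * getattr(candidate, field)
               for field, weight in CRITERIA.items())

def assess_all(candidates: list[Candidate]) -> dict[str, float]:
    """Every candidate goes through the same process; none falls outside it."""
    return {c.name: assess(c) for c in candidates}
```

Note that the candidate’s source sits in the data but never touches the score; that is the First Law in one line of design.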
Law 2 – A recruitment bot must select a candidate except where such selection would conflict with the First Law.
The purpose of the recruitment exercise is to select a candidate. Therefore, there needs to be an outcome.
More interestingly, requiring an outcome, except where doing so would conflict with the First Law, ensures that there can be no bias in the system.
Every candidate needs to be assessed on their own merit against the pre-determined and defensible criteria, then measured directly against each other until there is a single victor.
If no victor can be ascertained, making a decision would disadvantage one of the candidates, breaking the First Law.
Therefore, all the candidates left at this stage must be selected and put forward; there is nothing wrong with interviewing three candidates, or five. If you have more than that, the criteria are not tight enough to assess candidates in an unbiased way.
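Building on the earlier sketch, the Second Law’s selection logic might look like this. The `tie_margin` threshold and the cap of five are my assumptions, drawn from the argument above rather than any real system.

```python
def select(scores: dict[str, float], tie_margin: float = 0.05) -> list[str]:
    """Return the victor, or a shortlist when no single victor can be found.

    tie_margin is an assumed threshold: candidates scoring within it of the
    top score are indistinguishable under the criteria, so all go forward.
    """
    if not scores:
        raise ValueError("Law 2: the bot must select a candidate")
    top = max(scores.values())
    shortlist = [name for name, s in scores.items() if top - s <= tie_margin]
    if len(shortlist) > 5:
        # Too many ties suggests the criteria are not tight enough to
        # assess candidates in an unbiased way.
        raise ValueError("Criteria too loose: tighten the rubric")
    return shortlist
```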
Law 3 – A recruitment bot must protect its own data and logic as long as such protection does not conflict with the First or Second Law.
The programmer deserves a commercial advantage, unless their product contains bias or incorrect algorithms.
By enabling the recruitment bot to maintain its own privacy, commercial advantage can be preserved, but the Third Law still forces an audit to ensure that the first two laws are being followed.
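Here is a sketch of how that audit might work without giving the vendor’s logic away: log a fingerprint of the model alongside the scores and the outcome, so an auditor can verify the first two laws were followed. The record fields are illustrative.

```python
import hashlib
from datetime import datetime, timezone

def audit_record(model_blob: bytes,
                 scores: dict[str, float],
                 shortlist: list[str]) -> dict:
    """Record enough to audit Laws 1 and 2 without exposing the logic.

    The model itself stays private; only a SHA-256 fingerprint is logged,
    so an auditor can tie each decision to the exact version that made it.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_fingerprint": hashlib.sha256(model_blob).hexdigest(),
        "scores": scores,        # same criteria applied to every candidate
        "shortlist": shortlist,  # the outcome the Second Law requires
    }
```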
The problem is that, as with Asimov’s original three laws, there is no way to actually implement them in a machine-learning system.
My point is that we don’t need to make this complicated: any agreed rules should be simple enough for developers to work within while ensuring the best results for candidates, hiring managers, and recruiters.
But we do need to do it now.
Recently I was driving home with my daughter and she asked me to play an Adele song – she sang a little bit of it.
Simon: “Hey Google, please play Photograph by Adele.”
Google: “Sure thing, playing When We Were Young by Adele.”
I got the name wrong, but Google knew what I meant and played me the song I wanted. That is helpful; that is building trust. That is where we are going, fast.
The world is changing and, before we know it, it may be that a robot is looking through your entire history of interactions with Siri, Alexa, and Google to assess how polite you are when dealing with your coworkers…
…you do remember to say please and thank you, right?
The upcoming Future of Talent retreat will explore these laws in detail and come up with a set of ethical standards for the use of AI in Talent Acquisition in Australia. If you are an internal TA executive for a corporate organisation, we would love for you to be part of this trailblazing event. Register your interest here.