We woke up this morning to a thoughtful Financial Times piece on the “risks of relying on robots for fairer staff recruitment”.
While the author raises well-founded concerns about our industry, the risks of integrating algorithms into the talent acquisition process are appreciably offset by the benefits: scalability, access to a broader candidate pool, and, vitally, transparency.
The algorithms deployed in the human resources space need not be black boxes; at untapt, ours are not. We can directly observe which résumé elements a statistical model takes into consideration, and the weights it assigns to each of them.
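To make that concrete, here is a minimal sketch of how a linear model's score can be itemized feature by feature. The feature names and weight values are invented for illustration and are not untapt's actual model:

```python
import math

# Illustrative résumé features and learned weights. These names and
# values are assumptions for the sketch, not untapt's production model.
WEIGHTS = {
    "years_experience": 0.08,
    "skill_overlap": 1.20,
    "degree_match": 0.50,
}
BIAS = -2.0

def contributions(features):
    """Per-feature contributions to the linear term: each element's
    influence on the score is directly inspectable."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

def score(features):
    """Logistic match score in (0, 1), built only from the itemized
    contributions above, so no part of the decision is hidden."""
    z = BIAS + sum(contributions(features).values())
    return 1 / (1 + math.exp(-z))
```

Because every contribution is a plain product of a known weight and a known input, an auditor can ask exactly how much any single résumé element moved the score.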
In a decision-making process that involves only humans, these weights are unknown and can form the basis of undesired biases, be they conscious or not.
While no system is flawless, safeguards can be built into machine-automated systems that make them fair and accountable, further distinguishing them from manual ones. In addition to training the model on diverse data from exemplary sources and scrubbing bias-laden information from applications, the rate at which candidates from underrepresented groups are presented to hiring managers can be deliberately set above historical levels.
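That last safeguard can be sketched as a simple re-ranking step. Everything here, from the function name to the greedy prefix rule, is an illustrative assumption rather than a description of untapt's production system:

```python
def present_with_floor(ranked, is_underrepresented, target_rate):
    """Re-order a ranked candidate list so that, at every prefix of the
    list shown to a hiring manager, the share of candidates from
    underrepresented groups stays at or above target_rate for as long
    as such candidates remain. A greedy illustration only."""
    ur = [c for c in ranked if is_underrepresented(c)]
    rest = [c for c in ranked if not is_underrepresented(c)]
    out, ur_count = [], 0
    while ur or rest:
        shown = len(out) + 1  # prefix length after the next candidate
        # Take an underrepresented candidate if skipping one would drop
        # the prefix share below the target, or if no others remain.
        if ur and (not rest or ur_count / shown < target_rate):
            out.append(ur.pop(0))
            ur_count += 1
        else:
            out.append(rest.pop(0))
    return out
```

The key design point is that the rate is an explicit, tunable parameter: it can be set above historical levels, logged, and audited, which is precisely what an informal manual process cannot offer.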
In other words, safeguards designed explicitly to minimize unwanted bias during recruitment can be developed and monitored within computational frameworks in ways that are far less feasible outside of them.
Finally, the abundance of data available in algorithmic systems facilitates automated reporting on how well a hiring company is meeting its diversity objectives for interviewing and hiring, introducing accountability where there may have been none before. Where successfully implemented, these fairer data can then be fed into future iterations of the recruitment algorithm, further attenuating objectionable bias over the long term.
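Such automated reporting can be as simple as comparing observed interview shares against stated targets. The group labels and thresholds below are hypothetical:

```python
from collections import Counter

def diversity_report(interviewed_groups, targets):
    """Compare each group's share of interviews against a stated target.
    `interviewed_groups` holds one group label per interviewed candidate;
    `targets` maps group label -> minimum desired share. Labels and
    thresholds here are assumptions, for illustration only."""
    counts = Counter(interviewed_groups)
    total = len(interviewed_groups)
    return {
        group: {
            "actual": counts[group] / total,
            "target": floor,
            "met": counts[group] / total >= floor,
        }
        for group, floor in targets.items()
    }
```

Run on each hiring cycle's data, a report like this turns a diversity objective into a measurable, trackable number rather than an aspiration.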