Earlier this year, Fast Company talked a bit about the benefits of employing machine-learning in HR, and we added to the dialogue with a post on the progress we’ve made so far. We hope to continue the discussion here by focusing on one of the most salient concerns about employing AI in recruitment: bias.
A handful of companies, untapt among them, have developed AI-based products that improve hiring practices. One key benefit is that they mitigate subconscious (human) bias, making way for fairer recruitment. This is partly because the AI hiring platform is trained on bias-neutral inputs such as skillsets, and partly because the platform itself can be actively examined for bias, which can then be quantified and corrected.
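To make "quantified" concrete: one common way to measure hiring bias is to compare selection rates across groups, as in the "four-fifths" rule of thumb used in US employment-law guidance. The sketch below is illustrative only, not untapt's actual tooling; the group labels and decision data are hypothetical:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Interview-selection rate per group.

    `decisions` is a list of (group, selected) pairs, where `group` is
    any label (e.g. an inferred demographic) and `selected` is a bool.
    """
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.

    A value below 0.8 fails the common "four-fifths" rule of thumb
    and flags the model for closer review.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: group A selected 2 of 4, group B 1 of 4.
decisions = [("A", True), ("A", True), ("A", False), ("A", False),
             ("B", True), ("B", False), ("B", False), ("B", False)]
print(disparate_impact_ratio(decisions))  # 0.25 / 0.5 = 0.5
```

A ratio like the 0.5 above would trigger investigation and correction; a ratio near 1.0 indicates parity on this metric.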
Managed improperly, though, the same technology can amplify bias rather than reduce it. After all, a machine-learning algorithm is only as good as the quality and quantity of the data from which it has "learned": left unchecked, any historical biases embedded in that data will shape the algorithm's predictions. If, like us, you believe the benefits of machine learning outweigh its potential risks, here are some ways to prevent an algorithm from learning bias in the first place:
- Identify and diversify: AI platforms should make sure their initial training data are drawn from diverse sources. Exposing an algorithm to a wide range of different, viable scenarios reduces the likelihood that it will pick up the biases of any single source. As a bonus, the algorithm gains robustness.
- Monitor and report: taking a model live isn't the same as releasing an animal into the wild. At untapt, we monitor our algorithms to understand the weights being assigned to individual resume elements. The mechanism by which the model turns inputs into predictions isn't a mystery; we have the data on hand to observe it and to take action if ever needed.
- Eliminate bias in training: machine-learning algorithms are often trained on real-world actions; at untapt, our algorithm learns from interview decisions made by hiring managers. Our software lets clients mask first names and profile pictures from resumes before making an interview decision, ensuring these factors don't influence the model's predictions.
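The masking step in the last bullet amounts to a preprocessing filter that strips identity fields from each record before it reaches a reviewer or a training pipeline. A minimal sketch, with hypothetical field names rather than untapt's actual schema:

```python
# Identity fields to strip before a record is used for decisions or
# training. These names are illustrative, not a real product schema.
MASKED_FIELDS = {"first_name", "last_name", "photo_url"}

def mask_resume(resume):
    """Return a copy of a resume record with identity fields removed,
    so a model trained on it never sees them."""
    return {k: v for k, v in resume.items() if k not in MASKED_FIELDS}

resume = {
    "first_name": "Alex",
    "last_name": "Smith",
    "photo_url": "https://example.com/alex.jpg",
    "skills": ["python", "sql"],
    "years_experience": 5,
}
print(mask_resume(resume))
# {'skills': ['python', 'sql'], 'years_experience': 5}
```

Because the model only ever sees the masked record, name or appearance cannot become features, even by accident.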
If you'd like to add to the conversation on AI and bias, or have thoughts or recommendations of your own, please don't hesitate to reach out – we'd love to hear from you. And if you're looking for more help with hiring, send us a note – we'll do our best to get you on the right track!