I recently came across an article about how Amazon built an AI tool to hire people but had to shut it down because the tool turned out to be biased against women. The AI combed through résumés submitted to Amazon over a 10-year period, most of them from men, to learn whom to hire. Consequently, it concluded that men were preferable.
Being an entrepreneur in the HR tech industry with a firm belief in gender equality, I wanted to throw in my two cents:
Firstly, algorithms are trained on data. They don’t need lots of data to become useful, but the data does need to be clean. If the data fed into a machine is skewed to begin with, the algorithm will learn to make biased decisions.
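As a toy illustration of that point (hypothetical numbers, not Amazon's actual data or model), consider a "model" that simply reproduces the hire rates it saw in skewed historical data:

```python
# Hypothetical training data: 10 years of mostly male résumés,
# with men hired at a higher rate. (gender, was_hired)
historical = [
    ("M", True), ("M", True), ("M", True), ("M", False),
    ("F", False), ("F", False),
]

def learned_hire_rate(data, gender):
    """The 'model': predicts the hire rate seen in the training data."""
    group = [hired for g, hired in data if g == gender]
    return sum(group) / len(group)

print(learned_hire_rate(historical, "M"))  # 0.75
print(learned_hire_rate(historical, "F"))  # 0.0 -- the skew is baked in
```

Any real model is far more sophisticated, but the failure mode is the same: whatever pattern sits in the data, biased or not, is what gets learned.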
Apart from biased data, these algorithms also learn from human biases. To avoid scenarios like the one Amazon experienced, there needs to be a conscious effort in designing the system. For example, impress.ai, an AI chatbot software for recruiters, is designed so that its algorithms learn from data that excludes human biases. Our platform hides aspects of a candidate’s profile like gender, name, and age. With these fields hidden, our AI recruitment tool actively fights subconscious human biases. It also qualifies candidates based on how they respond to the chatbot during competency and knowledge questions.
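The field-hiding idea can be sketched in a few lines. This is a minimal illustration with made-up field names, not impress.ai's actual implementation: sensitive attributes are stripped from a candidate's profile before it reaches any scoring or review step.

```python
# Hypothetical field names for a blind-screening step.
SENSITIVE_FIELDS = {"name", "gender", "age"}

def anonymize(profile: dict) -> dict:
    """Return a copy of the profile with sensitive fields removed."""
    return {k: v for k, v in profile.items() if k not in SENSITIVE_FIELDS}

candidate = {
    "name": "A. Lee", "gender": "F", "age": 29,
    "answers": ["..."], "skills": ["python", "sql"],
}
print(anonymize(candidate))
# {'answers': ['...'], 'skills': ['python', 'sql']}
```

The point of doing this at the data layer, rather than asking reviewers to ignore the fields, is that neither the human recruiter nor the downstream algorithm ever sees the attributes in the first place.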
Secondly, recruiters shouldn’t make hiring decisions on a resume alone. They should go further than what’s listed on a resume by engaging candidates from the moment they apply and qualifying them based on how they respond to situations they would experience on the job. Given that most job roles receive hundreds of applications, it’s unrealistic for a recruiter to do this manually. However, an AI tool can streamline the process. For example, there are platforms that scrape social profiles or other information about a candidate to capture a more holistic view of their experience and thought process without a person being involved.
Also, AI should never be relied on solely for making the final decision. Like automation tools, AI was created to help people accomplish everyday tasks faster and more easily. It excels in areas involving repetition and volume. It falls behind in tasks involving creative thinking, strategy, and problem-solving. Recruiting with an AI tool will always need a human component: AI can start the qualification process, then hand off to an HR professional who makes the final hiring decision.
If an AI recruiting platform is built correctly, with careful thought given to the data used to train the algorithm and a UX that limits subconscious human biases from affecting it, the tool can improve diversity rather than discriminate against a segment of candidates. For example, on impress.ai, built-in metrics let recruiters identify signs of bias. Without such metrics, processes based purely on human evaluation tend to carry at least some degree of bias, which is next to impossible to detect because no data is being collected.
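To give a concrete sense of what a bias metric can look like (a generic sketch, not impress.ai's actual implementation), one widely used check is the adverse impact ratio: the selection rate of the less-selected group divided by that of the more-selected group. Under the "four-fifths rule" from US hiring guidance, a ratio below 0.8 flags possible adverse impact.

```python
def selection_rate(outcomes):
    """Fraction of candidates selected; outcomes are 1 (selected) or 0."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (0..1)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outcomes for two candidate groups.
men = [1, 1, 1, 0]    # 75% selected
women = [1, 0, 0, 0]  # 25% selected
print(adverse_impact_ratio(men, women))  # ~0.33 -- well below 0.8
```

Because the platform logs structured outcomes at each stage, a metric like this can be computed continuously, which is exactly what a purely human process cannot do.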
My final thoughts:
What are your thoughts about Amazon’s biased AI recruitment tool?
I would love to hear them in the comments section below.