Artificial Intelligence (AI) has become increasingly prevalent across business functions, including recruitment. Many companies are turning to AI tools to screen job candidates, drawn by the promise of efficiency and objectivity in the hiring process. However, recent surveys point to growing concern among job seekers about potential bias in these AI recruiting tools.
According to a recent American Staffing Association Workforce Monitor® online survey conducted by The Harris Poll, 43% of individuals considering a new job believe that AI recruiting tools are more biased than humans, compared with just 29% of those who have no immediate plans for a job change.
Applicants' distrust of AI is understandable, because AI tools learn from historical data. If the data used to train a tool is biased, the tool can inadvertently perpetuate discrimination in the hiring process. This can have serious consequences, not only for job seekers but also for companies that may unknowingly screen out qualified candidates. A recent example of these pitfalls is a case won by the Equal Employment Opportunity Commission (EEOC) involving age discrimination within an AI hiring platform.
When implementing AI tools, companies should consider:
- building transparency and accountability processes
- conducting regular audits to detect bias or other unwanted results as they begin to appear
- providing training on how these tools operate and how they obtain information
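One way to make the audit step concrete is a disparate-impact check such as the "four-fifths rule" described in the EEOC's Uniform Guidelines, which compares selection rates across applicant groups. The sketch below is a minimal illustration of that check; the group names and counts are entirely hypothetical, and a real audit would involve legal review and more rigorous statistics.

```python
# Minimal sketch of one audit check: the four-fifths (80%) rule.
# All group names and numbers here are hypothetical illustration data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the screen."""
    return selected / applicants

def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 is commonly treated as preliminary evidence
    of adverse impact under the four-fifths rule.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes by applicant group: (selected, applicants)
outcomes = {
    "group_a": (45, 100),
    "group_b": (30, 100),
}

rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
ratio = adverse_impact_ratio(rates)

if ratio < 0.8:
    print(f"Potential adverse impact flagged: ratio = {ratio:.2f}")
```

Running a check like this on each audit cycle gives a simple, repeatable signal that a screening tool's outcomes may warrant closer human review.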
The potential for AI-assisted tools to create efficiencies is high; however, human oversight is still needed to manage the overall process.