AI hiring algorithms are riddled with harmful biases, a reflection of the heavily biased real-life hiring data they were trained on.
An estimated 70 percent of companies and 99 percent of Fortune 500 companies use AI in their hiring processes. The consequences are huge, particularly for people who most often experience systemic discrimination in hiring.
Responding to the threat of employment discrimination, New York City lawmakers passed the Automated Employment Decision Tool Act last year, requiring that companies using AI to make employment decisions undergo audits that assess biases in “sex, race-ethnicity, and intersectional categories.” But the much-hyped, first-of-its-kind law falls significantly short of establishing the needed protections.
In addition to lacking enforcement measures and quality control standards, New York’s ordinance crucially omitted disability, one of the most frequently reported grounds of identity-based employment discrimination, from the listed bias assessment categories.
This is not surprising. New York officials such as Mayor Eric Adams are huge proponents of AI. Stricter or broader assessments of AI hiring tools could theoretically lead to their demise as the full scope and inevitability of their algorithmic biases, particularly against disabled applicants, become clear. Tools that are designed to support hiring processes but fail to uphold basic hiring ethics are not only useless but harmful.
The algorithms that power AI hiring tools are developed over years by the technicians who write their code and are resistant to change, so there is no single corrective action companies can take to resolve the problem of bias deeply embedded in that code. The tool will find countless patterns in its training data (usually a list of past or ideal job holders) to guide its decision-making, and will keep producing the same biased outcomes.
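To make that mechanism concrete, here is a deliberately simplified sketch in Python. It uses made-up data and hypothetical features (a “skill” score and an employment-gap flag), not any vendor’s actual system: a toy model trained on historical hiring labels that penalized employment gaps will reproduce that penalty for new applicants, no matter how qualified they are.

```python
# Illustrative sketch only: a toy model trained on biased historical
# hiring decisions reproduces that bias. All features and data here are
# hypothetical, not drawn from any real hiring tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical applicant features: a skill score and an employment-gap
# flag (e.g., a six-month absence due to chronic illness).
skill = rng.normal(0, 1, n)
gap = rng.integers(0, 2, n)

# Biased historical labels: past recruiters rejected anyone with a gap
# regardless of skill, so the gap flag predicts the label even though it
# says nothing about actual job performance.
hired = ((skill > 0) & (gap == 0)).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gap]), hired)

# Two equally skilled applicants; the only difference is the gap.
print("P(hire | no gap):  ", model.predict_proba([[1.0, 0.0]])[0, 1])
print("P(hire | with gap):", model.predict_proba([[1.0, 1.0]])[0, 1])
# The applicant with the gap receives a far lower score despite an
# identical skill input: the bias in the training labels has simply
# been transferred into the model's weights.
```

In this toy example, no single line of code can be edited to remove the bias; it lives in the learned weights, which is why adding audits after the fact does not undo it.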
Such outcomes are amplified for disabled applicants, who have historically been excluded from many types of jobs. Including more disabled profiles in the training data would not solve the problem, either, due to the stubbornness of these algorithms and the sheer diversity of disabilities that exist.
“Disability” is an umbrella term that describes a wide range of conditions, which are not all equally represented in the training data, especially disabilities that intersect with one or more other marginalized identities. Furthermore, AI hiring tools such as Pymetrics, which compare applicant scores with those of former and current employees considered successful in their roles, systematically ignore how employers set their disabled employees up for failure rather than success by denying them workplace accommodations.
Disabled applicants continue to be devalued as candidates by human recruiters and hiring AI alike due to their divergence from rigid qualification standards that serve as predictors of future success but have no bearing on actual job performance. For example, someone who left their job for six months due to a chronic illness may have a tough time getting an interview. But while a human recruiter may be able to exercise nuance and extend the accommodations required by the Americans with Disabilities Act, an AI hiring tool will adhere to its discriminatory expectations.
AI hiring tools have not only automated violations of the Americans with Disabilities Act but also created new avenues for scrutiny and discrimination. In addition to screening resumes, AI hiring tools attempt to measure and rate applicants’ personality traits and how well they will perform a job based on how they play a video game or how they speak and move in a video recording. These tools are likely to discriminate on two fronts. First, the video analysis technology has difficulty even registering the faces of people of color and people with disabilities that affect their appearance. Second, these tools cannot discern mental capability and emotions as they claim.
This analysis is based on extremely discriminatory and pseudo-scientific determinations of what constitutes favorable behaviors and expressions. For example, AI analysis of applicant video recordings may deem someone with a stammer, speech impediment, or speech difference due to hearing loss to be “poorly spoken” or to “lack speaking skills.” For others, especially those who are neurodivergent or vision-impaired, it may be difficult to maintain eye contact with the camera, leading the AI to perceive them as “unfocused.”
Given the extreme and well-documented biases of AI hiring tools, particularly against disabled applicants, one may wonder why New York City lawmakers passed such an ineffectual bill. It encourages more audits and half-steps but fails to address the root issue: the use of these tools in hiring in the first place.
New efforts in the state legislature are similarly misguided in that they merely look to fill gaps in the New York City bill with stronger and more inclusive auditing systems. Even if they are done thoroughly, algorithmic audits are not a solution to the pervasive biases in AI hiring tools.
The very presence of these biases cannot be eradicated until companies stop using AI for hiring and personnel decisions altogether.
Ultimately, New York lawmakers are doing the workforce dirty by sidestepping the call for a ban on AI hiring tools. They are shirking their responsibilities and hiding behind bills that obscure the full extent of discrimination perpetuated by these technologies. They also place the onus of holding companies accountable on the applicants.
Their reluctance to take decisive action prolongs and exacerbates AI-driven employment discrimination, especially for disabled job seekers.
Sarah Roth is an advocacy and communications associate at the Surveillance Technology Oversight Project (STOP), where Becca Delbos was a Fall 2023 advocacy intern.