If you’ve recently applied for a job online, the following scenario may sound familiar.
You’ve found an exciting opportunity that matches your skills and qualifications. You spend hours polishing your resume and crafting the perfect cover letter, hoping to land an interview. You cross your fingers and hit “submit.”
Moments later, an automated email lands in your inbox. The company is moving forward with other applicants who more closely fit its needs.
Frustration is common among job seekers in today’s tight labor market. But what you might not know is whether an employer used artificial intelligence to screen your application.
Companies increasingly rely on AI-based tools to handle everyday HR functions, from tracking job applications to monitoring employees’ performance to making decisions about promotions and layoffs. A recent report on talent acquisition trends by Korn Ferry, a management consulting firm, found that 82 percent of CEOs and senior leaders expect AI to have a significant to extreme impact on their business.
No doubt, employers see advantages in using automated tools as an alternative to costly HR staff. But there are disadvantages, too. Forty percent of leaders surveyed by Korn Ferry worried that their HR teams had inadequate knowledge of AI tools.
These leaders are right to be concerned: Left unchecked, these systems can easily replicate human biases that lead to discrimination. That not only opens companies up to immense liability but also means diverse, qualified talent is likely being filtered out of consideration due to the very tools that are meant to streamline the hiring of strong candidates.
For example, video interviewing software that examines speech patterns to assess an applicant’s problem-solving skills could give a low score to an applicant who has a speech impediment from a disability or an accent because they’re from another country. Turning away an applicant on these grounds could be illegal discrimination, but job seekers in many cases would have no way to know that an algorithm is responsible.
As AI’s use expands across every industry, business leaders and legislators can help make sure workers aren’t harmed along the way. In the meantime, some workers have moved to protect themselves, notching early successes against AI-facilitated discrimination by filing lawsuits.
Earlier this year, a federal judge in California ruled that a job applicant could move forward with a discrimination lawsuit against HR technology company Workday after he applied to more than 100 jobs using the company’s screening tools. In denying Workday’s motion to dismiss the lawsuit, the court rejected the company’s argument that employers — not software vendors — are responsible for any discrimination that results from using their HR tools.
Last year, the Equal Employment Opportunity Commission settled an age discrimination case that accused online tutoring company iTutorGroup of using AI software to screen out older applicants.
State and local governments have also begun to ramp up efforts to reduce AI bias in the workplace.
In New York City, a law took effect last year that requires employers to audit their use of AI tools in the hiring process and publish the results. However, the law has been widely criticized for its lack of strong enforcement mechanisms and the ease with which employers can opt out of complying.
Illinois recently passed a law requiring employers to notify applicants if they will be subjected to AI tools. New Jersey, California and Texas are also considering legislative responses to this growing problem.
The incoming Trump administration has already promised to repeal President Biden’s sweeping executive order on AI, which directs federal agencies to develop guidance around responsible AI use in the public and private sectors. Without clear federal guidelines, more states and municipalities may try to pass their own legislation regulating employers’ use of AI.
As more laws take effect at the local level, more workers will have legal pathways to hold employers accountable. As a result, companies that deploy these tools face ever-greater liability.
Employers and lawmakers alike have a responsibility to root out bias from these tools. Regardless of what the law requires, there are important steps employers should take immediately to minimize the risks these tools pose. Businesses should mandate robust training for HR departments on how the tools work and how to monitor for trends that indicate bias.
By regularly auditing the AI hiring tools they rely on, companies can assess and address those tools’ negative impacts. We are likely to see more legislation mandating bias assessments, and while some requirements are stronger than others, transparency gives job applicants and policymakers a more complete picture.
Employers must provide job applicants with some kind of notice before subjecting their application to an AI-based tool. At a minimum, this notice should include a list of the job qualifications and characteristics that the tool will examine to make its assessment, the sources and types of data the tool will use, and the data retention policy.
Further, candidates should have the ability to opt out of an AI-based process. An applicant who opts out should not face any penalty for doing so.
Legislators can reinforce this protection for workers by prohibiting mandatory arbitration for any discrimination claims brought as a result of the use of AI tools.
The strongest legislation will include levers for enforcement by both individuals and government agencies. Applicants who believe they faced discrimination because of AI tools should have the right to sue the company that harmed them — including third-party vendors to whom hiring decisions are outsourced.
Likewise, state and local governments must include meaningful financial penalties for companies that violate the law, including damages to make workers whole and penalties to serve as a deterrent.
With no signs of AI’s spread slowing, it is essential that companies proactively address the problems it can cause in the workplace. We’ve already seen some of the harm AI can do if left unchecked. With businesses using AI tools responsibly, and lawmakers installing sensible guardrails, AI can live up to its promise without harming workers further.
Adam Klein is managing partner at Outten and Golden, a national employment law firm.