HR Hotline: Is Using AI for HR Tasks Legally Risky?

05.24.2023

Q: Lately my inbox has been flooded with advertisements for AI tools that will “revolutionize” my company’s ability to accomplish traditional HR tasks, like recruitment and hiring.

As we consider whether it’s worth investing in human resources technologies, are there any legal issues we should be aware of?

A: Yes. In using artificial intelligence to make human resources decisions, you may inadvertently violate state and federal anti-discrimination laws.

The Equal Employment Opportunity Commission issued new guidance on May 18, 2023, regarding potential discrimination claims that may arise from an employer’s use of “software, algorithms, and artificial intelligence” in employee hiring.

Many employers mistakenly believe that AI, as an ostensibly neutral, computer-generated tool, can remove human biases and thereby help them avoid discriminatory practices.

But this strategy fails to consider that Title VII prohibits both intentional and unintentional discrimination. 

Unintentional Discrimination

The latest EEOC guidance emphasized that an employer’s use of AI can result in unintentional discrimination, otherwise known as “disparate impact” discrimination.


Unlike a Title VII “disparate treatment” claim, where an employee alleges that an employer intentionally discriminated against him or her based on race, color, national origin, religion, or sex, a “disparate impact” claim makes no allegations about an employer’s motive or intentions. 

Instead, it alleges that an employer’s neutral selection procedures have the effect of disproportionately excluding people based on their membership in a protected class. 

A requirement that an applicant meet specific height requirements, for example, may have a disparate impact on women. 

An employer may be able to justify the use of a facially neutral tool (such as ChatGPT) that has an adverse impact on a particular group only if the tool is “job-related and consistent with business necessity” and there is no less discriminatory alternative that is equally effective.

Software

The EEOC has identified the following examples of popular human resources software that incorporate algorithmic decision-making:

  • Resume scanners that prioritize applications that use specific keywords;
  • Virtual assistants or chatbots that ask candidates about their qualifications and reject those who don’t meet predefined requirements;
  • Video interviewing software that evaluates applicants based on their facial expressions and speech patterns;
  • Testing software that provides “job fit” scores based on applicants’ personalities, aptitudes, or perceived “cultural fit,” as measured by their performance on a game or test.

Though these software options are facially neutral—in other words, they don’t purposefully attempt to screen out certain groups—their unintended effect may be just that. 

So while an AI system can be programmed to identify those with ideal candidate traits, that same system may also exclude entire groups that are equally qualified, thereby exposing an employer to a disparate impact discrimination claim. 


In its guidance, the EEOC emphasized that an employer may not rely on the assurances of a software vendor to protect itself against discrimination claims. 

For example, a third party may claim that it has designed and “vetted” the software, and that its administration of the tool guards against unfair results.

However, if an employer’s use of the tool ultimately disparately impacts one group over another, the employer may be held liable, despite its reliance on the vendor’s assurances. 

For this reason, the guidance encourages employers to self-audit their selection tools on an ongoing basis to determine whether they have a disproportionately negative effect on one protected class. 
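One starting point for such an audit is the “four-fifths rule” from the EEOC’s Uniform Guidelines, which the May 2023 guidance discusses as a general rule of thumb rather than a legal safe harbor: if one group’s selection rate is less than 80% of the most-favored group’s rate, the tool may be having an adverse impact. The Python sketch below illustrates that arithmetic with hypothetical applicant figures; it is a simplified example, not a substitute for formal statistical analysis or legal advice.

```python
# Minimal illustration of a "four-fifths rule" self-audit check.
# All group names and numbers below are hypothetical.

def selection_rates(outcomes):
    """Selection rate (hires / applicants) for each group."""
    return {group: hired / applied
            for group, (applied, hired) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Compare each group's rate to the highest group's rate;
    flag any group whose impact ratio falls below the threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: {"rate": round(rate, 3),
                    "impact_ratio": round(rate / best, 3),
                    "flagged": rate / best < threshold}
            for group, rate in rates.items()}

# Hypothetical data: (applicants, hires) per group.
outcomes = {
    "Group A": (200, 60),  # 30% selection rate
    "Group B": (150, 30),  # 20% selection rate -> ratio ~0.67, flagged
}

for group, result in four_fifths_check(outcomes).items():
    print(group, result)
```

A flagged result doesn’t prove discrimination; it signals that the employer should look more closely at how the tool is making its selections.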

The EEOC’s May 2023 guidance addresses only disparate impact claims under Title VII, which prohibits discrimination based on race, color, national origin, religion, and sex. 

ADA Protections

Employers may recall that the agency issued similar technical assistance in May 2022 regarding the use of AI and its potential to disadvantage job applicants with disabilities.

Some examples of an employer’s use of AI that may violate the Americans with Disabilities Act include:

  • An employer’s failure to provide a reasonable accommodation that’s necessary for an applicant to be rated fairly (for example, an applicant with limited manual dexterity may have difficulty taking a knowledge test that requires the use of a keyboard);
  • A company’s reliance on a tool that screens out a disabled candidate, even though he’s able to do the job with a reasonable accommodation (for example, video interviewing software that scores an applicant’s problem-solving abilities by analyzing speech patterns, when the applicant has a speech impediment); and
  • An employer’s use of AI that violates the ADA’s restrictions on disability-related inquiries and medical examinations.

As artificial intelligence becomes increasingly common in the human resources field, employers would be wise to research and vet their options closely to avoid claims of unlawful bias.

For now, we need to maintain at least some of the “human” in human resources.


HR problems or issues? Email or call CBIA’s Diane Mokriski at the HR Hotline (860.244.1900) | @HRHotline. The HR Hotline is a free service for CBIA member companies.
