Key Considerations When Using Artificial Intelligence In Recruitment And Hiring
By Monica Snyder
Wednesday, 9th September 2020
 

In the new age of remote work and social distancing, more and more employers are showing an interest in artificial intelligence (AI) when it comes to recruiting and hiring new talent.

This includes using AI to automate the sourcing of potential candidates, screen candidates from an existing candidate pool, and power video interviewing tools that measure a candidate’s strengths based on factors such as facial expression, speech patterns, body language, and vocal tone. Such tools are designed to screen for and advance candidates who meet certain job-related criteria.

By applying AI in pre-employment assessments and interviews, employers can streamline their recruiting processes and screen through a seemingly unmanageable pool of candidates while maintaining social distancing practices and fostering a safe work environment.

This practice, however, needs to be carefully managed to minimize legal risk. This article reviews the risks and some recent developments at the state level before concluding with seven key considerations you should take into account when implementing AI-based recruiting and hiring systems.

What Are The Risks?

While there are certain benefits to using AI in hiring and recruitment, there are various risks employers should be aware of when considering this technology. For instance, there are privacy concerns which vary depending on the technology employed and the data collected. While legal protections have expanded to limit employers from overreaching in collecting, storing, and using personal data, you must be careful to navigate state privacy and electronic surveillance laws, as well as potential HIPAA concerns, when implementing this technology.

There is also a risk of potential bias and discrimination. While no reasonable employer would intentionally use an AI program to illegally discriminate against a segment of job applicants, system limitations could lead to inadvertent employment law dangers. AI software is only as good as the data and algorithms that it uses, and data sets can contain implicit racial, gender, or ideological biases, which inherently make the AI system unreliable.

One of the concerns with using AI is that it is often trained on the resumes and backgrounds of job seekers who were successfully hired. If a company has a history of hiring only a certain type of individual – e.g., white males or younger workers – the AI tool may prioritize candidates whose profiles resemble the company’s current employees. This could put women, minorities, or older individuals at a disadvantage. Moreover, resume-scanning tools that evaluate an applicant’s past experience could discriminate against women returning to the workforce after time away. There is therefore a justifiable concern that AI could disadvantage groups of people who do not fit the pre-established criteria underpinning the algorithms that decide who is likely to be the most successful performer for the job in question.
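
The mechanism described above is easy to see with a toy example. The short sketch below – in Python, using entirely invented resume keywords and hiring history – builds a naive "similarity to past hires" score from keyword frequencies. Because the historical hires skew toward one profile, keywords associated with that profile dominate the score, and an otherwise comparable candidate who lacks them is ranked lower. This is only an illustration of the failure mode, not a description of how any particular vendor’s tool works.

```python
# Toy illustration of how screening on "similarity to past hires" can
# replicate historical hiring patterns. All keywords and records are invented.
from collections import Counter

# Hypothetical keyword sets drawn from resumes of people hired in the past.
past_hires = [
    {"python", "rugby", "fraternity"},
    {"java", "rugby", "fraternity"},
    {"python", "golf", "fraternity"},
    {"sql", "rugby", "golf"},
]

# Each keyword is weighted by how often it appears among past hires.
keyword_weights = Counter(kw for resume in past_hires for kw in resume)

def similarity_score(resume):
    """Sum the historical weights of a candidate's keywords."""
    return sum(keyword_weights[kw] for kw in resume)

candidate_a = {"python", "sql", "rugby"}         # resembles the historical profile
candidate_b = {"python", "sql", "volunteering"}  # same core skills, different background

print(similarity_score(candidate_a))  # 6 -- boosted by non-job-related overlap
print(similarity_score(candidate_b))  # 3 -- penalized despite equivalent skills
```

The technical skills contribute equally to both scores; the entire gap comes from markers that merely correlate with who was hired before, which is exactly the kind of proxy effect a bias audit needs to surface.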

In addition, AI tools that evaluate candidates based on their word choice or expressions could unlawfully discriminate against applicants. For instance, voice recognition programs used to screen oral interviews might not be attuned to speech impediments, accents, or nervous-sounding answers caused by mental impairments.

What Is Being Done To Address These Risks At The State Level?

While using AI in recruitment is not yet regulated on a federal level, there are several states which have enacted or proposed legislation regulating AI in employment. Illinois is the first state to regulate an employer’s use of AI in the hiring process. The Artificial Intelligence Video Interview Act, which has been in effect since January 1, 2020, requires organizations hiring for jobs “based in” Illinois that use “artificial intelligence analysis” of video interviews to comply with certain requirements.

These requirements include:

  • Notifying the applicant that AI may be used to analyze the video;
  • Providing the applicant with information about how the AI works and evaluates general characteristics;
  • Obtaining consent from the applicant to be evaluated using AI;
  • Limiting the distribution and sharing of the video to only those persons whose expertise is necessary to evaluate the applicant; and
  • Destroying the applicant’s video within 30 days of the applicant’s request.

While this statute does not provide for a private right of action or damages, it is certainly something that employers hiring for jobs based in Illinois should be aware of. In addition to Illinois, Maryland has also recently enacted a statute, which takes effect on October 1, 2020, prohibiting the use of facial recognition services without an applicant’s consent. Although Illinois and Maryland are leading the charge in this area, other states such as New York and California have proposed legislation governing the use of AI software in employment decisions. You should monitor these developments in the jurisdictions in which you do business.

What Should You Do? A 7-Step Plan

Employers interested in using AI technology for recruitment and hiring should proceed with caution. To minimize exposure to liability, you should consider the following seven steps:

  1. Make sure there are reasonable and appropriate safeguards in place to prevent unauthorized access to personal data, and develop audit protocols to re-evaluate your cybersecurity procedures on a regular basis.
  2. Be transparent with candidates if AI is going to be used in the recruitment and hiring process, including letting candidates know from the outset exactly how the AI will be used. Make sure to also obtain the candidate’s express written consent.
  3. Ensure that the AI does not present any discriminatory barriers to hiring. This includes working with the third-party vendors providing the AI technology to understand the algorithm, auditing the system before it is deployed (one illustrative audit sketch appears after this list), and developing internal processes to assess and remediate any biases that may develop over the course of implementing the tool.
  4. On a related note, make sure to provide accommodations to candidates who are unwilling or unable to use AI during the recruitment and hiring process.
  5. Limit the distribution and sharing of any recordings to only those whose review is necessary to evaluate potential applicants, and keep a record of who has access to each recording to demonstrate reasonableness.
  6. Consult existing state laws and continue to monitor for developing legislation to ensure compliance with applicable law.
  7. Finally, you should seek advice from counsel before implementing a program based on the use of AI software.
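
To make the auditing called for in step 3 more concrete, the sketch below shows one common way to check a screening tool’s outcomes for adverse impact: comparing selection rates across demographic groups against the four-fifths (80%) guideline used by the EEOC. The data, group labels, and record format are illustrative assumptions rather than a prescribed methodology; a real audit should be designed with your vendor and legal counsel.

```python
# Illustrative adverse-impact check for an AI screening tool.
# Assumes you can export, for each applicant, a demographic group label and
# whether the tool advanced them -- both hypothetical fields in this sketch.
from collections import defaultdict

def selection_rates(records):
    """Return the share of applicants in each group that the tool advanced."""
    advanced, totals = defaultdict(int), defaultdict(int)
    for group, was_advanced in records:
        totals[group] += 1
        advanced[group] += 1 if was_advanced else 0
    return {g: advanced[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    top = max(rates.values())
    return {g: (rate / top, rate / top < threshold) for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: (group, advanced_by_tool)
    sample = ([("A", True)] * 60 + [("A", False)] * 40
              + [("B", True)] * 35 + [("B", False)] * 65)
    rates = selection_rates(sample)
    for group, (ratio, flagged) in four_fifths_check(rates).items():
        print(f"Group {group}: selection rate {rates[group]:.0%}, "
              f"impact ratio {ratio:.2f}, flagged={flagged}")
```

Run on this made-up sample, Group B’s impact ratio is 0.58, well below the 0.8 guideline, so the tool would be flagged for further review. An audit of this kind would typically be run both before deployment and periodically thereafter, with flagged results reviewed rather than acted on automatically.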

Conclusion

As AI technology in recruitment and hiring continues to develop, it may provide a viable way for many employers to maintain social distancing measures and scale their search processes. The use of these technologies presents both promise and potential privacy and discrimination concerns.

To maximize the benefit from using AI technology in hiring and recruitment, you should take steps to ensure that you have a strong understanding of the technology employed and data collected, along with how it is maintained and secured.

As noted above, it is critical that you be transparent with candidates if you intend to use such technology when evaluating them. You should also train your management-level employees on how to use such data in order to reduce any resulting risks.

A comprehensive understanding of these issues, coupled with an appropriate disclosure and notification to candidates, will help to maximize potential benefits and reduce legal risks.

Monica Snyder, an associate in the Boston and New York office of Fisher Phillips, represents management on a variety of employment matters in state and federal court and before administrative agencies. Monica has defended employers against discrimination, sexual harassment and wrongful termination suits under both federal and state laws, including claims brought under Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act, the Age Discrimination in Employment Act, and Massachusetts Chapter 151B. Monica also assists employers in their liability prevention efforts by conducting employee training, preparing handbooks and implementing policies, as well as conducting pay equity audits.

Monica regularly speaks on issues involving sexual harassment, including the #MeToo movement. She is also an active member of Fisher Phillips’ Pay Equity Practice Group, where she analyzes compensation issues related to gender.

While in law school, Monica served as a judicial intern for the Honorable Patti B. Saris of the United States District Court for the District of Massachusetts and was a Note Editor of the Journal of Science and Technology Law.

www.fisherphillips.com
