Artificial Intelligence Compliance Risks for Employers: What Should Employers Do to Avoid This New Compliance Risk?
  • CODE : MAFA-0045
  • Duration : 90 Minutes
  • Level : Intermediate

Margie Faulk is a senior-level human resources professional with over 15 years of HR management and compliance experience. Currently a Compliance Advisor for HR Compliance Solutions, LLC, Margie has served as an HR compliance advisor for major corporations and small businesses across the private, public, and non-profit sectors. She has provided businesses of all sizes with risk management strategies that protect companies and reduce potential workplace fines and penalties resulting from violations of employment regulations. Margie is bilingual (fluent in Spanish) and bicultural.

Margie holds the Professional in Human Resources (PHR) certification from the HR Certification Institute (HRCI) and the SHRM-CP certification from the Society for Human Resource Management. She is a member of the Society of Corporate Compliance & Ethics (SCCE).



Whether employers realize it or not, Artificial Intelligence (“AI”) is already in use in most workplaces. Although AI can be tremendously beneficial in the right circumstances, it can also create significant liability for employers who do not use it appropriately.

AI is the use of machines to perform tasks traditionally performed by the human brain, and it can take many forms. Generative AI, like ChatGPT, can create documents or presentations from scratch. Algorithmic or decision-making AI uses algorithms to screen candidates, and video and voice recognition software can rate a candidate’s cultural fit with your organization. Conversational AI, or chatbots, can manage initial complaint intake or employee requests for information. Digital assistants can manage calendars, edit and grammar-check documents, and create transcripts or outlines of recorded meetings. The list goes on.

Two factors compound the risk. First, front-line HR managers and procurement staff who routinely source AI hiring tools do not understand the risks. Second, AI vendors usually will not disclose their testing methods and will demand that companies provide contractual indemnification and bear all risk for any alleged adverse impact of the tools.

Employers can't rely on a vendor's assurances that its AI tool complies with Title VII of the Civil Rights Act of 1964. If the tool results in an adverse discriminatory impact, the employer may be held liable, the U.S. Equal Employment Opportunity Commission (EEOC) clarified in new technical assistance issued on May 18, 2023. The guidance explains how Title VII applies to automated systems that incorporate artificial intelligence across a range of HR-related uses.

The EEOC puts the burden of compliance squarely on employers. "[I]f an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor," the agency states in its technical assistance guidance.

States are also reviewing their exposure to AI, and several already have laws in place governing the use of artificial intelligence in the workplace. This will impact Employers with multi-state operations, especially those with remote employees.

Areas Covered

  • Learn how AI impacts the workplace and what the risks are according to the Equal Employment Opportunity Commission (EEOC), particularly with respect to discrimination allegations.
  • Learn what the EEOC's new guidelines require of Employers to avoid risk from the vendor tools they use for hiring and the other AI software they use to streamline their processes.
  • Learn what court cases are pending that may determine the fate of well-known Employers.
  • Learn why ChatGPT is a huge challenge in the workplace and how it can become a big problem for Employers.
  • Learn how AI is defined and how you should proceed without violating policies.
  • Learn what chatbots are and what their function is.
  • Learn what Employers can do to mitigate AI issues and concerns.
  • Learn what states already have AI regulations and what the penalties are for violating those regulations.
  • Learn what policies Employers should have to avoid non-compliance due to the AI tools they are using.
  • Learn how AI criminal background check tools can violate Ban the Box regulations.
  • Learn what safeguards Employers can put in place that will assist in reducing compliance risks.
  • Learn why training for managers/supervisors and other professionals should be mandated to prevent inadvertent violations caused by lack of preparation.

Who Should Attend

  • All Employers
  • Business Owners
  • Company Leadership
  • Compliance Professionals
  • Payroll Administrators
  • HR Professionals
  • Managers/Supervisors
  • Small Business Owners

Why Should You Attend

Current cases include one against Workday Inc., a maker of AI applicant-screening software, which is in the middle of a class action lawsuit alleging that its products promote hiring discrimination. The lawsuit, filed in February 2023, alleges that Workday engaged in illegal age, disability, and race discrimination by selling its customers applicant-screening tools that use biased AI algorithms.

Other pending court cases will reveal the risks that Employers are taking. Employers need to prepare by putting policies in place that protect the company, consumers, and employees.

  • $160.00



Contact us for your queries:

713-401-9995

support@grceducators.com


