Developing an Artificial Intelligence Policy that Works

- Speaker: Greg Chartier
- When: Tuesday, May 20, 2025
- Time: 01:00 PM EST
Dr. Greg Chartier, SPHR, GPHR, SCP, is a Senior Consultant with GLOMACS, specializing in human resource programs at the strategic level. He is a senior human resource professional with experience in healthcare, banking, pharmaceuticals, manufacturing and higher education. His academic qualifications include a Bachelor’s degree from The Citadel, an MBA from Rensselaer Polytechnic Institute and a Ph.D. from Madison University.
As a human resource consultant, Dr. Chartier provides outsourcing and HR management services to firms in the US and serves on the human resources faculty of two local universities. He is certified by the Human Resources Certification Institute (HRCI) as a Senior Professional and a Global Professional in Human Resources, is a Senior Certified Professional of the Society for Human Resource Management (SHRM), and is a national member of SHRM and the Council of Industry.
Greg is a thought-provoking professional speaker and his wisdom and insights into management and leadership make him an electrifying speaker and seminar leader. His seminars are customized to reinforce company mission, vision, values and culture and the content is practical for team leaders, managers, supervisors and executives alike.
He was a member of the faculty at Pace University, where he worked with the Continuing and Professional Education Programs and the Human Resources Institute at Pace. He was also a member of the faculty of New York Medical College, where he taught in the Master's in Public Health program.
Currently, he is semi-retired in Western North Carolina, working to develop more HR-related programs at local colleges.
He is the author of What Law Did You Break Today?, a guide to the federal laws and regulations with which employers must comply.
AI isn’t a lawless frontier — there are increasing regulations surrounding its use, particularly when it comes to data privacy, intellectual property, and consumer protections. Without a clear AI policy in place, your company could inadvertently violate these laws, resulting in hefty penalties or potential lawsuits.
AI systems, especially generative AI tools, often rely on vast amounts of data to function, and this data can include sensitive information. Without proper controls, employees may input proprietary or personal data into AI algorithms, exposing it to unauthorized access or data breaches.
AI models are only as good as the data they’re trained on. If the training data is biased, the AI's output may also be biased, potentially leading to unintended discrimination based on characteristics such as race, gender, or age. A corporate AI policy can enforce regular audits and bias reviews of AI-generated content, ensuring fairness and inclusivity.
AI has immense potential, but it also comes with risks. Whether it's producing inaccurate content or replacing human decision-making inappropriately, AI can lead to ethical challenges. A corporate AI policy provides a clear framework for ethical AI use, guiding how AI is deployed and ensuring it enhances rather than harms your workforce and reputation. It sets boundaries for AI's role in decision-making, protecting your organization from reputational harm or unethical practices.
Areas Covered
- Identify key risks related to AI.
- Identify a set of responsible AI principles.
- Create an AI governance structure: identify key governing organizations, their mandate, key roles, and responsibilities.
- Design an AI governance operating model.
- Evaluate policy gaps using a policy framework.
- Use your findings to develop a roadmap and communication plan to govern AI in your organization.
Why Should You Attend
Most organizations default accountability for AI to IT or don't assign accountability at all; responsible governance requires the business itself to take accountability for its approach to AI.
Very few organizations have a formal, structured approach to AI governance:
- AI can introduce or intensify risks that affect the entire organization, yet most organizations haven't integrated AI risks into their enterprise risk management framework.
- Accountability for AI is often unassigned or defaults to the CIO, even though authority and true accountability remain with the business.
- Policies are published without any controls to monitor and enforce compliance.
Topic Background
More than 80% of companies have adopted artificial intelligence (AI) in some way, and 83% of those companies consider AI a top priority in their business strategy.
AI is becoming more integrated into business processes every day, and IT, security, and compliance professionals are on the front lines of managing this dramatic shift. But as AI usage grows, so do the risks—whether it’s data privacy concerns, potential bias, or navigating complex regulations.
That’s why having a clear, comprehensive AI policy is essential to stay ahead of these challenges.
Price: $160.00
