
An Overview of NYC Automated Employment Decision Tool (AEDT)

Published October 15, 2024
Contributors

Anas Baig, Product Marketing Manager at Securiti

Sadaf Ayub Choudary, CIPP/US, Data Privacy Analyst at Securiti

I. Introduction

In response to the growing use of artificial intelligence (AI) and machine learning (ML) technologies in data processing activities, the City of New York has enacted Local Law 144, which regulates Automated Employment Decision Tools (AEDTs).

The law imposes specific obligations on employers and employment agencies that use AEDTs and regulates automated decision-making in employment and hiring practices in New York City. It prohibits the use of an AEDT unless the tool has undergone a bias audit within the past year and the required notices have been provided, and it requires employers to ensure transparency and accountability, implement non-discriminatory practices, and notify candidates and employees when AEDTs are used in hiring, promotion, and other employment decisions.

The law became effective on January 1, 2023, with enforcement beginning on July 5, 2023. The Final Rules provide further details on AEDT usage covered under Local Law 144.

This guide dives into NYC AEDT’s requirements and highlights the steps employers must take to ensure swift compliance with the regulation.

II. What is an Automated Employment Decision Tool (AEDT)?

An Automated Employment Decision Tool (AEDT) is a computational process or software application that utilizes machine learning, statistical modeling, data analytics, or artificial intelligence to assist in employment-related decision-making.

Employers use the simplified outputs produced by AEDTs, such as scores, classifications, or recommendations, to “substantially assist or replace discretionary decision-making” in procedures like hiring, promotions, and other employment-related considerations. “To substantially assist or replace discretionary decision making” refers to any automated tool used in employment decisions that:

  • Relies solely on its output without considering other factors.
  • Uses its output as the most heavily weighted criterion among multiple factors.
  • Uses its output to override other conclusions, including those made by humans.

The term "automated employment decision tool" does not include a tool that does not automate, support, substantially assist, or replace discretionary decision-making processes and that does not materially impact natural persons, including, but not limited to, a junk email filter, firewall, antivirus software, calculator, spreadsheet, database, data set, or other compilation of data.

III. Who Needs to Comply with NYC AEDT?

The law applies to employers and employment agencies using automated decision tools (such as algorithms or machine learning models) to assist in hiring or promotion decisions for New York City-based jobs. The tools covered include systems used to evaluate candidates' qualifications or make decisions about promotion.

IV. Key Regulations Under NYC AEDT

A. Annual Bias Audit Requirement

A core element of the NYC AEDT Regulation is the requirement for an annual bias audit of any AEDTs. Employers or employment agencies cannot use or continue to use an AEDT if more than one year has passed since its most recent bias audit.

An independent auditor must conduct the audit, assessing whether the tool’s results show disparate impact based on sex, race/ethnicity, or the intersection of those categories. The bias audit must include the following steps:

  • Calculate the selection rate and impact ratio for each category.
  • Perform these calculations separately for:
    • Sex categories (e.g., comparing selection rates between male and female candidates).
    • Race/Ethnicity categories (e.g., comparing selection rates between Hispanic/Latino and Black/African American candidates).
    • Intersectional categories combining sex, ethnicity, and race (e.g., comparing selection rates between Hispanic/Latino males and Black/African American females).
  • If the AEDT classifies candidates or employees into specific groups (e.g., different leadership styles), apply the calculations for each group.
  • Report the number of individuals assessed by the AEDT who were excluded from the required calculations because their category information is unknown.

A category representing less than 2% of the data used for the bias audit may be excluded from the impact ratio calculations. If a category is excluded, the summary of results must explain the justification for the exclusion and include the number of individuals and the selection or scoring rate for the excluded category.
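To make the arithmetic concrete, the short Python sketch below computes selection rates and impact ratios for two hypothetical sex categories. The category names and counts are illustrative assumptions only, not figures from any real audit; under the Final Rules, a category’s selection rate is the number of individuals selected divided by the number assessed in that category, and its impact ratio is that selection rate divided by the selection rate of the most selected category.

```python
# Minimal sketch of the selection-rate / impact-ratio math used in an AEDT bias audit.
# All counts and category names below are hypothetical examples, not real audit data.

applicant_counts = {"Male": 400, "Female": 350}   # individuals assessed per category (assumed)
selected_counts = {"Male": 120, "Female": 84}     # individuals advanced by the AEDT (assumed)

# Selection rate: individuals selected / individuals assessed in the category.
selection_rates = {
    cat: selected_counts[cat] / applicant_counts[cat] for cat in applicant_counts
}

# Impact ratio: each category's selection rate divided by the highest selection rate.
highest_rate = max(selection_rates.values())
impact_ratios = {cat: rate / highest_rate for cat, rate in selection_rates.items()}

for cat in applicant_counts:
    print(f"{cat}: selection rate {selection_rates[cat]:.2%}, "
          f"impact ratio {impact_ratios[cat]:.2f}")
```

The same calculation is repeated separately for race/ethnicity categories and for intersectional categories, and, for AEDTs that score candidates rather than select them, scoring rates are used in place of selection rates.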

B. Public Disclosure of Annual Bias Audit

Before using an AEDT, employers or employment agencies must make publicly available on their website:

  • The date of the most recent audit,
  • A summary of the results, including:
    • Source and explanation of data used for the audit.
    • The number of individuals assessed falling into an unknown category.
    • Total number of applicants or candidates.
    • Selection or scoring rates.
    • Impact ratios across categories.
  • Distribution date of the AEDT.

The publication requirement can be met by providing an active hyperlink on the website, clearly labeled as a link to the bias audit results and distribution date. This information should be available for at least 6 months after the AEDT’s latest use.

C. Notice to Candidates and Employees

Employers must notify candidates or employees at least 10 business days before using an AEDT. This notice must include:

  • The disclosure that an AEDT is being used to evaluate their application or promotion.
  • The specific job qualifications or characteristics that are being evaluated by the tool.
  • The process for requesting an alternative evaluation method or accommodations if available.
  • Information about the categories of data collected and the tool’s decision-making process, which must be provided if the candidate requests it.

Notification may be sent via email, mail, the employer’s website, or a written policy. This notice ensures that individuals are informed about the technology being used and how they are being assessed.

D. Data Retention and Transparency

Employers must:

  • Provide clear information on their website regarding AEDT data retention policies, types of data collected, and the source of such data.
  • Include instructions on how to request this information and respond to written requests within 30 days.
  • Provide reasons if disclosure of data would violate local, state, or federal laws or interfere with a law enforcement investigation.

V. Regulatory Authority

The New York City Department of Consumer and Worker Protection (DCWP) is the regulatory authority for the NYC AEDT law. It is responsible for enforcing compliance, issuing regulations and guidelines, and announcing penalties for noncompliance.

VI. Penalties for Non-Compliance

Employers that fail to comply with NYC AEDT requirements may incur penalties, including:

  • A penalty of up to $500 for the first violation and each additional violation occurring on the same day as the first.
  • Subsequent violations may result in penalties ranging from $500 to $1,500 per violation.

Violations can include failing to provide adequate notice or using an AEDT without a bias audit.

VII. Compliance Strategies for Employers

Employers must take certain actions to ensure compliance with NYC AEDT regulations, especially if they utilize or intend to use AEDTs. These include:

a. Conduct Independent Bias Audits

Employers should conduct comprehensive bias audits in collaboration with independent auditors, examining the AEDT’s inputs, outputs, and processed data, among other factors.

b. Establish Clear Communication Protocols

Employers must establish a mechanism for notifying candidates and employees about the use of AEDTs and how they may affect their applications or employment status. The notification must be uniform so that all candidates receive the same information transparently.

c. Vendor and Technology Assessment

Employers must examine their AEDT technology, particularly if it is provided by a third party. It is critical to evaluate whether these providers comply with bias audit requirements and whether their tools ensure adequate transparency.

d. Data Transparency

As a good business practice, employers should disclose the AEDT’s data sources, the tool’s methodology, and the duration of data retention. Additionally, employers must explain, upon request, the precise categories of data assessed by the tool and how they influence hiring decisions.

VIII. Conclusion

As employers rapidly embrace AI technology, ensuring compliance with these rules is crucial to establishing a fair and inclusive workplace. Employers must proactively conduct bias audits, explicitly notify candidates and employees of relevant information, and ensure data transparency.
