AI Bias & Fairness

4 Myths About the NYC AI Bias Law

Studies show that 99% of Fortune 500 companies rely on talent-sifting software, and 55% of human resources leaders in the U.S. use predictive algorithms to support hiring.1

Given how widely human resources departments use predictive algorithms, and the potential for those algorithms to go wrong, New York City is one of the first cities to pass legislation aimed at preventing negative impacts from automated employment decision tools (AEDTs).

New York City Int. No. 1894-A, relating to automated employment decision tools, goes into effect on January 1, 2023. The law restricts New York City employers from using an automated employment decision tool unless the tool has been the subject of an independent bias audit no more than one year prior to its use.

With under three months to go until the law officially takes effect, we wanted to dispel some myths about it. Keep reading to learn more.

Myth #1: The entire end-to-end human resources lifecycle, spanning candidate screening to termination events, is covered by 1894-A.

Fact: Nope. The law only covers hiring and internal promotion decisions that occur within New York City (not outside the city). It does not apply to demotions, firings, or downsizing actions. As we witness increased layoffs across industries due to mounting recessionary pressure, it’s regrettable that the law fails to cover “performance management” algorithms that make automated decisions about which employees are on the chopping block.

Myth #2: The law covers any automated process or system used in human resources.

Fact: The law does not cover or materially impact employment decisions made by “a junk mail filter, firewall, antiviral software, calculator, spreadsheet, databases, data set, or other compilation of data.”2

The legislation defines an AEDT, or automated employment decision tool, as a “computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”

Myth #3: An individual can file a court complaint if they think they’ve been discriminated against by an enterprise’s AEDT. 

Fact: The law makes no mention of a “private right of action.” If NYC finds an employer’s AEDT was discriminatory, there is a path for a federal court class action complaint.

Myth #4: Employers must provide notice to all employees and candidates that an AEDT will be used in connection with employee and/or candidate assessments or evaluations.

Fact: Non-residents of New York City are not required to receive this notice, even when applying for a city-based position; the notice only needs to go to individuals who live in New York City. It must be provided no fewer than 10 business days before the AEDT is used, and it must list the job qualifications and characteristics the AEDT will use in its decision making. Employers have the option of publicly disclosing the type of data used, the data source, and the data retention policy for the AEDT on their company website, or of providing that information to employees or candidates within 30 days of receiving a written request. Upon notice, candidates can opt out and request an alternative selection process or accommodation.

After the law goes into effect on January 1, 2023, regulators know enterprises won’t be able to comply overnight, so there will be an interim grace period to give companies enough time to put processes and tools in place to meet the regulation’s fine print before fines are levied.

Concerns around bias in the automated hiring process predated the passage of the NYC AI Bias Law. Readers interested in taking a deeper dive into algorithmic hiring, equity, and bias should check out work by the non-profit group UpTurn. For additional reading, see “We Need Fairness and Explainability in Algorithmic Hiring,” co-authored by John Dickerson, Arthur’s Chief Scientist, or delve into the academic paper “Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices.”

In summary, the NYC AI Bias Law requires an independent third-party audit to assess an AEDT’s disparate impact on candidates or employees of a particular gender or race/ethnicity. But how can you determine whether your AEDT’s outcomes result in disparate impact year-round, and not just at a single point in time when an employment decision is made?
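To make the underlying check concrete, here is a minimal sketch in Python (using pandas) of one common way to measure disparate impact: compare each group’s selection rate to that of the most-selected group and flag ratios that fall below a chosen threshold. The column names, example data, and the 0.8 “four-fifths” threshold are illustrative assumptions, not requirements of the law and not a description of Arthur’s implementation.

```python
import pandas as pd

# Hypothetical AEDT output: one row per candidate, with the demographic
# category being audited and whether the tool recommended the candidate.
df = pd.DataFrame({
    "category": ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "selected": [1, 0, 1, 1, 1, 1, 0, 0, 1, 0],
})

# Selection rate per category: the share of candidates the tool recommended.
selection_rates = df.groupby("category")["selected"].mean()

# Impact ratio: each category's selection rate relative to the most-selected category.
impact_ratios = selection_rates / selection_rates.max()

# Flag categories whose impact ratio falls below an adjustable fairness threshold.
# 0.8 (the "four-fifths rule") is a common illustrative choice, not a legal standard.
THRESHOLD = 0.8
report = pd.DataFrame({
    "selection_rate": selection_rates.round(2),
    "impact_ratio": impact_ratios.round(2),
    "below_threshold": impact_ratios < THRESHOLD,
})
print(report)
```

Running a check like this on a rolling window of recent decisions, rather than once a year, is what turns a point-in-time audit into continuous monitoring.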

Arthur’s Bias tab empowers human resources teams to view the predictions (or outcomes) of an AEDT model, segmented by relevant subgroups in the population. An adjustable fairness threshold lets you quickly identify whether your model is causing disparate impact for protected classes.

With Arthur, you can proactively and continuously measure disparate impact through algorithmic bias model monitoring, and act on it to improve outcomes for both the current and future employees you serve.

Interested in seeing the Arthur platform in action? Schedule a demo.