State Legislation

Colorado: SENATE BILL 24-205 (SB24-205)

New York: AN ACT to amend the civil rights law and the executive law, in relation to the use of artificial intelligence systems (1169--A)

Recommendations

Establish clear processes for notifying consumers when they are interacting with an AI system, unless the interaction is obvious. For consequential decisions made by high-risk AI systems, provide advance notice, explain the AI's role and the data used, and offer an appeal process with human review.

Implement a robust internal process for employees to anonymously report suspected violations of AI laws or misleading statements about risk management. Provide monthly updates on investigations to the reporting employee.

Ensure that any opt-out options for consequential decisions made by high-risk AI systems are managed according to the specified limits (e.g., one opt-out per six-month period).
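The opt-out cadence above can be enforced mechanically. A minimal sketch, modeling the six-month period as a rolling 180-day window (an assumption; the statute may compute the period differently, and the function and data model here are illustrative, not taken from either bill):

```python
from datetime import datetime, timedelta
from typing import Optional

# Assumption: "one opt-out per six-month period" is modeled as a rolling
# 180-day window measured from the consumer's last opt-out.
OPT_OUT_WINDOW = timedelta(days=180)

def may_opt_out(last_opt_out: Optional[datetime], now: datetime) -> bool:
    """True if the consumer has no prior opt-out within the rolling window."""
    if last_opt_out is None:
        return True
    return now - last_opt_out >= OPT_OUT_WINDOW
```

A deployer would call a check like this before accepting a new opt-out request, and log the decision for audit purposes.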

Key Compliance Requirements

REQUIREMENT
Civil Rights Law

Auditors must be independent entities and meet specific criteria to avoid conflicts of interest.

Criteria include restrictions on prior services, future competing business, and fee structures.

Applies to: Auditors
REQUIREMENT
Colorado
New York

Deployers must have a risk management policy and program that is regularly reviewed and updated.

The policy and program must be documented, iterative, and aimed at identifying, documenting, and mitigating known or reasonably foreseeable risks of algorithmic discrimination associated with the development or deployment of a high-risk AI system. A single policy and program may cover multiple high-risk AI systems if sufficient for each. Reasonableness is determined by considering NIST's AI Risk Management Framework (Version 1.0) or equivalent, the size and complexity of the entity, the nature, scope, and intended uses of the system, and data sensitivity and volume.

Applies to: Deployers
REQUIREMENT
New York

Deployers must provide a reasonable internal process for employees to anonymously report suspected violations of AI laws or misleading statements about risk management.

The process must include monthly updates to the reporting employee on the investigation and actions taken. Developers and deployers must provide clear notice to all employees working on such systems about their rights and responsibilities, including the right of contractors and subcontractors to use the developer's internal disclosure process. Compliance is presumed if notice is consistently posted in workplaces and provided to new/remote employees, or if annual written notice with acknowledgment is given.

Applies to: Deployers
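The monthly-update obligation above lends itself to simple deadline tracking. A hedged sketch, modeling "monthly" as a 30-day interval (an assumption; all names and the data model are illustrative, not drawn from the bill text):

```python
from datetime import date, timedelta
from typing import List

# Assumption: one update is owed every 30 days, measured from the report
# date or the most recent update, whichever is later.
UPDATE_INTERVAL = timedelta(days=30)

def next_update_due(report_filed: date, updates_sent: List[date]) -> date:
    """Date by which the next investigation update is owed to the reporter."""
    last = max(updates_sent) if updates_sent else report_filed
    return last + UPDATE_INTERVAL

def update_overdue(report_filed: date, updates_sent: List[date],
                   today: date) -> bool:
    """True if the deployer has missed the next update deadline."""
    return today > next_update_due(report_filed, updates_sent)
```

Tracking of this kind would sit alongside the anonymous intake channel so the compliance team can see which open reports are approaching or past their update deadline.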
REQUIREMENT
New York
Colorado

Developers and deployers are legally responsible for the quality and accuracy of, and for any bias or algorithmic discrimination in, consequential decisions made by high-risk AI systems.

Responsibility extends to all consequential decisions made by or with the assistance of the AI system.

Applies to: Developers and Deployers
REQUIREMENT
New York

Developers may be exempt from certain duties if they obtain written agreements from deployers confirming the AI system will not be used as a high-risk AI system, implement technical safeguards against high-risk use, clearly state this limitation on their website and in agreements, and maintain records of these agreements for at least five years.

EXEMPTION: From certain duties if conditions are met. Records of agreements must be maintained for at least five years.

Applies to: Developers
REQUIREMENT
New York

Developers of high-risk AI systems must use reasonable care to prevent foreseeable risks of algorithmic discrimination that arise from the use, sale, or sharing of the system or products featuring it.

Reasonable care must be taken to prevent foreseeable risks of algorithmic discrimination.

Applies to: Developers
REQUIREMENT
Colorado

A developer that also serves as the deployer of a high-risk AI system is exempt from certain documentation-generation requirements unless the system is provided to an unaffiliated entity acting as a deployer.

EXEMPTION: From certain documentation requirements when the system is used internally.

Applies to: Developers who are also Deployers
PROHIBITION
New York

It is prohibited to develop, deploy, use, or sell AI systems that evaluate or classify the trustworthiness of natural persons based on their social behavior or known or predicted personal or personality characteristics, where the resulting social score leads to any of the following: (1) differential treatment of certain natural persons or groups thereof in social contexts unrelated to the context in which the data was originally generated or collected; (2) differential treatment that is unjustified or disproportionate to their social behavior or its gravity; or (3) the infringement of any right guaranteed under the United States Constitution, the New York Constitution, or state or federal law.

Prohibition applies to AI systems used for trustworthiness evaluation based on social behavior if they result in unjustified or disproportionate differential treatment or infringe upon constitutional or legal rights.

Applies to: Developers, Deployers, Users, Sellers of AI Systems
REQUIREMENT
New York

The Attorney General has the discretion to promulgate rules and recommend frameworks for AI system audits.

Audits must assess algorithmic discrimination, ensure auditor independence, and incorporate community feedback.

Applies to: Attorney General
REQUIREMENT
New York

The rights and obligations established under this section are non-waivable by any person, partnership, association, or corporation.

Non-waivable rights and obligations.

Applies to: Any person, partnership, association, or corporation