Senate Refers Amended AI Bill to Judiciary Committee
The state Senate referred legislation targeting the use and growth of artificial intelligence to the legislature’s Judiciary Committee April 17, after adopting significant changes.
If enacted, SB 2 would be the first statute in the country to establish sweeping regulations for AI applications developed and deployed by private industry.
Despite major changes since it was approved by the General Law Committee last month, the bill’s general regulatory framework remains intact.
Sections 1 through 8 still establish a number of reporting requirements for developers and deployers who utilize “High-Risk AI”—essentially any AI system that makes, or is a controlling factor in making, a consequential decision.
Private Rights of Action
The bill defines “consequential decision” as any decision that has a significant impact on the provision or denial to any consumer of, or the cost or terms of “(A) any criminal case assessment, any sentencing or plea agreement analysis or any pardon, parole, probation or release decision, (B) any education enrollment or opportunity, (C) any employment or employment opportunity, (D) any essential utility, including, but not limited to, electricity, heat, Internet or telecommunications access, transportation or water, (E) any financial or lending service, (F) any essential government service, (G) any healthcare service, or (H) any housing, insurance or legal service.”
The latest version of SB 2 referred by the Senate removed “essential goods and services” from the definition of “consequential decision”, but added “essential utilit[ies].”
The amendment adopted Wednesday also made a major positive change to the enforcement of the bill’s regulatory requirements, removing the ability of private parties to bring civil complaints against companies before the state’s Commission on Human Rights and Opportunities.
While the bill has always included language prohibiting private rights of action, the version approved by the General Law Committee added a provision requiring CHRO to enforce the regulations on deployers of high-risk AI.
That version also allowed individuals to bring complaints before the CHRO and have noncompliance with the regulations enforced as a discriminatory act under existing CHRO laws—essentially creating a back-door private right of action.
The amended legislation now gives sole enforcement authority to the Office of the Attorney General—similar to the Data Privacy Act passed in 2022.
Other Changes
Other major changes include:
- Pushes enforcement implementation dates back to Oct. 1, 2025, for high-risk AI systems and Jan. 1, 2026, for synthetic digital content
- Removes the Department of Consumer Protection as an enforcement agency
- Removes the requirement that developers and deployers notify the Attorney General when there is a “reasonable likelihood” that high-risk AI systems caused algorithmic discrimination; notification is now required only when an entity discovers that a system “actually” caused algorithmic discrimination
- Gives consumers the opportunity to (i) appeal any adverse consequential decision arising from deployment of a high-risk AI system and, if technically feasible, obtain human review; and (ii) if the deployer is a controller under the Data Privacy Act, submit a notice exercising the consumer’s right to opt out of the processing of their personal data for purposes of profiling in furtherance of solely automated decisions that produce legal or similarly significant effects concerning them
- Requires developers to disclose documentation to deployers containing “relevant information concerning mitigation of algorithmic discrimination and explainability”
- Modifies the public interest research exemption by (1) removing the requirement that such research be approved, monitored, and governed by an institutional review board or similar entity; and (2) adding the requirement that it be conducted in accordance with “(i) 45 CFR Part 46; or (ii) relevant requirements established by the [FDA]”
- Modifies the FDA exemption to include general-purpose AI models; also now includes entities that “conduct any research required to support an application for approval from the [FDA]”
- Applies the regulations regarding “synthetic digital content” only to developers, not deployers
- Narrows the broad public disclosure requirement from all artificial intelligence systems to just high-risk systems
- Adds the ISO/IEC 42001 publication as acceptable guidance to model risk-management policies
For more information, contact CBIA’s Wyatt Bosworth (860.244.1155) | @WyattBosworthCT.