Committee Approves Amended Artificial Intelligence Bill

The state legislature’s General Law Committee unanimously approved amended legislation targeting the use and growth of artificial intelligence.

SB 2, championed by committee co-chair Sen. James Maroney, contains many, but not all, consensus items recommended by the AI Working Group earlier this year.

That group was established last year and tasked with making recommendations concerning the ethical and equitable use of AI in state government and regulation of its use in the private sector based on the White House’s Blueprint for an AI Bill of Rights and other similar materials.

The bill favorably reported last week made a number of changes to both the workforce development and regulatory sections originally drafted.

Regulatory Changes

The substitute bill adds both the Department of Consumer Protection and Commission on Human Rights and Opportunities as agencies to enforce the regulatory and reporting requirements imposed on developers and deployers of AI.

The bill originally vested the sole enforcement authority to the Office of the Attorney General.

For example, under Section 11 of the substitute bill, if a business fails to use reasonable care to protect any consumer from any known or reasonably foreseeable risk of algorithmic discrimination, it is a discriminatory practice subject to investigation and enforcement by CHRO. 

Similar to the enforcement measures required of the OAG and DCP, CHRO must issue a notice of violation to the deployer if the commission determines that it is possible to cure the violation.

If the deployer fails to cure the violation within 60 days, CHRO may bring an enforcement action.

Once the action is commenced, Section 11 stipulates that the deployer may assert an affirmative defense that it used reasonable care if it can prove compliance with the reporting and notification requirements laid out in Section 3 of the bill.

Those requirements include (1) creating and continually updating an impact assessment; (2) developing a risk management policy that specifies the principles, processes, and personnel the deployer will use to identify, document, and eliminate any known or reasonably foreseeable risks of discrimination; and (3) notifying consumers and state enforcement agencies when discrimination is discovered.

Synthetic Images

Sections 6 and 7 also add new requirements for developers and deployers that utilize AI systems that produce synthetic content.

On the developer side, Section 6 requires developers to (1) ensure that the outputs of the AI system are marked in a machine-readable format, detectable as synthetic digital content, and distinguishable as such; and (2) ensure that those technical solutions are effective and interoperable.

On the deployer side, Section 7 requires deployers to disclose to consumers that the synthetic digital content has been artificially generated or manipulated.

No disclosure is necessary if the synthetic content is in the form of text published to inform the public on any matter of public interest.

To meet this exemption, two conditions must be satisfied: (1) the synthetic content has undergone a process of human review or editorial control; and (2) a person holds editorial responsibility for the publication of such synthetic digital content.

New Funding, Programming

Section 26 creates a new competitive grant program housed within the Department of Economic and Community Development to fund pilot studies conducted for the purpose of using AI to reduce health inequities in the state. 

Section 30 requires the Department of Public Health to conduct a study of, and make recommendations regarding the adoption of, governance standards concerning the use of AI by healthcare providers.

The study must assess (1) the extent to which healthcare providers currently use AI; (2) any means available to increase such use; (3) any risks stemming from such use; and (4) any means available to monitor AI-produced results to ensure they have the desired effect on patient outcomes.

Workforce Development

The bill keeps in place the following workforce development initiatives that CBIA supported at the public hearing earlier this month:

  • Section 22: Requires the Office of Workforce Strategy to (1) incorporate AI training into workforce training programs offered by the state; and (2) design and implement an outreach program for promoting access to broadband Internet access service. Both of these initiatives require collaboration with a number of industry and state stakeholders. 
  • Section 23: Requires the Board of Regents, through Charter Oak State College, to establish a Connecticut Citizens AI Academy for the purpose of developing and offering online courses regarding AI and the responsible use of AI with certificates and badges awarded upon completion of these courses. 
  • Section 24: Requires the Board of Regents to establish certificate programs in prompt engineering, AI marketing for small businesses and AI for small business operations at community-technical colleges.
  • Section 25: Requires DECD to (1) develop a plan to offer high-performance computing services to businesses and researchers; (2) establish a confidential computing cluster for businesses and researchers; and (3) conduct a Connecticut AI Symposium to foster stakeholder engagement.

The bill awaits action in the Senate.

For more information, contact CBIA’s Wyatt Bosworth (860.244.1155) | @WyattBosworthCT.

