Public, Industry Debate Sweeping AI Bill

03.07.2024

The General Assembly’s General Law Committee held a much-anticipated public hearing Feb. 29 on sweeping legislation targeting the use of artificial intelligence.

The committee is expected to act on SB 2 when it meets March 12.

The bill, championed by committee co-chair Sen. James Maroney (D-Milford), contains many, but not all, consensus items recommended by the AI Working Group earlier this year. 

That group was established last year and tasked with making recommendations concerning the ethical and equitable use of AI in state government and the regulation of its use in the private sector, based on the White House’s Blueprint for an AI Bill of Rights and other similar materials.

While the group did not reach consensus on regulations related to the private sector, SB 2 nonetheless creates a new regulatory framework for both developers and deployers of AI.

Reporting Requirements

Sections 1 through 7 establish a number of reporting requirements for developers and deployers who utilize “high-risk AI,” defined as “any artificial intelligence system that, when deployed, makes, or is a controlling factor in making, a consequential decision.”

In turn, “consequential decision” means “any decision that has a material legal or similarly significant effect on any consumer’s access to, or the availability, cost or terms of, any criminal justice, education enrollment or opportunity, employment, essential good or service, financial or lending service, government service, healthcare service, housing, insurance or legal service.”

Under the bill, both developers and deployers of high-risk AI must use reasonable care to avoid known or reasonably foreseeable risks of algorithmic discrimination when the technology makes consequential decisions.

These entities also have a number of reporting requirements to both the public and the Attorney General (the sole enforcement authority of Sections 1 through 7).

For example, an employer looking to deploy a “high-risk AI” technology would need to first develop a risk management policy that must specify principles, processes and personnel that the employer will use in maintaining the risk management program to identify, document, and eliminate any known or reasonably foreseeable risks for discrimination.

This risk management policy must be based on the latest version of the NIST AI Risk Management Framework or any framework designated by the Attorney General.

Impact Assessment

The employer would also have to create and continually update an impact assessment throughout the high-risk AI’s deployment.

Among other things, this assessment must include (1) a statement of intended use and benefits afforded by the AI; (2) an analysis of whether deployment poses any risks for discrimination and, if so, steps to eliminate that risk; (3) a description of the technology’s inputs and outputs; (4) the type of data used to train the AI; (5) performance metrics; (6) transparency measures; and other specified items.

Employers would have to keep impact assessment records for a minimum of three years, and also display a statement on their website concerning the types of high-risk AI systems currently deployed and how the employer manages discrimination risks.

Similar impact assessments and risk management policies are applicable to both (1) developers of high-risk AI; and (2) developers of generative AI.

The bill also requires entities to disclose to each consumer who interacts with any AI system that the consumer is interacting with the system, unless: (1) a reasonable person would deem it obvious that such person is interacting with an AI system; or (2) the developer or deployer did not directly make the AI system available to consumers.

Enforcement

Similar to the Connecticut Data Privacy Act passed two years ago, the Office of the Attorney General is the sole enforcement authority for the reporting and notification requirements described above, through the creation of a new CUTPA violation under state law.

Going back to that hypothetical employer above, the bill requires the employer to notify the Attorney General within 90 days of discovering high-risk AI discrimination against a member of the public. 

During the law’s first year of implementation, the Attorney General must, after receiving notice of possible discrimination and before initiating an enforcement action, issue a notice of violation to the developer or deployer and give the entity 60 days to cure the issue.

After the law has been in effect for one year, the Attorney General has discretion, based on a number of factors under Section 8, over whether to grant the developer or deployer an opportunity to cure.

The bill also specifies that: (1) there is no private right of action related to the enforcement of Sections 1 through 7; and (2) a rebuttable presumption of reasonable care for the developer or deployer exists if they complied with respective risk management, impact assessment, and public disclosure and notification requirements under the bill.

Industry Concerns, Collaboration

Many trade groups and businesses expressed concern over the breadth and scope of the proposed requirements contemplated under the bill. 

Christopher Gilrein, executive director for Northeast TechNet, participated as a member of the AI Working Group last year and expressed opposition to the regulatory approach taken:

“Sections 1-7 of SB 2 are far more extensive than what the task force recommended from a regulatory perspective,” he told lawmakers.

“They establish a strict duty of care, create new technical standards, and address issues of transparency, explainability, and bias mitigation.

“As several presenters to the AI Working Group explained, all of these issues are currently being worked on at the federal level, via a number of ongoing workstreams created by the White House Executive Order on Artificial Intelligence.

“Given the complexity of these issues, and the Working Group’s recommendation that Connecticut’s actions align with national and global standards, TechNet recommends that Sections 1-7 of the bill be held until next session to allow the federal process to continue and to ensure that the state does not inadvertently adopt standards that are out of step with national and international models, thereby hampering Connecticut’s competitiveness in this space.”

Working Group Alignment

Andrew Kingman, counsel with the State Privacy & Security Coalition, requested that SB 2 “align more closely with the recommendations of the Artificial Intelligence Working Group” and offered to “work in good faith with the sponsors and stakeholders to achieve the best result possible for Connecticut consumers and businesses.”

CBIA also expressed concerns over the proposed regulations, with assistant counsel Wyatt Bosworth telling committee members the bill “could chill AI innovation in our state if adopted in its proposed form.”

“As to the definitions, regulations, and reporting requirements that were not consensus items amongst the AI Working Group (Sections 1-7), CBIA has concerns about the unintended consequences of these sections,” he said.

“CBIA is committed to working with this committee and other stakeholders to achieve an AI regulatory framework that builds on best practices currently being developed across the world, and that fosters innovation, grows our economy, and protects Connecticut residents.”

Workforce Development Initiatives

CBIA and other groups expressed strong support for a number of sections that recognize and implement important investments in workforce development around artificial intelligence.

Many of these programs were developed as a result of industry engagement with the AI Working Group last year.

For example, CBIA, in collaboration with the Connecticut Academy of Science and Engineering, surveyed over 2,000 small businesses to gain insight into (1) how AI is currently being deployed; (2) the challenges employers face with that deployment; and (3) what tools the state and industry can provide to spur innovation and adoption of AI technologies.

Two key recommendations from the report, state investments in high-performance computing and in AI research and student preparation, were adopted in the bill:

  • Section 14: Requires the Office of Workforce Strategy to (1) incorporate AI training into workforce training programs offered by the state; and (2) design and implement an outreach program for promoting access to broadband Internet access service. Both of these initiatives require collaboration with a number of industry and state stakeholders. 
  • Section 15: Requires the Board of Regents, through Charter Oak State College, to establish a “Connecticut Citizens AI Academy” for the purpose of developing and offering online courses regarding AI and the responsible use of AI with certificates and badges awarded upon completion of these courses. 
  • Section 16: Requires the Board of Regents to establish certificate programs in prompt engineering, AI marketing for small businesses, and AI for small business operations at community-technical colleges.
  • Section 17: Requires DECD to (1) develop a plan to offer high-performance computing services to businesses and researchers; (2) establish a confidential computing cluster for businesses and researchers; and (3) conduct a “CT AI Symposium” to foster stakeholder engagement.

For more information, contact CBIA’s Wyatt Bosworth (860.244.1155) | @WyattBosworthCT.
