Sweeping AI, Online Safety Bill Impacts All Industries

05.08.2026
Issues & Policies

The Connecticut General Assembly this week approved a far-reaching measure addressing artificial intelligence, workforce development, online platforms, and consumer protections.

While SB 5 reflects growing bipartisan interest in artificial intelligence policy, several sections raise serious concerns for employers operating in Connecticut. 

“CBIA engaged extensively throughout the legislative process—working to narrow, clarify, or limit provisions that could disrupt innovation, hiring practices, and business operations—while supporting sections that foster workforce development and responsible innovation,” said CBIA vice president of public policy Chris Davis. 

One of the most consequential provisions of the bill regulates the use of automated employment-related decision technology, including tools used in recruiting, hiring, promotion, discipline, and termination. 

Increased Litigation Risk

While the bill does not outright ban these tools, it imposes new disclosure obligations and notice requirements for applicants and employees, and it expands liability exposure by expressly stating that the use of automated decision tools is not a defense in employment discrimination claims.

Employers could face increased litigation risk even when using widely adopted human resources software. 

CBIA opposed these sections, warning lawmakers that the provisions could discourage responsible use of technology designed to reduce bias, expand access to talent, and improve efficiency—particularly for small and mid-sized employers without large HR teams. 

“These provisions could unintentionally penalize employers for using common, lawful tools.”

CBIA’s Chris Davis

“Artificial intelligence is already deeply embedded in modern hiring and human resources systems,” said Davis.

“We were concerned that these provisions could unintentionally penalize employers for using common, lawful tools that help them recruit and manage talent more effectively.” 

Through sustained advocacy, CBIA helped secure important limitations, including clarification of covered technologies, limitations on disclosure requirements, protections for trade secrets, and an enforcement structure that relies on the attorney general rather than private lawsuits. 

With no exclusions for duplicative nondiscrimination laws or industry-specific standards, these employment-related sections affect all industries.

Overly Broad Consumer-Focused Rules 

The bill also establishes new rules for artificial intelligence companions and chatbots, including requirements related to disclosures, content moderation, and safeguards for minors. 

While CBIA appreciates the bill’s intent to protect users—especially children—these sections may unintentionally sweep in business-facing tools and customer service technologies that were never designed to function as “companions” or mental health resources. 


“Businesses increasingly rely on AI-powered chat tools for customer service, education, and support,” Davis said.

“We are concerned that vague definitions and rigid requirements could create unnecessary compliance burdens or discourage innovation in this rapidly evolving space.” 

The bill classifies violations of these provisions as unfair trade practices, enforceable by the attorney general for adults and with a private right of action for minors, adding regulatory and enforcement risk for companies that deploy AI-driven customer interfaces. 

Uncertainty for Developers 

Another area of concern is the creation of an independent verification organization pilot program, overseen by the Department of Consumer Protection.

The pilot allows third-party entities to verify AI models against certain safety and risk standards. 


While the concept of voluntary verification has merit, CBIA remains concerned that the program does not provide a true legal safe harbor and opens the door to future verification requirements.

Verification evidence is explicitly excluded from consideration in enforcement actions by the attorney general or state agencies, limiting its usefulness for businesses seeking certainty and liability protection. 

“If verification is meant to encourage best practices, it needs to offer real value to the companies that participate,” Davis said. “As currently structured, this program raises questions about cost, benefit, and long-term regulatory expectations.” 

Constitutional Questions 

The bill also includes extensive social media platform regulations, particularly focused on minors, algorithmic content recommendations, warning labels, and usage limits. 

While CBIA is not directly opposed to addressing youth online safety, similar laws in other states have faced serious constitutional challenges in federal courts.


Several provisions in SB 5 raise comparable First Amendment and commerce clause concerns. 

“We’ve already seen courts strike down similar laws in other states,” Davis noted.

“That raises questions about whether these provisions will withstand legal scrutiny and whether businesses will face years of uncertainty as the courts sort it out.” 

AI Whistleblower Protections 

SB 5 also establishes new whistleblower protections for employees of so‑called “frontier” AI developers—companies training large, high‑compute foundation models that could pose what the bill defines as “catastrophic risk” to public safety. 

Under these provisions, frontier AI developers are prohibited from retaliating against employees who report, in good faith, concerns related to catastrophic risks such as large‑scale cyberattacks, loss of control over advanced AI systems, or other severe public safety threats.

Certain large frontier developers must also establish internal reporting mechanisms that allow covered employees to submit risk reports anonymously, with periodic updates and oversight by company leadership. 

While the bill is narrowly targeted at a small subset of advanced AI developers operating at the highest levels of computing power, CBIA raised concerns about layering AI‑specific whistleblower mandates on top of Connecticut’s already robust employee protection laws, including existing protections for employees who raise public safety or legal compliance concerns. 


“Connecticut already has strong whistleblower laws that protect employees from retaliation,” said Davis.

“Our concern was that creating a separate, AI‑specific whistleblower regime could lead to confusion, duplication, and unintended consequences for employers operating in highly technical and competitive fields.” 

CBIA worked with lawmakers to limit the scope of these provisions so they apply only to frontier developers—primarily large, well‑capitalized companies—and not to businesses broadly using AI as part of their operations.

Civil penalties were capped, private rights of action were excluded, and enforcement authority was placed with the attorney general. 

Even with those changes, CBIA remains wary that the provisions could set a precedent for industry‑specific employment mandates tied to emerging technologies, rather than relying on Connecticut’s established, technology‑neutral employment law framework. 

Workforce Development, AI Education 

CBIA strongly supports the bill’s workforce development and education components, including the expansion of the Connecticut AI Academy, additional workforce training programs, partnerships with higher education, and initiatives to help small businesses adopt artificial intelligence productively. 

These provisions align with CBIA’s longstanding focus on closing skills gaps, preparing workers for emerging technologies, and ensuring employers have access to a future-ready workforce. 

“The workforce development sections of this bill move us in the right direction and are a positive example of how policymakers and the business community can work together,” Davis said. 

The bill also directs state agencies to develop an artificial intelligence regulatory sandbox, allowing companies to test innovative products under temporary, reduced regulatory requirements. 

“Connecticut needs to focus on preparing workers and employers for the AI economy, not chasing that innovation out of state.”

Davis

CBIA views this as an important tool for fostering innovation while maintaining appropriate oversight. 

“Although laws and regulations should be written clearly enough that a sandbox is not needed, the expansive requirements of this legislation give the regulatory sandbox an opportunity to serve as a place where innovation happens safely and responsibly, without the fear of overreaching regulatory enforcement,” Davis said.

CBIA encourages employers to stay engaged as regulatory details emerge—and to share real-world feedback with policymakers as Connecticut navigates this new and complex policy landscape. 

“Connecticut needs to focus on preparing workers and employers for the AI economy, not chasing that innovation out of state with state-specific regulations,” Davis said. 


For more information, contact CBIA’s Chris Davis (860.244.1931).
