Senate Passes Sweeping AI Mandates

The Connecticut Senate this week approved one of the most far‑reaching artificial intelligence policy packages considered by any state legislature to date.
SB 5, which was amended on the Senate floor, was approved on a 32–4 vote and now heads to the House for consideration.
While commonly referred to as an “online safety” bill, the revised version now spans more than 70 pages and reaches well beyond consumer protections.
It imposes costly new compliance obligations on employers and businesses that use AI‑driven tools, particularly in hiring and employment decisions.
Among its many provisions, SB 5 establishes a new regulatory framework for the use of automated employment‑related decision technology, including software used to screen applicants, rank candidates, evaluate performance, or support promotion, discipline, or termination decisions.
Why SB 5 Matters for Employers
For Connecticut employers already navigating a tight labor market and growing compliance responsibilities, these provisions add new notice, disclosure, and documentation requirements—and create new legal risk when AI tools are used in the workplace.
Under the bill, “automated employment‑related decision technology” includes any system that:
- Processes personal data
- Uses computation to generate outputs such as scores, rankings, predictions, classifications, or recommendations
- Is a substantial factor in making or materially influencing an employment decision
This definition is intentionally broad and could apply to many commonly used tools, including third‑party hiring platforms, resume screening software, assessment tools, scheduling algorithms, and performance analytics systems.
Certain routine technologies—such as word processing, spreadsheets, email, basic data storage, or tools used only incidentally—are excluded, but any system that meaningfully influences employment decisions may fall within scope.
New Employer Responsibilities
Beginning Oct. 1, 2027, employers that deploy covered tools would face several new obligations.
Employers must notify individuals before an employment decision is made if an automated system is used as a substantial factor. The notice must disclose:
- That an automated employment decision tool is being used
- The purpose of the tool and the type of employment decision involved
- The trade name of the technology
- The categories and sources of personal data analyzed
- How the data is assessed
- Employer contact information
In addition, employers must disclose when an applicant or employee is interacting directly with automated systems, unless it would be obvious to a reasonable person.
Where tools are developed by third‑party vendors, SB 5 places obligations on developers to provide deployers (employers) with sufficient information to meet compliance duties—unless the tool was not marketed or intended to materially influence employment decisions.
The bill permits developers and employers to contractually allocate compliance responsibilities, but those roles must be clearly defined in writing.
Trade Secret Protections—With Limits
While SB 5 explicitly preserves protections for trade secrets and proprietary information, employers or vendors who withhold information must explain what is being withheld and why, adding another compliance step.
The bill amends Connecticut’s anti‑discrimination statutes to make clear that the use of automated decision technology is not a defense against claims of employment discrimination.
Courts and regulators may consider evidence of anti‑bias testing or similar proactive efforts, but such testing does not eliminate liability.
For employers, this means AI systems must be treated like any other employment decision tool—with full accountability for discriminatory outcomes, regardless of whether decisions were automated or human‑led.
Enforcement and Legal Exposure
Violations of the automated employment technology provisions would be treated as unfair or deceptive trade practices, enforceable exclusively by the Connecticut Attorney General.
While the bill does not create a private right of action, enforcement could include:
- Civil penalties
- Injunctive relief
- Investigations and compliance mandates
Notably, the Attorney General is given discretion to issue a cure notice for violations through the end of 2027, offering limited flexibility during initial implementation.
What Else Is in Senate Bill 5?
In addition to regulating AI use in employment decisions, SB 5 establishes a sweeping framework covering consumer protection, generative AI, frontier models, online platforms, and state AI governance.
AI Subscription Transparency (Section 1). Subscription‑based providers of AI tools must clearly disclose key terms before charging or renewing a subscription, including functional limitations and the provider’s discretion to restrict access. Violations are enforceable as unfair trade practices by the Attorney General only, with no private right of action.
Frontier AI Models and “Catastrophic Risk” (Section 2). Large developers training high‑compute “foundation models” must establish internal whistleblower protections and reporting processes related to catastrophic risks (e.g., mass harm, major cyberattacks). These provisions apply to a narrow group of large, advanced AI developers, not typical business users.
AI Regulatory Sandbox (Section 3). The Department of Economic and Community Development is directed to design an AI regulatory sandbox, allowing companies to test innovative AI products under reduced regulatory requirements—intended to support innovation and competitiveness.
AI Companions and Mental Health Safeguards (Sections 4–6). Companies offering AI “companions” with human‑like interactions must:
- Disclose that users are interacting with AI, not a human
- Prevent encouragement of self‑harm or violence
- Implement heightened safeguards for minors, including usage limits and parental controls
Violations are enforceable by the Attorney General. These provisions primarily affect consumer‑facing AI platforms, not internal business tools.
Generative AI Content Provenance (Section 15). Large providers of consumer‑facing generative AI (over one million monthly users) must embed content provenance data in AI‑generated or materially altered images, audio, and video, consistent with emerging technical standards. Business‑to‑business uses and many narrow tools are exempt.
AI Verification Pilot Program (Section 33). The Department of Consumer Protection will launch a pilot program approving up to five independent third‑party AI verification organizations. While verification may be admissible in certain civil suits, it does not create a defense in enforcement actions by the state.
Social Media and Algorithmic Feeds for Minors (Section 39). Online platforms that algorithmically recommend content must significantly limit how content is personalized for minors, impose default time limits, restrict notifications, and display prominent health warnings. These provisions target social media platforms, not general business websites.
State Agency AI Oversight (Sections 37–38). State agencies must inventory AI systems, conduct impact assessments, and comply with centralized standards before deploying AI that affects public benefits or individual rights. These requirements apply to government use of AI, not private employers directly.
Workforce, Education, and Small Business Provisions (Sections 17–21, 26, 29). SB 5 creates a Connecticut AI Academy, expands AI workforce training, requires employers to disclose when layoffs are related to AI use, and directs state agencies to help small businesses adopt AI responsibly and competitively.
If Connecticut moves forward with one of the nation’s most expansive AI regulatory frameworks, clarity for businesses and practical compliance pathways will be critical to ensuring innovation, fairness, and economic competitiveness advance together.
CBIA will continue tracking SB 5 and provide updates on any changes affecting Connecticut employers.
For more information, contact CBIA’s Chris Davis (860.244.1931).