Connecticut is actively pursuing AI regulation through a set of targeted bills addressing online safety, responsible AI governance, and employee protections against automated decision-making. While no AI-specific legislation has been enacted yet, the three bills moving through the General Assembly are substantial in scope and signal the state's intent to establish meaningful guardrails around artificial intelligence.
Current Data
Currently tracking 9 AI-related bills in Connecticut for the 2025-2026 session: 0 enacted, 3 in committee.
SB 5: An Act Concerning Online Safety
Senate Bill 5 is Connecticut's online safety initiative, targeting AI-driven content platforms and their impact on users, particularly minors. The bill addresses how artificial intelligence systems curate, recommend, and deliver content online, with an emphasis on platform accountability for algorithmic amplification of harmful material. Connecticut already passed a children's online privacy law in recent sessions, and SB 5 builds on that foundation by directly regulating the AI systems behind content delivery.
Who Is Covered
Operators of online platforms that use AI or algorithmic systems to recommend, rank, or curate content for Connecticut residents. This includes social media platforms, content aggregators, streaming services, and any digital service that uses automated systems to personalize user experiences.
Key Provisions
- Require platforms to disclose how AI algorithms curate and recommend content to users
- Establish safeguards for minors against AI-driven content that promotes self-harm, eating disorders, or other harmful behaviors
- Mandate platform accountability measures for algorithmic amplification of dangerous or misleading content
- Provide mechanisms for users to opt out of AI-driven content recommendation systems
Business Impact
Companies operating content platforms accessible to Connecticut residents should prepare for transparency requirements around their recommendation algorithms. The bill would require documented processes for how AI systems select and prioritize content, with heightened obligations when minors are involved.
SB 86: An Act Addressing Responsible AI Use
Senate Bill 86 is Connecticut's most comprehensive AI governance proposal. It addresses the responsible development, deployment, and use of artificial intelligence across sectors, establishing a broad framework for AI accountability. This bill represents an ambitious attempt to create overarching rules for how organizations build and deploy AI systems within the state.
Who Is Covered
Any business or organization that develops, deploys, or uses AI systems that affect Connecticut residents. This broad scope captures technology companies, healthcare providers, financial institutions, insurers, employers, and government agencies.
Key Provisions
- Establish standards for transparency in AI decision-making, requiring organizations to disclose when AI is used in consequential decisions
- Require impact assessments for high-risk AI systems deployed in areas such as healthcare, finance, employment, and housing
- Create accountability frameworks that mandate human oversight of automated systems making significant decisions about individuals
- Define requirements for AI system documentation, testing, and ongoing monitoring
- Address bias and discrimination in AI outputs, requiring regular audits of algorithmic fairness
Business Impact
If enacted, SB 86 would establish one of the more comprehensive state-level AI governance frameworks in the country. Organizations deploying AI systems in Connecticut would need to conduct risk assessments, maintain documentation of their AI systems, and implement oversight mechanisms. Companies should begin inventorying their AI tools and evaluating which would qualify as high-risk under the proposed framework.
SB 435: An Act Concerning Automated Decision Systems Protections for Employees
Senate Bill 435 focuses specifically on the use of artificial intelligence and automated decision systems in the employment context. As employers increasingly rely on AI for hiring, performance evaluation, scheduling, and workforce management, this bill seeks to protect Connecticut workers from opaque or biased algorithmic decisions that affect their livelihoods.
Who Is Covered
Employers operating in Connecticut who use automated decision systems or AI tools in any aspect of the employment lifecycle, including recruitment, hiring, promotion, termination, compensation, and performance evaluation.
Key Provisions
- Require employers to notify employees and job candidates when automated decision systems are used in employment decisions
- Mandate impact assessments for AI tools used in hiring, promotion, and termination decisions
- Provide employees with the right to request human review of significant AI-driven employment decisions
- Prohibit the use of AI systems that produce discriminatory outcomes in employment based on protected characteristics
- Require employers to maintain records of how automated systems are used and their outcomes
Business Impact
Employers using AI-powered applicant tracking systems, resume screeners, video interview analysis tools, performance monitoring software, or automated scheduling platforms would need to evaluate these tools for compliance. The right-to-human-review provision is particularly significant, as it would require organizations to maintain the capacity for human decision-makers to override or reconsider AI-generated employment decisions.
Key Bills at a Glance
| Bill | Topic | Status | Risk Level |
|---|---|---|---|
| SB 5 | Online safety & AI content platforms | Introduced | Medium |
| SB 86 | Responsible AI use & governance | Introduced | High |
| SB 435 | Employee automated decision protections | Introduced | High |
Connecticut's Privacy Landscape
Connecticut is not starting from scratch on data governance. The state enacted the Connecticut Data Privacy Act (CTDPA), which took effect on July 1, 2023, making it one of the earliest comprehensive state privacy laws in the nation. The CTDPA grants consumers rights over their personal data, including the right to access, correct, delete, and opt out of data processing for targeted advertising and profiling.
The CTDPA's profiling provisions are directly relevant to AI regulation. The law already requires data controllers to conduct data protection assessments for processing activities that present a heightened risk of harm, including profiling that produces legal or similarly significant effects. The three AI bills currently under consideration build on this existing framework, extending protections into specific domains such as online content, employment, and general AI governance.
This existing privacy infrastructure gives Connecticut a practical advantage: businesses already subject to the CTDPA have compliance processes in place that can be adapted to meet the requirements of new AI-specific legislation.
Federal Rules to Watch
Connecticut's AI bills exist alongside evolving federal activity. The Biden administration's Executive Order on AI (since rescinded in January 2025) established guidelines for federal agencies, and Congress continues to debate proposals around algorithmic accountability and AI transparency. Businesses operating in Connecticut should monitor federal developments that could preempt, complement, or conflict with state-level requirements.
Key federal considerations include the voluntary NIST AI Risk Management Framework (AI RMF 1.0, published in 2023), ongoing FTC enforcement actions against deceptive AI practices, and sector-specific guidance from agencies such as the EEOC on AI in employment. Connecticut's SB 86 in particular may overlap with federal standards, so organizations should plan for compliance at both levels. Visit our federal AI policy tracker for the latest updates.
Compliance Checklist for Connecticut
- Inventory your AI systems — catalog every AI tool and automated decision system your organization uses, noting which ones affect Connecticut residents or employees
- Assess content platform obligations — if you operate a platform that uses AI to recommend content, evaluate your transparency and minor-protection practices against SB 5's proposed requirements
- Conduct AI impact assessments — for high-risk AI systems in healthcare, finance, housing, or employment, begin documenting risk assessments aligned with SB 86's framework
- Audit employment AI tools — review all AI-powered hiring, evaluation, and workforce management tools for bias and ensure human review processes exist per SB 435
- Review CTDPA compliance — confirm your existing Connecticut Data Privacy Act compliance program covers AI-related data processing and profiling activities
- Establish human oversight protocols — ensure your organization can provide meaningful human review of consequential AI decisions, particularly in employment contexts
- Monitor bill progress — all three bills remain early in the legislative process; track committee hearings and amendments for changes that may affect compliance timelines
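For organizations starting the inventory and triage steps above, the exercise can be sketched as a simple record per AI system. This is an illustrative sketch only: the field names and the "high"/"standard" tiers below are assumptions for demonstration, not categories defined in SB 5, SB 86, or SB 435, and the high-risk domains are taken from the areas the article lists under SB 86.

```python
from dataclasses import dataclass

# Illustrative inventory record for the checklist's first step.
# Field names and risk tiers are assumptions for this sketch,
# not terms defined in the Connecticut bills.

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    use_case: str              # e.g. "resume screening"
    affects_ct_residents: bool
    used_in_employment: bool   # flags systems needing SB 435-style review
    domain: str                # e.g. "employment", "healthcare", "media"

    def risk_tier(self) -> str:
        """Rough triage using the high-risk areas named under SB 86."""
        high_risk_domains = {"healthcare", "finance", "employment", "housing"}
        if self.affects_ct_residents and self.domain in high_risk_domains:
            return "high"
        return "standard"

# Hypothetical inventory entries for illustration.
inventory = [
    AISystemRecord("ResumeRanker", "AcmeHR", "resume screening",
                   affects_ct_residents=True, used_in_employment=True,
                   domain="employment"),
    AISystemRecord("FeedSort", "in-house", "content recommendation",
                   affects_ct_residents=True, used_in_employment=False,
                   domain="media"),
]

# Systems flagged for a documented impact assessment.
needs_assessment = [r.name for r in inventory if r.risk_tier() == "high"]
print(needs_assessment)  # ['ResumeRanker']
```

Even a lightweight catalog like this makes it easier to answer the threshold questions each bill raises: which systems touch Connecticut residents, which touch employment decisions, and which fall into a high-risk domain.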
For a complete index of Connecticut AI legislation, visit our Connecticut AI laws tracker.
This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for guidance specific to your situation.
— AI Laws by State Team