How We Classify AI Bills

Our methodology for categorizing AI legislation, defining topic tags, and identifying liability provisions. Tags are not mutually exclusive — a single bill can carry multiple topic tags.

Topic Tags Are Not Mutually Exclusive

A bill can carry multiple topic tags simultaneously. For example, Colorado SB 24-205 carries "AI Transparency," "Automated Decision-Making," and "Comprehensive AI" tags. A bill tagged "AI Transparency" is not automatically free of penalties or liability provisions — many transparency bills include enforcement mechanisms, fines, or civil action rights.
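The non-exclusive tagging model can be sketched as a set-valued field on each bill record. The bill number and tag names below come from the Colorado example above; the record structure itself is an illustrative assumption, not our actual schema:

```python
# Minimal sketch: topic tags as a set, so one bill can match
# several tag filters at once (illustrative data model).
bill = {
    "id": "CO SB 24-205",
    "topic_tags": {
        "AI Transparency",
        "Automated Decision-Making",
        "Comprehensive AI",
    },
}

def matches(bill, tag):
    """A bill matches a tag filter if the tag is in its tag set."""
    return tag in bill["topic_tags"]

matches(bill, "AI Transparency")   # -> True
matches(bill, "Comprehensive AI")  # -> True, same bill matches both
```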

Tag Definitions

Each tag below reflects a specific regulatory focus area. Bills are assigned tags based on the substantive content of the legislation, not its title or stated intent alone.

Comprehensive AI

Regulates AI broadly across sectors with obligations beyond a single use case. Typically includes governance frameworks, impact assessments, and enforcement mechanisms. Examples: Colorado SB 24-205, EU AI Act-inspired state legislation.

AI Transparency

Requires notice, labeling, or disclosure when AI is used — whether or not penalties attach. A transparency bill is not automatically penalty-free. Many carry fines, civil action rights, or regulatory enforcement provisions.

Automated Decision-Making

Regulates the use of AI or algorithms to make or materially influence decisions affecting individuals — hiring, housing, credit, insurance, benefits, or similar consequential outcomes.

Deepfakes / Synthetic Media

Governs non-consensual deepfakes, election-related synthetic media, digital replicas, or AI-generated impersonation content. Includes both criminal and civil frameworks.

AI in Employment

Covers AI use in hiring, firing, performance evaluation, worker monitoring, and workplace surveillance. Overlaps with Automated Decision-Making when employment decisions are algorithmic.

Generative AI / Foundation Models

Addresses training data disclosure, watermarking requirements, model-level transparency obligations, and governance of large-scale generative AI systems.

AI in Healthcare

Regulates AI use in clinical decision support, diagnostics, patient data analysis, or health insurance determinations.

AI in Education

Covers AI use in student assessment, admissions, educational content generation, or school surveillance systems.

AI in Government

Governs the use of AI by state agencies, law enforcement, or public institutions, including procurement standards and algorithmic accountability requirements.

AI in Insurance

Regulates AI-driven underwriting, claims processing, pricing models, or risk assessment in the insurance industry.

AI in Political Advertising

Requires disclosure or restricts the use of AI-generated content in political campaigns, advertising, or election-related communications.

When Transparency Bills Include Liability

Transparency does not mean penalty-free

A common misconception is that "transparency" bills only require disclosure with no consequences for non-compliance. In practice, many transparency-tagged bills include financial penalties, civil action rights, or regulatory enforcement mechanisms. We flag these with a "Penalties apply" badge on bill cards.

When a bill carries the "AI Transparency" tag and our data indicates it also includes penalties, fines, or civil action provisions, we display a "Penalties apply" badge on the bill card. This is derived from the bill's business_impact field and key_provisions data — we do not invent or assume penalty provisions.
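The badge logic described above can be sketched as a predicate over a bill's stored fields. The field names `business_impact` and `key_provisions` come from our data model; the indicator terms and function shapes are assumptions for illustration only:

```python
# Sketch of the "Penalties apply" badge check. The real check is derived
# from stored bill data; the keyword list here is an assumed stand-in.
PENALTY_SIGNALS = ("penalt", "fine", "civil action")

def penalties_apply(bill):
    """True if the bill's recorded data indicates penalty or liability provisions."""
    text = " ".join(
        [bill.get("business_impact", "")] + bill.get("key_provisions", [])
    ).lower()
    return any(signal in text for signal in PENALTY_SIGNALS)

def show_penalties_badge(bill):
    # Badge appears only when the transparency tag AND a liability
    # signal in the data are both present -- never assumed.
    return "AI Transparency" in bill["topic_tags"] and penalties_apply(bill)
```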

How We Assign Tags

Our classification pipeline operates in multiple stages:

  1. AI-assisted classification — Each bill is analyzed by our AI pipeline, which reads the bill text, identifies key regulatory mechanisms, and assigns initial topic tags based on substantive content.
  2. Keyword heuristics — Supplementary rules check for specific legal terms (e.g., "impact assessment," "algorithmic audit," "private right of action") to catch provisions that broad classification may miss.
  3. Human review — Editors review AI-assigned tags, correct misclassifications, and apply manual overrides for edge cases or bills with unusual structure.
  4. Re-classification on amendment — When a bill is amended, it re-enters the classification pipeline. Tags may change as provisions are added, removed, or modified.
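The four stages above can be sketched as a single pipeline function. The stage order and the sample legal terms follow the list; the function signatures and the term-to-tag mapping are illustrative assumptions:

```python
# Illustrative sketch of the multi-stage classification pipeline.
# The term-to-tag mapping below is an assumed example, not our rule set.
HEURISTIC_TERMS = {
    "impact assessment": "Comprehensive AI",
    "private right of action": "AI Transparency",
}

def classify(bill_text, ai_tags, editor_overrides=None):
    """Combine AI-assigned tags, keyword heuristics, and human review."""
    tags = set(ai_tags)                        # stage 1: AI pipeline output
    lowered = bill_text.lower()
    for term, tag in HEURISTIC_TERMS.items():  # stage 2: keyword heuristics
        if term in lowered:
            tags.add(tag)
    if editor_overrides:                       # stage 3: human review / overrides
        tags |= set(editor_overrides.get("add", []))
        tags -= set(editor_overrides.get("remove", []))
    return tags

def on_amendment(amended_text, ai_tags, editor_overrides=None):
    # Stage 4: an amended bill simply re-enters the same pipeline,
    # so tags can change as provisions are added or removed.
    return classify(amended_text, ai_tags, editor_overrides)
```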

Confidence and Updates

Bills are assigned a fact-check confidence score reflecting how recently and thoroughly they have been verified. On bill detail pages, you'll see a confidence badge (Verified, Reviewed, Under Review, or Pending) indicating the current verification state.
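The four badge states can be sketched as a mapping from the confidence score to a label. The badge names come from the text above; the numeric thresholds are purely illustrative assumptions, not our actual cutoffs:

```python
# Sketch: mapping a fact-check confidence score to a display badge.
# Threshold values are assumed for illustration only.
def confidence_badge(score):
    if score >= 0.9:
        return "Verified"
    if score >= 0.7:
        return "Reviewed"
    if score >= 0.4:
        return "Under Review"
    return "Pending"
```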

When bills are amended, they are re-classified automatically. The "Data Updated" date on each bill detail page shows when our records were last refreshed.

Questions or Corrections?

If you believe a bill is misclassified or missing a tag, email [email protected] with the bill number and your suggested correction. We review every submission.