A Quick, Non-Scientific Look At A.I. Testimony In State Legislatures

For a long time, policymaking has been built on the idea that knowledge is power. The more you know, the better decisions you make. But what happens when knowledge is no longer scarce? When any question can be answered instantly? When every legislator, staffer, and stakeholder has access to the same baseline information? 

At the state level, the policy firehose is already on: MultiState reports 635 AI-related bills introduced in 2024, 1,208 in 2025, and 1,561 introduced as of March 2026 across 45 states—with a methodology note that their definition is intentionally broad (GenAI, deepfakes, task forces, autonomous vehicles, etc.). 

Meanwhile, legislatures aren’t just regulating AI—they’re using it. NCSL reports legislative staff use GenAI for things like hearing transcription, drafting, translation, and research, and its 2025 survey found 44% of respondents said they were currently using generative AI for legislative work. 

Last week, I embarked on a “non-scientific” review of testimony from a state legislature that publishes full hearing records. Some of the bills I reviewed drew 50 or more written submissions from organizations, advocates, and individuals. Reading through them, a pattern started to emerge:

  • Different authors. Same structure.

  • Different organizations. Same phrasing.

  • Arguments that were technically sound, but oddly generic—like they had been assembled from the same underlying template.

Conservatively, I estimate that roughly 8–16% of submissions may be AI-assisted or template-assisted, and roughly 1–6% may be largely AI-drafted.

To be clear, this is not an academic study. But based on common indicators—repetition, tone, structure—it’s reasonable to make a directional estimate. 

And while that exercise was fun, there is no reliable, non-invasive way to “prove AI authorship” of a short public comment. This is why many institutions are focusing on managing volume and provenance rather than trying to “detect AI” perfectly. Brookings has explained how generative A.I. can amplify mass comment campaigns by making them appear less duplicative, complicating how agencies group and respond to comments. 

Just because an argument was shaped by A.I. doesn’t make it invalid. But it does mean something important has changed. A.I. isn’t just influencing policy outcomes. It’s influencing the inputs policymakers rely on.

This is where things start to get interesting.

A.I. isn’t just changing how private companies operate. It’s starting to change how state legislatures function—often in ways that haven’t been fully acknowledged yet. At a basic level, legislative staff are already using A.I. for:

  • summarizing bills

  • drafting internal memos

  • preparing briefing materials

  • analyzing stakeholder positions

That alone is a meaningful shift. But it goes deeper than that.

GovTech reported that Iowa is using an A.I. tool (“Legible”) to help staff and lawmakers track and evaluate bills, positioning it as a workflow accelerator. A presentation to the Nevada legislature described a bill sponsor using Bing AI to draft an amendment, and then using AI again to draft another bill, offered as examples of AI in bill-drafting contexts. 

Traditionally, bill drafting has been a very human process. Legislators identify an issue. Staff research it. Attorneys translate policy ideas into legal language. There’s iteration, negotiation, back-and-forth. It’s slow for a reason—precision matters. A.I. is starting to compress that process.

Today, a staffer can:

  • upload existing statutes and ask for comparable language

  • generate draft bill text based on policy goals

  • summarize how other states have handled similar issues

  • identify conflicts with existing law

None of this replaces legislative counsel. But it changes the workflow. The first draft—the thing that used to take days or weeks—can now happen in minutes.

It also raises a few questions that don’t have clean answers yet:

  • If the barrier to drafting legislation drops, do we see more bills introduced?

  • If A.I. can generate “good enough” legal language quickly, does that shift how much scrutiny early drafts receive?

  • And if multiple offices are using similar tools, do bills start to look… the same?

Because that’s already happening in another part of the process…

----

Note: Why Hawaii was chosen and what was analyzed

  • Written testimony packets for Hawaii committee hearings are widely available and LegiScan hosts many of the PDFs in an analyzable format

  • Five 2026 bills with high submission volume (each ≥50 submissions) were analyzed from LegiScan-hosted testimony PDFs:

    • SB 433: 285 submissions. 

    • SB 2575: 96 submissions. 

    • SB 2720: 109 submissions. 

    • SB 2576: 86 submissions. 

    • SB 2845: 103 submissions.

  • Method (non-academic, conservative): Identify each submission via the “Submitted on” header; measure exact duplicates and near-duplicates among longer letters; flag obvious copy/paste errors, like referencing the wrong bill number; and look for repeated structure and generic phrasing.

  • Findings (directional): Across these five bills, near-duplicate long-letter clustering was small but real, and “wrong bill number” slips appeared in some testimony sets (notably SB 433 and SB 2845). 

  • Exploratory estimates (clearly labeled): A conservative interpretation is that roughly 8–16% of submissions are AI-assisted or template-assisted, with roughly 1–6% likely largely AI-drafted, based on those measurable “automation-shaped” signals combined with how widespread workplace GenAI use already is (Gallup).

  • This is intentionally cautious: it avoids the two common errors (1) “everything is AI,” and (2) “none of it is.”
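For readers curious what the near-duplicate and wrong-bill-number checks look like in practice, here is a minimal sketch. It is illustrative, not the actual analysis pipeline: the sample submissions, the 0.9 similarity threshold, and the bill-number regex are all assumptions, and real testimony would first require extracting text from the PDFs.

```python
import re
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical sample submissions; real input would be text extracted
# from LegiScan-hosted testimony PDFs, split on the "Submitted on" header.
SUBMISSIONS = [
    "I urge the committee to pass SB 2720. Housing costs are out of control.",
    "I urge the committee to pass SB 2720. Housing costs are out of control!",
    "Please oppose SB 433. This bill harms small farms across the islands.",
    # Copy/paste slip: the template cites a different bill than the one being heard.
    "I urge the committee to pass SB 2575. Housing costs are out of control.",
]

def near_duplicates(texts, threshold=0.9):
    """Return index pairs whose whitespace-normalized texts are near-identical.

    SequenceMatcher is O(n^2) per pair, so a real testimony set would
    use shingling or hashing first; this just shows the idea.
    """
    norm = [re.sub(r"\s+", " ", t.lower()).strip() for t in texts]
    pairs = []
    for i, j in combinations(range(len(norm)), 2):
        if SequenceMatcher(None, norm[i], norm[j]).ratio() >= threshold:
            pairs.append((i, j))
    return pairs

def wrong_bill_refs(texts, expected_bill):
    """Flag submissions that cite a bill other than the one being heard."""
    flagged = []
    for i, t in enumerate(texts):
        cited = set(re.findall(r"\bSB\s*\d+\b", t))
        if cited and expected_bill not in cited:
            flagged.append((i, sorted(cited)))
    return flagged

dupes = near_duplicates(SUBMISSIONS)
slips = wrong_bill_refs(SUBMISSIONS, "SB 2720")
```

Note that a templated letter with only the bill number swapped will trip both checks at once, which is exactly the “automation-shaped” signal the method looks for.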

Next

Artificial Intelligence Didn’t Sneak Up On Us. It Jumped Us.