GenAI in the US Congress and State Legislatures

US Congress

House of Representatives

The US Congress has initiated a cautious yet proactive exploration of GenAI technologies to support legislative processes. In 2023, some Members of Congress attracted attention for using ChatGPT and other LLMs to compose speeches,¹ formulate questions, and draft bills.² Acknowledging the fast-paced adoption, the House of Representatives began taking measured steps to explore and integrate AI capabilities into Congressional operations.

In April 2023, the House’s Chief Administrative Officer (CAO) established a voluntary “AI working group” at the direction of the Committee on House Administration (CHA), formalizing a structure through which Congressional staff could begin to safely explore new tools like generative language models and anonymously share their findings.³ The CAO accompanied the creation of this group with a published collection of best practices on safeguarding sensitive data and clearly communicated which commercial AI services were institutionally authorized for use.⁴

In June 2023, CHA enlisted a nonpartisan detailee from the GAO’s Science, Technology Assessment, and Analytics (STAA) office to research and provide counsel on AI policies for Congressional operations. Her work includes convening Members and staff with AI expertise or interest, gathering insights and concerns, consulting with Executive branch experts, and promoting consistency in any emerging Legislative and Executive branch policies. In September 2023, the committee issued its inaugural “Flash Report,”⁵ instructing support agencies and the internal operational offices of the House to publicly catalog current and potential AI uses — a step towards aligning AI adoption. Further guidance from the Committee urged the agencies to create comprehensive AI-related governance documents aligned with the US National Institute of Standards and Technology (NIST) AI Risk Management Framework⁶ and to inform the committee about any AI advisory committees, pilots, or staff upskilling initiatives. Subsequent CHA Flash Reports have been issued on a monthly basis.⁷

During a September 14, 2023 Congressional hackathon, then-House Speaker Kevin McCarthy emphasized the significance of balancing the risks and the potential to employ new automated technologies to better serve the public: “I understand the fear, but I also want to understand the opportunity. And I don’t want to see things only happening in the private sector.”

Senate

Similarly, the Senate Sergeant at Arms Chief Information Officer (SAA CIO) established an internal staff AI working group in November 2023 to foster experimentation and explore use cases. In December 2023, the chamber issued internal guidance authorizing use of Google’s Bard, OpenAI’s ChatGPT, and Microsoft Copilot, with required compensating safety controls to reduce risk.

This approach allows staff to experiment and become familiar with the technology while maintaining a cautious posture toward wider deployment. The guidelines make clear that human involvement remains a cornerstone of responsible usage.⁸

Legislative Workflow Disruptions

Co-author Marci Harris recently warned that “without measured steps to understand and responsibly deploy these technologies, our democratic institutions risk being overrun or out-maneuvered by outsiders deploying this technology to influence them.”⁹

While new GenAI tools are already providing productivity gains in some areas of the legislative workflow, they are also creating disruptions — and these likely portend more significant issues ahead. In January 2023, Stanford’s John Nay warned of a coming era of “large language models as lobbyists,”¹⁰ and in March 2023, Harvard’s Nathan E. Sanders and Bruce Schneier warned that LLMs would enable “high quality political messaging quickly and at scale.”¹¹ These predictions are becoming reality as lobbyists¹² and academics¹³ begin to experiment with GenAI tools to target lawmakers.

In September 2023, POPVOX Foundation joined the Westminster Foundation for Democracy to provide evidence to the UK House of Commons Public Administration and Constitutional Affairs Committee and cautioned that:

Given the proliferation of GenAI since December 2022, it may be that 2022 is the last year in human history in which we can be certain that new texts were written by humans. While the addition of GenAI tools to the field of lobbying and advocacy may not be problematic in itself, these tools do raise new questions.¹⁴

While the proactive approaches to GenAI taken by the US House and Senate are commendable, the reality is that understanding and responding to these new technologies is not optional. Legislatures can either take steps to keep pace or be overwhelmed by the brute-force changes these technologies are bringing. For example:

Increased volume of advocacy comments

The Brookings Institution’s Bridget C. E. Dooling and Mark Febrizio warned in April 2023 that AI-enabled technologies will make it much easier to create a large volume of comments (both genuine and fake) that could flood government consultation processes.¹⁵ Legislative offices are among the most highly targeted audiences for advocacy communications, and anecdotally, offices are already seeing an increase in message volume. The newly established House Digital Services team is monitoring these developments and exploring possible solutions.

Ultimately, the response to increased input volumes from LLM-generated comments or calls should not be LLM-generated replies, which would establish an AI-talking-to-AI loop that removes both the constituent and the lawmaker from the process entirely. Researchers like Beth Simone Noveck of the GovLab at Northeastern University’s Burnes Center for Social Change are exploring ways to use AI to upgrade these processes and “Reboot Democracy”¹⁶ for a better, more informative, and more satisfying constituent engagement experience. (AI-enabled engagement approaches will be discussed extensively in Volume 2 of this report.)

Issues with LLM-drafted bill text

For the professional lawyers in the House Office of Legislative Counsel (HOLC), LLM tools have produced a surge of Members and staff “drafting” bill text with third-party LLM services. While these drafts may be formatted to resemble a bill, the LLM-produced text often would not have the policy impact the lawmaker intends (as was recently the case in New York¹⁷) or could interact with the United States Code in other unintended ways. Policymakers and their staff may assume that providing legislative counsel with an LLM-generated draft streamlines the process and allows for expedited review, but the opposite is true. LLM tools help lawmakers and staff more easily explore policy ideas and turn them into proposals, increasing the volume of drafts submitted to the HOLC. The lawyers reviewing these submissions, however, must still undertake the same review process as always, without any corresponding increase in resources or tools to handle the added workload.

A recent survey found that 82% of lawyers believe AI can be used for legal work, with 51% saying it should be.¹⁸ In law firms, AI is being applied to everything from e-discovery and legal research to document management and automation, reducing time spent on manual tasks and enabling legal professionals to devote more time to client-focused activities. Over time, legislatures will need to ensure that the same kind of augmented efficiency is available to the professional lawyers who draft legislation, while preserving human oversight. (Examples and strategies for incorporating AI into legislative drafting will be discussed further in Volume 2.)

Privacy and Security Issues with Commercial Tools

Another significant issue is that the guidance issued by the House CAO and Senate SAA, urging staff to use commercial LLM tools only for research and to avoid posting non-public information in chat windows, is effectively unenforceable. Unapproved uses are occurring and will become more common as staff and Members come to rely on these tools in other parts of their lives. It is therefore essential for the institution to provide more secure and private ways to access these technologies beyond commercial subscriptions.

GAO is leading the way with “Project Galileo,” an internal LLM deployed in September 2023. GAO’s Chief Data Scientist Taka Ariga explained the impetus in a keynote speech: “We want to deploy our own large language model inside our own computer infrastructure so that we control what comes in [and] what goes out.”¹⁹ Project Galileo leverages a commercial pre-trained model to access the corpus of GAO data: “We can actually experiment with a variety of learning techniques on how to point that generative AI to GAO’s corpus – that really rich historical data that we have.”
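The pattern Ariga describes — pointing a pre-trained model at an internal document corpus hosted inside the institution’s own infrastructure — is commonly implemented as retrieval-augmented generation (RAG). The sketch below is purely illustrative of that pattern, not GAO’s actual implementation: all names are hypothetical, the keyword-overlap retriever stands in for the vector-embedding search a production system would use, and the final call to a locally hosted model is omitted.

```python
# Illustrative RAG sketch: retrieve relevant internal documents, then
# assemble a grounded prompt for a locally hosted model. All names and
# data are hypothetical stand-ins, not any agency's real system.

# Stand-in for an institution's internal document store.
CORPUS = {
    "report-101": "Audit of fiscal year 2022 grant disbursements and controls",
    "report-102": "Review of agency cybersecurity incident response posture",
    "report-103": "Assessment of grant oversight and disbursement timeliness",
}

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query.

    A real deployment would use vector embeddings and semantic search;
    keyword overlap keeps this sketch self-contained.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Assemble a prompt that grounds the model in retrieved documents,
    so answers draw on internal data that never leaves the institution."""
    context = "\n".join(corpus[d] for d in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# The prompt would then be sent to a model running on internal infrastructure.
prompt = build_prompt("grant disbursement oversight", CORPUS)
```

Because both retrieval and generation happen inside the institution’s own infrastructure, the approach controls “what comes in [and] what goes out,” which is the core privacy advantage over pasting material into a commercial chat window.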

These are just a few examples of LLM-enabled disruptions that will only increase over time, and they underscore the need for legislative institutions to build the technical capacity to respond in kind. (Later volumes in this series will look at approaches legislatures are taking to leverage AI tools to process, understand, and respond to these challenges.)

“The goal is to have thoughtfully implemented AI that can equip Congress to improve its capacity and meet rising constituent expectations in an increasingly digital age.”²⁰

State and Local Governments in the US

Several state legislatures have begun experimenting with GenAI. In January 2023, Massachusetts state Senator Barry Finegold introduced a privacy bill crafted with the aid of ChatGPT, marking one of the first instances of AI-generated legislative text.²¹ The approach did not remain an isolated case for long, as legislators in other states quickly embraced AI to draft bills on topics ranging from gambling regulation to non-controversial ceremonial resolutions, prompting states to begin formulating policies for accountable and ethical AI use. On August 15, 2023, California became the first state legislature to adopt an AI-penned resolution (Senate Concurrent Resolution No. 17), outlining principles for responsible AI use.²² Earlier that year, on April 13, 2023, Alaska state Representative Jesse Sumner had used Microsoft Copilot to draft a bill aimed at legalizing gambling on Alaska ferries.²³ These instances underscore a growing inclination toward AI-assisted legislative drafting across states, each with its own legislative requirements and focus areas.

In exploring AI-enabled use cases beyond legislative drafting, multiple state legislatures have created advisory committees to oversee the technology’s deployment, inventory its use, and strive to understand tools in greater detail. In June, the Louisiana legislature directed the state’s Joint Committee on Technology and Cybersecurity to research AI’s potential impact on “operations, procurement, and policy” within the government.²⁴ In the same month, Texas Governor Greg Abbott signed into law House Bill 2060 to create an AI advisory council to monitor state agency use of AI and determine the need for a code of ethics.²⁵ North Dakota, Puerto Rico, and West Virginia created similar councils during the 2023 legislative session.²⁶ In July, Connecticut’s General Assembly enacted Public Act No. 23-16 directing the state’s Office of Policy Management to “develop and establish policies and procedures concerning the development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence and are in use by state agencies.”²⁷

As state legislatures begin to grapple with the introduction of AI tools across their own entities and the private sector, the need is growing for legislators to be more educated and familiar with the technology that many are beginning to look to regulate. According to the National Conference of State Legislatures, twenty-five states, Puerto Rico, and the District of Columbia had AI-related legislation introduced for consideration in the 2023 legislative session.²⁸


¹ “Rep. Jake Auchincloss Uses ChatGPT Artificial Intelligence To Write House Speech,” Rep. Auchincloss’ Website (January 26, 2023)

² Cassie Semyon and David Mendez, “Lieu’s ChatGPT resolution seeks better understanding of AI,” Spectrum News (March 1, 2023)

³ “House Digital Service Invitation to Join AI Working Group,” POPVOX Foundation

⁴ “Office of the Chief Administrative Officer Notice to All House Staff,” POPVOX Foundation

⁵ Marci Harris and Aubrey Wilson, “House ‘flash report’ on AI signals capacity gains,” FedScoop (September 18, 2023)

⁶ NIST AI Risk Management Framework

⁷ Committee on House Administration Subcommittee on Modernization Reports

⁸ Aubrey Wilson, “The Senate Issues Guidelines for Responsible Internal AI Usage,” POPVOX Foundation (December 14, 2023)

⁹ Alexandra Kelley, “Senate’s top tech official greenlights research use of generative AI,” FCW/NextGov (December 19, 2023)

¹⁰ John Nay, “Large Language Models as Lobbyists,” Stanford Law School (January 6, 2023)

¹¹ Nathan E. Sanders and Bruce Schneier, “How AI Could Write our Laws,” MIT Technology Review (March 13, 2023)

¹² Kate Ackley, “Lobbyists Flirt with AI While Remaining Cautious of its Promises,” Bloomberg Government (September 8, 2023)

¹³ Steve Israel, “Here’s What Happened When ChatGPT Wrote to Elected Politicians,” The New Republic (March 30, 2023)

¹⁴ “How artificial intelligence and large language models may impact transparency,” joint submission by WFD and POPVOX Foundation in response to the call for evidence for the post-legislative scrutiny inquiry of the UK Lobbying Act 2014 (September 20, 2023)

¹⁵ Bridget C. E. Dooling and Mark Febrizio, “Robotic rulemaking,” Brookings (April 4, 2023)

¹⁶ “Reboot Democracy,” The GovLab at the Burnes Center for Social Change

¹⁷ Grace Ashford, “A.I. Wrote a Housing Bill. Critics Say It’s Not Intelligent,” The New York Times (July 14, 2023)

¹⁸ “AI’s impact on law firms of every size,” Thomson Reuters (August 15, 2023)

¹⁹ Cate Burgan, “GAO Building GenAI Tools to Face Internal Challenges,” MeriTalk (November 9, 2023)

²⁰ Kate Ackley, “Lobbyists Flirt with AI While Remaining Cautious of its Promises,” Bloomberg Government (September 8, 2023)

²¹ Mohar Chatterjee, “AI just wrote a bill to regulate itself,” Politico (July 19, 2023)

²² “State Legislature Adopts Resolution on AI ... Drafted by AI,” Tribune News Service (August 15, 2023)

²³ Jeff Landfield, “Alaska House representative uses artificial intelligence to write bill legalizing gambling on Alaska ferries,” Alaska Landmine (November 21, 2023)

²⁴ Louisiana Senate Concurrent Resolution No. 49 (2023)

²⁵ Keaton Peters, “More than a third of state agencies are using AI. Texas is beginning to examine its potential impact,” The Texas Tribune (January 2, 2024)

²⁶ “Artificial Intelligence 2023 Legislation,” National Conference of State Legislatures (September 27, 2023)

²⁷ “An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy,” Connecticut Substitute Senate Bill No. 1103 (July 1, 2023)

²⁸ “Artificial Intelligence 2023 Legislation,” National Conference of State Legislatures (September 27, 2023)
