Unlocking the Code: How Legislators Can Use AI to Demystify Regulation
BY KATHRYN LUEDKE
Emerging technologies are presenting promising new solutions to the seemingly age-old problem of navigating government regulations. The US Code of Federal Regulations (CFR) spans more than 200,000 pages across some 200 volumes. Keeping the code up to date and efficiently written is not only time consuming but expensive, particularly if pursued through a line-by-line human review. With careful prompting and human oversight, AI can help lawmakers and their staff review large volumes of information like the CFR to highlight themes, identify code related to specific legislation, and search the code more broadly. It can also surface relationships between rules and agencies, helping to eliminate redundancies and inefficiencies in how policy is implemented. AI-enabled tools let policymakers complete tedious tasks faster and explore the CFR more deeply by giving direct answers to questions such as “How many regulations are associated with 44 U.S.C. §§ 3501-3521?” or “What code cites the Evidence Act?” Used in these ways (and more!), this emerging technology can streamline staffers’ access to regulatory knowledge by making the code easier to search and its contents easier to reach.
Use Cases: How to integrate AI to better understand and interact with the US Code of Federal Regulations
AI, particularly GPTs and other Large Language Models (LLMs), can be prompted to identify regulations associated with a specific piece of legislation, with a user asking “What regulatory code is associated with the PRA?” or “Can you tell me what code cites §§ 1435?” Prompts like these give the user an easy starting point for research on the regulations tied to a piece of legislation, across multiple agencies (a minimal sketch of such a query appears at the end of this section).
GPTs and LLMs are designed to be approachable and intuitive, allowing the user to search for information on a regulation, law, or bill using whichever identifier is at hand, such as the legislative number, full name, or even an abbreviation. By giving users more flexibility in how they search, AI-enabled technologies can streamline policymaker and staff access to consistent information.
The capabilities of GPTs allow staff to pursue cross-jurisdictional research questions in new, exploratory ways. For example, a staffer can inquire, “How does the Evidence Act overlap with the PRA?” or “What executive orders support the PRA?”
When prompted with intention, AI can provide staff and policymakers with relevant information and sources that broaden their awareness of context beyond the code itself. For example, a GPT can be used to compare regulatory code to industry best practices to identify outdated provisions in need of revision.
AI-enabled tools can be used to quickly provide summaries of a piece of legislation, its implementation, how it compares to another legislative item, and its potential effects on existing regulatory code.
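To make the use cases above concrete, here is a minimal sketch of what a prompt-driven CFR query could look like in code, using the OpenAI Python client. The model name, system prompt, and pasted CFR excerpt are illustrative assumptions; grounding the question in actual CFR text (rather than trusting the model’s memory) and verifying the answer against the official code remain essential.

```python
# Minimal sketch of a prompt-driven CFR query using the OpenAI Python client.
# The model name, system prompt, and pasted CFR excerpt are illustrative
# assumptions; answers still need human verification against the official code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

cfr_excerpt = "...paste the relevant CFR part or authority notes here..."

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any capable chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "You are a regulatory research assistant. Answer only from the "
                "CFR text provided, and cite the specific sections you rely on."
            ),
        },
        {
            "role": "user",
            "content": (
                f"CFR text:\n{cfr_excerpt}\n\n"
                "Question: Which of these regulations are associated with "
                "44 U.S.C. 3501-3521 (the Paperwork Reduction Act)?"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```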
Snapshot: Examples of how AI is being explored to aid agencies in the informal rulemaking process
Here are three examples, taken from the Reg Map by ICF, of how GenAI could be used in the informal rulemaking process:
Developing a Rule
GenAI or a GPT could summarize the resources used to create a rule, including information on industry best practices, and help draft the NPRM (notice of proposed rulemaking) for review. A GPT could also quickly aggregate the statutory authority a proposed rule rests on.
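Part of the “aggregate the statutory authority” step can be purely mechanical. As an illustrative sketch (not a production tool), a short script could scan the authority notes of downloaded CFR parts for U.S.C. citations and tally them; the file layout and regular expression below are simplifying assumptions.

```python
# Illustrative sketch: pull U.S.C. citations out of CFR authority notes.
# Assumes the authority-note text has already been saved to plain-text files,
# one per CFR part; the directory name and regex are simplifying assumptions.
import re
from collections import Counter
from pathlib import Path

# Matches citations like "44 U.S.C. 3501", "5 U.S.C. 552a", or "44 U.S.C. 3501-3521"
USC_CITATION = re.compile(r"\b(\d{1,2})\s+U\.S\.C\.\s+(\d+[a-z]?(?:-\d+[a-z]?)?)")

def extract_citations(text: str) -> list[str]:
    """Return every U.S.C. citation found in a block of authority-note text."""
    return [f"{title} U.S.C. {section}" for title, section in USC_CITATION.findall(text)]

citation_counts: Counter[str] = Counter()
for path in Path("authority_notes").glob("*.txt"):  # assumed file layout
    citation_counts.update(extract_citations(path.read_text()))

for citation, count in citation_counts.most_common(20):
    print(f"{citation}: cited as authority in {count} parts")
```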
Send Proposed Rule to the Office of Management and Budget (OMB)
A GenAI tool could help OMB and the Office of Information and Regulatory Affairs (OIRA) identify overlapping rules and authorities, facilitating coordination of the interagency review.
Analyze Public Comments
Parts of comment analysis are already automated, but GenAI can help identify comments generated by bots and filter for genuine, substantive comments to streamline workflows.
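One common signal of bot- or campaign-generated comments is near-identical text. A rough sketch of that kind of screen, using TF-IDF similarity from scikit-learn, might look like the following; the 0.9 similarity threshold is an arbitrary assumption, and flagged comments still deserve a human spot-check.

```python
# Rough sketch: flag near-duplicate public comments as likely form-letter or
# bot-generated submissions. The 0.9 similarity threshold is an assumption
# and should be tuned; flagged comments still deserve a human spot-check.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "I oppose this rule because it raises costs for small businesses.",
    "I oppose this rule because it raises costs for small businesses!",
    "The proposed reporting deadline of 30 days is too short for rural clinics.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
similarity = cosine_similarity(vectors)

THRESHOLD = 0.9
likely_duplicates = set()
for i in range(len(comments)):
    for j in range(i + 1, len(comments)):
        if similarity[i, j] >= THRESHOLD:
            likely_duplicates.add(j)  # keep the first copy, flag later ones

unique_comments = [c for i, c in enumerate(comments) if i not in likely_duplicates]
print(f"{len(likely_duplicates)} comments flagged as near-duplicates")
print(unique_comments)
```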
Mitigating risks in tool integration
Illusion: Responsible use of AI requires human intervention.
AI cannot and should not be cited as the sole source of an idea, especially in something as consequential and subjective as government regulation and rulemaking. GenAI-enabled tools should support the human work of understanding, reviewing, and advancing regulations. An individual will always need to review and verify AI-produced information.
Interpretation: Regulatory language evolves and is nuanced.
Regulatory language is drafted with precise intention to ensure proper implementation of a policy’s goals. But because language and law evolve and remain open to interpretation, rules are often challenged and must be further clarified. AI’s ability, or lack thereof, to capture these nuances should always be taken into consideration.
Incorrect or Incomplete Information: An AI model is only as good as its training.
Any AI-generated response should be verified. Although GenAI has demonstrated promise when answering simple questions, such as “Where in the CFR is the Evidence Act codified?”, it is less reliable at producing quality answers to more qualitative, exploratory questions. It is vital to understand the limitations of the model being used and the body of information it has been trained on. Specialized inquiries are more likely to require a specialized model trained on specific data and materials.
Relevance: Ongoing maintenance is required to keep an LLM or GPT effective.
Like any good software, an LLM or GPT needs consistent management to perform its best. Policymakers and staff should confirm that the GenAI-enabled tool they are using is being continually updated with new federal regulations, legislative code, best practices, and training. If it is not, the user needs to account for that limitation when validating the information the model produces. For customized models, owners should keep up consistent maintenance and retrain their AI on new data to improve its responses.
Regulatory Review: Leveled Up
Ideas for how policymakers and staff can extend AI-assisted regulatory review beyond their individual workflows:
Institute an annual “clean-up” of regulations and rules
AI can help legislative staff review the CFR to pinpoint redundancies and bottlenecks (i.e., multiple agency requests funneling through a centralized authority). This work is already being done by Deloitte’s Government Insights Lab, which identified 18,000 redundant or similar passages in 2017. Staffers can then use information on redundancies and bottlenecks to improve legislative code as they draft, or to avoid creating further redundancies.
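A redundancy scan of this kind could start with something as simple as pairwise similarity between regulatory passages. The sketch below assumes the open-source sentence-transformers library, an illustrative model name, and a tiny hand-picked set of passages; at CFR scale it would need approximate nearest-neighbor search, and every flagged pair would still require human review.

```python
# Sketch: surface pairs of regulatory passages that say nearly the same thing.
# The model name and 0.85 threshold are illustrative assumptions; every flagged
# pair still needs staff review before anything is called "redundant".
from sentence_transformers import SentenceTransformer, util

passages = {
    "Part A, Sec. 12.4": "Each agency shall submit an annual report on "
                         "information collection burdens to the Director.",
    "Part B, Sec. 7.1": "Agencies must report information collection burdens "
                        "to the Director on an annual basis.",
    "Part C, Sec. 3.2": "Applicants shall retain supporting records for three years.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed, publicly available model
labels = list(passages)
embeddings = model.encode(list(passages.values()), convert_to_tensor=True)
scores = util.cos_sim(embeddings, embeddings)

THRESHOLD = 0.85
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        if scores[i][j] >= THRESHOLD:
            print(f"Possible redundancy: {labels[i]} <-> {labels[j]} "
                  f"(similarity {scores[i][j]:.2f})")
```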
Identify out-of-date and ineffective regulations related to implementation
GenAI models can be specialized and trained on industry best practices, allowing legislators and their staff to compare current regulatory code against those practices. This can help identify regulations that are out of date or inefficient when implemented, which, in turn, can help agencies and contractors build better products and avoid pitfalls.
Create processes to fast-track regulatory changes
Policymakers in Congress and regulatory experts at OMB should explore partnering to create a fast-track path for updating outdated and redundant regulations outside the informal rulemaking process, expediting the clean-up.
Additional resources for further reading
The GAO and other agencies are beginning to compile thorough guidelines on how to integrate AI into workflows across the federal government. To learn more about these initiatives, check out:
AI Accountability Framework from the GAO
Government Insights Lab at Deloitte
AI Use Cases at AI.gov