The Senate Issues Guidelines for Responsible Internal AI Usage
The US Senate has officially taken the leap into the artificial intelligence (AI) era, issuing guidelines and best practices for staff to follow, in lockstep with internal guidance established by the US House of Representatives in October. The new Senate guidance, authorizing the use of OpenAI’s ChatGPT, Microsoft’s Bing Chat AI, and Google’s Bard for research and evaluation purposes, is a notable step in the institution’s exploration of how AI-enabled tools can augment staff capacity, boosting productivity and efficiency.
We commend the Senate for conducting a risk analysis of a diverse array of AI tools and offering a broad toolkit for staff exploration of this emerging technology. Not only will this assist the institution’s adoption of these new tools, it will also give senators and legislative staff hands-on engagement with a technology the chamber is crafting policy to address. This is essential for promoting timely and competent oversight of AI.
The directive issued by the Senate Sergeant at Arms’ Chief Information Officer (SSA CIO) allows use of the three AI services, subject to safety controls. This will let staff experiment and become familiar with the technology while maintaining a cautious approach to wider deployment. The guidelines make it clear that human involvement remains a cornerstone of responsible usage.
Senate staffers should thoroughly review the risk assessments provided by the SSA CIO to become familiar with potential hazards and fully understand the required controls before using these tools. The guidance makes clear that staff are only allowed to use the authorized platforms for research and evaluation purposes, and exclusively with non-sensitive data. It further stipulates that staff should verify the accuracy of all information generated by AI tools through trusted resources.
The guidelines and best practices set forth by the Senate strike an appropriate balance between caution and innovation. They empower staff to engage with AI tools with a clear understanding and realistic expectations, fostering a healthy environment for experimentation and learning in the evolving world of AI.