What Does AI Policy in the States Look Like?
Wright & Souers | a Council of State Governments feature
For years, artificial intelligence has been ingrained in the everyday lives of Americans. Users can unlock their computers or mobile phones with AI-powered biometrics, while social media platforms employ complex algorithms to personalize and moderate content. Even email services use AI-powered natural language processing to detect and filter spam.
Today, AI systems continue to advance — and show no signs of slowing down. According to a study conducted by OpenAI, the amount of computational power used for AI training has doubled every year since 2012. Computing power is one of the main factors driving the advancement of AI, leading to the development of new AI systems.
Challenges of AI for State Governments
The rapid advancement of AI systems has posed significant regulatory challenges for states. State policymakers must decide what role their state will play in governing the design, development and use of AI systems, as well as how to ensure compliance with these laws. This will require state officials to answer complex policy questions with limited precedent to follow.
“Any emerging technology is challenging for policymakers because it presents new questions that might not have been raised by previous technologies,” said Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill. “The other issue that is important to keep in mind is that AI is not just about risks; it’s also about benefits for our society. You want to ensure with any new regulation that you are mitigating harms, but not to such an extent that you’re having a disproportionate negative impact on the potential for realizing benefits as well. It’s important for any regulation to adequately balance risk mitigation and benefit maximization.”
For example, both public and private sector employers have begun using AI-powered automated systems to help evaluate, rate and make other significant decisions about job applicants and employees. These tools can save employers money by reducing the time and staff hours allotted to these tasks.
However, some of these tools have been shown to discriminate against certain candidates, including candidates with disabilities.
“There’s been a focus on using existing civil rights law to address issues related to bias and discrimination,” Perault said. “That seems appropriate and helpful, from my standpoint, that there are elements of current law that we can use to address some of the potential harms.”
While issue-specific approaches are helpful, Perault added that there may be a need for industry-wide comprehensive approaches.
Guiding Principles for States
As a starting point, policymakers can address AI governance by identifying the principles with which the design, development and use of AI systems should align. To guide states in these efforts, the White House Office of Science and Technology Policy issued the “Blueprint for an AI Bill of Rights” in October 2022. The Blueprint states that AI systems should be designed, developed and deployed according to principles that bolster democratic values, protect civil rights, preserve civil liberties and ensure privacy.
Those principles, as defined by legal scholars, AI technologists and the White House, can assist state policymakers in trying to ensure that AI governance aligns with the objectives detailed in the Blueprint. They include interdisciplinary collaboration, protection from unsafe or ineffective systems, data privacy, transparency, protection from discrimination and accountability.
An additional resource, the “Artificial Intelligence Risk Management Framework,” was published in January 2023 by the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST). This guidance document, developed at the direction of Congress, incorporated input from private- and public-sector organizations that may seek guidance in managing the risks associated with producing AI technologies. NIST has since also published its “AI RMF Playbook,” “AI RMF Roadmap” and “AI RMF Crosswalk.”
Looking Forward
As laboratories for innovation, states have taken many different approaches to regulating AI systems that align with these guiding principles. Over the past five years, 17 states have enacted 29 bills that focus on AI regulation. Of these bills, 12 have focused on ensuring data privacy and accountability. This legislation hails from California, Colorado, Connecticut, Delaware, Illinois, Indiana, Iowa, Louisiana, Maryland, Montana, New York, Oregon, Tennessee, Texas, Vermont, Virginia and Washington.
Since 2018, the number of AI-related bills proposed and enacted in state legislatures has grown. States such as California, Colorado and Virginia have laid the groundwork for establishing AI-related data privacy laws, as well as measures aimed at enforcing these laws. States can turn to policy recommendations from existing AI-focused task forces and working groups to better understand how to address AI-related issues in their local context.
RACHEL WRIGHT is a Policy Analyst at the Council of State Governments. LEXINGTON SOUERS is a Communications Associate, also at the Council of State Governments.