How can local governments regulate generative AI? Just ask ChatGPT

Dr. Alan R. Shark, American City & County

July 23, 2023


Hardly a day goes by without some new warning or danger being reported about artificial intelligence (AI) and, in particular, generative AI. State and local governments have little choice but to confront AI with proactive usage policies, given the vast amounts of personal data collected and stored on government systems. The governor of Connecticut recently signed a bill into law calling for an AI Bill of Rights aimed at protecting citizens. The legislation follows a similar initiative from the White House, which presented a framework for an AI Bill of Rights at the federal level. Professors across the nation worry that students will turn to generative AI to research and write class assignments and even dissertations. Adding to the growing list of warnings and concerns, the Federal Trade Commission (FTC) has opened an investigation into OpenAI, the maker of ChatGPT, over whether the chatbot has harmed consumers.

Public managers share many of the same concerns as private-sector managers when it comes to limiting or even prohibiting workers from using generative AI at work. The shared concern is that employees will input sensitive information in exchange for the solutions they seek. The problem is that once data is entered into an open AI system, there is no control over what happens to it, or over where, when and in what form it might be shared with others.

Even the companies that offer generative AI cannot say with certainty what happens to data entered into their systems. Ironically, they designed the very “black boxes” whose inner workings no one, not even the programmers, fully understands. Adding to the list of concerns, output that looks incredibly real, well laid out and free of typos, can contain information and citations that are absolutely false. It did not take long for the term “hallucinations” to be coined to describe this phenomenon.

Given generative AI’s complexity, it is nearly impossible to identify the sources behind its data models, and all such systems lack transparency and accountability. Even sophisticated users have found it difficult, if not impossible, to assess the veracity of written reports and other forms of information. Complex algorithms are at the heart of today’s generative AI systems, and there are inherent dangers in mixing reliable source data with bad, including the possibility of propagating unintended bias or prejudice. Worries extend to the protection of intellectual property and the possibility that bad actors (both domestic and international) could manipulate or manufacture data sources and spread propaganda, disinformation or misinformation, adversely impacting public opinion, tilting an election or turning policies upside down, all of which would further erode public trust.

With so much at stake, governments at all levels are struggling to create policies that offer legal remedies, protect privacy and limit liability from unintended consequences. So as a starting point, ChatGPT was asked, “How can local governments regulate generative AI?”

According to ChatGPT: “Regulating generative AI at the local government level can be a complex task, but here are some potential approaches that local governments can consider:

  • Establishing ethical guidelines: Local governments can create ethical guidelines that outline the responsible use of generative AI technology. These guidelines can address issues such as bias, privacy, transparency and accountability. They can provide a framework for developers and users to follow when creating or implementing generative AI systems.

To read the complete article, visit American City & County.


About the Author

Dr. Alan R. Shark

American City & County

Dr. Alan R. Shark is the executive director of the Public Technology Institute (PTI), a division of Fusion Learning Partners, and an associate professor at the Schar School of Policy and Government, George Mason University, where he is also an affiliate faculty member at the Center for Advancing Human-Machine Partnership (CAHMP). Shark is a fellow of the National Academy of Public Administration and co-chair of the Standing Panel on Technology Leadership. He also hosts the bi-monthly podcast “Sharkbytes.net.” Dr. Shark acknowledges collaboration with generative AI in developing certain materials.
