Biden’s artificial-intelligence executive order covers broad concerns
President Joe Biden has issued an executive order (EO) establishing new standards for AI safety and security, while also aiming to protect the privacy of American citizens.
To bolster AI safety and security standards, the EO requires developers of advanced AI systems and large language models (LLMs) such as ChatGPT to share critical information, including safety test results, with the US government. It also directs the government to develop standards and tools for ensuring these AI systems are safe; guard against the use of AI to engineer dangerous biological materials; protect Americans from AI-enabled fraud and deception; create a cybersecurity program to develop AI tools and address software vulnerabilities; and task the National Security Council and the White House chief of staff with drafting a national security memorandum on AI.
Although the agenda is ambitious, Jake Williams, a former US National Security Agency (NSA) hacker and faculty member at IANS Research, commented that the order is largely intended to protect society as a whole and will have little impact on organizations.
“The EO places emphasis on detection of AI-generated content and creating measures to ensure the authenticity of content,” Williams noted in an emailed statement. “While this will likely appease many in government who are profoundly concerned about deepfake content, as a practical matter, generation technologies will always outpace those used for detection. Furthermore, many AI detection systems would require levels of privacy intrusion that most would find unacceptable.”
President Biden has also called on Congress to pass bipartisan data privacy legislation aimed at validating data collection practices, strengthening research and technology that protects user privacy, prioritizing "privacy-preserving techniques," and developing guidelines for federal agencies to evaluate the effectiveness of those techniques.
To read the complete article, visit Dark Reading.