White House AI Advisor Buchanan Says US Is “Catching Up”
Biden’s new executive order demands transparency for the world’s biggest AI models.
President Biden recently signed an executive order demanding greater transparency from major tech companies in their development of artificial intelligence (AI) models and tools, as well as outlining new rules for the use of AI. The order covers a wide range of AI applications that have the potential to affect people's lives, including corporate decision-making in areas like housing, hiring, and the criminal justice system.
The order does not directly subject all large language models and AI tools released by major tech companies like Meta, Google, Microsoft, and OpenAI in the past year to safety testing. The threshold for such testing is set relatively high, and most currently available models do not meet the criteria outlined in the executive order. While these tech giants agreed to voluntary commitments on responsible AI development earlier this year, the new order places additional demands on transparency and testing.
Companies that meet the threshold must notify the federal government of their AI work and share safety testing results before making their creations public. The National Institute of Standards and Technology is tasked with establishing rigorous standards for this testing, including examining the potential of generative AI tools for harmful applications. For instance, Meta's Llama 2 model can reportedly already provide instructions on weaponizing anthrax.
Ben Buchanan, an advisor to the White House Office of Science and Technology Policy, said that while the tech industry had a head start, the government has been working quickly to catch up. Conversations were held with major tech companies, but they primarily revolved around the voluntary commitments established in July rather than the executive order.
The order defines a threshold for companies developing foundation models that pose a significant national security risk. Models trained using more than 10 to the power of 26 FLOPs (floating-point operations, here a measure of the total computation used during training, not processing speed) are subject to this threshold, with the Department of Commerce having the flexibility to adjust it to keep pace with rapidly evolving technology. Models trained primarily on biological sequence data have a lower threshold of 10 to the power of 23 FLOPs due to the heightened risks in that field.
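To put those thresholds in perspective, here is a rough back-of-the-envelope sketch (not part of the order) of whether a training run crosses the 10^26-FLOP bar. It uses the common approximation that training a dense transformer costs roughly 6 × parameters × training tokens; the Llama 2 figures below are public estimates and are included purely for illustration.

```python
# Illustrative only: checking a model's estimated training compute against
# the executive order's reporting thresholds.

GENERAL_THRESHOLD = 1e26  # total training FLOPs, per the order
BIO_THRESHOLD = 1e23      # lower bar for biological-sequence models

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer,
    using the common ~6 * N * D heuristic."""
    return 6 * params * tokens

def must_report(params: float, tokens: float, bio: bool = False) -> bool:
    """Would this training run cross the order's reporting threshold?"""
    threshold = BIO_THRESHOLD if bio else GENERAL_THRESHOLD
    return training_flops(params, tokens) > threshold

# Llama-2-70B: ~70B parameters, ~2T training tokens (public estimates)
# -> roughly 8.4e23 FLOPs, well under the 1e26 general threshold.
print(training_flops(70e9, 2e12))
print(must_report(70e9, 2e12))
```

On these numbers, even the largest Llama 2 model sits two orders of magnitude below the general threshold, which is consistent with the observation above that most currently available models are not covered.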
While the order provides a directive for various government agencies to take action, the timeline for setting these standards varies. Regulations enforcing disclosure under the Defense Production Act must be implemented within 90 days. The development of standards may take longer, but the order offers provisional guidance on the types of red team testing required.
The transparency requirements apply mainly to future AI models, since pre-release red team testing cannot be applied retroactively to models already in use. The government's actions and international cooperation demonstrate the seriousness with which it approaches the AI landscape.
Source: Business Insider