Unveiling the Global AI Governance Landscape

The Ernst & Young report sheds light on new AI regulations.

In October, Ernst & Young (EY), one of the Big Four accounting firms, released a report titled “Policy Trends and Considerations to Build Confidence in AI: The Artificial Intelligence (AI) Global Regulatory Landscape.” The report maps the regulatory framework for AI on a global scale, offering policymakers and businesses a guide to understanding and navigating this intricate landscape.

The report analyzes eight major jurisdictions with notable legislative and regulatory activity in artificial intelligence: the European Union, Canada, China, Japan, South Korea, Singapore, the United Kingdom, and the United States. It finds that, despite diverse cultural and regulatory settings, these jurisdictions share common goals and strategies in AI governance.

All of them strive to mitigate potential harms arising from AI while maximizing its societal benefits. They also align with the AI principles of the OECD (Organisation for Economic Co-operation and Development), endorsed by the G20 in 2019, which emphasize human rights, transparency, risk management, and other ethical considerations.

The European Union has taken one of the most proactive positions globally with its comprehensive AI Act, which aims to establish mandatory rules for high-risk applications of artificial intelligence, such as biometric identification and critical infrastructure. In contrast, the report notes that the United States has embraced a less stringent approach, emphasizing voluntary industry guidance and sector-specific rules.

EY’s analysis of U.S. AI regulation does not account for President Biden’s executive order of October 30th, which signals a newly robust stance by the U.S. government on managing AI. The order goes beyond the usual approach of issuing suggestions and sector-specific rules. It builds on the voluntary commitments made earlier this year by 15 tech companies, including Microsoft and Google, to let external parties test their AI systems before public release and to develop methods for identifying AI-generated content. Under the executive order, developers of AI systems must report safety-test results to the government before releasing their models to the public, and they must notify the government if a model could pose risks to national security, the economy, or public health. The order also addresses other issues, including immigration, biotechnology, and labor.

Click here to download the Full Report

Source: Venture Beat

Published On: November 28, 2023
Categories: News
