Securiti has initiated an AI Regulation Digest, providing a comprehensive overview of the most recent significant global developments, announcements, and changes in the field of AI regulation. This information will be regularly updated on our website, presenting a monthly roundup of key activities. Each regulatory update will include links to related resources at the bottom for your reference.
1. Saudi Arabia’s Generative AI Guide for the Public
Country: Saudi Arabia
Date: 3 January
Summary: Saudi Arabia has published a Generative AI Guide for the public. It provides guidelines on the adoption and use of generative artificial intelligence systems and includes examples based on common scenarios that entities may encounter. Additionally, it highlights the challenges and considerations associated with the use of generative artificial intelligence, proposes principles for responsible use, and recommends best practices. Read more.
2. Japan's AI Strategy Council Draft AI Operator Guidelines
Country: Japan
Date: 5 January
Summary: Japan's AI Strategy Council has released its draft AI Operator Guidelines. Unveiled on December 21, 2023, during the seventh AI Strategy Conference, the draft delineates essential principles for AI developers, providers, and users. For AI developers, the guidelines emphasize proper data handling, ensuring the AI model's appropriateness, implementing robust security measures, and communicating transparently with stakeholders.
AI providers are encouraged to adhere to defined usage scopes, maintain thorough documentation, address vulnerabilities, and communicate pertinent information. AI users are directed to exercise responsible AI use, be cognizant of biases, prioritize privacy, implement security measures, communicate with stakeholders, and comply with regulations. Together, these guidelines aim to establish a comprehensive framework for responsible AI development and deployment, fostering ethical practices and informed decision-making across the AI ecosystem. Read more.
3. British Standards Institution Released British Standard BS ISO/IEC 42001:2023
Country: UK
Date: 17 January
Summary: The British Standards Institution released British Standard BS ISO/IEC 42001:2023 on January 16, 2024, aligning with ISO/IEC 42001:2023, to guide organizations in the responsible use of artificial intelligence (AI). The standard focuses on addressing issues such as non-transparent automated decision-making and the preference for machine learning over human-coded logic. It provides a framework for establishing, implementing, maintaining, and improving AI management systems, emphasizing safeguards and impact-based risk assessments.
The goal is to cultivate a quality-centric culture within organizations, encouraging responsible contributions to the design and provision of AI-enabled products and services for societal benefit. This standard is notably referenced in the UK Government's National AI Strategy, serving as a crucial step towards establishing guardrails for the safe, ethical, and responsible use of AI. Read more.
4. China's Draft Guidelines for AI Industry
Country: China
Date: 18 January
Summary: China has introduced draft guidelines to standardize the AI industry. The guidelines call for standardization of, among other things, AI terminology, architecture, testing, evaluation, data services, chips, sensors, computing devices, security, and governance. (Guidelines attached in Chinese.) Read more.
5. Australia Considering Mandatory Safety Guardrails for AI
Country: Australia
Date: 18 January
Summary: The Australian government will consider mandatory safeguards for those who develop or deploy AI systems in legitimate, high-risk settings. It will explore possible legislative vehicles for introducing such guardrails in close consultation with industry and the community. Read more.
6. Danish Digital Agency (Digitaliseringsstyrelsen) has Released a Guide for Generative Artificial Intelligence (AI)
Country: Denmark
Date: 19 January
Summary: The Danish Digital Agency (Digitaliseringsstyrelsen) has released a guide for companies addressing the responsible use of generative artificial intelligence (AI). The guide offers recommendations to ensure the responsible deployment of generative AI tools. Among these recommendations, companies are advised to identify relevant generative AI tools based on their business needs, establish guidelines for their use, and implement an organizational framework encompassing knowledge-building, learning, development, testing, and validation processes.
Additionally, the guide highlights potential drawbacks associated with generative AI, including the risk of bias, factually incorrect answers, and data breaches. To mitigate these risks, the guide suggests measures such as implementing robust data security, enhancing information quality control, and providing employee training on data privacy. Overall, the guide aims to assist companies in navigating the responsible and ethical utilization of generative AI technologies. Read more.
Conclusion
Securiti's AI Regulation Digest serves as an invaluable resource for staying ahead of the latest global developments in the AI industry. Our commitment to providing timely updates ensures that you have access to crucial information and a better understanding of the evolving regulatory landscape in the field of AI.
The team has also created a dedicated page showcasing 'An Overview of Emerging Global AI Regulations' around the world. Click here to delve deeper and learn more about the evolving landscape of global AI regulations.