Securiti launches Gencore AI, a holistic solution to build Safe Enterprise AI with proprietary data - easily

View

Global AI Regulations Roundup: Top Stories of October 2024

Published October 24, 2024 / Updated October 30, 2024

Securiti has initiated an AI Regulation digest, providing a comprehensive overview of the most recent significant global developments, announcements, and changes in the field of AI regulation. Our website will regularly update this information, presenting a monthly roundup of key activities. Each regulatory update will include links to related resources at the bottom for your reference.

EU Jurisdiction

1. Netherlands Publishes Guide For AI Act Compliance Aimed At Entrepreneurs & Businesses

Date: 19 October, 2024
Summary: The government of the Netherlands has released its guide on the AI Act, meant for entrepreneurs and organizations involved in AI development. The guide provides rules for responsible AI usage that ensure public safety, health, and protection of fundamental rights. Its recommendations include risk assessments to classify AI systems per regulatory requirements. Any practices that pose unacceptable risks, such as behavioral manipulation, exploitation of vulnerabilities, social scoring, and certain forms of biometric identification, are prohibited. At the same time, systems classified as “high-risk” must comply with strict criteria before deployment.

Lastly, the guide provides a potential implementation plan for the regulation that will come into effect in mid-2027. Certain AI systems will be restricted from February 2025.

Asia Jurisdiction

Date: 2nd October, 2024
Summary: The Japanese AI Safety Institute (AISI) has released its Guide to Evaluation Perspectives on AI Safety. The guide is meant for AI developers and providers while aligning with the AISI's principles of human centricity, safety, fairness, privacy, security, transparency, and accountability.

The guide includes recommendations on safety evaluations by AI system type and impact and proposals for mechanisms that uphold these principles, such as controlling toxic outputs, preventing misinformation, and ensuring data quality. The guide further recommends the evaluations be conducted by those involved in AI development throughout various phases, including data collection, model training, and system validation, to enhance AI safety and responsibility. Read more.

2. Indonesia's Ministry of Communication and Information Announces Plans for Innovation-Focused AI Regulations

Date: 2nd October, 2024
Summary: Indonesia's Ministry of Communication and Information (Kominfo) announced plans to develop AI regulations focusing on innovation. These plans will take global developments into account while addressing AI's cross-sectorial uses. Kominfo highlighted Indonesia's preexisting Personal Data Protection Law (PDPL) and the 2023 AI Ethics Circular as examples of regulations that stress inclusivity, security, and intellectual property rights. Read more.

3. Japan’s AI Safety Institute's Latest Guide Provides Details on Red Teaming Methodology

Date: 3rd October, 2024
Summary: The Japanese AI Safety Institute (AISI) has released its Guide to Red Teaming Methodology on AI Safety. The document provides details related to key considerations when evaluating an AI system from an attacker's point of view. It not only assesses the effectiveness of AI safety measures via various methods, including black box, white box, and grey box testing, across production, staging, and development environments but also discusses attack methods, such as automated tools and data poisoning, and recommends conducting red teaming exercises before AI system releases and while they're being used. Read more.

4. Indonesia Becomes First Southeast Asian Country to Complete AI Readiness Assessment Using UNESCO's Methodology

Date: 4th October, 2024
Summary: Indonesia has become the first Southeast Asian country to complete the AI Readiness Assessment by using UNESCO's methodology. The subsequent report highlights AI's social impacts on rural employment and urban ethical adoption, identifies bias-related information gaps, and recommends the establishment of a National AI Agency for ethical governance while also emphasizing the need for equal access to education and infrastructure for all researchers and startups to aid in better coordination and collaboration. Read more.

Securiti's AI Regulation round is an invaluable resource for staying ahead of the latest global developments in the AI industry. Our commitment to timely updates ensures that you have access to crucial information and a better understanding of the evolving AI regulatory landscape.

5. Office of the Australian Information Commissioner's New Guidance Contains Key Privacy Considerations For AI Developers

Date: 22 October, 2024
Summary: The Office of the Australian Information Commissioner (OAIC) has released guidance on privacy considerations when developing Generative AI models. The guidance contains key privacy considerations for AI system developers and emphasizes the application of the Privacy Act of 1988 to the collection, use, and disclosure of personal information when training AI models, even if the data is publicly available.

Key takeaways from the guidance include the following:

  • Developers must ensure accuracy by using high-quality datasets;
  • Developers must have appropriate consent when dealing with sensitive information;
  • Developers must communicate their privacy practices through the privacy policy;
  • Developers must take a privacy-by-design approach by conducting privacy impact assessments and ensuring that personal information is only used for its intended purpose or obtaining additional consent when necessary.

Using personal data for AI training without appropriate consent or a primary purpose will lead to regulatory risks. Hence, developers must adopt cautious practices, especially when collecting data through methods like web scraping or third-party datasets.

Date: 21 October, 2024
Summary: The OAIC released its guidance on commercially available AI. The guidelines focus on obligations related to personal information used in AI systems. Key points of the guidelines include the following:

  • Organizations must assess whether their AI products are suitable for the intended use in the context of privacy risks;
  • The privacy policy must explain how AI is used, especially for public tools like chatbots;
  • All AI-generated information must conform with the privacy laws;
  • Organizations must not input sensitive information into public AI tools due to high privacy risks.

Furthermore, there are additional checklists to evaluate and ensure AI products are being used responsibly.

The team has also created a dedicated page showcasing 'An Overview of Emerging Global AI Regulations’ worldwide. Click here to delve deeper and learn more about the evolving landscape of global AI regulations.

Join Our Newsletter

Get all the latest information, law updates and more delivered to your inbox


Share


More Stories that May Interest You

What's
New