Global AI Regulations Roundup: Top Stories of October 2024

Contributors

Anas Baig, Product Marketing Manager at Securiti
Muhammad Ismail, Assoc. Data Privacy Analyst at Securiti
Syed Tatheer Kazmi, Data Privacy Analyst (CIPP/Europe)
Salma Khan, Data Privacy Analyst at Securiti (CIPP/Asia)
Usman Tariq, Senior Global Compliance Analyst at Securiti (CIPP/US)
Rohma Fatima Qayyum, Associate Data Privacy Analyst at Securiti

Published October 24, 2024 / Updated November 17, 2025

Securiti has initiated an AI Regulation digest, providing a comprehensive overview of the most recent significant global developments, announcements, and changes in the field of AI regulation. Our website will regularly update this information, presenting a monthly roundup of key activities. Each regulatory update will include links to related resources at the bottom for your reference.

EU Jurisdiction

1. Netherlands Publishes Guide On AI Act for Organizations Involved In AI Development

Date: October 19, 2024
Summary: The government of the Netherlands has released its guide on the AI Act, meant for entrepreneurs and organizations involved in AI development. The guide provides rules for responsible AI usage that ensure public safety, health, and protection of fundamental rights. Its recommendations include risk assessments to classify AI systems per regulatory requirements. Any practices that pose unacceptable risks, such as behavioral manipulation, exploitation of vulnerabilities, social scoring, and certain forms of biometric identification, are prohibited. At the same time, systems classified as “high-risk” must comply with strict criteria before deployment.

Lastly, the guide outlines a potential implementation plan for the regulation, which will come into full effect in mid-2027, while certain AI practices will be restricted from February 2025. Read more.

Asia Jurisdiction

2. Japan's AI Safety Institute Releases Guide to Evaluation Perspectives on AI Safety

Date: October 2, 2024
Summary: The Japanese AI Safety Institute (AISI) has released its Guide to Evaluation Perspectives on AI Safety. The guide is meant for AI developers and providers while aligning with the AISI's principles of human centricity, safety, fairness, privacy, security, transparency, and accountability.

The guide includes recommendations on safety evaluations by AI system type and impact and proposals for mechanisms that uphold these principles, such as controlling toxic outputs, preventing misinformation, and ensuring data quality. The guide further recommends the evaluations be conducted by those involved in AI development throughout various phases, including data collection, model training, and system validation, to enhance AI safety and responsibility. Read more.

3. Indonesia's Ministry of Communication and Information Announces Plans for Innovation-Focused AI Regulations

Date: October 2, 2024
Summary: Indonesia's Ministry of Communication and Information (Kominfo) announced plans to develop AI regulations focusing on innovation. These plans will take global developments into account while addressing AI's cross-sectorial uses. Kominfo highlighted Indonesia's preexisting Personal Data Protection Law (PDPL) and the 2023 AI Ethics Circular as examples of regulations that stress inclusivity, security, and intellectual property rights. Read more.

4. Japan’s AI Safety Institute's Latest Guide Provides Details on Red Teaming Methodology

Date: October 3, 2024
Summary: The Japanese AI Safety Institute (AISI) has released its Guide to Red Teaming Methodology on AI Safety. The document details key considerations for evaluating an AI system from an attacker's point of view. It covers methods for assessing the effectiveness of AI safety measures, including black-box, white-box, and grey-box testing across production, staging, and development environments. It also discusses attack methods, such as automated tools and data poisoning, and recommends conducting red teaming exercises both before AI systems are released and while they are in use. Read more.

5. Indonesia Becomes First Southeast Asian Country to Complete AI Readiness Assessment Using UNESCO's Methodology

Date: October 4, 2024
Summary: Indonesia has become the first Southeast Asian country to complete the AI Readiness Assessment using UNESCO's methodology. The resulting report highlights AI's social impacts on rural employment and urban ethical adoption, identifies bias-related information gaps, and recommends establishing a National AI Agency for ethical governance. It also emphasizes the need for equal access to education and infrastructure for all researchers and startups to support better coordination and collaboration. Read more.

6. Office of the Australian Information Commissioner's New Guidance Contains Key Privacy Considerations For AI Developers

Date: October 22, 2024
Summary: The Office of the Australian Information Commissioner (OAIC) has released guidance on privacy considerations when developing Generative AI models. The guidance contains key privacy considerations for AI system developers and emphasizes that the Privacy Act 1988 applies to the collection, use, and disclosure of personal information when training AI models, even if the data is publicly available.

Key takeaways from the guidance include the following:

  • Developers must ensure accuracy by using high-quality datasets;
  • Developers must have appropriate consent when dealing with sensitive information;
  • Developers must communicate their privacy practices through the privacy policy;
  • Developers must take a privacy-by-design approach by conducting privacy impact assessments and ensuring that personal information is only used for its intended purpose or obtaining additional consent when necessary.

Using personal data for AI training without appropriate consent, or for purposes beyond the primary purpose of collection, creates regulatory risk. Developers should therefore adopt cautious practices, especially when collecting data through methods such as web scraping or third-party datasets. Read more.

7. OAIC Releases Guidance on Using Commercially Available AI Products

Date: October 21, 2024
Summary: The OAIC has released guidance on the use of commercially available AI products. The guidelines focus on obligations related to personal information used in AI systems. Key points include the following:

  • Organizations must assess whether their AI products are suitable for the intended use in the context of privacy risks;
  • The privacy policy must explain how AI is used, especially for public tools like chatbots;
  • All AI-generated information must comply with privacy laws;
  • Organizations must not input sensitive information into public AI tools due to high privacy risks.

Furthermore, there are additional checklists to evaluate and ensure AI products are being used responsibly. Read more.

Securiti's AI Regulation roundup is an invaluable resource for staying ahead of the latest global developments in the AI industry. Our commitment to timely updates ensures that you have access to crucial information and a better understanding of the evolving AI regulatory landscape.

The team has also created a dedicated page, 'An Overview of Emerging Global AI Regulations,' covering developments worldwide. Click here to delve deeper and learn more about the evolving landscape of global AI regulations.
