Global AI Regulations Roundup: Top Stories of August 2025

Contributors

Yasir Nawaz

Digital Content Producer at Securiti

Rohma Fatima Qayyum

Associate Data Privacy Analyst at Securiti

Muhammad Ismail

Assoc. Data Privacy Analyst at Securiti

Aamina Shekha

Associate Data Privacy Analyst at Securiti

Editorial Note

From Principles to Power Plays in AI Regulation

AI governance is entering a new phase: no longer about lofty principles, but about power and control. The EU’s GPAI obligations, China’s ethical measures, and U.S. state laws show regulators are converging on one theme: AI must operate within enforceable guardrails. What’s striking is the divergence of priorities: Europe anchors on transparency and copyright, Asia blends innovation with state oversight, while U.S. states test sector-specific bans and liability models. The trend suggests a fragmented but accelerating race to define AI’s limits. Expect a future where global companies face a patchwork of binding obligations, pushing them to over-comply, while governments compete not just to regulate AI but to shape the very standards that dictate its global use.

Watch: July's AI Pulse - All Major Highlights

A quick overview of global AI headlines you cannot afford to miss.

North & South America Jurisdiction

1. Colorado Legislature Votes to Delay AI Act’s Effective Date

August 26, 2025
Colorado, United States

The Colorado legislature has voted to extend the effective date of the Colorado AI Act, the first comprehensive state AI law in the U.S., from February 1, 2026, to June 30, 2026. The move came after lawmakers were unable to reach a consensus on proposed amendments during a special session held from August 21-26.

Despite four amendment bills being introduced, disagreements among legislators, civil society, and a divided technology and business lobby prevented compromise. The delay provides additional time for debate when the legislature reconvenes on January 14, 2026, ahead of the law’s new implementation deadline.

Read More

2. Illinois Bans AI-Driven Therapy Services

August 1, 2025
Illinois, United States

Illinois has enacted the Wellness and Oversight for Psychological Resources Act, one of the first state laws to regulate AI in therapy. The Act, effective immediately, prohibits AI systems and chatbots from delivering professional therapy or making therapeutic decisions. Only licensed professionals may provide such services, and they may not delegate independent therapeutic communication or recommendations to AI.

AI can still be used for administrative tasks, such as scheduling and billing, and for limited supplementary support like clinical documentation, but only with patient consent. Violations carry penalties of up to $10,000 per incident. The law reflects growing concern over AI-powered chatbots providing harmful or inaccurate mental health advice.

Read More

Europe & Africa Jurisdiction

3. European Commission Launches Consultation on DMA Review & AI Sector

August 27, 2025

The European Commission has issued a call for evidence seeking feedback on how the Digital Markets Act (DMA) ensures free competition in digital markets, including the AI sector.

The review aims to assess the law’s effectiveness and identify potential areas for improvement. The feedback will inform a report to be presented to the European Parliament, the Council, and the European Economic and Social Committee in May 2026. The deadline for submissions is September 24, 2025.

Read More

4. French & German Bodies Release Joint Paper on Securing AI Systems Through Zero Trust Principles

August 11, 2025
Germany & France

Germany’s Federal Office for Information Security (BSI) and France’s National Cybersecurity Agency (ANSSI) have released a joint paper on securing AI systems through zero trust principles. The paper, “Design Principles for LLM-based Systems with Zero Trust,” addresses the challenges posed by LLMs by extending traditional zero trust approaches to AI-specific threats.

The guidance recommends protecting sensitive AI components, such as model weights and training data, from unauthorized access; continuously monitoring AI inputs and outputs for suspicious activity; implementing defenses against attacks such as data poisoning and model evasion; limiting AI system access rights; ensuring transparent AI decision-making; and maintaining human oversight for critical decisions.
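The BSI/ANSSI paper is design guidance rather than code, but a rough, hypothetical sketch may help illustrate how a few of these controls (least-privilege access scopes, input/output monitoring, and human oversight for critical outputs) could be wired around an LLM call. The class, scope names, and pattern list below are illustrative assumptions, not taken from the paper.

```python
import logging
import re
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-gateway")

# Hypothetical deny-list used to flag suspicious prompts (e.g., prompt injection attempts).
SUSPICIOUS_PATTERNS = [r"ignore previous instructions", r"reveal.*system prompt"]


class ZeroTrustLLMGateway:
    """Illustrative wrapper applying zero-trust-style controls around an LLM call."""

    def __init__(self, model: Callable[[str], str], allowed_scopes: set[str]):
        self.model = model                    # treated as an untrusted component, never exposed directly
        self.allowed_scopes = allowed_scopes  # least privilege: callers receive explicit scopes only

    def complete(self, prompt: str, caller_scope: str, critical: bool = False) -> str:
        # 1. Limit access rights: reject callers without an authorized scope.
        if caller_scope not in self.allowed_scopes:
            raise PermissionError(f"scope '{caller_scope}' is not authorized")

        # 2. Monitor inputs: flag and block suspicious prompts before they reach the model.
        if any(re.search(p, prompt, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS):
            log.warning("blocked suspicious prompt: %r", prompt)
            raise ValueError("prompt rejected by input monitoring")

        output = self.model(prompt)

        # 3. Monitor outputs: log every response for auditing and anomaly detection.
        log.info("scope=%s prompt_len=%d output_len=%d", caller_scope, len(prompt), len(output))

        # 4. Human oversight: withhold critical outputs until a reviewer approves them.
        if critical and not self.human_approval(output):
            raise RuntimeError("output withheld pending human review")
        return output

    @staticmethod
    def human_approval(output: str) -> bool:
        # Placeholder for a real review workflow (ticketing, four-eyes sign-off, etc.).
        return False


if __name__ == "__main__":
    gateway = ZeroTrustLLMGateway(model=lambda p: f"echo: {p}", allowed_scopes={"support-bot"})
    print(gateway.complete("Summarize today's tickets.", caller_scope="support-bot"))
```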

Read More

5. EIOPA Issues Opinion on AI Use in Insurance

August 6, 2025

The European Insurance and Occupational Pensions Authority (EIOPA) has published an Opinion clarifying how existing insurance-sector legislation applies to the use of AI systems. The guidance aims to help national supervisors and insurers interpret frameworks such as the Insurance Distribution Directive and Solvency II Directive in light of the EU AI Act, which classifies AI systems used in life and health insurance pricing and risk assessment as high-risk.

The Opinion does not create new obligations but sets supervisory expectations around governance and risk management. It emphasizes principles such as data governance, record-keeping, fairness, cybersecurity, explainability, and human oversight, following a risk-based and proportionate approach. EIOPA also intends to issue further guidance on specific AI use cases and emerging issues in the sector.

Read More

6. European Commission Publishes GPAI Code of Practice Signatories

August 4, 2025

The European Commission has released the list of 25 organisations that have signed the General-Purpose AI (GPAI) Code of Practice, a voluntary framework supporting compliance with the AI Act. Signatories include global leaders such as Google, Microsoft, OpenAI, Anthropic, Mistral AI, Amazon, IBM, and Cohere, alongside several European AI firms.

Notably, X signed only the Safety and Security chapter, meaning it must demonstrate compliance with transparency and copyright obligations through alternative methods. The Commission’s publication of the list underscores growing industry alignment with the EU’s approach to AI accountability and governance.

Read More

7. EU AI Act Obligations for General-Purpose AI Models Now in Force

August 2, 2025

The AI Act obligations for providers of general-purpose AI (GPAI) models have officially taken effect across the EU, ushering in new requirements for transparency, copyright, and responsible development. Providers must now disclose training data summaries, safeguard copyright, and ensure models meet baseline safety standards.

The Commission has issued guidelines clarifying compliance duties and confirmed the GPAI Code of Practice as a voluntary tool to ease implementation and provide legal certainty. While most providers must meet transparency and copyright obligations by August 2, 2025, models already on the market have until August 2, 2027, to comply. More advanced models with systemic risks face stricter obligations, including Commission notification and enhanced safety measures.

Read More

8. EU AI Board Confirms GPAI Code of Practice as Compliance Tool

August 1, 2025

The EU’s AI Board has formally approved the General-Purpose AI (GPAI) Code of Practice, confirming it as an adequate voluntary mechanism for providers of GPAI models to demonstrate compliance with the AI Act. Published in July 2025 after a multi-stakeholder drafting process, the Code addresses obligations around transparency, copyright, and safety.

By adhering to the Code, GPAI providers can reduce administrative burdens and gain greater legal certainty when placing their models on the EU market. The approval highlights the EU’s push to create practical pathways for compliance as GPAI obligations under the AI Act begin to take effect.

Read More

9. EPRS Publishes Report on Generative AI and Copyright

August 1, 2025

The European Parliamentary Research Service (EPRS) has released a report examining the copyright implications of generative AI in the context of large language model (LLM) training. The study highlights two central challenges: attribution, i.e., whether outputs meaningfully derive from training data, and novelty, i.e., whether outputs represent genuinely new creations or statistical reproductions of existing works.

The report recommends three key steps: requiring AI developers to track and disclose training data with independent audits, creating compensation systems that reflect the statistical influence of creators’ work, and developing open standards to ensure collaboration between creators, regulators, and researchers. EPRS concludes that the EU is well-positioned to set global standards for transparency, attribution, and accountability in generative AI development.

Read More

Asia Jurisdiction

10. China Releases Draft Measures on Ethical AI Management

August 22, 2025
China

China’s Ministry of Industry and Information Technology (MIIT) has issued draft Administrative Measures for the Ethical Management of AI for public comment. The rules apply to AI R&D and applications that pose risks to life, health, dignity, the environment, or public order.

They establish a four-tier review system, including expedited reviews within 72 hours for urgent cases, and require institutions to register projects, report to a national platform, and comply with oversight or face penalties. The draft highlights China’s push to pair rapid AI innovation with ethical governance and global standard-setting.

Read More

11. India Releases Framework for Responsible AI in Finance (FREE-AI Report)

August 13, 2025
India

The Reserve Bank of India (RBI) has published the report of its Committee on the Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) in the Financial Sector. Established in December 2024, the Committee engaged with a broad set of stakeholders before finalizing its recommendations.

The report sets out 7 “Sutras” as guiding principles for AI adoption in finance, supported by 26 actionable recommendations across six strategic pillars. The framework emphasizes balancing innovation with risk management, ensuring AI deployment enhances efficiency, trust, and resilience without compromising consumer protection or financial stability.

The report is now available on the RBI’s website for wider review and discussion.

12. Nepal Approves National AI Policy

August 11, 2025
Nepal

Nepal has adopted its first National AI Policy, 2025, aimed at fostering ethical, secure, and inclusive use of AI while supporting research, startups, and public–private partnerships. The policy establishes legal and regulatory frameworks, calls for an AI Regulation Council and a National AI Centre, and commits to reviews every two years.

By expanding infrastructure such as data centres and high-speed connectivity, and embedding safeguards for privacy, rights, and security, the policy seeks to integrate AI across key sectors and position Nepal as a competitive player in global AI markets.

Read More

13. Indonesia Opens Consultation on National AI Roadmap and Ethics Guidelines

August 11, 2025
Indonesia

Indonesia’s Ministry of Communications & Informatics has launched a public consultation on its National AI Roadmap and Draft AI Ethics Guidelines. Drawing from global standards set by UNESCO, OECD, and ASEAN, the guidelines emphasize inclusivity, transparency, and accountability in AI development and use.

By embedding these international principles, Indonesia aims to position itself as a regional leader in shaping responsible AI governance across ASEAN. Public feedback is open until August 22, 2025.

Read More

14. Saudi Arabia’s SDAIA Publishes Report on Agentic AI

August 8, 2025
Saudi Arabia

The Saudi Data & AI Authority (SDAIA) has published a report on Agentic AI, defining its six core capabilities: perception, reasoning, learning, action-taking, communication, and autonomous operation. The report charts AI’s evolution from early rule-based systems to today’s generative agents powered by large language models, which can collaborate to achieve complex goals.

Framing Agentic AI as a pillar of Saudi Vision 2030, SDAIA highlights its potential to accelerate digital transformation and support the Kingdom’s transition to a knowledge-driven economy.
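SDAIA’s report describes these six capabilities at a conceptual level. Purely as an illustration, the hypothetical sketch below maps them onto a minimal agent loop; none of the names or logic are drawn from the SDAIA report, and a real agent would delegate the reasoning step to an LLM.

```python
from dataclasses import dataclass, field


@dataclass
class MinimalAgent:
    """Hypothetical sketch mapping the six Agentic AI capabilities onto a simple loop."""
    goal: str
    memory: list[str] = field(default_factory=list)  # learning: retain results of past steps

    def perceive(self, observation: str) -> str:
        # Perception: take in an observation from the environment.
        return observation.strip()

    def reason(self, observation: str) -> str:
        # Reasoning: a real agent would plan with an LLM here; this stub uses a fixed rule.
        return f"summarize findings for '{self.goal}'" if self.memory else f"gather data on '{observation}'"

    def act(self, plan: str) -> str:
        # Action-taking: execute the plan (e.g., call a tool or API) and remember the result.
        result = f"executed: {plan}"
        self.memory.append(result)
        return result

    def communicate(self, result: str) -> None:
        # Communication: share outcomes with users or other agents.
        print(result)

    def run(self, observations: list[str]) -> None:
        # Autonomous operation: iterate without step-by-step human prompts.
        for obs in observations:
            self.communicate(self.act(self.reason(self.perceive(obs))))


if __name__ == "__main__":
    MinimalAgent(goal="quarterly KPI summary").run(["new sales data available", "data gathered"])
```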

Read More

15. APEC Digital and AI Ministers Issue Joint Statement

August 4, 2025

At the first-ever APEC Digital and AI Ministerial Meeting in Incheon, ministers from across APEC economies issued a joint statement committing to responsible digital transformation and trusted AI development. The statement emphasized three priorities: advancing AI innovation to address socio-economic challenges, expanding digital connectivity, and ensuring a safe and reliable digital ecosystem.

Ministers also recognized Korea’s leadership in spearheading a new APEC AI initiative, expected by the end of 2025, and pledged to sustain momentum under the APEC Internet and Digital Economy Roadmap.

Read More

16. Australia’s OAIC Closes Inquiries into I-MED, Harrison.ai, and Annalise.ai

August 1, 2025
Australia

The Office of the Australian Information Commissioner (OAIC) has closed preliminary inquiries into I-MED Radiology Network, Harrison.ai, and Annalise.ai regarding the disclosure of medical imaging data for AI model training.

The OAIC reviewed whether I-MED’s sharing of patient data between 2020 and 2022 complied with the Australian Privacy Principles (APPs). The Commissioner found that the data had been sufficiently de-identified and was therefore no longer considered personal information under the Privacy Act.

While no regulatory action will be taken, the OAIC emphasized that developing AI models with large datasets remains a high-risk activity and highlighted this case as an example of good de-identification and contractual safeguards.

Read More

WHAT'S NEXT:
Key Privacy Developments to Watch For

Taiwan’s Generative AI Competition Review: The Fair Trade Commission’s consultation on generative AI’s impact on competition, covering hardware supply chains, deployment, and risks like dominance and collusion, is open until September 7, 2025.

China’s AI Content Labeling Rules: Effective September 1, 2025, platforms must embed visible labels and metadata on AI-generated text, images, audio, and video, while also flagging suspected synthetic content.

Croatian and German Data Protection Authorities’ (AZOP-BfDI) Guidelines on Personal Data Protection in AI Training: Croatian and German data protection authorities are consulting until August 30, 2025, on protecting personal data in AI training, particularly around memorization risks in large language models. The results may influence Europe-wide standards.

California Advances 3 Major AI Bills: California is advancing three major AI bills: Assembly Bill 1064 (Leading Ethical AI Development for Kids Act), which aims to regulate the use of children’s personal information by AI systems; Senate Bill 1018 (Automated Decisions Safety Act), which regulates automated decision systems (ADS); and Senate Bill 420 (California AI Transparency Act).
