Beneath the headlines, a deeper pattern is emerging: regulators are no longer debating whether AI should be governed; they are defining how.
Three structural shifts stand out. First, AI is being pulled firmly into existing legal regimes. Authorities are signaling that autonomy does not dilute accountability; it amplifies it. Second, attention is moving from model outputs to system architecture: identity controls, interoperability, lifecycle risk, and data flows. Governance is becoming technical, not just declaratory. Third, the focus is shifting from spectacular harms to routine risk. The question is no longer only whether AI can be abused, but whether it can be trusted during ordinary operations.
For organisations, the implication is clear: compliance will increasingly hinge on demonstrable controls such as logging, access restrictions, risk testing, and governance embedded by design. The era of high-level AI principles is giving way to operational scrutiny.
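To make this concrete, here is a minimal, purely illustrative sketch of what "demonstrable controls" can look like in practice: an access check plus an audit-log entry around every AI-driven action. The policy table, role names, and function below are hypothetical, not a prescribed framework.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

# Hypothetical policy: which roles may trigger which AI-driven decisions.
ACTION_POLICY = {
    "credit_scoring": {"risk_analyst"},
    "resume_screening": {"hr_reviewer"},
}

def invoke_ai_action(action: str, user_role: str, payload: dict) -> dict:
    """Run an AI-driven action only if the caller's role is permitted,
    writing an audit-trail entry either way."""
    allowed = user_role in ACTION_POLICY.get(action, set())
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "role": user_role,
        "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"role '{user_role}' may not invoke '{action}'")
    return {"action": action, "input": payload, "result": "model output goes here"}

# Example: a permitted call succeeds; an unauthorised one is blocked and logged.
invoke_ai_action("credit_scoring", "risk_analyst", {"applicant_id": "A-102"})
```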
North & South America Jurisdiction
1. Connecticut AG Issues Memorandum on Application of Existing Laws to AI
February 25, 2026 | Connecticut, United States
Connecticut Attorney General William Tong released a memorandum clarifying how existing state laws apply to artificial intelligence systems.
The advisory outlines enforcement pathways under Connecticut’s civil rights laws, privacy and data security statutes, consumer protection laws, and antitrust framework. It emphasizes that AI-driven decisions in areas such as employment, tenant screening, lending, insurance, and targeted advertising remain subject to established legal obligations.
The memorandum signals that Connecticut does not view AI as operating in a regulatory vacuum. Instead, regulators intend to apply existing statutory tools to address discrimination, misuse of personal data, unfair trade practices, and harmful algorithmic outcomes.
The move reflects a growing state-level trend: leveraging existing legal frameworks to govern AI deployment while broader AI-specific legislation continues to evolve.
2. Court of Appeal for British Columbia Upholds Privacy Mandate Against Clearview AI
February 18, 2026 | British Columbia, Canada
The British Columbia Court of Appeal dismissed Clearview AI’s appeal and upheld the provincial privacy regulator’s order finding Clearview contravened BC’s Personal Information Protection Act (PIPA) by scraping and processing facial images of British Columbians from public websites without consent.
The Court confirmed PIPA applies to Clearview due to a “real and substantial connection” between its online activities and BC, rejected arguments that the data was “publicly available” for exemption purposes, and upheld the enforceability of the remedial order requiring Clearview to stop offering services in BC and make best efforts to cease collection and delete affected facial data.
For AI and biometrics, the ruling reinforces that large-scale facial recognition datasets built through scraping remain high-risk and consent-dependent, even when images are publicly accessible online.
3. NIST Launches AI Agent Standards Initiative to Boost Security and Interoperability
February 17, 2026 | United States
NIST’s Center for AI Standards and Innovation (CAISI) announced a new AI Agent Standards Initiative focused on developing industry-led standards and protocols to support secure, interoperable adoption of AI agents capable of autonomous actions across digital systems.
The initiative will advance work across three areas: (1) supporting standards development and U.S. engagement in international standards bodies, (2) fostering open-source protocols for agent interoperability, and (3) strengthening research on agent security, identity, and authorization.
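The identity and authorization strand is still at the concept-paper stage. As a purely illustrative sketch (the class and scope names are assumptions, not a NIST protocol), one common pattern is to bind each agent to a credential with explicitly scoped tool permissions and check it before every tool call:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """Hypothetical agent credential with explicitly scoped tool permissions."""
    agent_id: str
    scopes: set[str] = field(default_factory=set)

def authorize_tool_call(identity: AgentIdentity, tool: str) -> None:
    """Reject any tool invocation the agent's credential does not cover."""
    if tool not in identity.scopes:
        raise PermissionError(f"agent {identity.agent_id} lacks scope '{tool}'")

# Usage: an agent scoped to calendar access cannot touch a payments tool.
agent = AgentIdentity(agent_id="scheduler-01", scopes={"calendar.read", "calendar.write"})
authorize_tool_call(agent, "calendar.read")        # passes
# authorize_tool_call(agent, "payments.transfer")  # would raise PermissionError
```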
NIST is also seeking public input through an RFI on AI agent security (due March 9) and an AI agent identity/authorization concept paper (due April 2), with sector-focused listening sessions planned from April.
The announcement signals increasing U.S. focus on building trust infrastructure for autonomous AI systems before broad enterprise deployment.
4. Brazilian Authorities Take Action Against the Generation of Explicit Content by X Corp.’s AI Tool
February 11, 2026 | Brazil
Brazil’s data protection authority, Autoridade Nacional de Proteção de Dados (ANPD), alongside the Federal Public Prosecutor's Office (MPF) and National Consumer Secretariat (Senacon), issued a joint recommendation to X Corp. over serious safety concerns linked to its AI tool, Grok.
Authorities found the system continued generating explicit content involving minors and non-consenting adults despite prior warnings. ANPD has given X Corp. five days to demonstrate that the outputs have been effectively blocked, while MPF has required monthly transparency reports on deepfakes and harmful AI-generated content.
Failure to comply could result in significant fines and potential criminal exposure.
The action signals Brazil’s intensifying regulatory scrutiny over generative AI safeguards and platform accountability.
Europe Jurisdiction
5. Spain’s AEPD Issues Guidance on Data Protection Risks of Agentic AI
February 18, 2026 | Spain
Spain’s data protection authority, Agencia Española de Protección de Datos (AEPD), has released guidance on data protection considerations when deploying agentic AI systems capable of autonomous decision-making and multi-step task execution.
The authority warns that agentic systems may expand processing beyond intended purposes, autonomously combine datasets, and create confidentiality or integrity risks not fully anticipated by traditional compliance models.
The AEPD recommends a clear technical understanding of system behavior before deployment, strict application of data protection by design and by default, and targeted Data Protection Impact Assessments focusing on autonomy and chained access risks. It also urges limiting system permissions and implementing continuous logging and monitoring.
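As a rough illustration of how "limiting system permissions" and "continuous logging and monitoring" can be operationalised, the sketch below uses a deny-by-default gateway that an agent must pass through for every data-source access; the class and source names are hypothetical, not AEPD requirements.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
monitor = logging.getLogger("agent_monitor")

class ScopedAgentGateway:
    """Deny-by-default gateway: an agent may only reach data sources it was
    explicitly granted, and every access attempt is logged for review."""

    def __init__(self, granted_sources: set[str]):
        self.granted_sources = granted_sources

    def access(self, source: str, purpose: str) -> bool:
        allowed = source in self.granted_sources
        monitor.info(
            "%s source=%s purpose=%s allowed=%s",
            datetime.now(timezone.utc).isoformat(), source, purpose, allowed,
        )
        return allowed

gateway = ScopedAgentGateway(granted_sources={"crm_contacts"})
gateway.access("crm_contacts", purpose="draft follow-up email")  # allowed, logged
gateway.access("hr_salaries", purpose="draft follow-up email")   # denied, logged
```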
The guidance reflects growing regulatory scrutiny as autonomous AI systems move into enterprise environments.
6. Dutch DPA Warns Against Use of OpenClaw AI Agent Over Security Risks
February 12, 2026 | Netherlands
The Dutch Data Protection Authority, Autoriteit Persoonsgegevens (AP), has issued a public warning against the use of OpenClaw and similar open-source AI agents, citing serious cybersecurity and data protection risks.
OpenClaw operates locally and requires broad access to emails, files, and connected services, which the AP describes as creating “Trojan horse”-like vulnerabilities. The authority flagged malware risks in certain plugins, exposure to prompt injection attacks, and critical flaws that could enable remote code execution and full device compromise. The AP also urged parents to check whether children have installed such tools and called at EU level for clarification that autonomous AI agents fall within the scope of the AI Act.
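The plugin and access risks described here are generic to locally run agents. A minimal mitigation sketch, with hypothetical names rather than any OpenClaw feature, is to pin approved plugins to known-good checksums and confine the agent’s file access to one explicit workspace directory:

```python
import hashlib
from pathlib import Path

# Hypothetical pinned allowlist: plugin file name -> expected SHA-256 digest.
PLUGIN_ALLOWLIST = {
    "calendar_sync.py": "replace-with-known-good-digest",
}
ALLOWED_DATA_DIR = Path.home() / "agent_workspace"

def verify_plugin(path: Path) -> bool:
    """Load a plugin only if its digest matches the pinned allowlist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return PLUGIN_ALLOWLIST.get(path.name) == digest

def within_allowed_dir(path: Path) -> bool:
    """Confine the agent's file access to one explicit workspace directory."""
    return ALLOWED_DATA_DIR.resolve() in path.resolve().parents
```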
The warning highlights rising regulatory concern over security risks linked to agentic AI tools.
7. BEUC Warns Digital Omnibus Could Weaken AI Act Protections
February 5, 2026 | European Union
The European Consumer Organisation (BEUC) has warned that the European Commission’s Digital Omnibus proposal may dilute key consumer safeguards secured under the EU AI Act.
BEUC cautions that proposed simplifications risk weakening high-risk AI system registration obligations and mandatory AI literacy requirements designed to ensure transparency for consumers. The organisation also stresses that any lighter compliance regime should remain limited to SMEs and startups, rather than extending regulatory advantages to large established companies.
The intervention adds to broader concerns that the Omnibus reform could disrupt carefully negotiated digital rights protections within the EU’s AI governance framework.
8. UK ICO Opens Formal Investigation into Grok AI Over Harmful Content Risks
February 3, 2026 | United Kingdom
The Information Commissioner's Office (ICO) has launched formal investigations into X Internet Unlimited Company and X.AI LLC over their processing of personal data in connection with the Grok AI system.
The investigation follows reports that Grok generated non-consensual sexualised images, including involving children. The ICO will examine whether personal data was processed lawfully, fairly and transparently, and whether appropriate safeguards were built into the system’s design and deployment. The regulator is coordinating with Ofcom and international authorities, and has not yet reached a finding of infringement. Under UK GDPR and the Data Protection Act 2018, potential fines could reach £17.5 million or 4% of global turnover.
The move signals heightened UK scrutiny of AI systems that generate synthetic or manipulated imagery.
9. Germany’s Federal Financial Supervisory Authority (BaFin) Issues Guidance on ICT Risks in AI Use Under DORA
February 1, 2026 | Germany
Germany’s financial regulator, Bundesanstalt für Finanzdienstleistungsaufsicht (BaFin), has published guidance on managing ICT risks linked to artificial intelligence in financial institutions.
The non-binding guidance supports compliance with the EU’s Digital Operational Resilience Act (DORA), focusing on ICT risk management and third-party ICT risk oversight across the AI lifecycle, from data acquisition and model development to deployment and decommissioning.
BaFin stresses that AI systems must be embedded within existing ICT risk frameworks, with resilience and security safeguards applied at every stage. The guidance is particularly relevant for institutions under the Capital Requirements Regulation and insurers supervised under Solvency II.
The move reflects increasing supervisory attention to operational and cyber resilience risks associated with AI deployment in the financial sector.
Asia Jurisdiction
10. China Cracks Down on Unlabeled AI-Generated Content, Shuts 13,421 Accounts
February 12, 2026 | China
China’s internet regulator, the Cyberspace Administration of China (CAC), announced it has shut down 13,421 online accounts for publishing AI-generated content without required AI identification labels.
Authorities removed more than 543,000 pieces of illegal or misleading content, including AI face-swapping and voice-cloned impersonations of public figures on platforms such as WeChat and Douyin. Some accounts were also found selling unauthorized AI-generated videos of celebrities.
The CAC stated it will continue strict enforcement against unlabeled AI-generated misinformation, signaling ongoing regulatory focus on transparency, synthetic media governance, and online ecosystem integrity.
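Operationally, labelling obligations of this kind are typically met by attaching a machine-readable disclosure to generated media before publication. The schema below is a simplified illustration only, not the CAC’s prescribed label format.

```python
import json
from datetime import datetime, timezone

def attach_ai_label(content_id: str, generator: str) -> str:
    """Produce a machine-readable disclosure record for a piece of
    AI-generated content (illustrative schema only)."""
    label = {
        "content_id": content_id,
        "ai_generated": True,
        "generator": generator,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label, ensure_ascii=False)

print(attach_ai_label("video-8841", generator="example-model-v1"))
```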
11. Korea & Singapore AI Safety Institutes Test AI Agents for “Benign” Data Leakage
February 1, 2026 | Singapore
The Korea and Singapore AI Safety Institutes completed a joint testing exercise assessing whether AI agents can execute realistic multi-step tasks without leaking sensitive data during routine (non-malicious) use.
The study evaluated common agent archetypes (customer service, enterprise productivity, personal productivity) across tool-based workflows (e.g., email, calendar, databases) and focused on three leakage patterns: lack of data awareness, lack of audience awareness, and failure to follow data-handling policies.
Key findings show meaningful leakage risk even in normal task execution, with large variation by model. In Singapore’s runs, “fully safe” completion rates ranged from as high as 57% down to 14% across tested models; Korea’s results followed a similar trend, ranging from 35% down to 3%.
The results reinforce that agent deployments need stronger guardrails, testing, and monitoring, especially where tools touch sensitive systems and accounts.
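A highly simplified sketch of the kind of check such a test harness performs, here for audience awareness, is shown below; the sensitive markers, audience policy, and scoring are assumptions for illustration, not the institutes’ methodology.

```python
SENSITIVE_MARKERS = {"passport_no", "salary", "medical_record"}

# Hypothetical policy: which audiences may ever see which sensitive fields.
AUDIENCE_POLICY = {
    "external_customer": set(),
    "internal_hr": {"salary"},
}

def leaks_sensitive_data(agent_output: str, audience: str) -> list[str]:
    """Return sensitive markers that appear in the output but are not
    permitted for this audience (audience-awareness check)."""
    permitted = AUDIENCE_POLICY.get(audience, set())
    return [m for m in SENSITIVE_MARKERS if m in agent_output and m not in permitted]

# Example run: an email draft to a customer that mentions a salary field fails.
draft = "Hi, per your request, the salary: 120000 and delivery date are attached."
print(leaks_sensitive_data(draft, audience="external_customer"))  # ['salary']
```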
Utah AI Transparency Act (HB 286): The bill continues to advance through the legislative process. If enacted, it would introduce new disclosure obligations around AI use, reinforcing state-level momentum toward transparency requirements in consumer-facing AI systems.
Vietnam AI Law (effective 1 March 2026): Vietnam’s new AI Law, built on a risk-based framework, will soon take effect. Organisations deploying AI in Vietnam should prepare for classification requirements, governance expectations for high-risk systems, and oversight by the Ministry of Science and Technology.