
Navigating the AI Regulatory Maze: A Guide for Businesses in 2025

Published January 1, 2024
Contributors

Anas Baig

Product Marketing Manager at Securiti

Omer Imran Malik

Data Privacy Legal Manager, Securiti

FIP, CIPT, CIPM, CIPP/US

The advent of Artificial Intelligence (AI), specifically Generative AI (GenAI), has pushed technology to greater heights, finally transcending the world of fiction into reality. From powering virtual assistants and self-driving cars to enhancing medical diagnosis, AI is helping industries break boundaries and explore new avenues.

As AI continues to fuel industries, businesses must learn how to harness its potential while safely navigating the complex maze of AI regulations.

Globally, lawmakers are racing to introduce AI laws that rein in this ever-evolving technology and enable its safe use. Consequently, businesses must thoroughly understand the AI laws applicable to them to avoid costly legal penalties, reputational damage, and missed opportunities.

Read on to learn more about the importance of compliance with global artificial intelligence regulations and what businesses need to know about the laws proposed or enacted to date.

The Importance of AI Regulatory Compliance for Businesses

AI is one of the most significant technological breakthroughs of recent decades. Amidst its rapid evolution, however, businesses must not overlook one critical aspect: AI regulatory compliance. The significance of compliance isn't limited to avoiding regulatory fines; it goes well beyond that.

Prevent Regulatory Fines & Penalties

Alongside boundless opportunities and innovation, AI has brought many security, privacy, and ethical concerns. For instance, AI may produce biased content when trained on biased data. With employees leveraging AI for day-to-day tasks, sharing sensitive or intellectual property (IP) data in AI prompts has become a serious concern. Issues like these have prompted lawmakers worldwide to propose robust AI regulations and frameworks.

Regulatory policies bring penalties for violations or non-compliance, which can lead to financial and reputational loss. Hence, businesses must adhere to applicable regulations to avoid these repercussions.

Improve Customer Trust

Good practices drive trust, and trust drives successful businesses. Technological advancement has made customers more aware than ever before. While customers understand how beneficial AI can be, they also know that its unsupervised or improper use may lead to risks such as bias, data leaks, or IP exposure.

Regulatory compliance is one of the most effective ways to show how, as a business, you prioritize data security, integrity, and privacy. It also demonstrates your commitment to the ethical use of AI and related technologies and, most importantly, to transparency.

Gain Access to International Markets

Successful businesses need access to international markets to grow into multinational organizations. To achieve that, they must first meet the regulatory requirements of those markets. Nearly every country has its own laws that businesses must satisfy to operate there, particularly cross-border data transfer rules. A robust AI regulatory strategy can help businesses streamline their privacy operations, especially data mapping and cross-border data transfers, to gain access to global markets.

Drive Innovations Securely

Regulations do not inherently hinder or impede innovation. On the contrary, AI regulations are designed to foster innovation responsibly, safely, and ethically. They help businesses set boundaries around customer privacy and rights, transparency in business practices, and enhanced data security measures, to name a few. By demonstrating compliance, businesses can explore new avenues, innovations, and opportunities without worrying about the risks of non-compliance.

Global AI Regulations in 2023 & Beyond

With the inception of GenAI technologies like ChatGPT, there has been a significant rise in the development of new AI applications as well as AI laws. Businesses must stay proactive in understanding the core components of such laws and start preparing to get ahead of the curve.

Following is a brief overview of some of the recent developments in the AI regulatory landscape:

Brazil Bill of Law 2338

In May 2023, the Brazilian Senate introduced the Bill of Law 2338/2023. Still under consideration by the legislature, the bill proposes the establishment of guardrails around AI accessibility and users’ privacy rights. Once enacted, the regulation shall apply to individuals and entities that either employ or use artificial intelligence systems or supply them, collectively referred to as AI agents. Following are some of the salient features of the proposed regulation:

Rights of Individuals Affected by AI Systems

Firstly, the bill empowers users who are affected by AI systems with data privacy rights, such as:

  • Right to prior information related to users’ interaction with the AI.
  • Right to an explanation of any sort of AI-recommended decision or suggestion.
  • Right to challenge such decisions or recommendations.
  • Right to human interference or determination for AI-related decisions.
  • Right to privacy and protection of personal data.

Individuals are further granted the right to receive, before contracting or using the system, clear information about the user of the AI system, its description, associated AI operator, categories of personal data used for training AI, and security or reliability of the AI system.

Assessment for Risk Categorization

The suppliers of AI systems and applications must conduct an extensive preliminary risk assessment before placing the system on the market or putting it into service. As a result of the risk assessment, the AI system is to be categorized as either an excessive risk or a high-risk AI system.

Excessive Risk

The excessive risk category includes AI systems that either employ subliminal techniques to influence individuals in ways harmful to their health or safety, or that exploit the vulnerabilities of specific groups of individuals, such as those related to mental health.

High Risk

The category of high risk includes those AI systems that are used for purposes like safety devices for critical infrastructures, biometric identification, criminal investigation, educational or professional training, recruiting, and autonomous vehicles, to name a few.

AI Governance Guidelines

The regulation requires AI agents (suppliers and operators) to establish a governance framework while making sure it includes policies and processes to ensure the following:

  • Transparency of the AI system’s interaction with individuals.
  • Transparency with regard to governance measures associated with the development and use of AI systems.
  • Data processing in compliance with applicable laws and regulations.
  • Adoption of security measures.
  • Additional appropriate measures related to high-risk AI systems.

Algorithmic Impact Assessment

AI agents must conduct an algorithmic impact assessment for high-risk AI systems. The assessment must be conducted by a team of professionals with legal and technical knowledge to consider and record known and foreseeable risks of the AI system and appropriate mitigation measures.

Civil Liabilities

AI agents are responsible for repairing any damage caused by a high-risk AI system. For non-high-risk AI systems, fault is presumed on the part of the agent that caused the harm, and the burden of proof is shifted in favor of the victim.

EU AI Act

The European Union has consistently been at the forefront of protecting the rights and privacy of EU citizens. The European Commission proposed the Artificial Intelligence Act (AI Act) in April 2021 to ensure the regulated and safer development and use of AI technologies. Unlike the General Data Protection Regulation (GDPR), however, the AI Act takes a risk-based approach to regulating the development and deployment of AI: the severity of the provisions and the pertinent penalties for violations depend on the risk category.

Unacceptable Risk

This risk category includes AI systems deemed to be a threat to individuals, their livelihood, or privacy. Hence, the AI Act completely bans such systems that fall under the unacceptable risk category. It includes systems that:

  • Cognitively manipulate the behavior of individuals or vulnerable groups, such as voice-activated toys that encourage dangerous behavior.
  • Socially score or classify individuals based on their personal or behavioral characteristics.
  • Perform real-time remote biometric identification.

High-Risk

AI tools or applications negatively impacting individuals' rights or safety are categorized as high-risk systems. This risk category is further divided into two sub-categories:

  • AI systems used as safety components in products subject to third-party ex-ante conformity assessment.
  • AI systems that fall under eight specific areas listed in Annex III of the AI Act, such as biometric identification, critical infrastructure management, education and vocational training, and employment.

Low or Minimal Risk

This category includes AI systems that pose minimal or no risks to the safety of their users, such as AI-enabled video games or spam filters. These AI systems are only subject to transparency requirements.

As noted, the severity of the provisions and penalties depends on the risk category. The AI Act makes clear, however, that violators face fines of up to 30 million euros or 6% of global annual turnover, whichever is higher.
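For illustration only (and not legal advice), the "whichever is higher" fine ceiling described above can be expressed as a simple formula; the function name and the assumption that turnover is known in euros are ours, not the Act's:

```python
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine: EUR 30 million or 6% of
    worldwide annual turnover, whichever is greater."""
    return max(30_000_000.0, 0.06 * annual_turnover_eur)
```

For a company with EUR 1 billion in turnover, the 6% prong dominates (EUR 60 million); for smaller companies, the EUR 30 million floor applies.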

Connecticut AI Law

On June 7, 2023, Connecticut Governor Ned Lamont signed into law Senate Bill 1103 - An Act Concerning Artificial Intelligence, Automated Decision-making and Personal Data Privacy (Act). The Act establishes an AI task force and an Office of Artificial Intelligence to propose an AI Bill of Rights.

The Act also requires the Department of Administrative Services to create an inventory of all AI systems used by any state agency. The inventory should contain the following information:

  • Name of the vendor and the AI system.
  • Information regarding its general capabilities.
  • Status regarding any impact assessment carried out on the system.
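As a sketch only, the inventory entries above could be modeled as simple records and queried for outstanding assessments; the field names and status values below are hypothetical, not prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a state agency's AI inventory (illustrative fields)."""
    vendor_name: str
    system_name: str
    general_capabilities: str
    impact_assessment_status: str  # e.g. "pending", "in progress", "completed"

def pending_assessments(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Return systems whose impact assessment is not yet completed."""
    return [r for r in inventory if r.impact_assessment_status != "completed"]
```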

The bill further tasks the Department of Administrative Services with carrying out ongoing impact assessments of systems that use AI to ensure they do not result in unlawful discrimination or an unlawful disparate impact on individuals.

Illinois AI Video Interview Act

In January 2020, the US state of Illinois passed the Artificial Intelligence Video Interview Act. It is a brief AI regulation that specifically targets employers deploying AI systems for conducting and analyzing video interviews of candidates.

The regulation obligates employers in Illinois to notify job applicants before the interview that AI will be used to analyze their video interview. Applicants must also be informed how the AI works and how it evaluates candidates. Most importantly, employers may not proceed with an AI video interview without the applicant's explicit consent.

The AI Video Interview Act further restricts employers from sharing the video interview with anyone except persons whose expertise is needed to evaluate the applicant. If an applicant requests deletion, employers must comply and delete the video within 30 days of receiving the request.
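The 30-day deletion window above is simple date arithmetic; a compliance workflow might compute the deadline like this (a minimal sketch, assuming the clock starts on the day the request is received):

```python
from datetime import date, timedelta

DELETION_WINDOW_DAYS = 30  # statutory window from receipt of the request

def deletion_deadline(received: date) -> date:
    """Latest date by which the interview video must be deleted."""
    return received + timedelta(days=DELETION_WINDOW_DAYS)
```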

China Administration of Automated Deep Synthesis

The People's Republic of China doesn't have a single dedicated AI regulation. However, the Administration of Deep Synthesis of Internet-based Information Services contains provisions governing the use of deep synthesis services, which cover deepfakes and similar AI-generated content.

The provisions related to the employment or use of deep synthetic AI services may be categorized into three components:

General Provisions

The regulation prohibits using AI deep synthesis services to produce any content banned by PRC laws or that endangers national security or the economy. Such services must not be used to produce false news and must be managed responsibly.

Data and Technology Management

The providers of such services must reinforce the management of training data used for deep synthesis and must adhere to the relevant data protection laws if the training data includes personal information.

Supervision and Legal Liability

In case of violation of any provision, the relevant public security body shall impose penalties or punishment. In the event of criminal activity, a criminal investigation shall be carried out.

Shanghai Regulations

Businesses that deploy or use AI technologies for their operations in Shanghai are required to adhere to the Shanghai AI regulation. The regulation applies to a wide range of activities, including but not limited to industrial governance and development and AI Science and Technology (S&T) innovation.

The local municipal departments of Economy and Information are the main bodies responsible for planning and implementing the regulation and for development of the AI industry. The Shanghai AI regulation is designed to reinforce the new generation of AI S&T resources and foster deep integration of AI technologies across industries.

Shenzhen AI Regulations

The Shenzhen AI regulation focuses on fostering the development of AI-powered industry in the Shenzhen Special Economic Zone. Like the EU AI Act, it takes a risk-based approach to governing AI use. For instance, high-risk AI systems require assessments to identify risks at an early stage, while medium- and low-risk AI systems must comply with pre-disclosure and post-tracking provisions. Moreover, AI systems categorized as low-risk are allowed to undergo trials and testing without local norms if they comply with international standards.

UK Data Protection and Digital Information (No. 2) Bill

Introduced in March 2023, the UK Data Protection and Digital Information (No. 2) Bill (the Bill) had its second reading in April 2023. The Bill proposes changes to the UK General Data Protection Regulation (UK GDPR), including provisions on AI and automated decision-making. Among other key considerations, the Bill recommends provisions for addressing the risks associated with AI-powered automated decision-making and determining the data protection controls required.

Canada Artificial Intelligence and Data Act (AIDA)

In June 2022, the Canadian government proposed the Artificial Intelligence and Data Act (AIDA) as part of its Bill C-27. AIDA aims to set out new measures to regulate international and inter-provincial trade and commerce in AI systems and establish common requirements for designing, developing, and using AI systems.

If enacted, the law will apply to entities that design, develop, or make AI systems available for use. It primarily aims to regulate high-impact AI systems and provides a number of compliance obligations for covered entities, focused on identifying, mitigating, and giving notice of potential risks of harm to consumers. Key obligations include disclosure requirements, data anonymization, risk assessments, and notification requirements in relation to material harm caused by AI systems.

How Securiti Can Help

As AI applications and technologies proliferate, they will give rise to more security and privacy concerns. Hence, it is reasonable to believe that businesses will see more AI regulations proposed and passed in the coming years.

Securiti's Data Command Center is designed to help organizations enable the responsible use of AI through contextual data insights and unified data controls.

Via a single Data Command Center, organizations can gain deeper contextual insight into sensitive data, classify data with high accuracy, establish granular controls around sensitive data access, foster secure data sharing with differential privacy, and streamline compliance automation with integrated regulatory intelligence.
