The advent of Artificial Intelligence (AI), and Generative AI (GenAI) in particular, has pushed technology to new heights, carrying it out of the realm of fiction and into everyday reality. From powering virtual assistants and self-driving cars to enhancing medical diagnosis, AI is helping industries break boundaries and explore new avenues.
As AI continues to fuel industries, businesses must learn how to harness its potential while safely navigating the complex maze of AI regulations.
Globally, lawmakers are racing to introduce AI laws that rein in this ever-evolving technology and enable its safe use. Consequently, businesses must thoroughly understand the AI laws that apply to them to avoid costly legal penalties, reputational damage, and missed opportunities.
Read on to learn more about the importance of compliance with global artificial intelligence regulations and what businesses need to know about the laws proposed or enacted to date.
AI is one of the most remarkable technological breakthroughs of recent decades. Amid its rapid evolution, however, businesses must not ignore one critical aspect: AI regulatory compliance. Its significance is not limited to avoiding regulatory fines; it goes well beyond that.
Alongside boundless opportunities and innovation, AI has brought many security, privacy, and ethical concerns. For instance, AI may produce biased content when trained on skewed or incomplete data. And with employees leveraging AI for day-to-day tasks, sharing sensitive or intellectual property (IP) data in AI prompts has become a serious concern. Issues like these have prompted lawmakers worldwide to propose robust regulations and frameworks governing AI's use.
With regulatory policies come penalties for violations and non-compliance, which can expose businesses to financial and reputational loss. Businesses must therefore adhere to applicable regulations to avoid such repercussions.
Good practices drive trust, and trust drives successful businesses. Technological advances have made customers more aware than ever before. While customers understand how beneficial it is to leverage AI, they also know that its unsupervised or improper use can lead to risks such as bias, data leaks, or IP exposure.
Regulatory compliance is the most effective way to show that, as a business, you prioritize data security, integrity, and privacy. It also demonstrates your commitment to the ethical use of AI and related technologies and, most importantly, to transparency.
Successful businesses need access to international markets to grow into multinational organizations, and to achieve that, they must first meet the regulatory requirements of those markets. Every country has its own laws that businesses must satisfy to operate there, particularly rules on cross-border data transfers. A robust AI regulatory strategy can help businesses streamline their privacy operations, especially data mapping and cross-border data transfers, and thereby gain access to global markets.
Regulations do not hinder or impede innovation. On the contrary, laws like AI regulations are designed to foster innovation responsibly, safely, and ethically. They help businesses set boundaries around customer privacy and rights, transparency of business practices, and data security measures, to name a few. By demonstrating compliance, businesses can explore new avenues, innovations, and opportunities without worrying about the risks of non-compliance.
With the arrival of GenAI technologies like ChatGPT, there has been a significant rise in the development of new AI applications as well as AI laws. Businesses must proactively understand the core components of these laws and begin preparing now to stay ahead of the curve.
Following is a brief overview of some of the recent developments in the AI regulatory landscape:
In May 2023, the Brazilian Senate introduced Bill of Law 2338/2023. Still under consideration by the legislature, the bill proposes guardrails around AI accessibility and users' privacy rights. Once enacted, the regulation will apply to individuals and entities that supply or operate artificial intelligence systems, collectively referred to as AI agents. Some of the proposed regulation's salient features follow:
First, the bill empowers users affected by AI systems with data privacy rights.
Individuals are further granted the right to receive, before contracting for or using the system, clear information about the AI system's use, a description of it, the associated AI operator, the categories of personal data used to train it, and its security and reliability.
Suppliers of AI systems and applications must conduct an extensive preliminary risk assessment before placing a system on the market or putting it into service. Based on that assessment, the AI system is categorized as either an excessive-risk or a high-risk system.
The excessive-risk category covers AI systems that employ subliminal techniques to influence individuals in ways harmful to their health or safety, or that exploit vulnerabilities of specific groups of individuals, such as those related to mental health.
The high-risk category covers AI systems used for purposes such as safety components of critical infrastructure, biometric identification, criminal investigation, educational or professional training, recruiting, and autonomous vehicles, to name a few.
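To make this two-tier classification concrete, here is a minimal sketch, in Python, of how a supplier might record the outcome of the preliminary risk assessment. The category names mirror the bill's tiers; the function and flag names are illustrative assumptions, not anything defined in the bill.

```python
from enum import Enum, auto

class RiskCategory(Enum):
    EXCESSIVE = auto()  # banned outright: subliminal or exploitative techniques
    HIGH = auto()       # e.g., critical infrastructure, biometrics, recruiting
    OTHER = auto()      # systems outside the bill's two named tiers

def classify(uses_subliminal_techniques: bool,
             exploits_vulnerable_groups: bool,
             serves_high_risk_purpose: bool) -> RiskCategory:
    """Record the outcome of the bill's preliminary risk assessment."""
    # Either banned practice places the system in the excessive-risk tier.
    if uses_subliminal_techniques or exploits_vulnerable_groups:
        return RiskCategory.EXCESSIVE
    # Otherwise, a listed purpose (recruiting, biometrics, etc.) makes it high risk.
    if serves_high_risk_purpose:
        return RiskCategory.HIGH
    return RiskCategory.OTHER

# A recruiting tool falls under the bill's listed high-risk purposes.
assert classify(False, False, serves_high_risk_purpose=True) is RiskCategory.HIGH
```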
The regulation requires AI agents (suppliers and operators) to establish a governance framework with internal policies and processes that ensure compliance with these requirements.
AI agents must also conduct an algorithmic impact assessment for high-risk AI systems. The assessment must be carried out by a team of professionals with legal and technical expertise, who must consider and record the system's known and foreseeable risks along with appropriate mitigation measures.
AI agents are liable for redressing any damage caused by a high-risk AI system. For non-high-risk systems, the agent causing the harm is presumed to be at fault, and the burden of proof shifts in favor of the victim.
The European Union rarely lags behind when it comes to the rights and privacy of its citizens. The European Commission proposed the Artificial Intelligence Act (AI Act) in April 2021 to ensure the regulated, safer development and use of AI technologies. Unlike the General Data Protection Regulation (GDPR), however, the AI Act takes a risk-based approach to regulating the development and deployment of AI: the severity of its provisions and the penalties for violations depend on the risk category.
The unacceptable-risk category covers AI systems deemed a threat to individuals, their livelihood, or their privacy, and the AI Act bans such systems outright. It includes, for example, systems that manipulate behavior in harmful ways, enable social scoring, or perform real-time remote biometric identification in public spaces.
AI tools or applications that negatively impact individuals' rights or safety are categorized as high-risk systems. This category is further divided into two sub-categories: AI systems used in products covered by the EU's product safety legislation, and AI systems in specific areas that must be registered in an EU database.
This category covers AI systems that pose minimal or no risk to their users' safety, such as AI-enabled video games or spam filters. These systems are, at most, subject to transparency requirements.
The severity of the penalties depends on the risk category, but the AI Act is clear that violators face fines of up to 30 million euros or 6% of global annual turnover, whichever is higher.
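As a quick illustration of how such a turnover-based cap works in practice, here is a small sketch using the figures from the proposal; it is arithmetic only, not legal guidance:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on an AI Act fine: the higher of the two caps applies."""
    FIXED_CAP = 30_000_000       # EUR 30 million
    TURNOVER_SHARE = 0.06        # 6% of worldwide annual turnover
    return max(FIXED_CAP, TURNOVER_SHARE * global_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 6% is EUR 60 million,
# which exceeds the fixed EUR 30 million cap, so the higher figure applies.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```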
On June 7, 2023, Connecticut Governor Ned Lamont signed into law Senate Bill 1103, An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy (the Act). The Act establishes an AI task force and an Office of Artificial Intelligence and calls for the development of an AI bill of rights.
The Act also requires the Department of Administrative Services to create and maintain an inventory of all AI systems used by any state agency.
The Act further tasks the Department of Administrative Services with carrying out ongoing impact assessments of systems that use AI to ensure they do not result in unlawful discrimination or an unlawful disparate impact on individuals.
Effective January 2020, the US state of Illinois' Artificial Intelligence Video Interview Act is a brief AI regulation that specifically targets employers who use AI systems to conduct and analyze video interviews of job candidates.
The regulation obligates employers in Illinois to notify job applicants, before the interview, that AI will be used to analyze their video interview. Applicants must also be told how the AI works and what characteristics it uses to evaluate them. Most importantly, employers may not proceed with an AI-analyzed video interview without the applicant's explicit consent.
The AI Video Interview Act further restricts employers from sharing interview videos, except with persons whose expertise or technology is necessary to evaluate the applicant. Upon a deletion request from the applicant, employers must comply and delete the video within 30 days of receipt.
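For teams building hiring tools, the Act's obligations translate naturally into pre-interview checks and a deletion deadline. The sketch below, with illustrative class and field names of our own invention, shows one way to track them:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

DELETION_WINDOW_DAYS = 30  # the Act's 30-day deletion deadline

@dataclass
class AIVideoInterview:
    """Tracks the Act's consent and deletion obligations for one candidate."""
    candidate: str
    notified_of_ai_use: bool = False      # applicant told AI will be used
    how_ai_works_explained: bool = False  # applicant told how the AI evaluates
    consent_given: bool = False           # explicit consent obtained
    deletion_requested_on: Optional[date] = None

    def may_proceed(self) -> bool:
        # The interview may only go ahead once notice, explanation,
        # and explicit consent are all in place.
        return (self.notified_of_ai_use
                and self.how_ai_works_explained
                and self.consent_given)

    def deletion_deadline(self) -> Optional[date]:
        # The video must be deleted within 30 days of a deletion request.
        if self.deletion_requested_on is None:
            return None
        return self.deletion_requested_on + timedelta(days=DELETION_WINDOW_DAYS)

# Example: consent has not been given yet, so the interview must not proceed.
record = AIVideoInterview(candidate="Jane Doe",
                          notified_of_ai_use=True,
                          how_ai_works_explained=True)
assert not record.may_proceed()
```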
The People's Republic of China does not yet have a dedicated, comprehensive AI regulation. However, its Provisions on the Administration of Deep Synthesis of Internet-based Information Services govern the use of deep synthesis services, which cover deepfakes and similar AI-generated content.
The provisions on the use of deep synthesis services can be grouped into three components:
First, the regulation prohibits using deep synthesis services to produce content banned by PRC law or content that endangers national security or the economy. Such services may not be used to produce false news and must be managed responsibly.
Second, providers of such services must strengthen the management of the training data used for deep synthesis and must adhere to relevant data protection laws whenever that data includes personal information.
Third, violations of any provision are subject to penalties imposed by the relevant public security authorities, and any suspected criminal activity triggers a criminal investigation.
Businesses that deploy or use AI technologies for their operations in Shanghai are required to adhere to the Shanghai AI regulation. The regulation applies to a wide range of activities, including but not limited to industrial development and governance and AI science and technology (S&T) innovation.
The municipal Economy and Information departments are the main bodies responsible for planning and implementing the regulation and for developing the AI industry. The Shanghai AI regulation is designed to strengthen the region's new generation of AI S&T resources and foster deep integration of AI technologies.
The Shenzhen AI regulation focuses on fostering the development of the AI industry, particularly within the Shenzhen Special Economic Zone. Like the EU AI Act, it takes a risk-based approach to governing AI use. High-risk AI systems, for instance, require assessments to identify risks early in the system's life cycle, while medium- and low-risk systems must comply with pre-disclosure and post-tracking provisions. Moreover, AI systems categorized as low-risk may undergo trials and testing ahead of local norms, provided they comply with international standards.
Introduced in March 2023, the UK Data Protection and Digital Information (No. 2) Bill (the Bill) passed its second reading in April 2023. The Bill proposes changes to the UK General Data Protection Regulation (UK GDPR), including provisions on AI and automated decision-making. Among other key considerations, the Bill addresses the risks associated with AI-powered automated decision-making and the data protection controls required to manage them.
In June 2022, the Canadian government proposed the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27. AIDA aims to set out new measures to regulate international and inter-provincial trade and commerce in AI systems and to establish common requirements for designing, developing, and using them.
If enacted, the law will apply to entities that design, develop, or make AI systems available for use. It primarily targets high-impact AI systems and imposes a number of compliance obligations on covered entities, centered on identifying and mitigating potential risks of harm to consumers and giving notice when harm occurs. Key obligations include disclosure requirements, data anonymization, risk assessments, and notification requirements relating to material harm caused by AI systems.
As AI applications and technologies proliferate, they will give rise to more security and privacy concerns. Hence, it is reasonable to believe that businesses will see more AI regulations proposed and passed in the coming years.
Built on a Unified Data Controls (UDC) framework, Securiti’s Data Command Center is designed to help organizations enable the responsible use of AI through contextual data insights and unified data controls.
Via a single Data Command Center, organizations can gain deeper contextual insights into sensitive data, classify data with high accuracy, establish granular controls around sensitive data access, enable secure data sharing with differential privacy, and streamline compliance automation with integrated regulatory intelligence.