The European Commission tabled the proposal for the Artificial Intelligence Regulation on April 21, 2021. Since then, successive Council Presidencies have recommended revisions and amendments to the proposal. On May 3, 2022, the European Parliament adopted the final text.
The final text will be voted on jointly by the Internal Market and Consumer Protection (IMCO) and the Civil Liberties, Justice and Home Affairs (LIBE) committees in late September. The final adopted text is available here. Once the final text is adopted by the European Parliament and the member states, it will become directly enforceable across the EU.
The EU Artificial Intelligence Act is the first law on Artificial Intelligence in the world that aims to facilitate a single market for AI applications. It lays out general guidelines for applying AI-driven systems, products, and services within the EU with the aim of protecting the fundamental rights and interests of individuals. The AI Regulation covers the protection of both personal and non-personal data.
The Proposed Regulation applies to providers placing AI systems on the market and to users of AI systems. Providers are entities that develop AI systems and place them on the European Union market. The Regulation also applies to users and businesses that deploy AI systems in a professional (i.e., non-personal) capacity.
This Regulation applies to:
The following are exempted from the EU Artificial Intelligence Act:
The proposed legislation calls for establishing the European Artificial Intelligence Board (EAIB) as a new enforcement authority at the Union level. The EAIB will be responsible for establishing codes of conduct. Member states are also required to designate one or more national authorities to ensure compliance with the provisions of the Regulation.
An Artificial Intelligence System (AI system) is software developed with one or more of the techniques and approaches listed in Annex I that can generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
Annex I provides a finite list of the techniques and approaches that qualify software as an AI system, such as machine learning approaches, statistical approaches, and logic- and knowledge-based approaches.
The provider is defined as a "natural or legal person," "public authority," "agency," or "other body" that creates or commissions the creation of an artificial intelligence (AI) system to commercialize it or deploy it in service under its name or trademark, whether in exchange for money or for free.
An importer is defined as any natural or legal person established in the Union who places on the market or puts into service an AI system bearing the name or trademark of a natural or legal person established outside the Union.
User refers to any natural or legal person, public authority, agency, or other body using an AI system under its control, except when it is used during personal, non-professional activity.
An authorized representative is any natural or legal person established in the Union who has been permitted in writing by the provider of an AI system to carry out, on that provider's behalf, the obligations and procedures specified by this Regulation.
Law enforcement refers to actions taken by law enforcement officials to prevent, investigate, uncover, or bring criminal charges or carry out criminal sentences, including defending against and preventing threats to public safety.
National supervisory authority refers to the body to which a Member State delegates responsibility for carrying out and enforcing this Regulation, coordinating the tasks assigned to that Member State, serving as the Commission's sole point of contact, and speaking on that Member State's behalf at the European Artificial Intelligence Board.
Under the Proposed Regulation, AI systems are divided into four risk categories. The obligations imposed on an AI system vary depending on the risk category it falls into.
Systems falling under this category pose a clear threat to people's safety, livelihoods, and fundamental rights. Such AI systems are prohibited.
The following artificial intelligence systems cannot be placed on the market, put into service, or used:
This category includes artificial intelligence systems that pose a high risk to the health, safety, or fundamental rights of individuals. Such AI systems may be used subject to certain conditions and an ex-ante conformity assessment.
High-risk AI systems must ensure documentation, data quality, traceability, human oversight, data accuracy, cybersecurity, and robustness to limit any risks to the fundamental rights of individuals.
In addition, high-risk AI systems are subject to transparency requirements, i.e., providers of those AI systems must inform end-users that they are interacting with an AI system. Moreover, organizations must conduct ex-ante conformity assessments before placing such high-risk AI systems on the market.
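The pre-market check described above can be sketched as a simple gap analysis. This is an illustrative sketch only: the evidence names below paraphrase the obligations summarized in this article (documentation, data quality, traceability, human oversight, accuracy, cybersecurity, robustness) and are not the Act's formal wording.

```python
# Hypothetical pre-market checklist for a high-risk AI system.
# Item names are illustrative paraphrases, not legal terms.
REQUIRED_EVIDENCE = {
    "technical_documentation",
    "data_governance_report",
    "logging_and_traceability",
    "human_oversight_measures",
    "accuracy_and_robustness_tests",
    "cybersecurity_assessment",
}

def conformity_gaps(evidence_on_file: set[str]) -> set[str]:
    """Return the obligations still missing before market placement."""
    return REQUIRED_EVIDENCE - evidence_on_file

# Example: only two artifacts are on file, so four gaps remain.
gaps = conformity_gaps({"technical_documentation", "logging_and_traceability"})
print(sorted(gaps))
```

A real conformity assessment is of course far richer than a set difference; the point is simply that each obligation should map to verifiable evidence before the system reaches the market.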
The European Commission lists the following as high-risk AI systems:
AI systems that fall under this category must adhere to specific disclosure requirements. Unless it is clear from the context and circumstances of use, providers of such AI systems must ensure that natural persons are notified that they are engaging with an AI system.
This gives natural persons the ability to make an informed choice about whether to use the AI system in a particular situation. Users of the following AI systems, for instance, are required to be transparent:
The transparency obligations do not apply to AI systems that have been authorized by law for law enforcement purposes unless such systems are available for the public to report a criminal offense.
This category contains AI systems like spam filters or video games that use AI technology but pose little to no harm to citizens’ safety or rights. Most AI systems fit into this category, and the Regulation permits unrestricted use of these applications without imposing any new requirements.
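The four tiers and their headline consequences, as summarized in this article, can be captured in a simple lookup. The tier names follow the sections above; the obligation phrasing is a paraphrase, not the Act's legal text.

```python
# Simplified mapping of the four risk tiers described above.
# Obligation descriptions are paraphrases for illustration only.
RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": "permitted subject to ex-ante conformity assessment and ongoing obligations",
    "limited": "permitted subject to transparency and disclosure duties",
    "minimal": "permitted without additional requirements",
}

def obligation_for(tier: str) -> str:
    """Look up the headline obligation for a risk tier."""
    try:
        return RISK_TIERS[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")

print(obligation_for("limited"))
```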
The AI Regulation Proposal is without prejudice to the GDPR, and it must be read together with the GDPR regarding the fulfillment of data subjects’ rights.
The AI Regulation requires high-risk and limited-risk AI systems to keep individuals informed that they are interacting with an AI system unless it is clearly evident from the context and circumstances of use.
Organizations subject to the Proposed AI Regulation are required to facilitate the fulfillment of data subjects' rights as per the provisions of the GDPR wherever personal data processing is involved. The AI Regulation complements Article 22 of the GDPR, which grants individuals the right not to be subject to decisions based solely on automated processing.
If the Regulation is violated, there could be serious penalties:
Organizations that process personal data through the use of AI systems must align their operations and ensure their practices comply with the EU Artificial Intelligence Act by:
As countries witness a profound transition in the digital landscape, automating privacy and security processes for quick action is essential. Organizations must become even more privacy-conscious in their operations and diligent custodians of their customers' data.
Securiti uses the PrivacyOps architecture to provide end-to-end automation for businesses, combining reliability, intelligence, and simplicity. Securiti can assist you in complying with the EU Artificial Intelligence Act and other privacy and security standards worldwide. See how it works and request a demo today.
The European Union’s Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive law on AI that aims to facilitate a single market for AI applications. It specifies guidelines for applying AI-driven systems, products, and services within the EU territory to protect individuals' fundamental rights and interests, and it covers both personal and non-personal data.
Article 14 outlines the need for human oversight: high-risk AI systems must be designed and developed in such a way, including with appropriate human-machine interface tools, that natural persons can effectively oversee them during the period in which the AI system is in use.
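One common engineering pattern that the oversight requirement points toward is human-in-the-loop review: the system proposes an outcome, but a natural person can review and override it before it takes effect. The sketch below is a minimal illustration under assumed names; the threshold, field names, and review policy are hypothetical, not taken from the Act.

```python
# Minimal human-in-the-loop sketch (illustrative assumptions only).
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    subject_id: str
    score: float                      # model output, e.g. a risk score in [0, 1]
    approved: Optional[bool] = None   # set only once the decision is final

REVIEW_THRESHOLD = 0.8  # assumed policy: high scores require human review

def route(decision: Decision,
          human_review: Callable[[Decision], bool]) -> Decision:
    """Auto-approve below the threshold; otherwise defer to a human
    reviewer who can approve or override the automated outcome."""
    if decision.score >= REVIEW_THRESHOLD:
        decision.approved = human_review(decision)
    else:
        decision.approved = True
    return decision

# Example: a reviewer who rejects any flagged decision.
result = route(Decision("case-42", score=0.93), human_review=lambda d: False)
print(result.approved)  # False: the human overrode the automated outcome
```

The essential property is that high-stakes outputs never become effective without a natural person having had a genuine opportunity to intervene.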
The EU AI Act applies to providers putting AI systems on the market and to users of AI systems. Providers are entities that develop AI systems and place them on the EU market. The Act also applies to users and businesses that deploy AI systems in a professional (i.e., non-personal) capacity.
The EU AI Act was approved by the European Parliament on June 14, 2023. Negotiations among the European institutions are now underway to agree on the final text.
Omer Imran Malik (CIPP/US, CIPM) is a data privacy and technology lawyer with significant experience advising governments, technology companies, NGOs, and legislative think tanks on data privacy and technology-related legal issues, and is an expert in developing legal models for legal technology. He has been a prominent contributor to numerous esteemed publications, including Dawn News and the IAPP, and has spoken at the World Ethical Data Forum.
His in-depth knowledge and extensive experience in the industry make him a trusted source for cutting-edge insights and information in the ever-evolving world of data privacy, technology, and AI-related legal developments.