The European Union’s (EU) General Data Protection Regulation (GDPR) has emerged as a significant legal framework governing data privacy and protection. As the use of Artificial Intelligence (AI) continues to expand across industries, the impact of GDPR on AI has become increasingly relevant.
The inception of AI dates back to the 1950s when researchers and scientists envisioned the possibility of creating machines that could simulate human intelligence. Fast forward to 2023, groundbreaking developments in AI over the past decade have made AI an integral part of our modern society, impacting numerous industries, from healthcare and finance to transportation and entertainment.
While AI brings several opportunities, it also raises significant concerns due to its excessive reliance on data. Therefore, it becomes necessary to assess the use and operation of AI systems in light of the requirements of data privacy laws. Data privacy laws regulate the data lifecycle, including data collection and consent, data minimization and purpose limitation, transparency, algorithm bias and discrimination, and the security and protection of an individual’s data.
Regarding the impact of GDPR on AI, on 25 June 2020, the European Parliament published a study addressing the relationship between the GDPR and AI. The study analyzed how AI is regulated within the GDPR and examined the extent to which AI fits into the GDPR’s conceptual framework. The study's findings emphasize that while the GDPR can be used to regulate AI, it does not give controllers enough direction, and its prescriptions need to be expanded and concretized.
In response to the European Parliament’s initial study, the EU initiated the development of rules on AI with its proposed EU AI Act. The AI Act is one of the first global comprehensive AI laws proposed to regulate the development and use of AI systems. The proposed law seeks to ensure that AI systems used in the EU are transparent, reliable, and safe and that they respect fundamental rights and values.
Since it came into effect on May 25, 2018, the GDPR has given individuals greater control over their personal data and established guidelines for how organizations should collect, process, and store data. Because AI systems often rely on processing massive amounts of data, including personal data, to learn and enhance their performance, GDPR principles, rights, and provisions become crucial when designing and implementing AI systems:
Companies developing or using AI should determine whether they process personal data, and if so, what legal basis is applicable for their processing activities. They should comply with all requirements in relation to the legal basis they rely on. For example, if they rely on consent, they must ensure that the consent is free, informed, specific, and unambiguous.
AI developers and systems should collect and process only such personal data as is necessary to accomplish their intended purposes. In addition, the processing of personal data must be restricted to necessary purposes and only be done for legitimate, explicit, and specified purposes.
The GDPR emphasizes using anonymization and pseudonymization techniques to safeguard personal data and enhance an individual’s privacy. The GDPR does not regard anonymized data as personal data. Pseudonymization lowers the risk of re-identifying personal data. However, pseudonymized data is still considered personal data. Anonymization and pseudonymization are essential techniques in relation to the operation of AI systems that process personal data.
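The distinction between the two techniques can be sketched in code. Below is a minimal, illustrative pseudonymization using a keyed hash; the key name and record fields are hypothetical, and a production system would manage the key in a separate secrets store, as the GDPR's "additional information kept separately" requirement implies.

```python
import hmac
import hashlib

# The key is the "additional information" that must be kept separately
# from the data (GDPR Article 4(5)); this value is a placeholder.
SECRET_KEY = b"store-this-in-a-separate-key-vault"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Records remain linkable (the same input always yields the same
    pseudonym), which is why pseudonymized data is still personal data.
    """
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "age_band": "30-39"}
pseudonymized = {**record, "email": pseudonymize(record["email"])}

# Linkability is preserved for AI training, but the raw identifier is gone.
assert pseudonymized["email"] != record["email"]
```

By contrast, anonymization would irreversibly strip or generalize identifiers so that no key could ever restore them, taking the data outside the GDPR's scope entirely.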
AI systems processing the personal data of individuals must maintain accurate and up-to-date records of such personal data and not retain it for longer than necessary.
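The storage-limitation side of this can be sketched as a simple retention purge. The one-year period and record shape below are hypothetical; the actual retention period must be justified by the stated purpose.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: training data is kept for at most one year.
RETENTION = timedelta(days=365)

def purge_expired(records: list, now: datetime) -> list:
    """Drop records whose retention period has elapsed (storage limitation)."""
    return [r for r in records if now - r["collected_at"] < RETENTION]

now = datetime(2023, 11, 14, tzinfo=timezone.utc)
records = [
    {"id": 1, "collected_at": now - timedelta(days=30)},
    {"id": 2, "collected_at": now - timedelta(days=400)},  # past retention
]
kept = purge_expired(records, now)  # only record 1 survives
```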
GDPR mandates organizations making decisions based solely on automated processing that produce legal or similarly significant effects for data subjects to inform them of such activity, provide meaningful information about the logic involved, and explain the significance and envisaged consequences of the processing. The information should be specific and easily accessible.
The GDPR strongly emphasizes including data protection measures in AI systems from the design stage and throughout the lifecycle. Organizations must incorporate privacy-preserving technologies and default configurations to guarantee data security and privacy in AI applications.
According to Article 35 of the GDPR, organizations must undertake Data Protection Impact Assessments (DPIAs) for AI applications that are likely to pose a high risk to the rights and freedoms of individuals. Conducted before deploying AI systems, these assessments help identify and mitigate potential data protection risks.
Organizations must be accountable for the data processing performed by their AI systems and ensure that AI applications handling personal data are equipped with appropriate security controls to safeguard that data. Further, such organizations should adopt other technical and organizational measures in line with the nature of the risk posed by their processing activities.
GDPR emphasizes ensuring appropriate safeguards for cross-border transfers of personal data. Therefore, organizations developing and using AI systems need to ensure that if they transfer any personal data internationally, they have sufficient controls in place, such as Binding Corporate Rules (BCRs) or Standard Contractual Clauses (SCCs).
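Such a safeguard check can be sketched as a lookup against a registry of transfer mechanisms before any export. The registries below are deliberately abbreviated and the vendor list is hypothetical; a real implementation would track adequacy decisions and signed SCCs per recipient, not per country alone.

```python
# Abbreviated, illustrative registries of lawful transfer mechanisms.
EEA = {"DE", "FR", "IE", "NL"}            # intra-EEA: no extra safeguard needed
ADEQUACY_DECISIONS = {"JP", "CH", "NZ"}   # countries with EU adequacy decisions
SCC_IN_PLACE = {"US"}                     # destinations covered by signed SCCs

def transfer_allowed(destination: str) -> bool:
    """Check that a lawful transfer mechanism exists before exporting data."""
    return (
        destination in EEA
        or destination in ADEQUACY_DECISIONS
        or destination in SCC_IN_PLACE
    )

assert transfer_allowed("DE")      # intra-EEA
assert transfer_allowed("US")      # covered by Standard Contractual Clauses
assert not transfer_allowed("XX")  # no mechanism: block the transfer
```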
AI systems must respect and adhere to the data subject rights granted under GDPR, such as the right to access, the right to rectification, the right to erasure, the right to restrict processing, the right to data portability, and the right to object. The GDPR also prohibits subjecting individuals to automated decision-making unless one of the specified exceptions applies, i.e., the decision is necessary for a contract, based on explicit consent, or authorized by law.
AI is evolving rapidly, which underscores the importance of organizations understanding the implications of the usage of AI on data processing while ensuring compliance with GDPR.
Article 6 and Article 7 of the GDPR mandate that organizations must have a lawful basis for processing personal data. Consent is one of the lawful bases for processing personal data. Consent must be free, specific, informed, and an unambiguous indication of the data subject’s wishes. Additionally, organizations should provide individuals the right to withdraw their consent at any time.
If an organization developing or using AI processes personal data and relies on consent as the legal basis for such processing, it should ensure that it provides the data subjects with sufficient information regarding the processing as per the GDPR and obtains valid consent.
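Operationally, this means keeping an auditable consent record per subject and per purpose, and honoring withdrawal at any time (Article 7(3)). The sketch below is a minimal illustration; the class and field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                  # consent must be specific to a purpose
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        # GDPR Article 7(3): withdrawal must be possible at any time.
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def is_valid(self) -> bool:
        return self.withdrawn_at is None

consent = ConsentRecord("subj-42", "model-training", datetime.now(timezone.utc))
assert consent.is_valid
consent.withdraw()
assert not consent.is_valid  # processing on this basis must now stop
```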
According to Articles 12, 13, and 14 of GDPR, which address transparency and the right to information, businesses must inform individuals about how their personal data is processed in a clear, understandable, and easily accessible manner. This includes informing data subjects on the use of automated decision-making, such as profiling, and the logic behind it. Organizations should be open and honest about how AI is used, how the personal data of individuals is processed, the importance of AI-driven decisions, and any dangers or side effects that might be involved.
Deep learning algorithms, in particular, can be quite complicated and function as "black boxes." It is difficult to provide people with a clear and understandable description of the complex layers of computations and transformations that take place within these algorithms.
AI models continuously update and learn, adapting to new environments and data. Due to a model's dynamic nature, providing explanations can be difficult: its behavior may change over time, and earlier explanations may become outdated.
Some AI models and algorithms are protected by intellectual property laws and are developed using proprietary methods. Concerns about losing competitive advantage may arise if these models' inner workings are disclosed.
AI systems have the capacity to learn from biased input and make possibly biased decisions. Ethical issues can arise when attempting to justify choices made with biased data. Thus, this process must be handled carefully. When providing information about the processing of personal data by AI systems, organizations should consider the foregoing factors and provide accurate and easily understandable information to the data subjects.
Article 5(1)(c) of GDPR outlines the principle of data minimization and requires that personal data be adequate, relevant, and limited to what is necessary in relation to the purposes for which it is processed. In the context of AI, this means that organizations should only gather and process the minimum personal information required to carry out a particular processing operation.
It is crucial for AI applications to identify the data necessary for model training and decision-making without relying on excessive or irrelevant data. This significantly reduces sensitive data exposure, minimizes privacy concerns, and guarantees adherence to the data minimization principle.
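A common engineering pattern for this is an explicit allowlist of the fields the model actually needs, applied before data ever reaches the training pipeline. The field names below are hypothetical examples.

```python
# Fields actually needed for the (hypothetical) model; everything else is dropped.
NECESSARY_FIELDS = {"age_band", "region", "purchase_count"}

def minimize(record: dict) -> dict:
    """Keep only the fields required for the stated purpose (Art. 5(1)(c))."""
    return {k: v for k, v in record.items() if k in NECESSARY_FIELDS}

raw = {
    "name": "Jane Doe",           # not needed for training -> dropped
    "email": "jane@example.com",  # not needed -> dropped
    "age_band": "30-39",
    "region": "EU-West",
    "purchase_count": 7,
}
minimal = minimize(raw)  # only the three allowlisted fields remain
```

An allowlist is preferable to a blocklist here: new fields added upstream stay excluded by default instead of silently flowing into the model.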
GDPR’s Article 5(1)(b) outlines the purpose limitation principle: personal data must be collected for specified, explicit, and legitimate purposes and not further processed in a manner incompatible with those purposes. Further processing for archiving purposes in the public interest, or for scientific or historical research or statistical purposes, is generally not considered incompatible with the initial purposes, provided appropriate safeguards protect the rights and freedoms of data subjects. AI applications must comply with the intended purposes stated to data subjects at the time their data is collected. The purpose limitation principle is violated if the same data is used for unrelated purposes without additional consent or an otherwise applicable legal basis.
Large datasets can be necessary for optimizing the performance and accuracy of certain AI models. Since AI models thrive on abundant data for learning, this need can impede stringent data minimization efforts. AI systems may also find patterns or connections in data that the data controller did not initially intend or anticipate. As a result, it may become difficult to uphold the purpose limitation principle since new data uses may arise.
Data from third-party sources may be incorporated into AI models to improve their performance, making it difficult to ensure that this new data complies with GDPR guidelines, particularly if it was not initially obtained for that purpose. Therefore, organizations need to keep the principles of data minimization and purpose limitation at the forefront when planning and executing their processing operations to ensure their compliance with the GDPR.
GDPR places a general prohibition on decision-making based solely on automated processing, which produces legal effects concerning the data subject or similarly significantly affects them. This prohibition does not apply if the decision: (a) is necessary for entering into, or the performance of, a contract between the data subject and a data controller; (b) is authorized by EU or member state law to which the controller is subject; or (c) is based on the data subject’s explicit consent.
It is important to note that if the decision is necessary for the purposes of a contract or is based on consent, the data controller is required to implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express their point of view and to contest the decision.
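One way these safeguards show up in practice is routing uncertain model outputs to a human reviewer and attaching contest information to every automated outcome. The sketch below is purely illustrative; the threshold, uncertainty band, and endpoint path are hypothetical.

```python
def decide_with_safeguards(score: float, threshold: float = 0.5) -> dict:
    """Automated decision with Article 22(3)-style safeguards.

    Borderline cases go to a human reviewer rather than being decided
    solely by the model; every outcome records how it can be contested.
    """
    if abs(score - threshold) < 0.1:  # hypothetical uncertainty band
        return {"outcome": "pending", "route": "human_review"}
    outcome = "approved" if score >= threshold else "declined"
    return {
        "outcome": outcome,
        "route": "automated",
        "contest_url": "/decisions/contest",  # right to contest the decision
    }

assert decide_with_safeguards(0.55)["route"] == "human_review"
assert decide_with_safeguards(0.9)["outcome"] == "approved"
```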
Automated decision-making based on sensitive personal data shall not be conducted unless the data subject provides their explicit consent or the processing is necessary for reasons of substantial public interest, in accordance with EU or member state law, and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.
Article 32 of GDPR requires organizations to implement appropriate technical and organizational safeguards to ensure a level of security appropriate to the risk posed by their processing operations. This includes taking precautions to guard against accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to, personal data transmitted, stored, or otherwise processed. AI applications that handle personal data must apply corresponding security measures to protect the information they process.
Article 25 of GDPR introduces the concept of "data protection by design and by default." This principle mandates that organizations consider data security and privacy from the design stage of AI systems and throughout their lifecycle. It further requires incorporating privacy elements into the architecture of AI applications and ensuring that privacy settings default to their most protective options.
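In code, "by default" translates to the most privacy-protective values being the zero-configuration state, with broader processing requiring an explicit opt-in. The settings and values below are hypothetical examples of such defaults.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """Defaults follow Article 25(2): the strictest options come first.

    Users may opt in to broader processing, but never start there.
    """
    share_data_for_training: bool = False  # off by default
    personalized_profiling: bool = False   # off by default
    retention_days: int = 30               # shortest supported period

settings = PrivacySettings()  # a new user gets the strict defaults
assert not settings.share_data_for_training
assert not settings.personalized_profiling

settings.share_data_for_training = True  # explicit opt-in only
```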
With AI evolving and being adopted rapidly, it is crucial to establish AI practices that comply with the GDPR.
The GDPR profoundly impacts AI technologies' development, deployment, and usage. It strongly emphasizes the privacy of individuals and their rights and imposes strict guidelines for processing personal data.
Controllers using AI for processing should comply with the principles of the GDPR and take a prudent and risk-focused approach. It is important to note that the EU AI Act is also expected to come into effect in the future, providing more concrete compliance obligations for AI systems.
If you’re struggling to comply with the GDPR, request a demo to witness how Securiti can help you in your GDPR compliance journey.