The accelerated development of artificial intelligence (AI) technologies has prompted notable concerns regarding data privacy and protection. In response to growing AI concerns, the Austrian Data Protection Authority (Datenschutzbehörde, DSB) recently published a comprehensive set of Frequently Asked Questions (FAQs) that addresses the intersection of AI and data protection.
These FAQs aim to provide guidance to both developers and users of AI technologies, shedding light on how the GDPR and the EU AI Act apply to AI systems.
1. What is meant by AI or AI systems?
The EU AI Act defines an AI system in Article 3(1) as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
In essence, these are computer systems that perform tasks normally requiring human intellect, such as problem-solving, learning, decision-making, and interacting with their surroundings much as humans do. Generative AI (GenAI), by contrast, specifically denotes systems that produce new outputs, such as text, audio, images, or videos, in response to user inputs or prompts.
Learn more about the EU AI Act, the world’s first comprehensive AI law. Additionally, learn how the EU AI Act shapes AI governance.
2. What laws govern the use of AI systems?
The legal framework for AI systems in the EU rests on several key instruments. The EU AI Regulation, adopted on May 22, 2024, sets harmonized rules for the development, marketing, and deployment of AI systems. A proposed AI Liability Directive would adapt non-contractual civil liability rules to AI, and copyright provisions apply as well. Because the use of AI commonly involves the processing of personal data, the GDPR and the Austrian Data Protection Act (DSG) also apply.
3. How do the GDPR and the EU AI Act relate to each other?
As stated in Article 2(7) of the AI Regulation, the AI Regulation does not affect the GDPR, the work of the data protection authority, or the obligations of providers and operators of AI systems acting as controllers or processors.
In essence, when personal data is processed, the GDPR continues to apply. Consequently, the data protection authority remains responsible for resolving data protection concerns associated with AI systems.
4. Who is the responsible authority?
The EU AI Act authorizes one or more authorities to conduct market surveillance. The primary goal of market surveillance is to ensure that high-risk AI systems comply with the requirements of the AI Regulation. Which authority will assume this role in Austria has not yet been confirmed. The EU Commission is also equipped with certain enforcement powers. To support the implementation of the AI Regulation, an AI Service Center has been established at RTR GmbH, serving as a central contact point and information hub for all AI-related inquiries and resources.
The supervisory authorities responsible for the Police and Justice Directive act as market surveillance authorities for high-risk AI systems in areas such as law enforcement, border management, justice, and democracy. In Austria, the data protection authority carries out this role in line with Sections 18 and 31 of the Data Protection Act.
5. Can individuals file a complaint with the regulatory authority regarding AI systems?
An individual (data subject) may file a complaint with the data protection authority if they believe that using an AI system and the related processing of their personal data has violated the DSG or GDPR.
6. What special data protection clauses does the AI Regulation contain?
The GDPR is cited several times in the AI Regulation, including when defining terms like personal data, biometric data, and profiling.
The AI Regulation permits, in certain situations, the processing of "sensitive" data as defined in Art. 9 GDPR in order to identify "biases" in an AI system. Under Art. 10(5) of the AI Regulation, the data strictly necessary for this purpose must be documented in the record of processing activities pursuant to Art. 30 GDPR, together with an explanation of why the objective could not be achieved by processing other data.
Additionally, if personal data is processed, the EU declaration of conformity for high-risk AI systems under Art. 47 AI Regulation must state, among other things, that the AI system (or the data processing conducted within the AI system's framework) complies with the requirements of the GDPR or the Police and Justice Directive.
7. What data protection obligations must be observed when using AI systems?
The GDPR takes a technology-neutral stance, which means that it treats AI systems similarly to other means of processing personal data rather than singling them out for particular scrutiny. In essence, AI is subject to the same laws and regulations on data protection as any other kind of data processing.
Nevertheless, data protection is highly relevant to AI systems, particularly those based on machine learning, which frequently process personal data during both the training and operational phases.
Principles
The GDPR sets forth several key principles that must be followed whenever personal data is processed, and it is the controller's responsibility to prove that they are adhering to these principles (Article 5(1) and (2) GDPR). These include lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality, and accountability. These principles must also be followed when utilizing AI systems and processing personal data.
Legal basis
For personal data to be processed, at least one of the six legal bases listed in Article 6(1) GDPR must be met. These include consent, performance of a contract, compliance with a legal obligation, protection of vital interests, performance of a task in the public interest, and legitimate interests.
When processing sensitive data (special categories of data as defined in Article 9(1) GDPR), an exception to the prohibition under Article 9(2) GDPR is also required, which provides stricter conditions compared to the permissions in Article 6(1) GDPR.
Processing in good faith; Transparency
A general concept known as "fair processing" states that personal data must not be processed in a manner that would unfairly disadvantage, discriminate against, surprise, or mislead the data subject. In particular, risk must not be shifted from the controller to the data subject, for example via a clause in the terms and conditions. This is closely related to the principle of transparency, which requires that the data subject be informed about the processing of their personal data.
Purpose limitation, data minimization, and storage limitation
Organizations processing personal data, including in the context of AI systems, must have a clear and well-defined purpose. The data must be relevant and necessary for that purpose and may be processed and retained only for as long as is needed to achieve it.
Accuracy
Under the principle of accuracy, personal data must be accurate and, where necessary, kept up to date. Every reasonable step must be taken, having regard to the purposes of the processing, to ensure that inaccurate personal data is erased or rectified without delay.
This presents a particular challenge for (text-)generating systems, as the systems currently in use produce output that is statistically probable but not necessarily factually accurate. Data subjects should therefore be notified that the results generated by these technologies may be inaccurate or misleading.
Integrity and confidentiality (security)
When using AI systems for processing, appropriate security measures must be implemented to protect data from accidental loss, unauthorized access, and unlawful disclosure to third parties.
Rights of data subjects
The data subject's rights must be honored per the GDPR and EU AI Act.
8. Can AI systems be used to make automated decisions that impact individuals?
Insofar as personal data is processed when AI systems are used for automated decision-making, organizations must ensure compliance with Art. 22 GDPR. Art. 22 protects individuals from decisions based solely on automated processing, including profiling, that produce legal effects concerning them or similarly significantly affect them.
Thus, only automated decisions that specifically affect the legal position of data subjects are covered by Art. 22 GDPR. Recital 71 GDPR lists examples of such decisions, like the automatic refusal of an online credit application or e-recruiting practices without any human intervention. The prohibition does not apply, however, in three cases:
- The decision is necessary for entering into, or the performance of, a contract between the data subject and the controller,
- The decision is authorized by Union or Member State law that also lays down suitable safeguards for the data subject's rights, freedoms, and legitimate interests, or
- The decision is based on the data subject's explicit consent.
Even in these cases, the data subject must be informed about the automated decision-making concerning them, including meaningful information about the logic involved and the envisaged consequences. Except where the decision is authorized by law, the data subject also has the right to obtain human intervention, express their point of view, and contest the decision.
9. Are organizations or individuals still required to comply with the GDPR even if they have not developed the AI system?
Once a natural or legal person determines the purposes and means of data processing, they qualify as the data protection controller and must adhere to GDPR requirements. Even if the provider or operator sets the technical specifications, this typically does not alter the fact that the entity using the AI system is considered the data protection controller.
10. What should organizations consider when using third-party AI systems?
Organizations must consider whether using "foreign" systems would involve transferring personal data to the system's manufacturer (or other third parties), which might result in the disclosure of trade secrets or data.
To mitigate these risks, the situation should be assessed and internal guidelines established on what data may be processed with the system. When in doubt, consult the third-party provider beforehand. Many providers also offer "on-premise" solutions, allowing data to be hosted on a company’s own servers.
11. What is the ChatGPT Task Force?
The European Data Protection Board (EDPB) established the ChatGPT Task Force, a working group focusing on data protection concerns related to ChatGPT.
How Securiti Can Help
Enterprises that process personal data through AI systems must ensure that their practices comply with the EU AI Act and evolving AI laws. Using Securiti’s Data Command Center — a centralized platform designed to deliver contextual intelligence, controls, and orchestration for ensuring the safe use of data and AI — organizations can navigate existing and future regulatory compliance by:
- Discovering, cataloging, and identifying the purpose and characteristics of sanctioned and unsanctioned AI models across public clouds, private clouds, and SaaS applications.
- Conducting AI risk assessments to identify and classify AI systems by risk level.
- Mapping AI models to data sources, processes, applications, potential risks, and compliance obligations.
- Implementing appropriate privacy, security, and governance guardrails for protecting data and AI systems.
- Ensuring compliance with applicable data and AI regulations.
Request a demo to learn more.