An Overview of Austria’s DSB FAQs Addressing AI and Data Protection

Contributors

Anas Baig

Product Marketing Manager at Securiti

Syed Tatheer Kazmi

Data Privacy Analyst

CIPP/Europe

Published October 24, 2024

The accelerated development of artificial intelligence (AI) technologies has prompted notable concerns regarding data privacy and protection. In response, the Austrian Data Protection Authority (Datenschutzbehörde, DSB) recently published a comprehensive set of Frequently Asked Questions (FAQs) addressing the intersection of AI and data protection.

These FAQs aim to guide both developers and users of AI technologies, shedding light on how the GDPR and the EU AI Act apply to AI systems.

1. What is meant by AI or AI systems?

Article 3(1) of the EU AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

In essence, these are computer systems that execute tasks that would otherwise require human intellect, including problem-solving, learning, decision-making, and interacting with their surroundings much as humans do. Generative AI (GenAI), by contrast, specifically denotes systems that produce new outputs, such as text, audio, images, or videos, in response to user inputs or prompts.

Learn more about the EU AI Act, the world’s first comprehensive AI law. Additionally, learn how the EU AI Act shapes AI governance.

2. What laws govern the use of AI systems?

The legal framework for AI systems in the EU is established through several key regulations. The EU AI Regulation, adopted on May 22, 2024, sets harmonized rules for the development, marketing, and deployment of AI systems. The proposed AI Liability Directive would adapt non-contractual civil liability rules to AI, and copyright provisions also apply. Because personal data processing is common when using AI, the GDPR and the Austrian Data Protection Act (DSG) are also triggered.

3. How do the GDPR and the EU AI Act relate to each other?

As stated in Article 2(7) of the AI Regulation, the AI Regulation does not impact the GDPR, the work of the data protection authority, or the obligations of providers and operators of AI systems as controllers or processors.

In essence, when personal data is processed, the GDPR continues to apply. Accordingly, the data protection authority remains responsible for resolving data protection concerns associated with AI systems.

4. Who is the responsible authority?

The EU AI Act authorizes one or more authorities to conduct market surveillance. The primary goal of market surveillance is to ensure that high-risk AI systems abide by the requirements of the AI Regulation. It has not yet been confirmed which authority this will be in Austria. The EU Commission is also equipped with certain enforcement powers. To support the implementation of the AI Regulation, an AI Service Center has been established at RTR GmbH, serving as a central contact point and information hub for all AI-related inquiries and resources.

The supervisory authorities in charge of the Police and Justice Directive serve as market monitoring authorities for high-risk AI systems in areas like law enforcement, border management, justice, and democracy. In Austria, the data protection authority is tasked with carrying out this obligation in line with Sections 18 and 31 of the Data Protection Act.

5. Can individuals file a complaint with the regulatory authority regarding AI systems?

An individual (data subject) may file a complaint with the data protection authority if they believe that using an AI system and the related processing of their personal data has violated the DSG or GDPR.

6. What special data protection clauses does the AI Regulation contain?

The GDPR is cited several times in the AI Regulation, including when defining terms like personal data, biometric data, and profiling.

The AI Regulation allows for the processing of "sensitive" data as defined by Art. 9 GDPR in some situations in order to identify "biases" in an AI system. Under Art. 10(5) of the AI Regulation, the data that are strictly necessary for this purpose must be included in the register of processing activities under Art. 30 GDPR, together with an explanation of why processing other data could not accomplish the same objective.

Additionally, if personal data is processed, the EU declaration of conformity for high-risk AI systems under Art. 47 AI Regulation must state, among other things, that the AI system (or the data processing conducted within the AI system's framework) complies with the requirements of the GDPR or the Police and Justice Directive.

7. What data protection obligations must be observed when using AI systems?

The GDPR takes a technology-neutral stance, which means that it treats AI systems similarly to other means of processing personal data rather than singling them out for particular scrutiny. In essence, AI is subject to the same laws and regulations on data protection as any other kind of data processing.

Nevertheless, personal data processing is a critical component of AI systems, particularly those based on machine learning, during both the training and operational phases.

Principles

The GDPR sets forth several key principles that must be followed whenever personal data is processed, and it is the controller's responsibility to prove that they are adhering to these principles (Article 5(1) and (2) GDPR). These include lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality, and accountability. These principles must also be followed when utilizing AI systems and processing personal data.

For personal data to be processed, at least one of the six legal bases listed in Article 6(1) GDPR must be met. These include consent, performance of a contract, legal obligations, protecting vital interests, carrying out tasks in the public interest, and pursuing legitimate interests.

When processing sensitive data (special categories of data as defined in Article 9(1) GDPR), an exception to the prohibition under Article 9(2) GDPR is also required, which provides stricter conditions compared to the permissions in Article 6(1) GDPR.

Processing in good faith; Transparency

A general concept known as "fair processing" requires that personal data not be processed in a manner that would unfairly disadvantage, discriminate against, surprise, or mislead the data subject. In particular, risk cannot be shifted from the controller to the data subject, such as via a clause in the terms and conditions. This is closely related to the principle of transparency, which requires that the data subject be informed about the processing of their personal data.

Purpose limitation, data minimization, and storage limitation

Organizations engaging in personal data processing, including in the context of AI systems, must have a clear and well-defined purpose. Data must be relevant and necessary, and may only be processed and retained for as long as required to accomplish that purpose.

Accuracy

According to the principle of data accuracy, personal data must be accurate and, where necessary, kept up to date. All reasonable measures must be taken to ensure that inaccurate personal data is promptly erased or corrected, taking into account the purposes for which it is processed.

This presents a special problem for (text-)generating systems, as systems presently in use produce output that is statistically probable but may not be factually accurate. In this case, data subjects should be notified that the results generated by these technologies could be inaccurate or misleading.

Integrity and confidentiality (security)

When using AI systems for processing, appropriate security measures must be implemented to protect data from accidental loss, unauthorized access, and unlawful disclosure to third parties.

Rights of data subjects

The data subject's rights must be honored per the GDPR and EU AI Act.

8. Can AI systems be used to make automated decisions that impact individuals?

Organizations must ensure compliance with Art. 22 GDPR insofar as personal data is processed when AI systems are used for automated decisions. Art. 22 protects individuals from decisions based solely on automated processing, including profiling, that produce legal effects concerning them or similarly significantly affect them.

Thus, only those automated decisions that specifically affect the legal position of data subjects are covered by Art. 22 GDPR. Recital 71 of the GDPR lists examples of such automated decisions, like the automatic refusal of an online credit application or e-recruiting practices without any human involvement. The prohibition does not apply in three cases:

  • The decision is strictly required for the data subject and the controller to complete or perform a contract,
  • A legal basis and appropriate safeguards protect the data subject's rights, freedoms, and legitimate interests, or
  • The individual has explicitly consented.

Even in these circumstances, the data subject must be informed of the automated decision-making concerning them, together with the logic involved and its intended effects. Except where the decision is authorized by law, the data subject also has the right to contest the decision, express their point of view, and obtain human intervention to review the decision.

9. Are organizations or individuals still required to comply with the GDPR even if they have not developed the AI system?

Once a natural or legal person determines the purposes and means of data processing, they qualify as the data protection controller and must adhere to GDPR requirements. Even if the provider or operator sets the technical specifications, this typically does not alter the fact that the entity using the AI system is considered the data protection controller.

10. What should organizations consider when using third-party AI systems?

Organizations must consider whether using external ("foreign") systems would involve transferring personal data to the system's manufacturer (or other third parties), which might result in the disclosure of trade secrets or personal data.

To mitigate these risks, the situation should be assessed and internal guidelines established on what data may be processed with the system. When in doubt, consult the third-party provider beforehand. Many providers also offer "on-premise" solutions, allowing data to be hosted on a company's own servers.

11. What is the ChatGPT Task Force?

The European Data Protection Board (EDPB) established the ChatGPT Task Force, a working group that focuses on data protection concerns related to OpenAI's ChatGPT.

How Securiti Can Help

Enterprises that process personal data through AI systems must ensure that their practices comply with the EU AI Act and evolving AI laws. Using Securiti’s Data Command Center — a centralized platform designed to deliver contextual intelligence, controls, and orchestration for ensuring the safe use of data and AI — organizations can navigate existing and future regulatory compliance by:

  • Discovering, cataloging, and identifying the purpose and characteristics of sanctioned and unsanctioned AI models across public clouds, private clouds, and SaaS applications.
  • Conducting AI risk assessments to identify and classify AI systems by risk level.
  • Mapping AI models to data sources, processes, applications, potential risks, and compliance obligations.
  • Implementing appropriate privacy, security, and governance guardrails for protecting data and AI systems.
  • Ensuring compliance with applicable data and AI regulations.

Request a demo to learn more.

Frequently Asked Questions

Does the GDPR still apply when an AI system is involved?

Even when an AI system is involved, if personal data is being processed, the GDPR still fully applies. You must have a lawful basis for processing, follow core principles such as transparency, data minimization, and accuracy, and uphold data subject rights, in addition to meeting any AI-specific requirements.

Which laws apply when an AI system processes personal data in Austria?

When an AI system processes personal data, both the EU's data protection regulation and Austria's national law apply. This covers requirements such as having a lawful purpose for processing, respecting individuals' rights, conducting audits for high-risk systems, and complying with oversight from the national data protection authority.

Do individuals have rights regarding AI systems that affect them?

Yes. If an AI system processes personal data and directly affects an individual, such as through a risk score, credit decision, or profiling outcome, that person has rights. They can ask for information about how their data is processed and the logic behind the decisions, and can file a complaint with the data protection authority if they believe their rights are being violated.
