An Overview of Austria’s DSB FAQs Addressing AI and Data Protection

Contributors

Anas Baig

Product Marketing Manager at Securiti

Syed Tatheer Kazmi

Associate Data Privacy Analyst, Securiti

CIPP/Europe

The accelerated development of artificial intelligence (AI) technologies has prompted notable concerns regarding data privacy and protection. In response to growing AI concerns, the Austrian Data Protection Authority (Datenschutzbehörde, DSB) recently published a comprehensive set of Frequently Asked Questions (FAQs) that addresses the intersection of AI and data protection.

These FAQs aim to guide both developers and users of AI technologies, shedding light on how the GDPR and the EU AI Act apply to AI systems.

1. What is meant by AI or AI systems?

Article 3(1) of the EU AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

In essence, these are computer systems that perform tasks requiring human-like intellect, such as problem-solving, learning, decision-making, and interacting with their surroundings much as humans do. Generative AI (GenAI), by contrast, specifically denotes systems that produce new outputs, such as text, audio, images, or videos, in response to user inputs or prompts.

Learn more about the EU AI Act, the world’s first comprehensive AI law. Additionally, learn how the EU AI Act shapes AI governance.

2. What laws govern the use of AI systems?

The legal framework for AI systems in the EU is established through several key regulations. The EU AI Regulation, adopted on May 22, 2024, sets harmonized rules for the development, marketing, and deployment of AI systems. The proposed AI Liability Directive would adapt non-contractual civil liability rules to AI, and copyright provisions may also apply. Since using AI commonly involves processing personal data, the GDPR and the Austrian Data Protection Act (DSG) are frequently triggered as well.

3. How do the GDPR and the EU AI Act relate to each other?

As stated in Article 2(7) of the AI Regulation, the AI Regulation does not affect the GDPR, the work of the data protection authority, or the obligations of providers and operators of AI systems acting as controllers or processors.

In essence, the GDPR continues to apply whenever personal data is processed. Consequently, the data protection authority remains responsible for resolving data protection concerns associated with AI systems.

4. Who is the responsible authority?

The EU AI Act requires one or more authorities to be designated for market surveillance. The primary goal of market surveillance is to ensure that high-risk AI systems comply with the requirements of the AI Regulation. It has not yet been confirmed which authority this will be in Austria. The EU Commission also holds certain enforcement powers. To support the implementation of the AI Regulation, an AI Service Center has been established at RTR GmbH, serving as a central contact point and information hub for all AI-related inquiries and resources.

The supervisory authorities in charge of the Police and Justice Directive serve as market monitoring authorities for high-risk AI systems in areas like law enforcement, border management, justice, and democracy. In Austria, the data protection authority is tasked with carrying out this obligation in line with Sections 18 and 31 of the Data Protection Act.

5. Can individuals file a complaint with the regulatory authority regarding AI systems?

An individual (data subject) may file a complaint with the data protection authority if they believe that using an AI system and the related processing of their personal data has violated the DSG or GDPR.

6. What special data protection clauses does the AI Regulation contain?

The GDPR is cited several times in the AI Regulation, including when defining terms like personal data, biometric data, and profiling.

The AI Regulation allows "sensitive" data as defined by Art. 9 GDPR to be processed in certain situations in order to identify "biases" in an AI system. Under Art. 10(5) AI Regulation, the data strictly necessary for this purpose must be included in the record of processing activities (Art. 30 GDPR), together with an explanation of why processing other data cannot accomplish the same objective.
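To make the Art. 30 GDPR documentation duty concrete, the following is a minimal illustrative sketch (not an official template) of how a record-of-processing-activities entry for bias detection under Art. 10(5) AI Regulation might be structured and validated. All field names, the controller name, and the validation rules are assumptions for illustration only.

```python
# Illustrative sketch only: a hypothetical structure for an Art. 30 GDPR
# record-of-processing-activities entry covering bias detection under
# Art. 10(5) AI Regulation. Field names are assumptions, not an official format.
ropa_entry = {
    "processing_activity": "Bias detection in a high-risk AI scoring model",
    "controller": "Example GmbH",  # hypothetical controller
    "purpose": "Identify and correct biases in the AI system",
    "legal_basis": "Art. 10(5) AI Regulation with an Art. 9(2) GDPR exception",
    "data_categories": ["special categories of data (Art. 9 GDPR)"],
    # Art. 10(5) requires an explanation of why processing other
    # (non-sensitive) data cannot accomplish the same objective:
    "necessity_justification": (
        "Bias against protected groups cannot be measured without "
        "processing the protected attributes themselves."
    ),
    "retention": "Erased once the bias assessment is complete",
}

def validate_ropa_entry(entry: dict) -> bool:
    """Check that the mandatory fields of this sketch are present and non-empty."""
    required = [
        "processing_activity", "purpose", "legal_basis",
        "data_categories", "necessity_justification",
    ]
    return all(entry.get(field) for field in required)

print(validate_ropa_entry(ropa_entry))  # prints True: all mandatory fields filled
```

A real register would of course follow the organization's documentation system; the point of the sketch is that the necessity justification is a mandatory field, not optional commentary.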

Additionally, if personal data is processed, the EU declaration of conformity for high-risk AI systems under Art. 47 AI Regulation must state, among other things, that the AI system (or the data processing conducted within the AI system's framework) complies with the requirements of the GDPR or the Police and Justice Directive.

7. What data protection obligations must be observed when using AI systems?

The GDPR takes a technology-neutral stance, which means that it treats AI systems similarly to other means of processing personal data rather than singling them out for particular scrutiny. In essence, AI is subject to the same laws and regulations on data protection as any other kind of data processing.

Nevertheless, data protection remains highly relevant: AI systems, particularly those based on machine learning, frequently process personal data during both the training and operational phases.

Principles

The GDPR sets forth several key principles that must be followed whenever personal data is processed, and it is the controller's responsibility to prove that they are adhering to these principles (Article 5(1) and (2) GDPR). These include lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality, and accountability. These principles must also be followed when utilizing AI systems and processing personal data.

For personal data to be processed, at least one of the six legal bases listed in Article 6(1) GDPR must apply. These are consent, performance of a contract, compliance with a legal obligation, protection of vital interests, performance of a task in the public interest, and legitimate interests.

When processing sensitive data (special categories of data as defined in Article 9(1) GDPR), an exception to the prohibition under Article 9(2) GDPR is also required, which provides stricter conditions compared to the permissions in Article 6(1) GDPR.

Processing in good faith; Transparency

A general concept known as "fair processing" states that personal data must not be processed in a manner that unfairly disadvantages, discriminates against, surprises, or misleads the data subject. In particular, risk cannot be shifted from the controller to the data subject, for example via a clause in the terms and conditions. This is closely related to the principle of transparency, which requires that the data subject be informed about the processing of their personal data.

Purpose limitation, data minimization, and storage limitation

Organizations engaging in personal data processing, even in the context of AI systems, must have a clear and well-defined purpose. Data may only be processed and maintained for the amount of time required to accomplish the goal, and it must be relevant and necessary.

Accuracy

According to the principle of accuracy, personal data must be accurate and, where necessary, kept up to date. All reasonable measures must be taken to ensure that inaccurate personal data is promptly erased or rectified, having regard to the purposes for which it is processed.

This presents a particular problem for (text-)generating systems, since systems currently in use produce output that is statistically probable but not necessarily factually accurate. Data subjects should therefore be notified that the results generated by these technologies may be inaccurate or misleading.

Integrity and confidentiality (security)

When using AI systems for processing, appropriate security measures must be implemented to protect data from accidental loss, unauthorized access, and unlawful disclosure to third parties.

Rights of data subjects

The data subject's rights must be honored per the GDPR and EU AI Act.

8. Can AI systems be used to make automated decisions that impact individuals?

Organizations must ensure compliance with Art. 22 GDPR insofar as personal data is processed when AI systems are used for automated decisions. Art. 22 protects individuals from decisions based solely on automated processing, including profiling, that produce legal effects concerning them or similarly significantly affect them.

Thus, only automated decisions that specifically impact the legal standing of data subjects are covered by Art. 22 GDPR. Recital 71 GDPR lists examples of such decisions, like the automated refusal of an online credit application or e-recruiting practices without any human intervention. This prohibition does not apply in three cases only:

  • The decision is necessary for entering into, or the performance of, a contract between the data subject and the controller,
  • A legal basis and appropriate safeguards protect the data subject's rights, freedoms, and legitimate interests, or
  • The individual has explicitly consented.

Even in these circumstances, the data subject must be informed that automated decision-making concerning them is taking place, together with the logic involved and its intended effects. Unless the decision is authorized by law, the data subject also has the right to contest the decision, express their point of view, and request human intervention to review it.

9. Are organizations or individuals still required to comply with the GDPR even if they have not developed the AI system?

Once a natural or legal person determines the purposes and means of data processing, they qualify as the data protection controller and must adhere to GDPR requirements. Even if the provider or operator sets the technical specifications, this typically does not alter the fact that the entity using the AI system is considered the data protection controller.

10. What should organizations consider when using third-party AI systems?

Organizations must consider whether using "foreign" (third-party) systems would involve transferring personal data to the system's manufacturer (or other third parties), which could result in the disclosure of trade secrets or personal data.

To mitigate these risks, the situation should be assessed and internal guidelines established on what data may be processed with the system. When in doubt, consult the third-party provider beforehand. Many providers also offer "on-premise" solutions, allowing data to be hosted on a company's own servers.
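Such internal guidelines can be partly enforced in code. The sketch below, a minimal assumption-laden example and not a complete data-loss-prevention solution, gates text bound for an external AI system against an allow-list policy: inputs containing categories the organization has not cleared for disclosure (here, hypothetically, email addresses and IBANs) are blocked before they leave the company.

```python
# Minimal, hypothetical sketch of an internal guideline encoded as code:
# before text is sent to an external ("foreign") AI system, block inputs
# containing data categories the organization has not cleared for disclosure.
# The patterns and category names are illustrative assumptions only.
import re

BLOCKED_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def may_send_to_external_ai(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a prompt bound for a third-party AI system."""
    violations = [
        name for name, pattern in BLOCKED_PATTERNS.items()
        if pattern.search(text)
    ]
    return (not violations, violations)

allowed, hits = may_send_to_external_ai("Summarize the Q3 sales report.")
print(allowed)        # no blocked categories detected
allowed, hits = may_send_to_external_ai("Contact max@example.com about the invoice.")
print(allowed, hits)  # blocked: contains an email address
```

A real deployment would pair such a gate with the organizational measures the FAQ describes (assessment, guidelines, provider consultation); regex filtering alone cannot catch every form of personal data.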

11. What is the ChatGPT Task Force?

The European Data Protection Board (EDPB) established the ChatGPT Task Force, a working group that focuses on data protection concerns related to ChatGPT products.

How Securiti Can Help

Enterprises that process personal data through AI systems must ensure that their practices comply with the EU AI Act and evolving AI laws. Using Securiti’s Data Command Center — a centralized platform designed to deliver contextual intelligence, controls, and orchestration for ensuring the safe use of data and AI — organizations can navigate existing and future regulatory compliance by:

  • Discovering, cataloging, and identifying the purpose and characteristics of sanctioned and unsanctioned AI models across public clouds, private clouds, and SaaS applications.
  • Conducting AI risk assessments to identify and classify AI systems by risk level.
  • Mapping AI models to data sources, processes, applications, potential risks, and compliance obligations.
  • Implementing appropriate privacy, security, and governance guardrails for protecting data and AI systems.
  • Ensuring compliance with applicable data and AI regulations.

Request a demo to learn more.
