What to Know about the CJEU Ruling on Automated Individual Decisions by Schufa

By Anas Baig | Reviewed By Maria Khan
Published March 6, 2024

Artificial Intelligence (AI) is taking on an increasingly important strategic role for businesses. It promises a tremendous leap in productivity, efficiency, and revenue, with unprecedented potential for growth and scalability. However, these aspirations need to be tempered by a profound sense of responsibility: if not handled properly, AI can cause just as many problems for businesses and individuals as it solves.

This was apparent in the recent ruling by the Court of Justice of the European Union (CJEU). The ruling, made against Schufa, is poised to redefine the limits and boundaries of automated decision-making under the General Data Protection Regulation (GDPR) while also carrying significant implications for data privacy. Additionally, the ruling sets a new precedent for transparency, individuals’ data rights, and the role of human intervention in automated models and systems.

Read on to learn more about the background of the case, key findings of the CJEU, legal principles on automated decision-making, and business implications of the ruling, as well as the best tools available to avoid repeating the same mistakes Schufa is deemed to have made.

A Brief Background

Schufa is a leading German credit rating agency, handling information on more than 70 million individuals. Among the services it offers are credit scores for German citizens.

While Schufa’s customers include online retailers, telecommunication companies, and transport companies, to name a few, financial service providers constitute Schufa’s biggest clientele. These financial service providers in Germany rely on Schufa’s credit scores when making lending decisions. These can range from mortgages to house rentals.

The case brought before the CJEU involves a German citizen whose loan request was rejected by a bank that relied on Schufa's credit score for her. Because the credit score was poor, the loan was refused, prompting her to request access to the information Schufa held on her, along with a request for data erasure.

Schufa did respond, but its response was far from straightforward. It informed the complainant of her credit score but, citing trade secrecy, did not explain how the score was calculated or which factors and elements it took into account. Furthermore, Schufa stated that it could not comply with her request for access to her information, since it restricts such access to its customers only.

In October 2018, the German citizen requested the Hessischer Beauftragter für Datenschutz und Informationsfreiheit (HBDI), the German Data Protection Agency for the state of Hesse, to order Schufa to comply with her requests for access and erasure. However, in June 2020, the HBDI rejected her complaint, and this decision was subsequently appealed before the Verwaltungsgericht Wiesbaden (Administrative Court, Wiesbaden, Germany). The Administrative Court referred the matter to the CJEU, requesting clarification on how the GDPR, specifically Article 22(1), is to be interpreted in this particular case.

Key Findings of the Ruling

The key findings of the ruling can be summarized as follows:

  • For processing to be classified as "automated individual decision-making" under Article 22 of the GDPR, three specific conditions must be met:

(i)  A decision must be rendered;

(ii) The decision must solely result from automated processing, including profiling; and

(iii) The decision must have legal consequences for the individual or yield an effect that is equivalent or similarly important in its impact on the individual.

The CJEU affirmed that, in the current case, all these conditions were fulfilled.
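To make the three-condition test concrete, here is a minimal, hypothetical sketch of how a controller might screen its processing activities against it. The data structure and function names are our own illustrative assumptions, not part of the ruling or the GDPR text:

```python
from dataclasses import dataclass


@dataclass
class ProcessingActivity:
    """Hypothetical record describing one data-processing activity."""
    renders_decision: bool         # (i) a decision is rendered
    solely_automated: bool         # (ii) results solely from automated processing
    legal_or_similar_effect: bool  # (iii) legal or similarly significant effect


def falls_under_article_22(activity: ProcessingActivity) -> bool:
    """All three conditions must hold for Article 22(1) GDPR to apply."""
    return (
        activity.renders_decision
        and activity.solely_automated
        and activity.legal_or_similar_effect
    )


# Credit scoring as characterized by the CJEU in the Schufa case
# satisfies all three conditions:
schufa_scoring = ProcessingActivity(True, True, True)
print(falls_under_article_22(schufa_scoring))  # → True
```

Note that the test is conjunctive: removing any one condition (for example, adding meaningful human involvement so the decision is no longer "solely" automated) takes the processing outside Article 22(1).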

Under the GDPR, profiling refers to any form of automated processing of personal data evaluating the personal aspects relating to a natural person, in particular, to analyze or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behavior, location or movements, where it produces legal effects concerning him or her or similarly significantly affects him or her.

As per the EDPB Guidelines, profiling is composed of the following three elements:

  1. It has to be an automated form of processing,
  2. It has to be carried out on personal data, and
  3. The objective of the profiling must be to evaluate the personal aspects of a natural person.
  • Schufa, rather than the bank, is in the better position to satisfy an Article 15(1)(h) GDPR request, which grants data subjects the right to access the logic behind automated decisions, their significance, and their envisaged consequences, since the bank lacked specific details about how the automated processes used to generate the credit score operate.
  • The court has adopted a broader interpretation of the term “decision”, stating that it could include several acts that could affect data subjects, including credit score calculation.
  • A more limited interpretation of Article 22 of GDPR, establishing credit score as preparatory and reserving the term 'decision' solely for actions taken by a third party, could risk circumventing the protection provided to a data subject.

The CJEU has tasked the Administrative Court of Wiesbaden with determining whether German federal law allows for a GDPR-compatible exception to the prohibition of automated decision-making. The Administrative Court is now expected to conduct its own proceedings; if it does not find any applicable exceptions, then credit scoring agencies in Germany, and by extension across the EU, will have to obtain explicit consent from consumers before calculating their creditworthiness.

In such a case, consumers will also have to be provided an opportunity to reject and challenge any outputs generated related to their creditworthiness by AI models and systems. For organizations whose main functions rely on the use of such automated operations, this could represent a severe blow to their daily operations.

Based on a reading of GDPR Article 22, the CJEU's decision in the Schufa case, and the Article 29 Data Protection Working Party (WP29) Guidelines on automated decision-making and profiling, the following legal principles can be derived. Data controllers aiming to use automated decision-making, including profiling, must consider them for operational compliance.

1. Establish a Lawful Basis for Automated Decision-Making

Automated decision-making, including profiling, with legal or similarly significant effects, is allowed only if it is:

  1. Based on the explicit consent of the data subject,
  2. Expressly authorized by Union or Member State law; or
  3. Necessary for the performance of a contract between the data subject and the data controller.
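A controller's gating logic for these three permitted grounds could be sketched as follows. This is an illustrative assumption of how such a check might look in practice; the enum and function names are not prescribed by the GDPR:

```python
from enum import Enum, auto


class LawfulBasis(Enum):
    """The three grounds permitted under Article 22(2) GDPR, plus 'none'."""
    EXPLICIT_CONSENT = auto()    # Art. 22(2)(c)
    AUTHORIZED_BY_LAW = auto()   # Art. 22(2)(b)
    CONTRACT_NECESSITY = auto()  # Art. 22(2)(a)
    NONE = auto()


PERMITTED_BASES = {
    LawfulBasis.EXPLICIT_CONSENT,
    LawfulBasis.AUTHORIZED_BY_LAW,
    LawfulBasis.CONTRACT_NECESSITY,
}


def may_run_automated_decision(basis: LawfulBasis) -> bool:
    """Refuse automated decision-making unless one permitted ground applies."""
    return basis in PERMITTED_BASES


print(may_run_automated_decision(LawfulBasis.NONE))  # → False
print(may_run_automated_decision(LawfulBasis.EXPLICIT_CONSENT))  # → True
```

The default in such a design is refusal: processing proceeds only when a documented ground is positively established, mirroring the structure of Article 22, which states a prohibition first and exceptions second.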

As per the EDPB Guidelines on Consent, "explicit consent" refers to the way the data subject expresses consent: the data subject must give an express statement of consent. The EDPB confirms that an obvious way to ensure consent is explicit is to have it expressly confirmed in a written statement. However, in the digital or online context, explicit consent may also be obtained by filling in an electronic form, sending an email, uploading a scanned document carrying the data subject's signature, or using an electronic signature or a two-stage verification process.

In the context of profiling, the WP29 guidelines on automated decision-making confirm that data controllers need to be able to demonstrate that the data subjects understand exactly what they are consenting to and that data subjects have enough information about the envisaged use and consequences of the processing to ensure that any consent they provide represents an informed choice.

2. Provide Appropriate Notice to the Data Subject

At the time of personal data collection or while obtaining the data subject’s consent, data controllers must inform the data subject about the automated decision-making, including profiling, with legal or similarly significant effects. Data controllers must also provide them meaningful information about the logic involved, as well as the significance, and at least when based on profiling, the envisaged consequences of such processing for the data subject.

In addition, data controllers should provide the data subject with general information, particularly regarding the factors considered in the decision-making process and their respective ‘weight’ on an aggregate level.

3. Ensure Data Subjects’ Rights Fulfillment

In relation to automated decision-making, including profiling, the data subject has the following rights that must be fulfilled by the data controller when the data subject makes a request:

  • Right of access,
  • Right to rectification,
  • Right to erasure,
  • Right to restriction of processing,
  • Right to object.

4. Conduct a Data Protection Impact Assessment

Among other instances, the GDPR requires data controllers to conduct a data protection impact assessment (DPIA) where there is a systematic and extensive evaluation of personal aspects relating to natural persons that is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect them.

5. Appoint a Data Protection Officer

The GDPR requires data controllers to appoint a data protection officer (DPO) as an accountability measure where profiling and/or automated decision-making are a core activity of the controller and require regular and systematic monitoring of data subjects on a large scale.

6. Implement Appropriate Data Security Measures

In cases where automated decision-making (with legal or similarly significant effects) is necessary for entering into or performing a contract between the data subject and a data controller, or is based on the data subject's explicit consent, the GDPR requires data controllers to implement suitable measures to safeguard the data subject's rights, freedoms, and legitimate interests. At a minimum, these include the right to obtain human intervention on the part of the controller, to express one's viewpoint, to obtain an explanation of the decision reached after such assessment, and to contest the decision. As a matter of good practice, the data subject may also be informed of these safeguards.

Data controllers should employ appropriate mathematical or statistical methods for profiling, along with technical and organizational measures to correct inaccuracies, minimize errors, mitigate risks to data subjects' interests and rights, and prevent discriminatory effects based on sensitive data such as race, ethnicity, or health status. These security measures should be applied on a cyclical basis: not only at the design stage but also continuously as the profiling is applied to individuals. Moreover, the WP29 recommends that the outcome of such testing feed back into the system design.

In relation to human intervention/oversight, the WP29 provides the following recommendations:

  • Human review must be carried out by someone who has the appropriate authority and capability to change the decision, and
  • The human reviewer should undertake a thorough assessment of all the relevant data, including any additional information provided by the data subject.

7. Additional Protections in the Case of Sensitive Personal Data

Automated decision-making (with legal or similarly significant effects) that involves sensitive personal data is only allowed if suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place and one of the following grounds is applicable:

  1. Processing of the personal data is based on the explicit consent of the data subject for one or more specific purposes, except where EU or member state law provides that this prohibition may not be lifted by the data subject; or
  2. Processing of personal data is necessary for reasons of substantial public interest, either based on EU or member state law. It must be proportionate to the aim pursued, respecting data protection rights, and include measures to safeguard the fundamental rights and interests of the data subject.

8. Implement Data Protection Principles

In addition to the above legal principles, data controllers must adhere to the general data protection principles of the GDPR, including the following:

  • Lawfulness, fairness and transparency
  • Storage limitation
  • Purpose limitation
  • Data minimization
  • Data accuracy

9. Implement Additional Safeguards in Profiling Relating to Children

As per Recital 71 of the GDPR, children should not be subject to solely automated decision-making, including profiling, with legal or similarly significant effects. The WP29 recommends that, as a general rule, data controllers should not rely on the exceptions to the prohibition of automated decision-making to justify such processing. Moreover, the WP29 recommends not performing profiling on children for marketing purposes.

However, in cases where automated decision-making, including profiling (with legal or similarly significant effects), is required to be performed in relation to children’s personal data (for example, to protect their welfare), the following principles must be taken into consideration:

  • There must be suitable safeguards appropriate for children to protect their rights, freedoms, and legitimate interests, including the right to obtain human intervention, express their viewpoint, and obtain an explanation of the decision and contest it.
  • Codes of conduct that incorporate such safeguards and depict how consent can be obtained from holders of parental responsibility over children should be followed.

10. Additional Protections in an Employment Context

In the employment context, consent as a legal basis can be used only as a last resort, when there are no adverse consequences to the employment relationship if an employee chooses not to provide consent. Therefore, employers are generally recommended to rely on the other two grounds for automated decision-making (performance of a contract or authorization by union/member state law) rather than on the explicit consent of the employee. However, where consent is used as the ground for automated decision-making (with legal or similarly significant effects) in the employment context, employers must provide alternatives to employees who ask for human intervention and refuse consent, and such refusal must not disadvantage those employees.

Implications For Businesses

The CJEU’s decision carries significant implications for businesses that deploy algorithms or any other forms of automated practices along similar lines, such as fraud detection and ID verification. Owing to several factors, the degree of relevance of the Schufa case may vary. Still, it is critical that businesses truly understand the implications of this decision, not only to ensure their practices are compliant with the necessary regulatory requirements but to avoid losing public trust.

Some of the critical business implications of this decision include the following:

Transparency in AI Decisions

The CJEU's decision adds a further layer of responsibility for businesses to ensure an appropriate degree of transparency in all AI-driven decisions. User transparency means informing users that they are interacting with an AI system, explaining the logic the system applies, and setting out the rights individuals have. This can be achieved with the help of privacy notices or instructions for use.

Necessity for Human Review

There should be appropriate training for human reviewers to ensure they can interpret and challenge outputs made by the AI system once it is deployed. Human oversight mechanisms can include a redress/feedback channel that allows individuals to comment on the explanations they receive. Organizations can achieve this with appropriate human intervention tools, assessments of those tools within the AI system, and by maintaining logs of automated decisions that were overridden by a human reviewer, along with the reasons for the override.
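The logging practice described above could be sketched as a simple audit record, as in the hypothetical example below. The field names and in-memory storage are illustrative assumptions; a real deployment would persist such records durably:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class OverrideLogEntry:
    """One record of a human reviewer overriding an automated decision."""
    decision_id: str
    automated_outcome: str   # what the AI system decided
    reviewer_id: str         # who performed the human review
    final_outcome: str       # the decision after human intervention
    reason: str              # the reason for the override, kept for audit
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# In-memory log for brevity; real systems would use durable storage.
override_log: list[OverrideLogEntry] = []


def record_override(entry: OverrideLogEntry) -> None:
    """Append the override to the audit log."""
    override_log.append(entry)


record_override(OverrideLogEntry(
    decision_id="D-1001",
    automated_outcome="loan_denied",
    reviewer_id="analyst-42",
    final_outcome="loan_approved",
    reason="Applicant supplied additional income documentation.",
))
print(len(override_log))  # → 1
```

Keeping the automated outcome, the final outcome, and the stated reason side by side is what makes such a log useful as evidence that human intervention is genuine rather than a rubber stamp.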

Scope of ADM

The CJEU decision has clarified the legal principles that can be derived from the reading of Article 22 of the GDPR. These legal principles are especially relevant for businesses that rely on algorithms and automated processes that produce similar outputs to the ones generated by Schufa such as ID verification and fraud detection. Until now, the assumption that users agree to and bear the risks of such outputs has been an essential element in many of these businesses’ operational models.

However, it is just as important not to read this as a blanket judgment, as several details within the ruling may affect business models differently than Schufa's. These details include, but are not limited to, the exact degree of users' reliance on the output generated and the legal effects of such outputs for users.

Businesses will therefore need to conduct an exhaustive review of their automated systems to determine which processes may now fall under the extended scope, potentially leading to significant adjustments in all such operations.

Need for AI Governance

AI’s relevance and potential will continue to increase, meaning businesses cannot continue to see it as an experiment and must adopt it as an established business practice. Doing so will require an extensive set of rules and policies that govern its usage for all internal operations to ensure compliance with relevant regulatory requirements.

This becomes even more critical with the EU AI Act coming into effect. A comprehensive AI governance framework can serve as the solid foundation necessary for businesses to continue leveraging AI without lapsing on their regulatory responsibilities.

How Securiti Can Help

Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Additionally, Securiti provides organizations access to individual modules and solutions that can prove vital in their pursuit of regulatory compliance. Its DSR automation empowers organizations to address all data-related requests seamlessly; its automated internal assessments allow for a thorough review of all internal policies as well as third-party vendors; and its consent management solution enables comprehensive compliance for all cookie and first-party consent through AI-driven automation and orchestration. Several other modules offer similar compliance assurance along with ease of use.

Request a demo today and learn more about how Securiti can help you stay compliant with any GDPR-related obligations.
