Next Thursday, the Court of Justice of the European Union (CJEU) will hear an important case about “automated decision-making” under the GDPR.

As more and more organisations integrate AI into their workflows, it’s crucial to consider how automation is governed by the GDPR’s rules. This case is about the activities of credit-rating agencies, and whether those activities are “automated decisions” under Article 22 of the GDPR.

→ PrivSec London is a two-day, in-person event taking place on 28th February and 1st March at the Park Plaza Riverbank, London.

In the session A New Era of AI Governance, speakers including Kobi Nissan of Mine, Tharishni Arumugam of Aon, Pratiksha Karnawat of First Abu Dhabi Bank and Debbie Reynolds, the “Data Diva”, will discuss strategies for integrating AI in an ethical and compliant way.

Register here

The case could undermine credit rating as a business model. But perhaps more importantly, it will test the limits of the GDPR’s rules on automated decision-making.

This EU Court Case Could Expand the GDPR’s ‘Automated Decision-Making’ Rules

The Question

Here’s the first of two questions referred to the CJEU by the Wiesbaden Administrative Court:

Is Article 22(1) of Regulation (EU) 2016/679 to be interpreted as meaning that the automated establishment of a probability value concerning the ability of a data subject to service a loan in the future already constitutes a decision based solely on automated processing, including profiling, which produces legal effects concerning the data subject or similarly significantly affects him or her, where that value, determined by means of personal data of the data subject, is transmitted by the controller to a third-party controller and the latter draws strongly on that value for its decision on the establishment, implementation or termination of a contractual relationship with the data subject?

Typically for a CJEU reference, this question is not written with readability in mind.

Let’s look at the background and break the question down.

The Background

Here’s the background:

  • An individual applied for a loan with a bank. 

  • The bank refused, based on the individual’s credit rating. 

  • The rating was provided by a credit-rating company called SCHUFA.

After the individual’s loan was refused:

  • She submitted access and deletion requests to SCHUFA. 

  • SCHUFA provided some basic information. 

  • SCHUFA refused to provide a detailed breakdown of its scoring methodology, stating that this information was commercially sensitive.

The individual complained about SCHUFA to the Hessian data protection authority (DPA). 

The Hessian DPA essentially sided with SCHUFA, stating that the company’s actions complied with the BDSG (Germany’s Federal Data Protection Act, which supplements the GDPR).

The CJEU Reference

The individual took SCHUFA and the Hessian DPA to the Wiesbaden Administrative Court, which referred two questions to the CJEU. I’m focusing on question 1 (above).

The question centres on Article 22 of the GDPR, which states that individuals have the right not to be subject to “a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

In other words, you can’t normally base decisions on matters such as credit or job applications SOLELY on automated processing—you need a human to be involved.

I’ll call these types of decisions “significant automated decisions” as shorthand.

So, the court asks:

If a credit-rating agency profiles you and assigns you a credit score… 

Then a bank refuses you a loan based on that score…

  • Is it just the bank making a significant automated decision?

  • Or is the credit-rating agency making one too?

Exceptions to the Prohibition on Significant Automated Decisions

There are exceptions to the prohibition of significant automated decisions. 

For example, where a significant automated decision is “necessary” to perform or enter into a contract between the individual and a controller (for example, the loan contract between an individual and their bank, based on a third-party credit score).

Or where the decision-making is allowed under national law (question 2 of the CJEU reference concerns this).

However, in these exceptional situations, the decision-maker—or the national law—must have safeguards in place to protect the individual.

We’ll consider those later in the article.

Are Credit-Rating Agencies ‘Significant Automated Decision-Makers’?

SCHUFA scores people’s creditworthiness via profiling—collecting pieces of information about a person and then drawing conclusions about that person (in this case, how likely they are to pay back a loan).

The credit-rating agency passes that information to banks in the form of a credit score. A bank uses the information to decide whether to offer a loan.

One important point is that the credit score was deemed “decisive” in the bank’s assessment of loan applications. In the words of the court in its CJEU reference:

“…it is ultimately the score… that actually decides whether and how the third-party controller enters into a contract with the data subject. Although the third-party controller does not have to make his or her decision dependent solely on the score, he or she usually does so to a significant extent.”

So, by conducting this profiling and then sharing it with the bank, is SCHUFA a “significant automated decision-maker”?

If so, this decision could have major implications for the business model of credit-rating agencies. 

These organisations could no longer say: “We’re just providing information—the bank makes the decision”. They would be directly subject to the rules on significant automated decision-making.

The Implications of the Case

There’s an outstanding question for me here. Let me know if you can answer it.

As mentioned, the GDPR allows for certain exceptions to the prohibition of significant automated decision-making.

One of these exceptions is where the significant automated decision-making is necessary to perform or enter into a contract between the individual and a controller.

Note, the GDPR says “a” controller—not “the” controller. 

If the automated decision-making is indeed “necessary” to enter into a contract, that contract may be between the bank and the individual, rather than the credit-rating agency. 

If the CJEU decides that the credit-rating agency *is* conducting a significant automated decision, could the credit-rating agency rely on the “contract” exemption?

If so, credit-rating agencies would need to allow individuals to: 

  • Obtain human intervention

  • Express their view on the automated decision

  • Contest the decision

Credit-rating agencies would also need to provide individuals with more information if they received a subject access request:

  • “Meaningful information about the logic involved” in their automated decision.

  • Information about “the significance and the envisaged consequences” of the decision.

This would expand the GDPR’s rules on automated decision-making. And it would also mean credit-rating agencies might no longer be able to keep their scoring methodologies confidential.

→ PrivSec London is a two-day, in-person event taking place on 28th February and 1st March at the Park Plaza Riverbank, London.

In the session A New Era of AI Governance, speakers including Kobi Nissan of Mine, Tharishni Arumugam of Aon, Pratiksha Karnawat of First Abu Dhabi Bank and Debbie Reynolds, the “Data Diva”, will discuss strategies for integrating AI in an ethical and compliant way.

Register for free
