Lloyd v Google class action denied: what now for data breach class actions?

Kolvin Stone (partner)
Ben Nolan (associate)

The Supreme Court has issued its long-awaited ruling in the Lloyd v Google case, overturning the Court of Appeal’s 2019 ruling which granted permission for ‘opt-out class action’ proceedings relating to Google’s alleged breach of the (old) Data Protection Act 1998 (“DPA”) to be served on Google in the USA.

The Supreme Court ruled that the claim had no real prospect of success, reversing the grant of permission to serve. The decision will likely be well received by businesses but will disappoint privacy activists and consumer rights groups.

The case is important not only from a data protection perspective, clarifying the circumstances in which damages can be obtained for breaches of the DPA, but also because it clarifies when “opt-out” class action proceedings can be brought in England and Wales under the Civil Procedure Rules (CPR).

Although the decision appears to stem a potential tide of “opt-out” data breach class actions, importantly, the Supreme Court does point to other ways of formulating such claims which might have succeeded. Data controllers should, therefore, continue to be mindful of their obligations under the DPA and the General Data Protection Regulation (GDPR) to avoid unnecessary litigation risk.

Background

The facts, in brief, relate to Google’s use of advertising cookies to collect data on iPhone users’ internet browsing habits between 2011 and 2012, without those users knowing that the cookies were being used.

Google subsequently sold the data collected through use of the cookies (some of which is alleged to have been sensitive in nature) to third parties for advertising purposes.

The case against Google was brought by Richard Lloyd, a well-known consumer rights activist, as a representative action under CPR 19.6 claiming damages on behalf of all four million iPhone users whose data were obtained by Google during this time.

The claim was unique; it purported to be akin to an ‘opt-out’ consumer class action (something which is not expressly provided for under English law, except in relation to certain competition claims).

Mr Lloyd sought permission from the court to serve the claim on Google outside the jurisdiction. Google responded by seeking to strike out the claim on the basis that it had no real prospect of success. The case proceeded all the way to the UK Supreme Court, with Google successful at first instance and Mr Lloyd successful before the Court of Appeal.

Supreme Court decision

The Supreme Court’s decision centred around two key issues:

  1. Whether the claim could be brought as a representative action.
  2. Whether damages could be awarded to the class under the DPA for Google’s breach of the DPA.

Appropriateness of the representative action

The Supreme Court ruled that it was not permissible for Lloyd to bring a representative action claiming damages on behalf of the class.

The only requirement for a representative action to be brought is that the representative has the same interest in bringing the claim as the persons represented. Here, the Supreme Court considered it conceivable that the class members could have the same interests as Lloyd.

However, the problem was that Lloyd was seeking damages on behalf of the class members on a uniform, lowest common denominator ‘tariff’ basis of £750 per person for loss of control of personal data (roughly £3 billion across the class of four million iPhone users).

The purpose of damages under common law is to put the individual in the same position in which they would have been if the wrong had not been committed. Similarly, section 13 of the DPA gives an individual who suffers damage “by reason of any contravention by a data controller of any of the requirements of this Act” a right to compensation from the data controller for that damage.

The extent of the harm suffered by members of the class would ultimately depend on a range of factors, such as the extent of the tracking carried out by Google in relation to each user and the sensitivity of the information obtained. This would require each class member’s claim for damages to be assessed on an individual basis. Lloyd had therefore failed to meet the ‘same interest’ requirement under CPR 19.6.

Damages under the DPA

Lloyd argued that the class members were entitled to compensation under the DPA on the basis that Google’s breach had resulted in them incurring a “loss of control” of their personal data.

The Supreme Court rejected Lloyd’s argument on the basis that individuals must have suffered damage – in the form of financial loss or distress – as a result of the breach to be entitled to compensation under section 13 of the DPA. It was not possible to construe section 13 as providing individuals with a right to compensation for a controller’s breach of the DPA alone.

Whilst certain members of the class may indeed have suffered such damage as a result of Google’s breach, entitling them to compensation, the way in which the claim was structured (i.e. on a lowest common denominator basis) made it impossible for damages to be awarded under it.

Ongoing litigation risk – what now for data breach class actions?

Although the Supreme Court decision might appear to protect data controllers from litigation risk, we do not consider this to be the case. While Lloyd’s claim failed to meet the ‘same interest’ test, the court highlighted other formulations which would have satisfied the CPR 19.6 requirements.

It pointed to bifurcated or “split” proceedings, where common issues (such as the data controller’s liability) are considered first, with individual issues (such as damages suffered) being considered at a later stage/second trial.

In addition, it is important to note that the Supreme Court’s decision focussed on the DPA 1998, which has since been replaced by the GDPR and the Data Protection Act 2018. Article 82 of the GDPR gives individuals a right to seek compensation for both material and non-material damage (including financial loss and distress) from organisations which breach the data protection rules.

Given that Lloyd’s claim focused on the loss of control of class members’ data (which is ‘non-material’), it may have succeeded had it (i) related to breaches of the GDPR and (ii) proceeded on a bifurcated basis.

Data controllers should, therefore, continue to be mindful of their exposure to potential consumer litigation for breaches under the amended DPA and under the GDPR.

Ultimately, the Supreme Court did not say that Google or other data controllers could not be liable for damage caused to groups of consumers; just that the particular way in which Lloyd sought to bring this particular claim could not work, because of the combination of the terms of the DPA and the CPR.

In other words, it is business as usual for data controllers, and for claimant lawyers investigating and prosecuting group actions on behalf of the victims of data privacy breaches.

The orthodox way to bring a consumer ‘class’ action for a data breach – as an ‘opt-in’ group action, subject to a Group Litigation Order if necessary – remains perfectly valid. While the ‘opt-in’ route is inferior from an access to justice perspective, because of the upfront ‘book-building’ effort it requires, it can still be effective, as shown by the group action against British Airways which settled in July 2021.

Take home points

  1. Data controllers now have more clarity around how damages can be obtained for data protection breaches under the DPA, and this will be welcomed.
  2. The decision does not eliminate the risk of being subject to a class action, as the Supreme Court’s ruling turned on the particular way in which this claim was brought.
  3. Despite the Supreme Court’s decision, a class action remains a viable way of claiming damages for data protection breaches – the focus must be on how the claim is structured and brought.

Contact us

If you have any questions about these issues in relation to your own organisation, please contact a member of the team or speak with your usual Fox Williams contact.

 

Privacy Policies – Do’s and Don’ts following WhatsApp €225m fine

Nigel Miller (partner)
Ben Nolan (associate)

At the beginning of September, WhatsApp was fined €225 million by the Irish Data Protection Commission (“DPC”) for a number of failings relating to its compliance with the GDPR’s transparency obligations (primarily set out in Art. 13 and 14 GDPR). The fine is the second highest handed out under the GDPR to date, and the decision sheds light on some of the key issues to be taken into account when drafting and updating privacy notices.

Many of the practices for which WhatsApp was fined are relatively standard. The decision should, therefore, come as a warning shot for organisations, especially those in the online consumer technology space, to make sure that they are providing individuals with all the required information.

The DPC’s decision is extremely long-winded (266 pages), so we have summarised below the key “do’s” and “don’ts” for privacy notices in light of the decision.

DO’S AND DON’TS

When providing information on the purposes for which you process personal data and the lawful bases upon which such processing is based (as required by Art. 13(1)(c) GDPR):

DO

  • Provide information to individuals around how their personal data is actually used to achieve the relevant purpose. For example, if personal data are processed “to promote safety and security”, you should explain how the data are used to achieve those purposes, rather than simply stating the overall objective.
  • Provide information regarding the categories of personal data which are processed for each purpose. Up until now, it has been relatively common for controllers to simply set out the purposes for which they process personal data and the corresponding lawful basis, without clarifying which types of personal data are required for each purpose.
  • If more than one lawful basis applies in respect of a specific purpose for which you process personal data, clearly specify the circumstances when each basis will apply (for example, if you rely on both consent and legitimate interests to send marketing communications, you should explain when each of these will apply) – see the illustrative structure after this list.
  • Where processing is carried out on the basis of Art. 6(1)(c) GDPR (i.e. to comply with a legal obligation), you should provide information as to the types of law which require such processing to take place.
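
One practical way of keeping on top of this mapping is to maintain an internal record of each purpose, the categories of data it uses and the lawful basis (or bases) relied on, and to build the relevant section of the notice from that record. The sketch below is purely illustrative – the field names and example entries are our own invention, not anything prescribed by the GDPR or the DPC.

```typescript
// Illustrative internal record of processing purposes, lawful bases and data
// categories; field names and example entries are invented for this sketch.
type LawfulBasis = "consent" | "contract" | "legal_obligation" | "legitimate_interests";

interface ProcessingPurpose {
  purpose: string;                 // what is actually done with the data
  dataCategories: string[];        // which categories of personal data are used
  lawfulBases: {
    basis: LawfulBasis;
    appliesWhen: string;           // if several bases apply, when each one is used
    interest?: string;             // the specific legitimate interest relied on, if any
  }[];
}

const marketingEmails: ProcessingPurpose = {
  purpose: "Sending email updates about products similar to those the customer has bought",
  dataCategories: ["name", "email address", "purchase history"],
  lawfulBases: [
    { basis: "consent", appliesWhen: "The customer ticked the marketing opt-in box" },
    {
      basis: "legitimate_interests",
      appliesWhen: "Existing customers are contacted about similar products (soft opt-in)",
      interest: "Promoting our products to existing customers",
    },
  ],
};

// A privacy notice section can then be generated or checked against this record.
console.log(JSON.stringify(marketingEmails, null, 2));
```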

DON’T

  • Use vague wording to explain your purpose for processing the data (e.g. will readers know what you mean if you say that you use their data for the purpose of “improving their experience”?)

When providing information regarding your reliance on legitimate interests (as required by Art. 13(1)(d) GDPR):

DO

  • Be as specific as possible in setting out the relevant interest which makes the processing necessary.
  • If the processing is being carried out based on the legitimate interests of a third party, you should specify the relevant third party who will benefit from the processing.

DON’T

  • Bundle together numerous interests to justify processing being carried out for one purpose.
  • Simply say you rely on legitimate interests to carry out a certain type of processing without mentioning what your interests are (this is more common than you think!).

When providing information on the third parties with which you share personal data (as required by Art. 13(1)(e) GDPR):

DO

  • If you identify the “categories of recipients” (rather than the specific third parties with whom personal information is shared), be as specific as possible when setting out such categories. For example, if your privacy policy says that you share customers’ personal information with service providers, you should provide information on the different types of service providers you share data with (e.g. IT service providers, data hosting service providers, marketing agencies etc.).
  • Identify the categories of data which are transferred to the specific third parties referred to in the notice. (NB. To date, it has been uncommon for controllers to provide this level of information in connection with data sharing.)
  • If you share personal data with other group members, clearly identify the specific entities with which the data is shared.

When providing information on international transfers (as required by Art. 13(1)(f) GDPR):

DO

  • If relying on an adequacy decision(s) to transfer personal data internationally, identify the specific adequacy decision(s) relied upon.
  • Identify the categories of data that are being transferred internationally. (NB. Again, providing this level of information has been uncommon in practice.)

DON’T

  • Use conditional language such as “may” when referring to reliance on a transfer mechanism (e.g. “we may transfer personal data internationally on the basis of an adequacy decision”).

When providing information on the right to withdraw consent (as required by Art. 13(2)(c) GDPR):

DO

  • Inform individuals that this does not affect the lawfulness of processing based on consent before its withdrawal (the DPC considers this necessary to “manage the data subject’s expectations” and ensure they are fully informed on the right).
  • Include the relevant information in the section of the privacy notice which discusses data subject rights, as this is the area individuals are most likely to consult for information around this.

If you have collected personal data indirectly but are exempt from providing relevant individuals with a privacy notice on the basis that this would involve “disproportionate effort”:

DO

  • Make sure that you still provide all the information required under Art. 14(1) and (2) in a privacy notice which you make publicly available – you can’t rely on this exemption if not!
  • Clearly identify in the privacy notice the parts of the document which are intended to apply in respect of individuals who have not been provided the privacy notice directly.

DON’T

  • Assume that posting your privacy notice on your website will be sufficient to satisfy the requirement that the privacy notice be made “publicly available”. In the WhatsApp decision, the DPC noted that:

“WhatsApp should give careful consideration to the location and placement of such a public notice so as to ensure that it is discovered and accessed by as wide an audience of non-users as possible. [A]…non-user is unlikely to have a reason to visit WhatsApp’s website of his/her own volition such that he/she might discover the information which he/she is entitled to receive”.

OTHER COMMENTS

Much of the DPC’s decision focused on the way in which WhatsApp presented information in its privacy notice, with WhatsApp being found to have violated Art. 12(1) GDPR (which requires controllers to provide information in a concise, transparent, intelligible and easily accessible form, using clear and plain language) in numerous instances. In this regard, the following practical tips can be drawn from the decision:

  • Avoid excessive linking to external documents in your privacy notice, particularly where these duplicate or (even worse) contradict information set out in your privacy notice or elsewhere. Readers should not have to “work hard” to get to grips with the notice.
  • Consider where in your privacy notice you set out each piece of information, to ensure it is presented in a cohesive way and in the place readers would expect. For example, the DPC considered that it would be logical to include information on the right to withdraw consent and the right to complain to a data protection regulator in the “data subject rights” section of WhatsApp’s privacy notice, as this is where most readers would come to find this information.
  • Avoid using vague and opaque language.

CONCLUSION

The DPC expects the information to be provided in privacy notices to be extremely granular, even more so than most organisations (and even data protection practitioners) would have expected to date, whilst still presenting the information in a concise and accessible manner. This will no doubt prove challenging for larger organisations carrying out complex processing operations, who will have to remain fully on top of their processing activities and data flows to stand a chance of providing the information expected by the DPC. The cost of compliance could be significant.

The decision is by an EU data protection regulator and relates to EU GDPR. It is not clear whether the UK ICO, which tends to be more pragmatic on data protection compliance, would take such a hard-line stance on the issues investigated by the DPC. However, it is clear that UK organisations that have a presence in the EU or are otherwise caught by the extra-territorial scope of the EU GDPR will need to update their privacy notices in line with the DPC’s decision.

 

If you have any questions about these issues in relation to your own organisation, please contact a member of the team or speak with your usual Fox Williams contact.

 

Disruption in AdTech: where are we and what next?

Kolvin Stone (partner)
Ben Nolan (associate)

The AdTech industry is facing its biggest overhaul since its inception, which will inevitably have an impact on the wider web ecosystem, as so much content and so many services are funded by advertising revenue.

AdTech is currently heavily premised on the concept of delivering personalised ads to users. This is achieved through the use of technologies such as cookies and mobile advertising identifiers.

The impact of the GDPR and similarly inspired regulations, the tightening grip of regulators and, in some ways even more significantly, the recent action by two of the industry’s biggest players, Apple and Google, have left the industry in a state of flux.

We discuss recent developments below and look at what’s next for more privacy friendly AdTech.

New regulations and regulatory action

Following the GDPR, new privacy laws are being developed in jurisdictions across the globe, and many of these specifically regulate online advertising. Notably, in the US, California has introduced the CCPA and CPRA, and similar privacy laws are expected in various other US states in the near future. Further changes to the ePrivacy landscape are also coming to the EU soon.

In the UK, regulatory action is on the cards, with the ICO currently investigating the AdTech industry. It is expected that industry participants will need to make significant changes to their practices following the conclusion of the ICO’s investigation and expected enforcement action.

Apple’s new operating system

In April, Apple rolled out a new operating system, iOS 14.5, which prevents mobile applications from using IDFAs (unique advertising IDs attributed to iPhones) and other device identifiers to track users’ app and internet browsing activities for marketing purposes, unless the user has provided consent to such tracking.

This change affects iPhone users worldwide, and early statistics suggest that a large proportion of users are taking advantage of the option to opt out of being tracked.

Google Chrome and the Removal of the Third-Party Cookie

At the browser level, Google has announced that it will block all third party cookies in early 2022 (all other major browser providers have already phased out these cookies).

Third-party cookies have traditionally been relied on to track users’ internet browsing activities across websites to build up a profile of that user.  This information is then shared within the AdTech ecosystem to ensure that businesses are able to deliver targeted ads to users.
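
To make the mechanics concrete, the sketch below (TypeScript, for Node.js) shows how a third-party cookie lets an ad server link visits to unrelated sites into a single profile. The domain, endpoint and cookie name (ads.example, /pixel, track_id) are invented for illustration; this is not a real AdTech implementation.

```typescript
// Minimal sketch of a hypothetical ad server: the same cookie is sent back whenever
// any page embeds a resource from this ad domain, letting the server link visits
// to different sites together into one profile.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

const profiles = new Map<string, string[]>(); // tracking id -> pages visited

createServer((req, res) => {
  // The embedding page passes its own URL, e.g. /pixel?page=https://news.example
  const page = new URL(req.url ?? "/", "https://ads.example").searchParams.get("page") ?? "unknown";

  // Re-use the tracking id from the cookie if the browser sent one, otherwise mint a new one.
  const match = /track_id=([^;]+)/.exec(req.headers.cookie ?? "");
  const id = match?.[1] ?? randomUUID();

  profiles.set(id, [...(profiles.get(id) ?? []), page]);

  // SameSite=None; Secure is what allows the cookie to travel in third-party contexts;
  // this is precisely the behaviour browsers are now restricting or blocking.
  res.setHeader("Set-Cookie", `track_id=${id}; SameSite=None; Secure; Max-Age=31536000`);
  res.end();
}).listen(8080);
```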

However, it has become extremely difficult to use third-party cookies lawfully for tracking and advertising, given the high standards of transparency and consent required by privacy regulations such as the GDPR.

This is the context in which Google has decided to phase out third-party cookies.

What next for AdTech?

Although it is too soon to say for sure what these changes will mean for companies in the AdTech space, we have set out some likely consequences below:

  • Cookie-less advertising – businesses are developing advertising strategies that do not rely on cookies. For example, Google has begun trialling its proposed alternative, “Federated Learning of Cohorts” (FLoC), where ads are delivered to categories of users rather than to specific individuals (see the illustrative sketch after this list).
  • First party data advertising – based on information collected directly from the user or via interactions with your site or App.
  • Resurgence of contextual advertising? – this type of advertising, which fell out of favour following the rise of behavioural advertising, displays ads to users relating to the content of the page being viewed, rather than being targeted at specific users.
  • Incentives to sharing data? – it is possible that some businesses may offer incentives to customers who agree to their data being used for advertising purposes.
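
As an illustration of the cohort-based model referred to above, the sketch below shows how an ad might have been selected using the interestCohort() API exposed during Chrome’s FLoC origin trial. The API was experimental and only available to pages enrolled in the trial, and the cohort ids and ad slot names used here are invented.

```typescript
// Illustrative only: cohort-based ad selection using the FLoC origin-trial API.
// Cohort ids and ad slot names below are invented for this example.
const adsByCohort: Record<string, string> = {
  "21354": "hiking-boots-banner",
  "40127": "city-breaks-banner",
};

async function pickAd(): Promise<string> {
  // Feature-detect the trial-only API; fall back to an untargeted ad if it is absent.
  const floc = (document as any).interestCohort as
    | (() => Promise<{ id: string; version: string }>)
    | undefined;
  if (!floc) return "untargeted-house-ad";

  // The browser derives the cohort locally from browsing history; the page only
  // ever sees a cohort id shared with thousands of other users, never the history.
  const { id } = await floc.call(document);
  return adsByCohort[id] ?? "untargeted-house-ad";
}

pickAd().then((ad) => console.log("serving:", ad));
```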

What does this mean?

If your business model is based on advertising revenue, you need to review whether your ad partners are using third-party cookies, as there is likely to be legal risk in continuing to rely on them. In addition, now is the time to consider moving to more privacy-friendly AdTech models.

 

If you have any questions about these issues in relation to your own organisation, please contact a member of the team or speak with your usual Fox Williams contact.

Do B2B companies not based in the EU need to comply with the GDPR?

Kolvin Stone (partner)

I’ve long questioned the extraterritorial scope of the EU General Data Protection Regulation and if non-EU based organizations that engage solely in business-to-business activities fall under the GDPR.

The GDPR is at best ambiguous on this issue, and the guidance published to date from the regulators is unhelpful.

This issue has been brought into focus because of Brexit and the numerous inquiries I’ve received about whether U.K. B2B companies (with no physical presence in the EU) need to appoint an EU representative (and comply with the GDPR more generally in the EU).

The point has been raised by the privacy activist organization founded by Max Schrems (NOYB – European Center for Digital Rights), which stated in its submission in December 2020 on the European Commission’s proposed new standard contractual clauses that further guidance is needed to clarify the scope of the requirement to appoint an EU representative.

What is the issue in a nutshell?

Article 3(2)(a) of the GDPR states controllers and processors not based in the EU are subject to the GDPR where they process personal data of individuals in the EU in the course of offering goods or services to those individuals.

So, a U.K.-based clothing retailer selling items to an individual in France needs to comply with the GDPR. Makes sense as the retailer could be collecting a fair amount of information about the individual, including name, address, payment information and possibly some profile data.

But what happens if the U.K.-based retailer is selling to a company and only collecting business contact details in that context? It is not offering goods to an individual but a company. Does that mean the GDPR does not apply?

Interpretation of Article 3(2)(a)

On a literal reading of Article 3(2)(a), the answer must be yes. The B2B retailer is not offering goods to an individual. The European Data Protection Board has published guidance to help clarify the scope of Article 3(2)(a), but all of the examples relate to business-to-consumer scenarios. Not helpful at all.

The EDPB could have taken the opportunity to make clear that Article 3(2)(a) also applies to B2B scenarios, and individuals should be read as individuals acting on behalf of companies. It did not do this, and I’m not sure why.

Is that an implicit recognition that Article 3(2)(a) may not apply to B2B scenarios? It would be somewhat of an anomaly for personal information collected in the context of a B2B transaction to be subject to the GDPR where you have an establishment in the EU, but out of scope where you do not. And what about protecting the privacy rights of individuals at companies, who are clearly entitled to protection?

Unfair advantage

It would create somewhat of an unfair advantage where you sell into the EU but are based outside of it. The GDPR and the extraterritoriality provisions were intended to level the playing field to ensure non-EU based technology businesses were also subject to the GDPR when active in the EU. Recognizing this, it is hard to justify an interpretation that excludes B2B transactions for non-EU based businesses.

There is no getting away from the fact that Article 3(2)(a) only refers to individuals and the EDPB guidance highlights B2C transactions.

While it seems odd to distinguish between B2B and B2C in this way, the distinction is well established (even if controversial) in the U.K., where B2B communications (e.g., to corporate email accounts) fall outside the opt-in consent requirements of the Privacy and Electronic Communications Regulations 2003; only B2C communications (e.g., to private email accounts) require opt-in consent. There is therefore precedent for applying different standards depending on whether the processing of personal data is in the context of B2B or B2C transactions.

Purposive and pragmatic interpretation

For my part, while Article 3(2)(a) is ambiguous, I’ve always worked on the basis that non-EU based organizations that engage solely in B2B activities are within the scope of the GDPR, although I have often had clients query this and highlight the fact that they are not selling to individuals.

With Brexit having occurred, clarity is important as U.K. businesses need to know as a matter of urgency the scope of their obligations as there is a real cost to having to appoint an EU representative.

The U.K. Information Commissioner’s Office has no clear official position on this issue and there are mixed messages on whether an EU representative is needed when the activities are pure B2B.

Scope for a UK approach

In September, the U.K. government published a consultation document on a new National Data Strategy with laudable goals to “build a world-leading data economy” with laws that are “not too burdensome” and “a data regime that is neither unnecessarily complex nor vague.”

In this context, is there scope for the U.K. to develop a different and more business-friendly interpretation of the GDPR? The U.K. courts and lawyers have historically taken a more literal approach to interpretation than their EU counterparts; hence, my EU peers do not necessarily see the same issue with Article 3(2)(a). If the U.K. adopted a more literal interpretation of Article 3(2)(a), that might reduce some regulatory friction to trade with the U.K., as it would mean non-U.K.-based B2B businesses would not need to appoint a U.K. representative.

That, though, does not help the many U.K.-based businesses that are asking whether they now need to appoint an EU representative. Clarity from regulators would be extremely welcome.

 

If you have any questions about these issues in relation to your own organisation, please contact a member of the team or speak with your usual Fox Williams contact.

EU proposes new Regulation on AI

Sian Barr (senior associate)

Introduction

On 21 April 2021, the European Commission published its proposal for a new Regulation on Artificial Intelligence (“AI”) (the “AI Regulation”). When it comes into force, the AI Regulation will be the first ever comprehensive regulatory regime for the use of AI. It adopts a risk-based approach: the requirements differ according to the level of risk that a technology carries.

The AI Regulation is promoted as having EU values at its core, with a focus on protecting safety, quality and the rights of individuals. This can be contrasted with other major global AI markets, notably the US and China.

The EU has form for developing regulations of this nature: in the privacy world, the GDPR has been a great success in improving and protecting individuals’ rights with respect to their data privacy, although this has come at considerable cost to businesses. There are some features of the AI Regulation which will be familiar from the GDPR (e.g. extra-territorial reach and scarily high fines). Indeed, businesses which develop or employ AI will be able to draw on their experience of implementing a GDPR compliance programme, when designing a similar programme for AI Regulation compliance. In this way, while the AI Regulation could be seen as a headache for AI developers and users, it can also be viewed as an opportunity to build trust with stakeholders and members of the public alike, in the context of technologies that can often be viewed with suspicion.

Which technology does the AI Regulation cover?

The AI Regulation applies to the use of any AI system defined as:

Software that is developed with one or more of the following techniques and approaches:

  • machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
  • logic and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
  • statistical approaches, Bayesian estimation, search and optimisation methods;

and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

The proposed definition of AI is wide and could potentially catch software which might not usually be considered to be AI, particularly in the field of search and optimisation software.

The AI Regulation will not apply to AI that is already on the market at the time the AI Regulation comes into effect (so-called ‘legacy AI’) until the AI is repurposed or substantially modified. There are other exemptions relating to public, government or military systems.

Who does the AI Regulation apply to?

Providers: you will be a ‘provider’ under the AI Regulation if you:

  • develop an AI system;
  • put an AI system on the market under your own name or trade mark;
  • modify the intended purpose of an AI system; or
  • make a substantial modification to an AI system.

Providers have the most obligations under the AI Regulation.

An Importer will be an EU entity that puts on the market an AI system that bears the name or trade mark of an entity established outside the EU.

A Distributor will be any other entity in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market without changing it: e.g. a reseller.

Users: all other business (non-consumer) users of an AI system.

In which countries will the AI Regulation apply?

The AI Regulation applies to:

  • providers placing on the market or putting into service AI systems in the EU for the first time, irrespective of where those providers are established;
  • users of AI systems located within the EU;
  • providers and users of AI systems that are located in a third country, where the output produced by the system is used in the EU.

Following Brexit, the AI Regulation will not automatically apply in the UK, but it is likely to influence any future UK regulation of AI. Also, due to the extraterritorial application of the AI Regulation, it will effectively apply to all UK businesses with end users in the EU.

What does the AI Regulation say?

In accordance with the risk-based approach, the AI Regulation differentiates between AI technologies by separating them into four categories: unacceptable risk, high risk, limited risk and minimal risk. We summarise some of the key provisions of the AI Regulation below.

Unacceptable Risk AI

Which AI systems are affected?

  • AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour and that cause or are likely to cause that person or another person physical or psychological harm.
  • AI systems that exploit any of the vulnerabilities of a specific group of persons due to their age or physical or mental disability, in order to materially distort the behaviour of a person within that group and that cause or are likely to cause that person or another person physical or psychological harm.
  • Social scoring by or on behalf of public authorities in certain circumstances.
  • ‘Real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (subject to exceptions).

Restrictions: all unacceptable risk AI systems are prohibited.

Penalties: fine of up to EUR 30m or 6% of worldwide annual turnover (whichever is higher).

High risk AI

Which AI systems are affected?

  • Biometric identification and categorisation of individuals – real-time and post remote biometric identification of individuals.
  • Management and operation of critical infrastructure – safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.
  • Education and vocational training – assigning people to schools and other educational or training settings; student testing.
  • Employment, workers management and access to self-employment – recruitment, screening or filtering applications and evaluating candidates in interviews or tests; making decisions on promotion and termination of employment, on task allocation and on monitoring and evaluating performance and behaviour.
  • Access to and enjoyment of essential private services and public services and benefits – use by public authorities to evaluate people’s eligibility for public benefits and services; evaluation of the creditworthiness of people or establishing their credit score (except AI systems put into service by small scale providers for their own use); dispatching, or establishing priority in the dispatching of, emergency first response services, including by firefighters and ambulance services.
  • Law enforcement – various types of AI systems fall within this category, including polygraphs and systems assessing the risk of offending/reoffending.
  • Migration, asylum and border control management – various types of AI systems fall within this category, including polygraphs, assessment of security risks and assessment of asylum and visa applications.
  • Administration of justice and democratic processes – assisting a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.
  • Safety components of products, or products in their own right, covered by certain EU product safety rules where those rules require the product to undergo a third-party conformity assessment – machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices and in vitro diagnostic medical devices, aviation, agricultural and forestry vehicles, two- or three-wheel vehicles and quadricycles, marine equipment, rail systems, motor vehicles, trailers and parts.

What are the obligations/restrictions?

  • Risk management system – to be implemented and maintained as a continuous iterative process run throughout the entire lifecycle of a high-risk AI system.
  • Data and data governance – techniques involving the training of models with data must be developed on the basis of training, validation and testing data sets that meet certain quality criteria.
  • Technical documentation – must demonstrate the system’s compliance with the high-risk AI requirements of the AI Regulation; to be drawn up before the system is placed on the market or put into service and kept up to date.
  • Record-keeping (logs) – the system must have the capability to keep logs while it is operating, to ensure traceability.
  • Transparency and provision of information to users – operation must be sufficiently transparent to enable users to interpret the system’s output and use it appropriately; a list of mandatory information must be provided.
  • Human oversight – the system must be designed and developed in such a way that it can be effectively overseen by humans while it is in use, including with appropriate human-machine interface tools.
  • Accuracy, robustness and cybersecurity – must be appropriate to the system’s intended purpose, and the system must perform consistently.
  • Registration – standalone AI systems to be registered in an EU register.
  • Ongoing monitoring and reporting – serious incidents to be reported.

Who is responsible for compliance of high-risk AI systems?

  • Providers of the system – providers have overall responsibility for compliance with the above requirements.
  • Product manufacturers (where a high-risk AI system is used or sold with a product listed in the Annex to the AI Regulation) – the product manufacturer has the same obligations as a provider.
  • Importers of an AI system – responsible for checking that the system conforms to the requirements of the AI Regulation; notification obligations if the system presents certain risks; must appoint an authorised representative in the EU to carry out certain compliance obligations.
  • Distributors – responsible for checking that the provider or importer has complied with the AI Regulation; notification obligations if the system presents certain risks; obligation to take corrective action if the system does not conform.
  • Users – must use the system in accordance with its instructions for use; if the user controls input data, it must be relevant to the intended purpose; must monitor the system for risks, notify accordingly and stop using the system if a risk occurs; keep logs if under their control; and carry out a data protection impact assessment.

Penalties

  • Fine of up to EUR 30m or 6% of worldwide annual turnover (whichever is higher) for breach of the data and data governance obligations.
  • Fine of up to EUR 20m or 4% of worldwide annual turnover (whichever is higher) for breach of any other obligations under the AI Regulation.
  • Fine of up to EUR 10m or 2% of worldwide annual turnover (whichever is higher) for supply of incorrect, incomplete or misleading information to authorities.

Limited risk AI systems

Which AI systems are affected? AI systems intended to interact with natural persons, emotion recognition systems, biometric categorisation systems and systems producing deep fakes (with exceptions for systems used in policing/criminal justice).

What are the obligations? Transparency obligations.

Penalties: fine of up to EUR 20m or 4% of worldwide annual turnover (whichever is higher).

Minimal risk AI systems

Which AI systems are affected? All other AI systems.

What are the obligations? None.

 

Implications for business

The AI Regulation is still in draft form and has a long way to go before it potentially bites. Then, once it has finished the EU’s legislative process, there will be a grace period of two years. This means that the AI Regulation is unlikely to apply until at least 2024.

That said, given the likely cost to business of compliance with the new regime, it would be prudent for businesses to take the AI Regulation into account as early as possible, while acknowledging that some provisions may change as the draft AI Regulation evolves.

Any business employing a high-risk AI system in its products or services should pay particular attention to the provisions on data and data governance, as breach of these requirements carries the highest possible penalty and is accordingly likely to be high on the regulator’s list of compliance checks.

  

[This note is intended as a high level introduction to the AI Regulation. We will be producing a series of notes about the draft AI Regulation, focussing on specific areas or developments of the AI Regulation over the coming months.]