Lloyd v Google class action denied: what now for data breach class actions?

Kolvin Stone (partner)
Ben Nolan (associate)

The Supreme Court has issued its long-awaited ruling in the Lloyd v Google case, overturning the Court of Appeal’s 2019 decision, which had granted permission for ‘opt-out class action’ proceedings relating to Google’s alleged breach of the (old) Data Protection Act 1998 (“DPA”) to be served on Google in the USA.

The Supreme Court ruled that the claim had no real prospect of success, reversing the grant of permission to serve. The decision will likely be well received by businesses but disappoint privacy activists and consumer rights groups.

The case is important not only from a data protection perspective, in that it clarifies the circumstances in which damages can be obtained for data protection breaches under the DPA, but also because it clarifies when “opt-out” class action proceedings can be brought in England and Wales under the Civil Procedure Rules (CPR).

Although the decision appears to stem a potential tide of “opt-out” data breach class actions, importantly, the Supreme Court does point to other formulations of claim which might have succeeded. Data controllers should, therefore, continue to be mindful of their obligations under the DPA and the General Data Protection Regulation (GDPR) to avoid unnecessary litigation risk.

Background

The facts, in brief, relate to Google’s use of advertising cookies to collect data on iPhone users’ internet browsing habits between 2011 and 2012, without those individuals having any knowledge that the cookies were being used.

Google subsequently sold the data collected through use of the cookies (some of which is alleged to have been sensitive in nature) to third parties for advertising purposes.

The case against Google was brought by Richard Lloyd, a well-known consumer rights activist, as a representative action under CPR 19.6 claiming damages on behalf of all four million iPhone users whose data were obtained by Google during this time.

The claim was unique; it purported to be akin to an ‘opt-out’ consumer class action (something which is not expressly provided for under English law, except in relation to certain competition claims).

Mr Lloyd sought permission from the court to serve Google outside the jurisdiction. Google responded by seeking to strike out the claim on the basis that it had no real prospect of success. The case proceeded all the way to the UK Supreme Court, with Google successful at first instance and Mr Lloyd successful before the Court of Appeal.

Supreme Court decision

The Supreme Court’s decision centred around two key issues:

  1. Whether the claim could be brought as a representative action.
  2. Whether damages could be awarded to the class under the DPA for Google’s breach.

Appropriateness of the representative action

The Supreme Court ruled that it was not permissible for Lloyd to bring a representative action claiming damages on behalf of the class.

The only requirement for a representative action is that the representative has the same interest in bringing the claim as the persons represented. Here, the Supreme Court considered it conceivable that the class members could have the same interest as Lloyd.

However, the issue stemmed from the fact that Lloyd was seeking damages on behalf of the class members on a uniform, lowest common denominator ‘tariff’ basis (£750 per person, for loss of control of personal data).

The purpose of damages under common law is to put the individual in the same position in which they would have been if the wrong had not been committed. Similarly, section 13 of the DPA gives an individual who suffers damage “by reason of any contravention by a data controller of any of the requirements of this Act” a right to compensation from the data controller for that damage.

The extent of the harm suffered by members of the class would ultimately depend on a range of factors, such as the extent of the tracking carried out by Google in relation to each user and the sensitivity of the information obtained. This would require each class member to have their claim for damages assessed on an individual basis. Lloyd had therefore failed to meet the ‘same interest’ requirement under CPR 19.6.

Damages under the DPA

Lloyd argued that the class members were entitled to compensation under the DPA on the basis that Google’s breach had resulted in them incurring a “loss of control” of their personal data.

The Supreme Court rejected Lloyd’s argument on the basis that individuals must have suffered material damage (i.e. financial loss or distress) to be entitled to compensation under section 13 of the DPA. It was not possible to construe section 13 of the DPA as providing individuals with a right to obtain compensation on the basis of a controller’s breach of the DPA alone.

Whilst certain members of the class may indeed have suffered material damage as a result of Google’s breach, entitling them to compensation, the way in which the claim was structured (i.e. on a lowest common denominator basis) made it impossible for damages to be awarded.

Ongoing litigation risk – what now for data breach class actions?

Although the Supreme Court decision might appear to protect data controllers from litigation risk, we do not consider this to be the case. While Lloyd’s claim failed the ‘same interest’ test, the court highlighted other formulations which could have satisfied the CPR 19.6 requirements.

It pointed to bifurcated or “split” proceedings, where common issues (such as the data controller’s liability) are considered first, with individual issues (such as damages suffered) being considered at a later stage/second trial.

In addition, it is important to note that the Supreme Court’s decision focussed on the DPA 1998, which has been replaced by the GDPR and Data Protection Act 2018. Article 82 of the GDPR introduced an individual’s right to seek compensation for material/non-material damage (including financial loss and distress) from organisations breaching the data protection rules.

Given that Lloyd’s claim focused on the loss of control of class members’ data (which is ‘non-material’), it may have succeeded had it (i) related to breaches of the GDPR and (ii) proceeded on a bifurcated basis.

Data controllers should, therefore, continue to be mindful of their exposure to potential consumer litigation for breaches under the amended DPA and under the GDPR.

Ultimately, the Supreme Court did not say that Google or other data controllers could not be liable for damage caused to groups of consumers; just that the particular way in which Lloyd sought to bring this claim could not work, because of the combination of the terms of the DPA and the CPR.

In other words, it is business as usual for data controllers, and for claimant lawyers investigating and prosecuting group actions on behalf of the victims of data privacy breaches.

The orthodox way to bring a consumer ‘class’ action for data breach – as an ‘opt-in’ group action, subject to a Group Litigation Order if necessary – remains perfectly valid. While the ‘opt-in’ route is inferior from an access to justice perspective, because of the upfront ‘book-building’ effort it requires, it can still be effective, as shown by the group action brought against British Airways which settled in July 2021.

Take home points

  1. Data controllers now have more clarity around how damages can be obtained for data protection breaches under the DPA, and this will be welcomed.
  2. This does not eliminate the risk of being subject to a class action, as the Supreme Court’s decision was based solely on the facts of this specific case.
  3. Despite the Supreme Court’s decision, a class action remains a fully viable way of claiming damages in relation to data protection breaches – but the focus must be on how the claim is brought and structured.

Contact us

If you have any questions about these issues in relation to your own organisation, please contact a member of the team or speak with your usual Fox Williams contact.

 

Privacy Policies – Do’s and Don’ts following WhatsApp €225m fine

Nigel Miller (partner)
Ben Nolan (associate)

At the beginning of September, WhatsApp was fined €225 million by the Irish Data Protection Commissioner (“DPC”) for a number of failings related to its compliance with the GDPR’s transparency obligations (primarily set out in Art. 13 and 14 GDPR).  The fine is the second highest handed out under the GDPR to date and the decision sheds light on some of the key issues to be taken into account when drafting and updating privacy notices.

Many of the practices for which WhatsApp was fined are relatively standard. The decision should, therefore, come as a warning shot for organisations, especially those in the online consumer technology space, to make sure that they are providing individuals with all the required information.

The DPC’s decision is extremely long-winded (266 pages), so we have summarised below the key “do’s” and “don’ts” for privacy notices in light of the decision.

DO’S AND DON’TS

When providing information on the purposes for which you process personal data and the lawful bases upon which such processing is based (as required by Art. 13(1)(c) GDPR):

DO

  • Provide information to individuals around how their personal data is actually used to achieve the relevant purpose. For example, if personal data are processed “to promote safety and security”, you should explain how the data are used to achieve those purposes, rather than simply stating the overall objective.
  • Provide information regarding the categories of personal data which are processed for each purpose. Up until now, it has been relatively common for controllers to simply set out the purposes for which they process personal data and the corresponding lawful basis, without clarifying which types of personal data are required for each purpose.
  • If more than one lawful basis applies in respect of a specific purpose for which you process personal data, clearly specify the circumstances when each basis will apply (for example, if you rely on both consent and also legitimate interests to send marketing communications, you should explain when each of these will apply).
  • Where processing is carried out on the basis of Art. 6(1)(c) GDPR (i.e. to comply with a legal obligation), you should provide information as to the types of law which require such processing to take place.

DON’T

  • Use vague wording to explain your purpose for processing the data (e.g. will readers know what you mean if you say that you use their data for the purpose of “improving their experience”?)

When providing information regarding your reliance on legitimate interests (as required by Art. 13(1)(d) GDPR):

DO

  • Be as specific as possible in setting out the relevant interest which applies and which makes the processing necessary.
  • If the processing is being carried out based on the legitimate interests of a third party, you should specify the relevant third party who will benefit from the processing.

DON’T

  • Bundle together numerous interests to justify processing being carried out for one purpose.
  • Simply say you rely on legitimate interests to carry out a certain type of processing without mentioning what your interests are (this is more common than you think!).

When providing information on the third parties with which you share personal data (as required by Art. 13(1)(e) GDPR):

DO

  • If you identify the “categories of recipients” (rather than the specific third parties with whom personal information is shared), be as specific as possible when setting out such categories. For example, if your privacy policy says that you share customers’ personal information with service providers, you should provide information on the different types of service providers you share data with (e.g. IT service providers, data hosting service providers, marketing agencies etc.).
  • Identify the categories of data which are transferred to the specific third parties referred to in the notice. (NB. To date, it is uncommon for controllers to provide this level of information in connection with data sharing.)
  • If you share personal data with other group members, clearly identify the specific entities with which the data is shared.

When providing information on international transfers (as required by Art. 13(1)(f) GDPR):

DO

  • If relying on one or more adequacy decisions to transfer personal data internationally, identify the specific decision(s) relied upon.
  • Identify the categories of data that are being transferred internationally. (NB. Again, providing this level of information has been uncommon in practice.)

DON’T

  • Use conditional language such as “may” when referring to reliance on a transfer mechanism (e.g. “we may transfer personal data internationally on the basis of an adequacy decision”).

When providing information on the right to withdraw consent (as required by Art. 13(2)(c) GDPR):

DO

  • Inform individuals that this does not affect the lawfulness of processing based on consent before its withdrawal (the DPC considers this necessary to “manage the data subject’s expectations” and ensure they are fully informed on the right).
  • Include the relevant information in the section of the privacy notice which discusses data subject rights, as this is the section individuals are most likely to consult for this information.

If you have collected personal data indirectly but are exempt from providing relevant individuals with a privacy notice on the basis that this would involve “disproportionate effort”:

DO

  • Make sure that you still provide all the information required under Art. 14(1) and (2) in a privacy notice which you make publicly available – you can’t rely on this exemption if not!
  • Clearly identify in the privacy notice the parts of the document which are intended to apply in respect of individuals who have not been provided the privacy notice directly.

DON’T

  • Assume that posting your privacy notice on your website will be sufficient to satisfy the requirement that the privacy notice be made “publicly available”. In the WhatsApp decision, the DPC noted that:

“WhatsApp should give careful consideration to the location and placement of such a public notice so as to ensure that it is discovered and accessed by as wide an audience of non-users as possible. [A]…non-user is unlikely to have a reason to visit WhatsApp’s website of his/her own volition such that he/she might discover the information which he/she is entitled to receive”.

OTHER COMMENTS

Much of the DPC’s decision focused on the way in which WhatsApp presented information in its privacy notice, with WhatsApp being found to have violated Art. 12(1) GDPR (which requires controllers to provide information in a concise, transparent, intelligible and easily accessible form, using clear and plain language) in numerous instances. In this regard, the following practical tips can be drawn from the decision:

  • Avoid excessive linking to external documents in your privacy notice, particularly where these duplicate or (even worse) contradict information set out in your privacy notice or elsewhere. Readers should not have to “work hard” to get to grips with the notice.
  • Consider where in your privacy notice you set out each piece of information, to ensure it is presented in a cohesive way and in the place readers would expect. For example, the DPC considered it logical to include information on the right to withdraw consent and the right to complain to a data protection regulator in the “data subject rights” section of WhatsApp’s privacy notice, as this is where most readers would look for it.
  • Avoid using vague and opaque language.

CONCLUSION

The DPC expects the information to be provided in privacy notices to be extremely granular, even more so than most organisations (and even data protection practitioners) would have expected to date, whilst still presenting the information in a concise and accessible manner. This will no doubt prove challenging for larger organisations carrying out complex processing operations, who will have to remain fully on top of their processing activities and data flows to stand a chance of providing the information expected by the DPC. The cost of compliance could be significant.

The decision is by an EU data protection regulator and relates to EU GDPR. It is not clear whether the UK ICO, which tends to be more pragmatic on data protection compliance, would take such a hard-line stance on the issues investigated by the DPC. However, it is clear that UK organisations that have a presence in the EU or are otherwise caught by the extra-territorial scope of the EU GDPR will need to update their privacy notices in line with the DPC’s decision.

 

If you have any questions about these issues in relation to your own organisation, please contact a member of the team or speak with your usual Fox Williams contact.

 

New – Standard Contractual Clauses!

Nigel Miller (partner)

Background

Standard Contractual Clauses (SCCs) are the most commonly used mechanism to authorise transfers of personal data from the EEA. The attraction is that they are relatively straightforward and cost-effective to implement. The problem is that the current versions are hopelessly out of date and, given that they are often simply signed and “left in the drawer”, don’t do a convincing job of protecting personal data.

It was always the intention to update them to reflect the GDPR. However, two years on from GDPR go-live in May 2018, the old versions of the SCCs are still very much in use in the absence of alternative solutions. Then, in July 2020, along came the decision of the European Court of Justice (“ECJ”) in Schrems II, which shook up the world of international data transfers.

Schrems II

The Schrems II decision has two main consequences.  First, the ECJ found that the EU-US Privacy Shield – like its predecessor, the Safe Harbor – is invalid as a transfer mechanism.  Second, although the validity of SCCs was upheld, the ECJ stressed that simply signing off the SCCs will not always be sufficient. The ECJ said that the parties to the SCCs need to:

  • carry out a transfer impact assessment as to whether there is adequate protection for data in the country concerned; and
  • if necessary, implement “supplementary measures” to ensure that individuals have equivalent protections in respect of their data as afforded under EU law.

On 11 November 2020 the European Data Protection Board (“EDPB”) issued for consultation its much-awaited guidance on these issues. This sets out the steps data exporters must take to determine if they need to put in place supplementary measures to be able to transfer data outside the EEA, and provides examples of measures that can be used.

New SCCs

Then, the very next day and barely noticed, the European Commission published its proposals for the new SCCs. There is a relatively short consultation period on the proposed new SCCs, expiring on 10 December 2020.  Once the new SCCs are approved, probably before the end of the year, we’ll have 12 months in which to replace all existing SCCs with the new ones. And this is far from a form-filling or box-ticking exercise.

We’ve taken a look at the proposed new SCCs and find some interesting developments:

  • The SCCs adopt a modular approach to cater for various transfer scenarios. They can be used for transfers from (i) controllers to other controllers, (ii) controllers to processors, (iii) processors to sub-processors and (iv) processors to controllers. This is helpful, as the current SCCs do not cater for scenarios (iii) or (iv), which is problematic.
  • While the current SCCs can only be used by EU-based controllers, the new SCCs can also be used by parties outside the EU who may be subject to the GDPR by virtue of its extraterritorial reach.
  • They allow for more than two parties to sign up to the SCCs, which can be useful (for example) for intra-group transfers.
  • They also allow for additional parties to accede to the clauses from time to time as exporters or importers.  For example, onward transfers by the importer to a recipient in another third country can be allowed if the recipient accedes to the SCCs.
  • Data subjects must be able to enforce the SCCs as third party beneficiaries. As such, the SCCs must be governed by a law that allows for third party beneficiary rights.
  • For transparency purposes, data subjects should be provided with a copy of the SCCs and should be informed of any change of the identity of any third party to which the personal data is disclosed.
  • In respect of transfers by a controller to a processor, or by a processor to a sub-processor, the SCCs comply with the data processing requirements of the GDPR so that it will no longer be necessary to supplement the SCCs with data processing clauses.
  • The SCCs support EU processors by allowing for the transfer by an EU processor to a controller in a third country, reflecting the limited self-standing obligations of processors under the GDPR.
  • The SCCs have also been written with Schrems II in mind and provide for certain specific safeguards. The exporter must warrant that it has used reasonable efforts to determine that the importer is able to satisfy its obligations under the clauses and must document its transfer impact assessment. In the event that, for example, the importer is subject to a legal requirement to disclose data to a government or law enforcement agency, the importer must notify the exporter and, where possible, challenge the request. The data exporter may be required to suspend the data transfers if it considers that no appropriate safeguards can be ensured.

What about Brexit?

The new SCCs may become effective just around the time the transition period expires and the UK fully leaves the EU. So, what will be the position so far as the UK is concerned?

First, the UK Government is seeking an “adequacy decision” from the European Commission as part of the Brexit deal. If there is no deal, or no adequacy decision or other transitional arrangement, in place by 31 December 2020, then the UK will become a third country and data transfers from the EU to the UK will need to comply with EU GDPR transfer restrictions. In this scenario, SCCs will be required for transfers from the EU to the UK. The new SCCs will be particularly helpful as they can be used to cover transfers by EU-based processors to UK controllers or sub-processors, something which is not possible under the current SCCs.

As regards transfers from the UK, UK rules will mirror the current GDPR rules. The UK government has confirmed that, when the transition period ends, transfers from the UK to the EEA will not be restricted.

The rules on transfers to countries outside the EEA will remain similar to the current GDPR rules. Although the UK will make its own adequacy decisions after the end of the transition period, the UK government has confirmed that it intends to recognise existing EU adequacy decisions and the EU approved SCCs.

Next steps

Organisations now have a year to review all international transfers. Where necessary this will involve conducting transfer impact assessments, implementing the new SCCs in place of the current ones, adopting supplemental measures, putting in place flow-down terms where there are onward data transfers and providing enhanced transparency to data subjects. Certain data transfers may need to be discontinued or restructured. It’s going to be a busy 2021!


Fashion retailer H&M hit with €35m fine

After H&M monitored several hundred employees at its service centre in Nuremberg, the Hamburg Data Protection Commissioner issued an eye-watering fine of €35.25m against the retailer. This is the second largest fine under the GDPR to date.

Since 2014, employees of H&M had been subject to extensive recording of data relating to their private lives including sensitive personal data (special category data). For example, after vacation and sick leave – even short absences – the team leaders conducted a so-called “welcome back talk”. After these talks, details were recorded including not only the employees’ vacation experiences, but also symptoms of illness and diagnoses. In addition, supervisors acquired a broad knowledge of their employees’ private lives through casual conversations, ranging from harmless details to family problems and religious beliefs. Some of the findings were then recorded, digitally stored and accessible by up to 50 managers throughout the company.

The data collected in this way was used, among other things, to profile the employees and to support employment decisions.

The practice came to light following a data breach in October 2019 when, as a result of a configuration error, the data became accessible company-wide for several hours.

Aside from the fine, to show its contrition, H&M has expressly apologised to the affected employees and has also agreed to pay compensation. Other measures which H&M has agreed to take include the appointment of a data protection coordinator, monthly data protection status updates and enhanced whistle-blower protection.

Comment:

The case serves as a reminder that the GDPR applies equally to HR data as it does to consumer / customer data. In fact, given that HR data routinely involves processing of higher risk “special category” data, such as sickness records and details of employee personal issues, great care is needed in relation to the collection and storage of such data.

Aside from the data security breach, H&M would seem to have breached several of the data protection principles: for example, data minimisation (only collecting data that is relevant and limited to the purpose for which it is collected), purpose limitation (collecting data only for specified and legitimate purposes) and the requirement to process data fairly and in a transparent manner, making sure that employees are aware of the data which the employer is collecting and storing.

If your GDPR compliance programme did not focus on HR data with at least the same rigour as other data, or needs a refresh, there are 35m reasons why now would be a good time.

Addressing privacy concerns with NHSX App

The contact tracing App being developed by NHSX is being promoted as a key tool which will enable the lockdown to be eased by automating the process of identifying people who have been in recent close proximity with someone with symptoms of Covid-19.

The success of the App is dependent to a large extent on a significant proportion of the population downloading and using it. While the App has some utility if only 20% of the population download it, contact tracing will only be effective if a significant percentage (estimated to be around 60%) of the population participates.

Whether or not people will take up the App is, in turn, critically dependent on the level of trust people have that the system will operate as advertised, and on whether and how legitimate concerns as to the privacy and security of the data are addressed.

The way it works

The App uses low-power Bluetooth on smartphone devices to communicate with other devices in near proximity that also have the App installed. The App tracks the estimated distance between each pair of devices and the duration of their contact. Devices that come into contact issue each other randomised numbers, and this proximity log is then stored on the device.

If a user subsequently develops symptoms of the virus, they can update their status on the App. The proximity log will then be uploaded to the central system, which will work out which other devices need to be alerted to the fact that they have been in proximity with someone who now has symptoms, so that the users of those devices can self-isolate.
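To make those mechanics concrete, the sketch below shows one way such a proximity log might be structured on the device. It is a minimal illustration only, written against the centralised design described above; the names used (ProximityEvent, record_contact, new_rotating_token) are our own hypothetical labels, as NHSX had not published its implementation at the time of writing.

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class ProximityEvent:
    """One observed contact with another device (hypothetical schema)."""
    peer_token: str              # rotating random ID broadcast by the other device
    estimated_distance_m: float  # inferred from Bluetooth signal strength
    duration_s: int              # how long the devices were in proximity
    timestamp: float = field(default_factory=time.time)

def new_rotating_token() -> str:
    # A fresh random identifier, regenerated periodically so that no fixed
    # ID can be used to track a device over time.
    return secrets.token_hex(16)

class ProximityLog:
    """On-device log of recent contacts."""
    def __init__(self) -> None:
        self.events: list[ProximityEvent] = []

    def record_contact(self, peer_token: str, distance_m: float, duration_s: int) -> None:
        self.events.append(ProximityEvent(peer_token, distance_m, duration_s))

    def upload_payload(self) -> list[dict]:
        # Only sent to the central system if the user reports symptoms.
        return [vars(e) for e in self.events]
```

Under the centralised model, the payload produced by upload_payload would be matched against other devices’ broadcast tokens on the central system; under a decentralised model, that matching would instead happen on each device.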

Privacy concerns

Any government sponsored technology that can track and trace the population instinctively raises privacy concerns.

First, although the data is anonymised and does not contain any personal identifiers, it will track everyone a user comes into contact with. Data concerning one’s daily personal interactions and the people one associates with can be highly sensitive and not something one would wish to share with the state.

Then there is “feature creep”. While the technology is being introduced with the best intentions and in the interests of public health, once it has been widely implemented, and as time goes on, there will be a temptation to “enhance” it and use it for broader purposes. For example, if the App starts to record specific location data (and not only proximity data), this will be a serious privacy concern, as location data can itself reveal highly sensitive personal data (e.g. meetings at other people’s homes, attendance at particular (e.g. political) events, health clinics or places of worship).  There may be a temptation to share the data with other government departments or the police for other purposes, such as detecting crime, or for tax or immigration purposes.

Also, let’s face it, the government and NHS do not have a great track record in respect of data security – so how secure will the data collected by the App be? There must be a risk that it could be hacked by criminals or a rogue state-sponsored hacker.

The fact that NHSX has – in contrast with many other governments (such as Ireland, Germany and Switzerland) and unlike the Google / Apple initiative – apparently opted to implement a centralised system, where data is held by the government rather than only locally on the device, heightens these concerns.

Application of Data Protection laws

Data protection laws apply to “personal data” relating to an identified or identifiable person. In the case of the App, it is used on a no-names basis, with the user being given a random, rotating ID. The specific device ID is not used, although the make and model of the device is captured. The GDPR specifically refers to an “online identifier” as being personal data. However, while pseudonymised data is regulated as personal data, truly anonymised data is not.

Although the precise way the App works is yet to be finalised and published, we must assume that the use of the App for track and trace will involve personal data and as such will be regulated by the GDPR as it will be possible to identify and distinguish some individuals (or devices) from others and to apply different treatment accordingly. Data protection laws do not stand in the way of such technologies, but such technologies must be built and implemented in compliance with data protection laws.

How to address the privacy concerns

While most people will, in the present circumstances, accept some degree of compromise on their privacy in the interests of their, and the nation’s, health, this has to be proportionate, with the App being as minimally privacy-invasive as possible. To ensure widespread adoption of the App, it will be essential that privacy concerns are comprehensively addressed. There are a number of steps that must be taken.

Centralised v localised

First, NHSX should reconsider the centralised data approach and consider switching to a localised data solution. As the ICO commented, a purely localised system without a centralised dataset must inherently be more secure. It would also have the benefit of achieving greater interoperability with localised solutions being implemented by other countries; in particular, it is important to have interoperability on the island of Ireland.

NHSX counter this, however, by saying that there are public health benefits in their having access to the big data for analytics and research so as to learn more about the virus. It may also help limit malicious self-reporting (which could be done to try to put someone into self-isolation).

While a centralised system can be made to work, much greater efforts in terms of data security will be required if public confidence is to be won over. There is a trade-off between functionality and public confidence: the more you try to get of the one, the less you get of the other. And public confidence is critical for widespread adoption of the App and, ultimately, for its success.

There have been reports in the past few days of NHSX investigating the feasibility of transitioning the App to Apple and Google’s technology, and this could indicate a change of heart and a shift towards a localised data approach.

Transparency

Second, transparency. Provision of transparent information regarding how a person’s data is to be used is a central requirement under the GDPR. This requires that information be provided in a concise, transparent, intelligible and easily accessible form, using clear and plain language.

Given that the App is to be used by the general population, the privacy notice will need to be carefully and skilfully drafted so that it is accessible to all, whether young, old, or with reading difficulties. It is not yet known what the age requirement for the App will be, but particular care will be needed for information addressed to children.

We also need to know who will be the “controller” of this data and with whom it may be shared and for what purpose. Will the controller be the NHS, or will it be the Government?

Risk assessment

Transparency will also be well served by making public the NHSX Data Protection Impact Assessment. Under the GDPR, a DPIA – a form of risk assessment – is required whenever using a new technology that is likely to result in a high risk to the rights and freedoms of individuals. The GDPR says a DPIA is specifically required where the technology involves a systematic and extensive evaluation of personal aspects relating to individuals which is based on automated processing, and on which decisions are based that significantly affect the person; where there is processing on a large scale of special categories of data such as health data; or where there is systematic monitoring of a publicly accessible area on a large scale. Arguably, the App ticks all of these boxes, and the DPIA will be a critical document.

The DPIA must contain a systematic description of the processing operations and the purposes for which the data will be used, an assessment of the necessity and proportionality of the processing in relation to these purposes, an assessment of the risks to the rights and freedoms of individuals and the measures to be taken to address these risks, including safeguards and security measures to ensure the security of the data.

NHSX must share this DPIA as soon as possible with the ICO (as contemplated by Art 36 GDPR) for consultation. While not a legal requirement, it should also be made public for wider consultation. Unless the government so requires, the DPIA does not need to be approved by the ICO as such; however, NHSX should consider and implement as appropriate any advice and recommendations that the ICO, as the independent privacy watchdog, may put forward.

Finally, the working of the App should be open to audit and review by independent experts, not as a one-off, but on an ongoing basis.

The lawful basis and consent

Under data protection laws, processing of personal data is only lawful if there is a “lawful basis” for the processing. The GDPR sets out six possibilities; the main options for the App will be user “consent” or “performance of a task in the public interest”. Health data requires an additional condition, which could be satisfied by “explicit consent” or by processing for public health reasons.

It is not yet known which of these lawful bases will be applied. While the App is entirely voluntary to use, it may be that consent is not the best option as it can be difficult to establish that a valid consent has been obtained. However, consent may be required under the GDPR on the basis that the App involves “automated decision making”.

As the App accesses data on the device, it could be that consent is required under the Privacy and Electronic Communications Regulations (“PECR”). If consent were required under PECR, then it would also be necessary to use consent as the lawful basis under the GDPR. Consent is not required under PECR where the exemption applies, i.e. where the access to the data is “strictly necessary for the provision of” the service requested by the user. If, however, the App is to access any data that is not “strictly necessary”, then consent would be required by law.

While the App may or may not rely on “consent” as the lawful basis, it is important for public trust that its use is truly voluntary. A person is free to download it, and delete it, as they wish. They are free to choose whether to update their health status or not. And – if warned that they have been in proximity with an infected person – they are free to self-isolate or not as they choose.

Data minimisation

One of the central principles of the GDPR is ‘data minimisation’ – that data being collected must be limited to what is necessary in relation to the purposes for which they are collected. It is essential for this, therefore, to identify and articulate the purpose and then test whether the data being collected is necessary for this.

For example, the App requires proximity data, but it does not require location data. If there is the potential with a centralised system to add additional data elements, such as location data, then that could breach this central principle of the GDPR.

It has been suggested that users of the App will not need to add their name or other identifiers, but will be required to enter the first half of their post code. This alone will not ordinarily be sufficient to identify a person, but may serve a purpose in enabling NHSX to spot clusters of infection.

Purpose limitation

Under GDPR data can only be collected for specified, explicit and legitimate purposes and must not be further processed in a manner that is incompatible with those purposes. The GDPR allows for further processing for scientific research or statistical purposes in addition to the initial purposes.  This is an important legal constraint on feature creep, but is it enough to give people confidence that their data will not be used for other purposes?

Storage limitation

A further principle is that data must not be kept for longer than is necessary for the purposes for which the personal data are processed. A key issue is what happens to all the data after the Covid-19 crisis has subsided and it will no longer be necessary to track and trace. The data should then be securely destroyed or completely anonymised, but what guarantee is there that this will happen? The data retention period in relation to the data must be set out in the privacy notice to be issued with the App. This will need to reflect this principle and we have to have confidence that NHSX will honour it.

Data security

It is a fundamental requirement of data protection that appropriate technical and organisational measures are taken to ensure a level of data security appropriate to the risks. This will require implementation of state-of-the-art encryption of the data at rest and in transit. Following the GDPR principle of data protection “by design and by default”, data security and compliance with the other principles must be designed in to the way the App is built and used.
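By way of a purely illustrative sketch (NHSX’s actual design has not been published), authenticated symmetric encryption of a stored proximity log might look like the following, using the widely used Python cryptography library; TLS would separately protect the same data in transit:

```python
from cryptography.fernet import Fernet

# Fernet provides authenticated symmetric encryption (AES with an HMAC),
# so tampering with the stored data is detected on decryption.
# In a real deployment the key would be held in a hardware-backed keystore,
# never stored alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

payload = b'{"events": [...]}'   # a serialised proximity log (illustrative)
token = cipher.encrypt(payload)  # ciphertext safe to keep at rest
assert cipher.decrypt(token) == payload
```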

While data security is never 100% guaranteed, the public will need to be satisfied through the provision of transparent information that rigorous safeguards are in place.

Do we need a specific NHSX App watchdog?

While the ICO is the regulator for compliance with data protection laws, there are separate watchdogs for specific areas, for example biometrics and communications monitoring. Given the speed at which the App needs to be rolled out if it is to be effective, and given that the ICO is well established and respected as the regulator for data matters under the GDPR and the Data Protection Act 2018, with powers to audit, investigate complaints and issue substantial fines, the ICO is the appropriate regulator and an additional regulatory regime should not be needed.

Is specific legislation needed?

Some have suggested that specific legislation is needed to enshrine the necessary safeguards in law. Again, given the timing imperatives, and given the flexible and well-developed structure we already have in the GDPR and the Data Protection Act 2018, this may be a “nice to have” but should not be necessary.

Thoughts for employers

Clearly, contact tracing could be highly beneficial to employers, since it could reduce the need to carry out manual contact tracing in the event an employee falls ill with coronavirus. So, can an employer make downloading the App compulsory?

The answer will depend to some extent on the lawful basis relied on for the processing of personal data through the App. If the lawful basis is “consent”, then compelling employees to download and use the App will invalidate any apparent consent, since it will not have been freely given. If the lawful basis is “public interest”, then employers will need to decide whether to compel, or alternatively strongly recommend, their employees to download and use the App. If they seek to compel, and an employee refuses, it is hard to see how the employee could fairly be subjected to any detriment, other than as required for health and safety.

We all have a strong interest in the App being rolled out, gaining maximum levels of public adoption and making a valuable contribution to fighting the virus. For this it will be necessary for the public to have a high level of trust in the App and its privacy safeguards. Good data protection will be an essential ingredient to achieving this trust.

Nigel Miller is a partner in Fox Williams LLP and leads the Data Protection and Privacy team. He is a Certified Information Privacy Professional (CIPP/E).

Ben Nolan is an associate in the Data Protection and Privacy team at Fox Williams LLP.