Managing data breaches – notification and risk management

[This article was first published in Computers & Law, the magazine of the Society for Computers and Law]

It’s Friday, late afternoon. The phone rings. “We think we’ve had a data breach. What do we need to do?” The first thing is to cancel your plans. The clock is now ticking.

According to figures from the Department for Digital, Culture, Media and Sport[i], over four in ten UK businesses suffered a breach or attack in the past 12 months. This figure rises to more than two thirds for large businesses. The most common breaches or attacks came via fraudulent emails, followed by malware and viruses.

The Information Commissioner, Elizabeth Denham, has said that there is no data privacy without data security; data protection and cyber-security go hand in hand.

Data security requirements

Data security is addressed by the sixth principle under Article 5 GDPR[ii] which requires that personal data must be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures (‘integrity and confidentiality’)”.

Under the accountability principle, the controller must also be able to demonstrate compliance with this principle.

Article 32 deals specifically with data security and, importantly, applies to both controllers and processors.

“Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk.”

The Article goes on to list examples of appropriate measures, including pseudonymisation (defined in Article 4(5)) and encryption. While these techniques fall short of being mandatory, they can “reduce the risks to the data subjects concerned and help controllers and processors to meet their data-protection obligations” (Recital 28). In the event of a data breach, if such measures had been implemented, the controller may not be required to notify the ICO or data subjects. Conversely, if such measures were not implemented, the risks will be much higher, the notification obligation cannot be avoided and there may be adverse consequences in terms of regulatory action and fines.
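
To illustrate Recital 28’s point in practice, here is a minimal pseudonymisation sketch, assuming Apple’s CryptoKit framework; the function name and key handling are illustrative assumptions only, and the GDPR does not prescribe any particular technique.

```swift
import Foundation
import CryptoKit

// Minimal sketch of one pseudonymisation approach: replace a direct identifier
// with a keyed hash so the stored value is unintelligible without the key.
func pseudonymise(_ identifier: String, using key: SymmetricKey) -> String {
    let mac = HMAC<SHA256>.authenticationCode(for: Data(identifier.utf8), using: key)
    return Data(mac).map { String(format: "%02x", $0) }.joined()
}

let key = SymmetricKey(size: .bits256)   // hold the key separately from the data set
print(pseudonymise("jane.doe@example.com", using: key))
```

Note that a keyed hash is pseudonymisation, not anonymisation: anyone holding the key (which should be stored separately from the data) can still link the value back to the individual, so the data remains personal data.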

Data breach notification

Under the former Data Protection Act 1998, while the ICO recommended that serious breaches should be reported to the ICO[iii], there was no legal obligation to notify the ICO or data subjects[iv]. Because of the risk of adverse publicity and loss of goodwill in the event of a data breach, there was a tendency for organisations to prefer not to report. As a result, many data breaches went unreported.

One of the more impactful changes introduced by the GDPR, therefore, is mandatory data breach notification.

A personal data breach is defined in Article 4(12) as a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed.

The European Data Protection Board (EDPB) (formerly known as the Article 29 Data Protection Working Party) in their guidelines on personal data breach notification[v] (“the Guidelines”) define three types of breach:

  • “Confidentiality breach” – where there is an unauthorised or accidental disclosure of, or access to, personal data.
  • “Integrity breach” – where there is an unauthorised or accidental alteration of personal data.
  • “Availability breach” – where there is an accidental or unauthorised loss of access to, or destruction of, personal data.

Do you need to notify?

Under Article 33, there is an obligation to report a personal data breach to the ICO – unless the breach is “unlikely to result in a risk” to data subjects.

There is also an obligation to notify data subjects if the breach is “likely to result in a high risk” (Article 34) (emphasis added).

The GDPR explicitly says that notification to data subjects is not required if technical protection measures had been used to render the personal data unintelligible – such as encryption.

Accordingly, under the GDPR there is a legal duty to notify the ICO of a breach, even where the risk may not be “serious”, unless it can be said that a risk to data subjects is unlikely. However, there may be no obligation to notify data subjects if the risk is not “high”.

The GDPR is only engaged where there is a data breach involving personal data. A security incident involving corporate data or IP or disruption to systems, while potentially serious, will not require notification to the ICO if personal data is unaffected.

When to notify

The controller must notify the ICO “without undue delay and, where feasible, not later than 72 hours after having become aware of” the data breach.

There is, therefore, a very tight time-frame in which to assess what has happened and whether the notification obligation arises. The 72 hours run from “awareness”[vi]. However, it is not always clear when an organisation becomes aware. For example, if a junior member of the IT team is involved in a breach on a Friday afternoon, but does not report it to their manager until Monday morning, when did the period begin? If you identify an issue but, pending further investigation, are not yet sure whether it is a “personal data breach”, the period starts once there is a “reasonable degree of certainty” that a data breach has occurred.
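
By way of illustration only, the deadline arithmetic is simple; the helper below is a hypothetical sketch, not anything mandated by the GDPR, and uses plain clock hours because the 72-hour period includes weekends and public holidays[vi].

```swift
import Foundation

// Hypothetical helper: the Article 33 period is 72 clock hours from awareness,
// including weekends and public holidays, so plain date arithmetic suffices.
func notificationDeadline(awareness: Date) -> Date {
    awareness.addingTimeInterval(72 * 60 * 60)
}

// Awareness established late on a Friday afternoon (illustrative date).
let awareness = ISO8601DateFormatter().date(from: "2018-11-02T16:30:00Z")!
print(notificationDeadline(awareness: awareness))   // the following Monday, 16:30 UTC
```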

Turning a blind eye or not having systems to monitor for breaches is not an option. Recital 87 provides that technological protection and organisational measures should be implemented “to establish immediately whether a personal data breach has taken place”. It is possible, therefore, that you could be responsible for failure to notify a breach of which you were not actually aware, but of which you should have been aware had you implemented appropriate systems.

There is some limited flexibility provided by the “where feasible” qualification. However, if the notification is not made within the 72 hours, the controller must give reasons for the delay.

The GDPR recognises that controllers may not always have all of the necessary information concerning a breach within 72 hours of becoming aware of it. Article 33(4) provides that if it is not possible to provide all the information at the same time, the information may be provided in phases without undue further delay. Updates should then be provided to the ICO once more information becomes available.

Notification to data subjects

Even where notification to the ICO is required, it does not inevitably follow that the affected data subjects should also be notified. This depends on an assessment of the likelihood and severity of the risks presented by the breach and what the benefits of notification may be. Risks include identity theft, fraud, financial loss, damage to reputation and loss of confidentiality. Where the breach involves special categories of data, such damage should be considered likely to occur.

A key objective of notification is to help individuals to take steps to protect themselves from any negative consequences of the breach. For example, if there is a risk that bank details could be misused, notification might enable affected individuals to take steps to protect themselves by changing passwords.

Annex B of the Guidelines provides a non-exhaustive list of examples of when a breach may be likely to result in high risk to individuals. The Guidelines suggest that, if in doubt about notification, the controller should err on the side of caution and notify. Even where notification is not legally required, many organisations may choose to notify data subjects on a non-mandatory basis for transparency or to avoid data subjects finding out about the breach from other sources.

The GDPR states that communication of a breach to individuals should be made “without undue delay,” which means as soon as possible.

If communication to data subjects would “involve disproportionate effort”[vii], then controllers can notify them by some form of public communication.

Processors

Processors must notify their instructing controller if they suffer a data breach. Unlike the notification requirement on controllers, there is no fixed timeframe: a processor must notify the controller “without undue delay after becoming aware of” the breach. It is then for the controller to determine what, if any, notification requirement arises.

In data processing agreements with processors, controllers may want to be more specific about this time-frame. There is no particular logic to requiring the processor to notify within, say, 36 hours so that the controller can meet the 72-hour requirement, as the controller’s 72-hour deadline only commences on receipt of notification from the processor. Fortunately, the EDPB have moved away from their former impracticable view that controllers should be deemed aware of the breach as soon as the processor is aware.

Some processors may not want to accept a shorter notification requirement than is required of controllers. On the other hand, processors do not have to engage in any risk assessment – any breach is notifiable to the controller. However, the controller will want to ensure that processors provide sufficient detail about the breach to enable the controller to assess the risk and provide to the ICO the information required by Article 33(3).

Joint controllers

Pursuant to Article 26 joint controllers should set out in the arrangement between them which party will have responsibility for taking the lead on compliance with the breach notification obligations.

Data breach register

Under Article 33(5), controllers must document any personal data breaches, comprising the facts relating to the personal data breach, its effects and the remedial action taken. This documentation may be called for by the ICO in any enquiry as to whether the controller has complied with the notification requirements.

Data breaches need to be documented even where there is no requirement to notify the breach to the ICO. Where a decision is made not to notify a breach, while not specifically required, it is highly advisable also to document the reasoning behind the decision.
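
As an illustration of what such documentation might capture, the sketch below sets out one possible structure for a breach register entry; the field names are assumptions rather than a format required by Article 33(5) or the ICO.

```swift
import Foundation

// Illustrative record for an internal breach register; field names are assumptions.
struct BreachRecord {
    let detectedAt: Date
    let awarenessEstablishedAt: Date   // start of the 72-hour clock
    let facts: String                  // what happened, data and data subjects affected
    let effects: String                // likely consequences for data subjects
    let remedialAction: String         // containment and mitigation steps taken
    let notifiedICO: Bool
    let notifiedDataSubjects: Bool
    let reasonIfNotNotified: String?   // rationale where a decision was made not to notify
}
```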

Failure to notify

The fines for failure to notify can be up to €10 million or 2% of global annual turnover, whichever is the higher. Even where there is compliance with the breach notification requirement, it is possible that the breach could reveal an inadequacy of data security measures, which could lead to a further fine in the same tier in respect of inadequate security measures as a separate infringement.
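
The “whichever is the higher” comparison can be illustrated with a trivial sketch (the figures are the statutory maxima, not amounts that would actually be imposed):

```swift
// Statutory maximum for this fining tier: €10m or 2% of total worldwide
// annual turnover, whichever is the higher. Illustrative calculation only.
func tierMaximumFine(worldwideAnnualTurnoverEUR: Double) -> Double {
    max(10_000_000, 0.02 * worldwideAnnualTurnoverEUR)
}

// A group with €2bn turnover faces a cap of €40m rather than €10m.
print(tierMaximumFine(worldwideAnnualTurnoverEUR: 2_000_000_000))   // 40000000.0
```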

In addition, under Article 82, any person who has suffered “material or non-material” damage has the right to claim compensation from the controller or processor for the damage suffered. Therefore, a data subject will be able to claim for any additional damage suffered as a result of their not having been notified of the breach when, if they had been notified in a timely way, they could have taken some steps to protect themselves.

Other notification obligations

Aside from the ICO, controllers and processors might also need to consider notifying other third parties, such as the police (where there is evidence of criminal activity), professional bodies, and banks or credit card companies who may be able to help reduce the risk of financial loss to individuals.

FCA regulated entities will have a separate duty to notify the FCA on matters which may have a serious regulatory impact.

Under the NIS Regulations, operators of essential services in the energy, transport, water, health and digital infrastructure sectors must notify the designated competent authority of any incident which has a significant impact on the continuity of the essential service, also within 72 hours[viii].

Telecom providers must notify the ICO within 24 hours under PECR[ix].

Organisations with data or cyber breach insurance should notify their insurers of the event in case of any claims.

You may also need to consider whether there are any contracts with third parties which require you to notify the third party if you, or a sub-processor, suffer a data breach. You may also need to consider the liability and indemnity provisions of those contracts as they apply to data breaches.

Data breach response plan

One of the most important internal policies to be implemented for GDPR purposes is a data breach or incident response plan. This will help organisations identify and respond to a data breach swiftly, and ensure that staff within the organisation know how to recognise a breach and how to report it internally.

It will also set out who has responsibility within the organisation for managing a breach and the process to follow to contain the incident, assess the risks that could result from it and remedy any shortcomings in systems or policies.

As such, a data breach plan can be vital in helping to manage risk in the event of an incident.

Other risk management considerations

It is, of course, preferable to take steps to prevent a breach rather than to have to respond to one. The GDPR requires continuous evaluation of risk. In particular, it:

  • encourages the use of encryption, pseudonymisation and anonymisation;
  • requires appropriate technical and organisational measures, including internal policies, to be in place having regard to the state of the art and the costs of implementation, including systems to identify when a personal data breach has taken place;
  • requires personal data minimisation, which reduces the risk posed by processing superfluous data;
  • provides for data protection by design and by default;
  • requires organisations regularly to test, assess and evaluate the effectiveness of their technical and organisational data security measures;
  • requires organisations to have appropriate contracts in place with service providers; and
  • requires organisations to be accountable.

In the event of a breach, the notification requirement is only one element. There are potentially many other aspects to consider in terms of managing legal and business risk, including the following:

  • The incident response team should include communication and PR specialists in order to manage communications to affected data subjects, unaffected customers, the wider public, the media, shareholders and other stakeholders in a speedy and effective manner, and so as to minimise brand damage. Recent breaches have highlighted how important advance media training can be in preparing for an incident.
  • Take care not to rush to accept blame (liability) where it is not, or may not be, due; in some cases organisations that suffer a data breach are victims and not necessarily at fault.
  • Where appropriate ensure communications are channelled via the legal team in order to preserve legal professional privilege over potentially sensitive materials that may affect liability for the breach.
  • If you have a data protection officer, the Guidelines recommend that the DPO is promptly informed about the existence of a breach and is involved throughout the breach management and notification process.
  • Where a breach involves a risk of identity theft, consider offering affected data subjects free credit monitoring services for a period.
  • It is important for organisations to be able to demonstrate an appropriate response to a data breach involving any of their staff; for example, whether employee refresher training is needed, whether updates to or additional policies or procedures are needed, or whether disciplinary action should be taken in respect of an employee who did not follow the organisation’s policy.

 

Nigel Miller is a partner in Fox Williams LLP and is an SCL Fellow.

 

[i] The Cyber Security Breaches Survey 2018 was carried out for DCMS by Ipsos MORI, in partnership with the Institute for Criminal Justice Studies at the University of Portsmouth.

[ii] Regulation (EU) 2016/679 (General Data Protection Regulation)

[iii] ICO – Notification of data security breaches to the Information Commissioner’s Office (ICO) – 2012-07-23

[iv] Save for providers of public electronic communications services under the Privacy and Electronic Communications (EC Directive) Regulations 2003 (2003/2426) (as amended).

[v] Guidelines on Personal data breach notification under Regulation 2016/679 (WP250rev.01) adopted on 3 October 2017. As last revised and adopted on 6 February 2018.

[vi] The 72 hours includes public holidays, Sundays and Saturdays (Regulation No 1182/71 on the rules applicable to periods, dates and time limits).

[vii] WP29 Guidelines on transparency WP260 consider the issue of disproportionate effort.

[viii] The Network and Information Systems Regulations 2018 (2018 No. 506)

[ix] The Privacy and Electronic Communications (EC Directive) Regulations 2003; the European Commission Regulation 611/2013


Focus on fines

Nigel Miller

Since the GDPR came into force on 25 May 2018 the ICO has carried out a number of audits on organisations as well as a number of “advisory” visits. These do not necessarily lead to regulatory sanctions but sometimes they do.

Reportedly, since the GDPR came into force, there has been a 160 per cent rise in the number of complaints made to the ICO compared with the same period in 2017. This is a result of the build-up to the GDPR, which has heightened individuals’ awareness of their data rights.

In terms of fines, it is too soon for fines at GDPR levels to come through, as fines can take some months to be issued after the initial complaint. Currently, the ICO is issuing fines under the old law where the initial complaint preceded 25 May 2018. Under the old law the maximum fine was £500,000.

Under the GDPR, companies can be fined €20 million (£16.5m) or 4 per cent of their worldwide turnover, whichever is the greater.

Examples of fines over the last two months include:

September:

Equifax Ltd – maximum fine of £500,000 for failing to protect the personal information of up to 15 million UK citizens during a cyber-attack in 2017. The ICO investigation found that, although the information systems in the US were compromised, Equifax Ltd was responsible for the personal information of its UK customers. The UK arm of the company failed to take appropriate steps to ensure its American parent, Equifax Inc, which was processing the data on its behalf, was protecting the information.

Everything DM – fined £60,000 for sending 1.42 million emails without consent. Between May 2016 and May 2017, the firm used its direct marketing system called ‘Touchpoint’ to send emails on behalf of its clients.

Bupa Insurance Services Limited (Bupa) – fined £175,000 for failing to have effective security measures in place to protect customers’ personal information.

Oaklands Assist –  fined £150,000 for making thousands of nuisance direct marketing phone calls.

October:

Heathrow Airport – fined £120,000 for failing to ensure that the personal data held on its network was properly secured.

Boost Finance – fined £90,000 for millions of nuisance emails about pre-paid funeral plans.

It is reported that the ICO intends to fine Facebook the maximum £500k following the investigation launched in 2017 over the use of data for political campaigns. Were the breach to have happened after 25 May 2018, the ICO would have been able to issue a fine of up to 4% of Facebook’s annual worldwide turnover (reportedly meaning a maximum fine of £479m).

How fines are assessed under GDPR

Fines are regarded as an important tool for the supervisory authorities, who have said that they will not shy away from issuing fines, nor treat them only as a last resort.

The Regulator has said that fines under the GDPR are to be “effective, proportionate and dissuasive”.  Fines may be imposed in response to a wide range of infringements. Each case is to be assessed individually.

Factors to be taken into account in assessing fines are:

  • the nature, gravity and duration of the infringement;
  • the number of data subjects involved;
  • the categories of the personal data affected (e.g. special categories, directly identifiable data, data whose dissemination would cause damage/distress to the individual);
  • whether it is an isolated event or symptomatic of a more systemic breach or a lack of adequate routines in place;
  • whether data subjects have suffered damage, and the level of that damage;
  • action taken to mitigate the damage suffered by data subjects;
  • the intentional or negligent character of the infringement, and the degree of responsibility of the controller or processor taking into account technical and organisational measures implemented;
  • any relevant previous infringements by the controller or processor;
  • the degree of cooperation with the supervisory authority;
  • whether, and if so to what extent, the controller or processor notified the infringement;
  • any other aggravating or mitigating factor applicable to the circumstances, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement.

The use of location data by mobile apps post-GDPR

This article was first published on Lexis®PSL TMT on 24 September 2018.

From the perspective of a party providing an app via an app store, what regulations govern the use of location data by that mobile app?

The key consideration is data privacy and, therefore, the main regulation to consider is the General Data Protection Regulation (GDPR) which came into force on 25 May 2018. This will apply to the app provider if they carry out processing of personal data on the device.

While there is as yet no specific guidance under the GDPR on the use of location data by Apps, in 2011 the Article 29 Data Protection Working Party (now the European Data Protection Board (EDPB)) adopted Opinion 13/2011 on “Geolocation services on smart mobile devices” and in 2013 Opinion 2/2013 on “Apps on smart devices”. Although these opinions relate to the Data Protection Directive (95/46/EC), much of the content of the Opinions is still relevant under the GDPR.

In the UK, you should also take into account the Data Protection Act 2018 which supplements the GDPR in certain areas (such as in relation to special categories of personal data and data subject rights) although not specifically in relation to location data.

To what extent / in what circumstances will the Privacy and Electronic Communications Regulations 2003 regulate the use of location data by mobile app providers? What exemptions apply and does PECR 2003 apply to ‘information society services’?

Under regulation 6 of PECR (as amended by the 2011 Regulations), it is unlawful to gain access to information stored in the terminal equipment of a subscriber or user unless the subscriber or user (a) is provided with clear and comprehensive information about the purposes of the access to that information; and (b) has given his or her consent. This applies irrespective of whether or not the location data is “personal data”.

Regulation 14 relates specifically to the processing of location data and provides that you can only process location data if you are a public communications provider, a provider of a “value-added service”, or a person acting on the authority of such a provider, and only if: (a) the data is anonymous; or (b) you have the user’s consent to use it for a value-added service, and the processing is necessary for that purpose. This does not apply to data collected independently of the network or service provider such as GPS-based location data or data collected by a local wifi network. However, the use of such data will still need to comply with the GDPR.

To what extent / in what circumstances will the GDPR regulate the use of location data collected from mobile apps by mobile app providers?

The GDPR will apply if the app provider collects the location data from the device and if it can be used to identify a person.

If the data is anonymised such that it cannot be linked to a person, then the GDPR will not apply. However, if the location data is processed with other data relating to a user, the device or the user’s behaviour, or is used in a manner to single out individuals from others, then it will be “personal data” and fall within the scope of the GDPR even if traditional identifiers such as name and address are not known.

Opinion 13/2011 sets out the regulator’s view that a device is usually intimately linked to a specific individual and that location data will, therefore, be regarded as “personal data”. Indeed, the definition of “personal data” in the GDPR specifically includes location data as one of the elements by reference to which a person can be identified. The Opinion comments that the providers of geolocation-based services gain “an intimate overview of habits and patterns of the owner of such a device and build extensive profiles.”

Furthermore, in certain contexts, location data could be linked to special category personal data (sensitive personal data). For example, location data may reveal visits to hospitals or places of worship or presence at political demonstrations.

How is compliance with such laws commonly addressed by app providers?

To process the data derived from the device or the app, the app provider needs to have a legal basis.

Contract necessity may apply to some uses of the location data. For other uses, depending on the app, it may be problematic to rely on “legitimate interests” as a lawful basis for tracking individuals using location data, for example, to serve location specific ads. Therefore, in many cases the app provider will need to rely on the user’s “consent” for processing location data.

How should app providers respond to recent changes in the law (e.g., the introduction of GDPR) impacting their apps’ use of location data?

Where app providers rely on “consent” as the legal basis, they will need to ensure that this meets the stricter requirements for consent under GDPR. This can be challenging given the constraints of the mobile app environment.

Transparency is essential. The Article 29 Guidelines on transparency WP260 rev.01 indicate that, for apps, the Article 13 privacy information should be made available from the app store before download. Once the app is installed, the privacy information needs to be easily accessible from within the app. The recommendation is that it should never be more than “two taps away” (e.g. by including a “Privacy” option in the app menu). Use of layered notices and contextual real time notifications will be particularly helpful on a mobile device.

The device’s operating system (such as iOS) may require the user’s permission to use location data, for example via a dialogue box asking if the user agrees to allow the app to access the user’s location, either while using the app or in the background. Tapping the “allow” button enables location services on the device and may also help signify consent, provided that this has been sufficiently informed and is sufficiently granular.
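
As a minimal sketch of that OS-level permission step, assuming Apple’s CoreLocation framework on iOS 14 or later (this covers only the system permission, not GDPR-grade consent for specific processing purposes):

```swift
import CoreLocation

// Minimal sketch: requests the OS-level location permission. The text the user
// sees comes from the NSLocationWhenInUseUsageDescription key in Info.plist.
final class LocationPermission: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func request() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()   // shows the system dialogue
    }

    func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
        switch manager.authorizationStatus {
        case .authorizedWhenInUse, .authorizedAlways:
            // OS permission granted; consent for particular uses (e.g. location-based
            // advertising) may still need to be captured separately under the GDPR.
            break
        default:
            break   // denied, restricted or not yet determined
        }
    }
}
```

Granting the system permission does not of itself establish informed, granular consent for, say, sharing location data with advertising partners; that consent needs to be captured and recorded separately.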

If the app integrates with a third-party provider to enable, for example, location-based advertising, the consent to use location data must be sufficiently explicit to include consent to data collection for advertising purposes by the third party, and should identify that third party. Data sharing arrangements may also be required between the app provider and the third party.

Where children (in the UK, under 13) may be involved, the consent must be given or authorised by the holder of parental responsibility over the child.

Following GDPR, app providers should review their data security and retention policies for compliance with the Article 5 principles.

App providers should be mindful of the principles of privacy by design and by default; for example, location services should be switched off by default and their use should be customisable by the user.

Finally, using location data may involve “profiling” within the meaning of Article 4(4) which specifically refers to analysing location data. As such, consideration should be given to whether a data protection impact assessment (DPIA) is required under Article 35 or, if not required, should be undertaken as good practice.

Are there any forthcoming or anticipated changes to the law which may impact on use of location data by mobile app providers?

The ePrivacy Directive on which PECR is based is currently under review to be updated and aligned with GDPR in the form of the ePrivacy Regulation.

This is not yet finalised and its implementation date is not certain but may be in 2019 or 2020. However, GDPR-grade consent will still be required for use of location data subject to certain exceptions including where strictly necessary for providing an information society service specifically requested by the individual. Assuming the ePrivacy Regulation takes effect after Brexit, it remains to be seen if / how it will be implemented in the UK but this can be expected in the interests of UK “adequacy” status.

 

Nigel Miller leads Fox Williams’ technology and data protection group. Nigel is a Certified Information Privacy Professional/Europe (CIPP/E).

The consent trap

Nigel Miller

Having got past 25 May 2018, the day the GDPR came into effect, the torrent of GDPR emails is beginning to abate.

It would be interesting to analyse how many GDPR emails were sent in the run-up to the go-live date seeking consent to continue being in contact, as against the percentage of recipients who then responded to opt in. And how many trumpeted a new privacy policy, as against the percentage of recipients who actually read the new policy. I suspect the percentages in each case will be low! Indeed, many people have expressed satisfaction that, by doing nothing and not confirming consent when requested, they can reduce the flow of unwanted spam into their inbox.

But were all these emails necessary, and in particular, was it actually necessary to seek consent?

In many cases it was not necessary to seek consent to “stay in touch” and continue email marketing.

Under the GDPR, consent is one of the legal bases for processing, but it is not the only one. In most cases, organisations will be able to rely on the “legitimate interests” ground to remain in contact with their contact list. Recital 47 GDPR expressly says that processing of personal data for direct marketing purposes may be regarded as carried out for a legitimate interest. Subject to confirming this in a “legitimate interests assessment”, many businesses can rely on legitimate interests to justify processing client personal data on their mailing lists without the need to re-affirm consent. The GDPR expressly acknowledges that businesses may have a legitimate interest in direct marketing activities, which could include circulating invitations to events, updates on new products and services, and the like. This is an appropriate basis for data processing where you use data in ways that people would reasonably expect and with minimal privacy impact, especially as a recipient should always be able easily to opt out of future marketing.

While permission-based marketing is certainly to be preferred, unless consent is actually required there is no need to seek specific GDPR-grade consent, which may predictably result in the contact database being decimated as a result of recipient inertia and GDPR fatigue.

That all said, there is a key exception where consent to email marketing may be required.  This requirement is not to be found in the GDPR; instead it is in the Privacy and Electronic Communications Regulations (“PECR”). These have been around since 2003 and are currently being upgraded to GDPR level with a new ePrivacy Regulation, although this did not make it into law at the same time as GDPR as was the plan; it is likely to come on stream within the next year or so.

PECR contains supplemental rules on consent for electronic marketing (i.e. marketing by email, phone, SMS or fax). Whilst you may not need consent under the GDPR, you may need consent under PECR.

Different rules apply depending on whether the marketing is sent to an ‘individual’ or a ‘corporate’ subscriber.

Marketing to a corporate email address does not need consent. However, if you are sending unsolicited marketing emails to individual subscribers (a personal email address), then you will need the individual’s consent, unless the so called “soft opt-in” applies (e.g. where the individual is an existing customer).

In summary, assuming you can justify “legitimate interests” for the continued contact, consent is not needed to continue marketing by post, or by email to existing customers or to contacts at corporate email addresses. Consent will only be needed to send direct marketing by email to personal email addresses of individuals who are not customers for similar products and services.

Ironically, in an effort to be compliant, the email requesting consent to future marketing may itself be unlawful if consent was not already in place, and the ICO has fined organisations for engaging in this (e.g. Honda and Flybe). So, sending emails seeking consent may be either unnecessary or unlawful.

New ePrivacy Regulation – implications for Ad-tech

Josey Bright

On 10 January this year, the European Commission published a proposal for a new ePrivacy Regulation (the “ePrivacy Regulation”) to update and replace the current ePrivacy Directive (the “Directive”).

The ePrivacy Regulation, which is part of the Commission’s Digital Single Market Strategy, is designed to align closely with the provisions of the General Data Protection Regulation (GDPR), which was adopted in May 2016. The Commission intends that the ePrivacy Regulation will come into force on the same date as the GDPR, 25 May 2018. However, as it is still to be finalised and approved, this timetable may be overly ambitious. It is currently reported that the aim is to finalise the ePrivacy Regulation by the end of 2018.

As it is a Regulation, just like the GDPR it will be directly applicable in all EU Member States without the need for implementing national laws.

The main aim of the ePrivacy Regulation is to increase privacy protection for users of electronic communications.

The key features of the proposed ePrivacy Regulation are:

  1. Broader scope

 The new ePrivacy Regulation will apply to any company processing data in connection with communication services including all providers of electronic communications services.

This includes “over-the-top” service providers such as text message, email and messaging app providers so services such as WhatsApp, Facebook Messenger and Skype will be within scope of the ePrivacy Regulation.

Like the GDPR, the ePrivacy Regulation will have an extended reach in that non-EU providers providing electronic services to users in the EU will also be within scope of the ePrivacy Regulation.

  2. Content and metadata included

All electronic communications data are covered by the ePrivacy Regulation. However, the ePrivacy Regulation distinguishes between content data (what is actually said in the communication) and metadata (data related to the communication, such as the time, location and duration of a call or website visit). Separate rules apply in respect of each type of data:

  • Content can only be used if the end user has consented to its use for a specified purpose and the processing is necessary for the provision of the service.
  • Metadata can only be used where it is necessary for the quality of the service, such as billing, payments, or detecting and/or stopping fraudulent or abusive use of the service.

In circumstances where all end users have consented to the use of content or metadata for a purpose which cannot be fulfilled if the information is anonymised, the data may be used provided that the service provider has consulted the competent EU Data Protection Authority (in the UK, the Information Commissioner’s Office (ICO)) before the processing is carried out.

The threshold for consent under the ePrivacy Regulation is defined by reference to the GDPR. This means consent must be “freely given, specific, informed and unambiguous” given by “a statement or by a clear affirmative action”. Like the GDPR, end users must also be given the right to withdraw their consent at any time.

  3. Storage and erasure of data required

The ePrivacy Regulation includes provisions requiring service providers to erase or anonymise all content after it is received by the end user.

All metadata must also be erased or anonymised once the permitted purpose has been fulfilled, except where such data is required for billing purposes.

  4. Cookie consent options

Like the Directive, the ePrivacy Regulation also provides that the consent of the end user is required for the use of cookies and similar technology. However, the current proposal is that consent can be built into the browser software set-up so that users can tailor their cookie consent choices at the point of installation, rather than by using cookie banners and pop ups.

In addition, analytics cookies which are not privacy-intrusive will not require consent (i.e. those used for web audience measurement, or to remember shopping cart details or login information for the same session).

  5. Direct marketing rules

The ePrivacy Regulation distinguishes between business to consumer communications (B2C) and business to business communications (B2B).

As under the Directive, unsolicited commercial communications are not permitted. In B2C marketing, prior consent (opt-in) is required. Consent will not be required where marketing similar products or services, but a right to object must be provided.

For B2B marketing, the ePrivacy Regulation allows for Member States to determine that the legitimate interests of corporate end users are sufficiently protected from unsolicited communication.

  6. Enforcement and higher fines in line with GDPR

The Information Commissioner’s Office (ICO) will be responsible for enforcement of the ePrivacy Regulation and the GDPR in the UK.

Currently, the ICO can only fine companies up to £500,000 for breaches of PECR (the national legislation which implements the Directive). The ePrivacy Regulation introduces fines which are in line with the GDPR (i.e. up to €20,000,000 or 4% of total worldwide annual turnover, whichever is the higher).

In addition, the ePrivacy Regulation confers on users of electronic communications services a right to seek compensation directly from service providers if they have “suffered material or non-material damage as a result of an infringement”.

Implications

The ePrivacy Regulation is critically important for many ad-tech businesses, where the need to get specific opt-in consent could be highly problematic for intermediaries who do not have a direct relationship with end users, and where soliciting that consent via publishers, while legally possible, may be impracticable.

All this is not helped by the fact that there is uncertainty around the final form of the ePrivacy Regulation; for example, as to whether valid consent can be managed within the browser.

As if compliance with GDPR did not present enough challenges, the ad-tech industry, as well as individual businesses, need to move quickly to prepare for these forthcoming changes in ePrivacy.

 

Josey Bright is an associate in the commerce & technology team at City law firm Fox Williams LLP and can be contacted at jbright@foxwilliams.com