As 2015 draws to a close, the Information Commissioner’s Office has fined the Telegraph Media Group Ltd £30,000 for a serious breach of the UK Privacy and Electronic Communications Regulations (“PECR”). The PECR set out specific rules in respect of electronic communications. In particular, they prohibit the sending of unsolicited marketing and advertising by electronic means without the individual’s consent.
On the day of the general election earlier this year, the Telegraph Media Group sent out its daily editorial e-bulletin which included a letter from the editor of the Telegraph newspaper urging its readers to vote Conservative. Whilst subscribers to the Telegraph Media Group had signed up, and hence consented to receiving, the editorial e-bulletin, the ICO found that by promoting a particular election campaign the nature of the e-bulletin had changed from an editorial communication to a ‘marketing communication’.
In order to amount to valid consent to receiving a particular electronic communication under the PECR, consent must be knowingly given, clear, and specific. In the circumstances, the Telegraph Media Group did not have the specific consent of the readers to send such a marketing communication and the communication was sent in breach of the PECR. The ICO Head of Enforcement considered that the Telegraph had been negligent in sending the letter from the editor as part of the e-bulletin and explained that “people signed up to The Telegraph’s email service so they could catch up on the news or find out about subjects they were interested in. They did not expect to be told who they should be voting for.”
The ICO has the power to impose a monetary penalty on a data controller of up to £500,000 in respect of such a breach. However, the relatively low amount of £30,000 was determined by the fact that only 17 complaints were received, and that the email in question was a late addition to the usual mailing. The ICO acknowledged that there was pressure to distribute it quickly and little time to properly consider whether it should be included in the mailing.
This case serves as a reminder of the scope of the PECR and the enforcement action open to the ICO for those who ignore the rules.
With the increasing use of social media (according to Twitter, the number of tweets has grown to 500 million per day, or around 200 billion a year) there is greater risk of blurring the distinction between work and home, with repercussions for employers and individuals.
It is perhaps surprising that there have been relatively few reported cases and that the courts in the employment arena are not keen to set hard and fast ground rules. So what can we learn from the cases in the last 12 months?
The majority of the cases have been claims of unfair dismissal as a result of employees sharing information and expressing views via social media, with the key question being whether dismissal can be justified (or, to use the legal test, is within the range of reasonable responses). The extent to which postings, tweets or blogs are private, who has access and the impact on an organisation’s reputation are all relevant considerations in determining the appropriate response. Most important is demonstrating that the employer has made clear, via rules or a social media policy, what is unacceptable and when dismissal may result.
There seems to be increased recognition by the courts that derogatory comments do impact on an employer’s reputation and that employees should take greater care when using social media to the extent of even curbing some freedoms:
In an early case from 2011, Ms Witham was found to have been unfairly dismissed for making negative comments about her colleagues – the Tribunal concluded the comments were relatively minor and there was no evidence that they had harmed the client relationship. The key relationship was with Skoda/VW and Ms Witham was a Team Leader working with this client. She had complained on her Facebook page: “I think I work in a nursery and I do not mean working with plants”. Witham v Club 24 Limited t/a Ventura
Another early case rejected the argument that an employer was able to fairly dismiss an employee who had posted his views on gay marriage on Facebook; this was despite the fact that many colleagues were Facebook friends and it was also clear from his profile that he was a manager at the Trust. The High Court concluded that his Facebook page was personal (Smith v Trafford Housing Trust).
So we move on to the cases in the last 12 months, which seem to have moved the position forward and recognised that even private postings can overlap into the workplace, affecting colleagues and the employer’s reputation. These cases also suggest that the courts now recognise the impact such media can have in the workplace.
Game Retail Ltd v Laws illustrates this best: As a risk and loss prevention investigator, Mr Laws was responsible for 100 Game stores and had his own Twitter account, which was followed by 65 stores including both staff and managers. He posted 28 tweets containing expletives or bad language, such as “This week I have mainly been driving to towns the arse end of nowhere…”, going on to complain about other road users. The Appeal Tribunal accepted these tweets could be read by staff and customers – even though it would be those who chose to follow him. In addition, there was no need to show that employees or customers had actually been offended – Game Retail had formed an honest and reasonable belief that they might have caused offence.
During the summer there was another Facebook case before the Tribunal which contrasted markedly with Smith v Trafford. In the case of British Waterways Board v Smith the employee posted comments on Facebook about drinking alcohol whilst on standby duties and complained about his supervisors in colourful terms: “the f****** don’t even pay us for this s***” and “why are gaffers such p********…” His dismissal was found to be fair by the Appeal Tribunal, who disagreed with the original employment tribunal. It was useful that the Board’s social media policy stated that “any action on the internet which might embarrass or discredit BW (including defamation of third parties, for example by posting on bulletin boards or chat rooms)” was prohibited.
So the legal position, from an employment law perspective, is evolving and this evolution looks set to continue. A good resolution for the New Year might well be to revisit employer social media and disciplinary policies in light of these lessons.
First proposed in January 2012, the new General Data Protection Regulation (GDPR) has finally been agreed between the European Commission, the European Parliament and the Council (the so-called ‘trilogue’).
Current data protection rules are based on the 1995 Data Protection Directive, which predates mainstream internet, social media, big data, the cloud and other advances in technology which shape the way business operates today. It’s a classic case of legislation not keeping pace with technological development; its overhaul is well overdue.
A key benefit of the GDPR will be a single harmonised data protection law covering the whole of the EU. At present, each EU member state has implemented its own version of the 1995 Directive, so the rules can differ from state to state.
The main highlights are summarised as follows:
A stricter regulatory environment
Reflecting ever increasing concerns about how personal data is used in the digital economy, and the continuous flow of news reports about data security breaches, the GDPR imposes a much higher burden of compliance on business. Specific points include:
Fines – the maximum fine for breach of the GDPR is to be set at 4 per cent of a company’s worldwide turnover. Currently, the maximum fine under the DPA is £500,000. This alone should be enough to put the GDPR onto every Board’s agenda.
Easier access to data – individuals will have (and businesses will be required to provide) more information on how their data is processed, and this information should be available in a clear and understandable way.
Consent – a new more expansive and specific definition of consent requires that it must be a “freely given, specific, informed and unambiguous indication of his or her wishes” by which the data subject, either “by a statement or by a clear affirmative action”, signifies agreement to personal data relating to them being processed.
Additional administrative burden – businesses must keep a record of any data processing activities under their responsibility (referred to as documentation) and must carry out data protection impact assessments (DPIAs) if they are processing data using new technologies and this is likely to result in a high risk to personal data.
Rules for innovation – the regulation requires that data protection safeguards are built into products and services from the earliest stage of development (privacy by design). Privacy-friendly techniques such as pseudonymisation are encouraged by the GDPR, to allow the benefit of big data innovation while protecting privacy.
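To give a rough flavour of what pseudonymisation can look like in practice – this is a minimal sketch, not a compliance recipe, and the key, function name and record fields are all hypothetical – a direct identifier can be replaced with a keyed hash, so records remain linkable for analysis while the original identifier cannot be recovered without the key:

```python
import hmac
import hashlib

# Hypothetical secret key; in practice this would be generated randomly and
# stored separately from the pseudonymised data (e.g. in a key vault).
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same pseudonym, so records can still
    be linked for analysis, but the identifier itself is not exposed.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Illustrative record: the email address is pseudonymised before analysis.
record = {"email": "jane@example.com", "page_views": 42}
record["email"] = pseudonymise(record["email"])
```

Keyed hashing is only one of several pseudonymisation techniques (tokenisation and encryption are others); whether it is adequate depends on the context and on how the key is protected.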
Data protection officers – companies will be required to appoint data protection officers if they process sensitive data or collect information from consumers on a large scale. This will be an additional cost to many companies, although there is an exemption applicable to SMEs – see below.
Data processors – the GDPR treats a data processor as a data controller if it processes personal data otherwise than in accordance with the data controller’s instructions, and subjects data processors to fines for breaches of the GDPR; under the current rules, in general, only the data controller is responsible for compliance.
Data breach notification – companies and organisations must notify the national supervisory authority (that’s the ICO in the UK) of serious data breaches as soon as possible so that users can take appropriate measures.
Individual rights
As well as the above, the new rules strengthen existing rights to include:
a right to data portability – the GDPR will make it easier for consumers to transfer personal data between service providers such as social network platforms and SaaS service providers;
right to be forgotten – EU citizens will have a stronger right to require that their data is deleted provided that there are no legitimate grounds for retaining it, which may require a business to rethink its current policy on data retention and deletion.
International aspects
Impact on non-EU businesses – the new rules will apply to companies who do not have a physical presence in the EU but offer services in the EU and collect data about EU data subjects. This will, for example, affect many US companies that provide services into the EU.
International data transfers – the position regarding transfers of data outside of the EU is unsatisfactory, highlighted by the recent invalidation of the Safe Harbor framework in respect of transfers to the US. However, it seems that the position under the GDPR will be largely unchanged from the current position.
One continent, one law – The GDPR will establish one single set of rules for the whole of the EU which will make it simpler and cheaper for companies to do business in the EU.
One-stop-shop – businesses will only have to deal with one single supervisory authority.
Exemptions for SMEs
Under the new rules, SMEs benefit from certain exemptions to reduce the burden of compliance:
No more notifications: the requirement to notify / register with the ICO is to be scrapped.
Subject access: Where requests to access data are manifestly unfounded or excessive, SMEs will be able to charge a fee for providing access.
Data Protection Officers: SMEs are exempt from the obligation to appoint a data protection officer insofar as data processing is not their core business activity.
Impact Assessments: SMEs will have no obligation to carry out an impact assessment unless there is a high risk.
Next steps
Before the GDPR becomes law, the final text must be formally adopted by the European Parliament and Council, which is set to happen at the beginning of 2016.
The new rules will then become applicable across the EU two years thereafter.
Market forecasts predict that the commercial and civil drone market will boom over the next decade. The use of drones, also called unmanned aerial systems (UAS) or unmanned aerial vehicles (UAVs), is becoming increasingly popular, with their use already being championed by the likes of Amazon, DHL and Shell. Companies such as Royal Mail are considering both drones for air mail and autonomous delivery vans, and major insurance companies are investing in drone technology in order to monitor crop yields, amongst other things.
The global economic potential for commercial drone use is looking extremely positive, with a recent US study estimating that over the 10-year span from 2015 to 2025, drone integration within national air space will account for $82.1 billion in US job creation and economic growth. These figures aren’t surprising given the advantages drones offer businesses, whether by streamlining delivery, enabling efficient aerial photography or contributing to safe infrastructure maintenance and management.
And so, with the prospect of drones being integrated into businesses on a larger scale, it is crucial that businesses understand the legal and other risks associated with drone use.
A key area of concern is privacy. As the impact of drones on privacy has become more prominent, the Information Commissioner’s Office (ICO) has looked to become more involved. The ICO gave evidence to a Parliamentary Committee in autumn last year on the risk to privacy posed by UAS, and underlined that their use for commercial purposes must be carried out in accordance with the Data Protection Act (DPA).
Earlier this year the ICO issued guidance on drone use for individuals and organisations. The ICO recommends that drones with cameras be operated in a responsible way that respects individuals’ privacy rights: if a drone has a camera, its use has the potential to be covered by the DPA. If a business is using a drone for commercial purposes, it is important that it understands its legal obligations as a data controller. Where UAS are used for business purposes, operators will need to comply with data protection obligations, and it will be good practice for users to be aware of the potential privacy intrusion which the use of UAS can cause, to make sure they aren’t in breach of any data protection or privacy provisions.
Potential risks
The ICO has provided guidance as to the potential data protection risks that businesses may be exposed to when using drones:
Collateral Intrusion
The use of UAS has a high potential for collateral intrusion by recording images of individuals unnecessarily, and therefore can infringe individuals’ privacy rights. For example, there is a high probability of recording individuals inadvertently, because of the height at which drones can operate and the unique vantage point they can obtain. Individuals may not always be directly identifiable from the footage captured by UAS, but can still be identified through the context they are captured in or by using the device’s zoom capability.
As such, it is very important that you can demonstrate in your Privacy Impact Assessment (PIA) (discussed later in this article) that there is a strong justification for the drone’s use of recording. You may be able to reduce the risk of privacy infringement by incorporating privacy-restrictive methods in the design of the drone. For example, you may be able to procure a device with restricted vision, so that its focus is only in one place. Privacy by design can be incorporated into your PIA and can form part of your procurement process.
Recording Systems
It is important that the recording system on UAS can be switched on and off when appropriate. This is particularly important given the potential for the cameras to capture large numbers of individuals from a significant height. Unless you have a strong justification for doing so, and it is necessary and proportionate, recording should not be continuous. This is something which you should look at as part of your PIA.
One major issue with the use of UAS is that, on many occasions, individuals are unlikely to realise that they are being recorded, or may not know that the UAV has a camera attached. Businesses can, however, introduce innovative ways of providing this information. The ICO recommends, for example, wearing highly visible clothing identifying yourself as the UAS operator, placing signage in the area where you are operating the UAS explaining its use, and having a privacy notice on a website that you can direct people to, or some other form of privacy notice, so they can access further information.
Coverage of the ‘whole’ system
The ICO guidelines advise organisations that data protection issues concerning UAS cover the whole system, rather than just the device in the air, so you need to ensure that the whole system is compliant. You should ensure that any data which has been collected is stored securely. This can be achieved by using encryption or another appropriate method of restricting access to the information. It is also important to ensure that data is retained for the minimum time necessary for its purpose and disposed of appropriately when no longer required.
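As a minimal sketch of the retention point – the 30-day period, function name and record layout here are purely illustrative, and the appropriate retention period depends on the purpose for which the footage was collected – footage past its retention period can be identified programmatically so it can then be securely deleted:

```python
from datetime import datetime, timedelta

# Illustrative retention period; the right period depends on the purpose
# for which the footage was collected.
RETENTION = timedelta(days=30)

def expired(records, now):
    """Return the records whose retention period has elapsed.

    Each record is a (captured_at, payload) pair; callers would then
    securely delete the payloads this function returns.
    """
    return [r for r in records if now - r[0] > RETENTION]

footage = [
    (datetime(2015, 10, 1), "flight-001.mp4"),
    (datetime(2015, 12, 20), "flight-002.mp4"),
]
to_delete = expired(footage, now=datetime(2015, 12, 28))
```

Running a check like this on a schedule, and logging what was deleted and when, helps evidence that data is not kept longer than necessary.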
Cyber Security
Unencrypted data links found within drones are particularly vulnerable to jamming, interception and manipulation. There are clear cyber security risks: a drone could be hacked, its data link or live feed intercepted, or the aircraft “spoofed”, i.e. its GPS signal manipulated during flight. Businesses should be aware that, when operating in an urban environment, the heavy use of communications equipment and other sources of electromagnetic spectrum/radio frequency puts drones at risk of manipulation or interference. Businesses also need to consider mitigation for the consequences of a weak or lost GPS signal, due to masking by buildings along with the general radio frequency saturation level.
How to be best prepared
Privacy Impact Assessments
A PIA is a process which helps a business to identify and reduce the privacy risks of a project. They enable an organisation to systematically and thoroughly analyse how a particular project or system will affect the privacy of the individuals involved. A PIA will help you decide if using UAS is the most appropriate method to address the need that you have identified.
With regard to the use of drones, a PIA should consider identifying the drone’s potential effects upon privacy and data protection compliance, how detrimental effects of the drone may be overcome and how the use of the drone can comply with data protection principles.
The DPA does not oblige organisations to conduct PIAs, but the ICO has said they are useful tools for organisations to use in order to help them comply with the requirements set out in the DPA.
Organisations that undertake PIAs may also hope to be treated more leniently by regulators if they experience a data protection breach and are subject to legal action; there is an understanding by the regulator that not all data breaches are preventable. Through a PIA you can show that you assessed the risks of processing personal data, took measures to mitigate those risks, or otherwise identified the reasons why you decided to proceed with certain projects despite data protection risks being present.
Please contact Daniel Geller for further information.