The ICO’s new Code of Practice on Communicating Privacy Information to Individuals goes beyond the form of privacy notice that we are accustomed to seeing when we hand over our personal information. It advocates a blended approach of selecting a number of different techniques to communicate privacy details to individuals when they hand over their personal data.
According to the ICO, the benefits of the blended approach include:
greater control for individuals over how their personal data is used;
greater choice for individuals over how their personal data is used;
the ability to demonstrate that personal data is being used fairly and transparently;
better and more specific information from individuals, since preference management tools encourage engagement; and
a greater likelihood of being able to demonstrate that informed consent has been provided.
Drafting privacy notices in accordance with the Code
The Code is full of detailed and helpful guidance on preparing privacy notices, including the following:
Have a plan – consider whether the individual would reasonably expect your intended uses of the information. If not, your privacy notice should explain those uses in greater detail. Make predictions of likely future uses, especially as part of big data analytics, and include this information in the notice. Put yourself in the shoes of the individual: carry out a privacy impact assessment.
Blended approach – make use of the privacy-enhancing technologies available such as just-in-time solutions, voice or video, privacy dashboards, icons and symbols.
Avoid catch-all privacy notices – instead, have separate notices tailored to groups.
Control – it is good practice to link the notice to a preference management tool such as a privacy dashboard; be clear about which information is required and which is optional.
Adapt to your business model – the privacy notice should cover all platforms through which the individual can access your services.
Consent – consider whether the individual needs to consent to the processing described in the privacy notice and, if so, include a mechanism for giving and obtaining consent at the appropriate time.
Active communication – when appropriate, privacy information should be actively communicated to individuals (as opposed to the individual having to seek it out through, e.g., a web link) – for example, if the uses are likely to be unexpected, or if information could be combined with other sources to build a more detailed picture of an individual.
Collaborative resource – where several data controllers are involved, the ICO suggests that in addition to individual privacy notices, a collaborative resource which brings together all privacy information could be the way forward. Such a resource could allow the individual to make and apply privacy preferences across all data controllers.
Encourage individuals to take notice – word privacy notices in an engaging way and embed them into the user journey.
When dealing with complex transactions or platforms which involve personal data collection, compliance with the principles may require a range of privacy communication techniques to be used. The key is to employ these techniques with a focus on how they can enhance the user experience, rather than over-complicate it.
What do you think about the proposed new Code? The Code is open for consultation until 24 March 2016.
The market for and consumer awareness of wearable tech has rocketed over the last few years, and is predicted by some analysts to be worth $25 billion by 2019. Beyond fitness bands for the wrist and the first generation of smartwatches and smart eyewear, we will soon be able to purchase smart clothes with sensors to monitor fitness and athletic performance. And with the technology developing at a dizzying pace, ingestibles and embeddables are just over the horizon, taking the form of digital pills, and chips to be inserted into muscles or under the skin.
Each new generation of wearable tech aims to be more sophisticated and less obtrusive than the last. The less obtrusive it becomes, however, the greater the risk of it becoming more intrusive, as the wearer (and potentially third parties who come into close proximity with the wearer) is at risk of having their personal data used in ways they may not have anticipated.
The data protection concerns inherent in wearable tech have been exercising regulators for some time. Part of the problem is that the current legislation in the UK – the Data Protection Act 1998 – was drafted in a time when smart technology was in its very early development phase. Despite this, regulators have emphasised that all stakeholders involved in the production and operation of wearable tech must comply with data protection laws.
Wearable tech companies will be “data controllers” for the purposes of the data protection legislation if their device collects “personal data” from users, and if (as is likely) the wearable tech company determines the purposes for which and the manner in which such data is to be used.
“Personal data” is any data which relates to a living individual who can be identified from that data alone, or from that data when it is combined with other information which is in the possession of the data controller. A common assumption is that personal data is limited to someone’s name, photograph, email address and mobile number, but in fact the definition goes much wider. Data such as an IMEI number of a smartwatch can be personal data, if it is used to differentiate an individual from others.
There are various requirements with which data controllers have to comply under data protection legislation, including the following:
The processing of the data must be fair and lawful. As part of this, the company will need to tell the user what data it is collecting and what the data will be used for. Given that some wearable tech devices collect different sorts of data using different sensors, it is crucial that the user is aware of all the data being collected by all enabled sensors.
The consent of the user to the processing of their personal data will almost always be needed for the processing to be fair. Consent must be freely given, specific and informed. In relation to sensitive personal data (such as data relating to an individual’s health) the requirements for consent are more stringent. Data controllers collecting data relating to an individual’s health (which will be a large proportion of the wearable tech industry) will need to ensure that their users give “explicit” consent before such data is collected. Opt-in consent is required in these circumstances, not opt-out consent.
The data must be protected by appropriate technical and organisational measures against unauthorised or unlawful use, and against accidental loss, destruction or damage. Given the extent of the personal data collected by many wearables, the sensitivity of that data and the rise of hacking, data security must be a top priority for wearable tech companies.
Personal data must not be transferred to a country outside the EEA unless that country ensures an adequate level of protection of personal data. For US-based wearable tech companies selling into the EU, it should be borne in mind that the European Commission does not consider the US to adequately protect personal data and that it is no longer possible to rely on the Safe Harbor framework. An alternative solution should be put in place to ensure transfers outside of the EEA are lawful.
With the increasing use of social media (according to Twitter, the number of Tweets has grown to 500 million per day, around 200 billion a year), there is a greater risk of blurring the distinction between work and home, with repercussions for employers and individuals.
It is perhaps surprising that there have been relatively few reported cases and that the courts in the employment arena are not keen to set hard and fast ground rules. So what can we learn from the cases in the last 12 months?
The majority of the cases have been claims of unfair dismissal as a result of employees sharing information and expressing views via social media, with the key question being whether dismissal can be justified (or, to use the legal test, is within the range of reasonable responses). The extent to which postings, tweets or blogs are private, who has access and the impact on an organisation’s reputation are all relevant considerations in determining the appropriate response. Most important is demonstrating that the employer has made clear, via rules or a social media policy, what is unacceptable and when dismissal may result.
There seems to be increased recognition by the courts that derogatory comments do have an impact on an employer’s reputation and that employees should take greater care when using social media, to the extent of even curbing some freedoms:
In an early case from 2011, Witham v Club 24 Limited t/a Ventura, Ms Witham was found to have been unfairly dismissed for making negative comments about her colleagues – the Tribunal concluded the comments were relatively minor and there was no evidence that they had harmed the client relationship. In this case the key relationship was with Skoda/VW and the individual was a Team Leader working with this client. She had complained on her Facebook page: “I think I work in a nursery and I do not mean working with plants”.
Another early case rejected the argument that an employer could fairly dismiss an employee who had posted his views on gay marriage on Facebook; this was despite the fact that many colleagues were Facebook friends and it was clear from his profile that he was a manager at the Trust. The High Court concluded that his Facebook page was personal (Smith v Trafford Housing Trust).
So we move on to the cases in the last 12 months, which seem to have moved the position forward and recognised that even private postings can spill over into the workplace, affecting colleagues and the employer’s reputation. This also suggests that the courts are now recognising the impact such media can have in the workplace.
Game Retail Ltd v Laws illustrates this best: as a risk and loss prevention investigator, Mr Laws was responsible for 100 Game stores and had his own Twitter account, which was followed by 65 stores including both staff and managers. He posted 28 tweets containing expletives or bad language, such as “This week I have mainly been driving to towns the arse end of nowhere…”, going on to complain about other road users. The Appeal Tribunal accepted these tweets could be read by staff and customers – even though it would be those who chose to follow him. In addition, there was no need to show that employees or customers had actually been offended – Game Retail had formed an honest and reasonable belief that they might have caused offence.
During the summer there was another Facebook case before the Tribunal which contrasted markedly with Smith v Trafford. In the case of British Waterways Board v Smith the employee posted comments on Facebook about drinking alcohol whilst on standby duties and complaining about his supervisors in colourful terms: “the f****** don’t even pay us for this s***” and “why are gaffers such p********…” His dismissal was found to be fair by the Appeal Tribunal, which disagreed with the original employment tribunal. It was useful that the Board’s social media policy stated that “any action on the internet which might embarrass or discredit BW (including defamation of third parties, for example by posting on bulletin boards or chat rooms)” was prohibited.
So the legal position, from an employment law perspective, is evolving and this evolution looks set to continue. A good resolution for the New Year might well be to revisit employer social media and disciplinary policies in light of these lessons.
First proposed in January 2012, agreement has finally been reached between the European Commission, the European Parliament and the Council (the so-called ‘trilogue’) regarding a new General Data Protection Regulation (GDPR).
Current data protection rules are based on the 1995 Data Protection Directive, which predates mainstream internet, social media, big data, the cloud and other advances in technology which shape the way business operates today. It’s a classic case of legislation not keeping pace with technological development; its overhaul is well overdue.
A key benefit of the GDPR will be a single harmonised data protection law covering the whole of the EU. At present, each EU state has implemented its own version of the 1995 Directive and differences can apply in different member states.
The main highlights are summarised as follows:
A stricter regulatory environment
Reflecting ever increasing concerns about how personal data is used in the digital economy, and the continuous flow of news reports about data security breaches, the GDPR imposes a much higher burden of compliance on business. Specific points include:
Fines – the maximum fine for breach of the GDPR is to be set at 4 per cent of a company’s worldwide turnover. Currently the maximum fine under the DPA is £500,000. This alone should be enough to put the GDPR onto every Board’s agenda.
Easier access to data – individuals will have (and businesses will be required to provide) more information on how their data is processed, and this information should be available in a clear and understandable way.
Consent – a new more expansive and specific definition of consent requires that it must be a “freely given, specific, informed and unambiguous indication of his or her wishes” by which the data subject, either “by a statement or by a clear affirmative action”, signifies agreement to personal data relating to them being processed.
Additional administrative burden – businesses must keep a record of any data processing activities under their responsibility (referred to as documentation) and must carry out data protection impact assessments (DPIAs) if they are processing data using new technologies and this is likely to result in a high risk to personal data.
Rules for innovation – the regulation requires that data protection safeguards are built into products and services from the earliest stage of development (privacy by design). Privacy-friendly techniques such as pseudonymisation are encouraged by the GDPR, to allow the benefit of big data innovation while protecting privacy.
Data protection officers – companies will be required to appoint data protection officers if they process sensitive data or collect information from consumers on a large scale. This will be an additional cost to many companies, although there is an exemption applicable to SMEs – see below.
Data processors – the GDPR treats data processors as data controllers if they process personal data otherwise than in accordance with the data controller’s instructions, and subjects data processors to fines for breaches of the GDPR; under current rules, in general, only the data controller is responsible for compliance.
Data breach notification – companies and organisations must notify the national supervisory authority (that’s the ICO in the UK) of serious data breaches as soon as possible so that users can take appropriate measures.
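To make the privacy-by-design point above a little more concrete, here is a minimal sketch of one common pseudonymisation technique, keyed hashing: a direct identifier is replaced with a keyed digest, so records can still be linked and analysed without exposing the underlying identity. The key name and record fields below are illustrative assumptions, not anything prescribed by the GDPR.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice this would be generated securely
# and stored separately from the pseudonymised dataset (e.g. in a key vault),
# since anyone holding the key can re-link identifiers.
SECRET_KEY = b"replace-with-a-securely-generated-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with an
    HMAC-SHA256 digest. The same input always yields the same output,
    so records remain linkable without revealing the identity."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Illustrative record: the raw email never enters the analytics dataset.
record = {"user": pseudonymise("alice@example.com"), "steps": 10432}
```

Note that keyed hashing is pseudonymisation, not anonymisation: the data controller who holds the key can still re-identify individuals, which is precisely why the GDPR treats pseudonymised data as personal data that still benefits from reduced risk.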
As well as the above, the new rules strengthen existing rights to include:
a right to data portability – the GDPR will make it easier for consumers to transfer personal data between service providers such as social network platforms and SaaS service providers;
a right to be forgotten – EU citizens will have a stronger right to require that their data is deleted provided that there are no legitimate grounds for retaining it; this may require a business to rethink its current policy on data retention and deletion.
Impact on non-EU businesses – the new rules will apply to companies who do not have a physical presence in the EU but offer services in the EU and collect data about EU data subjects. This will, for example, affect many US companies that provide services into the EU.
International data transfers – the position regarding transfers of data outside of the EU is unsatisfactory, highlighted by the recent invalidation of the Safe Harbor framework in respect of transfers to the US. However, it seems that the position under the GDPR will be largely unchanged from the current position.
One continent, one law – The GDPR will establish one single set of rules for the whole of the EU which will make it simpler and cheaper for companies to do business in the EU.
One-stop-shop – businesses will only have to deal with one single supervisory authority.
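As an illustration of what the data portability right above implies in practice, here is a hedged sketch of exporting a user record in a structured, machine-readable format that another provider could import. The field names are hypothetical, and the GDPR text does not mandate any particular format; JSON is simply one common choice.

```python
import json

# Hypothetical user record; field names are illustrative only.
user_record = {
    "name": "A. Example",
    "email": "a.example@example.com",
    "posts": [{"date": "2015-12-01", "text": "Hello"}],
}

def export_user_data(record: dict) -> str:
    """Serialise a user's data to JSON: a structured, commonly used,
    machine-readable format suitable for transfer to another provider."""
    return json.dumps(record, indent=2, sort_keys=True)

portable = export_user_data(user_record)
```

The design point is interoperability: a deterministic, documented serialisation means the receiving service can parse the export without bespoke tooling.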
Exemptions for SMEs
Under the new rules, SMEs benefit from certain exemptions to reduce the burden of compliance:
No more notifications: the requirement to notify/register with the ICO is to be scrapped.
Subject access: Where requests to access data are manifestly unfounded or excessive, SMEs will be able to charge a fee for providing access.
Data Protection Officers: SMEs are exempt from the obligation to appoint a data protection officer insofar as data processing is not their core business activity.
Impact Assessments: SMEs will have no obligation to carry out an impact assessment unless there is a high risk.
Before the GDPR becomes law, the final text must be formally adopted by the European Parliament and Council, which is set to happen at the beginning of 2016.
The new rules will then become applicable across the EU two years thereafter.
Market forecasts predict that the commercial and civil drone market will boom over the next decade. The use of drones, also called unmanned aerial systems (UAS) or unmanned aerial vehicles (UAVs), is becoming increasingly popular, with their use already being championed by the likes of Amazon, DHL and Shell. Companies such as Royal Mail are considering both drones for air mail and autonomous delivery vans, and major insurance companies are investing in drone technology in order to monitor crop yields, amongst other things.
The global economic potential for commercial drone use is looking extremely positive, with a recent US study estimating that over the 10-year span from 2015 to 2025, drone integration within national airspace will account for $82.1 billion in US job creation and economic growth. These figures aren’t surprising given the advantages drones offer businesses, whether by streamlining delivery, enabling efficient aerial photography or contributing to safe infrastructure maintenance and management.
And so, with the prospect of drones being integrated into businesses on a larger scale, it is absolutely crucial that businesses understand the legal and other risks attributed to drone use.
A key area of concern is privacy. As a result, drone use is an area that the Information Commissioner’s Office (ICO) has looked to become more involved with as the issue of drones and their impact on privacy has become more prominent. The ICO gave evidence to a Parliamentary Committee in autumn last year on the risk to privacy posed by UAS, and underlined that their use for commercial purposes must be carried out in accordance with the Data Protection Act (DPA).
Earlier this year the ICO issued guidance on drone use for individuals and organisations. The ICO recommends that drones with cameras should be operated in a responsible way to respect individuals’ privacy rights: if a drone has a camera, its use has the potential to be covered by the DPA. If a business is using a drone for commercial purposes, it is important that it understands its legal obligations as a data controller. Where UAS are used for business purposes, operators will need to comply with data protection obligations, and it will be good practice for users to be aware of the potential privacy intrusion which the use of UAS can cause, to make sure they aren’t in breach of any data protection or privacy provisions.
The ICO has provided guidance as to the potential data protection risks that businesses may be exposed to when using drones:
The use of UAS has a high potential for collateral intrusion by recording images of individuals unnecessarily, and can therefore infringe individuals’ privacy rights. For example, there is a high probability of recording individuals inadvertently, because of the height drones can operate at and the unique vantage point they can obtain. Individuals may not always be directly identifiable from the footage captured by UAS, but they can still be identified through the context they are captured in or by using the device’s zoom capability.
As such, it is very important that you can demonstrate in your Privacy Impact Assessment (PIA) (discussed later in this article) that there is a strong justification for the drone’s recording capability. You may be able to reduce the risk of privacy infringement by incorporating privacy-restrictive methods in the design of the drone. For example, you may be able to procure a device that has restricted vision so that its focus is only in one place. Privacy by design can be incorporated into your PIA and can form part of your procurement process.
It is important that the recording system on UAS can be switched on and off when appropriate. This is particularly important given the potential for the cameras to capture large numbers of individuals from a significant height. Unless you have a strong justification for doing so, and it is necessary and proportionate, recording should not be continuous. This is something which you should look at as part of your PIA.
One major issue with the use of UAS is the fact that, on many occasions, individuals are unlikely to realise that they are being recorded, or may not know that the UAS has a camera attached. Businesses can, however, introduce innovative ways of providing this information. The ICO recommends examples such as wearing highly visible clothing identifying yourself as the UAS operator, placing signage in the area where you are operating the UAS explaining its use, and having a privacy notice on a website that you can direct people to, or some other form of privacy notice, so they can access further information.
Coverage of the ‘whole’ system
The ICO guidelines advise organisations that data protection issues concerning UAS cover the whole system, rather than just the device in the air, so you need to ensure that the whole system is compliant. You should ensure that any data which has been collected is stored securely. This can be achieved by using encryption or another appropriate method of restricting access to the information. It is also important to ensure that data is retained for the minimum time necessary for its purpose and disposed of appropriately when no longer required.
Unencrypted data links within drones are particularly vulnerable to jamming, interception and manipulation. There are clear cyber security risks: a drone could be hacked, its data link or live feed intercepted, or the aircraft could be “spoofed”, i.e. its GPS signal manipulated during flight. Businesses should be aware that, when operating in an urban environment, the heavy use of communications equipment and other sources of electromagnetic spectrum/radio frequency puts drones at greater risk of manipulation or interference. Businesses also need to consider mitigation for the consequences of a weak or lost GPS signal due to masking by buildings, along with the general radio frequency saturation level.
How to be best prepared
Privacy Impact Assessments
A PIA is a process which helps a business to identify and reduce the privacy risks of a project. It enables an organisation to systematically and thoroughly analyse how a particular project or system will affect the privacy of the individuals involved. A PIA will help you decide whether using UAS is the most appropriate method to address the need you have identified.
With regard to the use of drones, a PIA should consider identifying the drone’s potential effects upon privacy and data protection compliance, how detrimental effects of the drone may be overcome and how the use of the drone can comply with data protection principles.
The DPA does not oblige organisations to conduct PIAs, but the ICO has said they are useful tools for organisations to use in order to help them comply with the requirements set out in the DPA.
Organisations which undertake PIAs may also hope to be treated more leniently by regulators if they experience a data protection breach and are subject to legal action. There is an understanding by the regulator that not all data breaches are preventable. Through a PIA you can show that you assessed the risks of processing personal data, took measures to mitigate those risks, or otherwise identified the reasons why you decided to proceed with certain projects despite the data protection risks being present.