Theories and Realities of Privacy Law: An Overview

There is no universally agreed-upon theory of privacy; instead there are a number of competing theories and perspectives as to what privacy actually is and what value we as a society ought to place on an individual’s right to privacy. In this blog post, I will provide a brief overview of the most common or interesting theories concerning privacy law and will then discuss some well-known privacy incidents that have occurred in recent years.

Traditional Theories: What is Privacy?

Bygrave identifies four influential theories of privacy that have provided the foundation for work in privacy jurisprudence over the years.

  • Privacy ‘in terms of non-interference’ (e.g. Warren and Brandeis’s formulation of ‘the right to be let alone’, which emerged in 1890 and arose from a concern that government, the press and other institutions had begun to invade previously inaccessible aspects of personal activity; strongly concerned with individual freedom);
  • Privacy ‘in terms of degree of access to a person’ (e.g. Ruth Gavison’s ‘condition of limited accessibility’, involving (i) secrecy (‘the extent to which we are known to others’); (ii) solitude (‘the extent to which others have physical access to us’); and (iii) anonymity (‘the extent to which we are the subject of others’ attention’));
  • Privacy ‘in terms of information control’ (e.g. Alan Westin’s ‘privacy is the claim of individuals, groups or institutions to determine when, how and to what extent information about them is communicated to others’; or the German Constitutional Court’s ‘right of informational self-determination’);
  • Privacy by relating it ‘exclusively to those aspects of persons’ lives that are “intimate” and/or “sensitive”’ (e.g. Julie Inness’s limitation of privacy to those aspects of people’s lives that are ‘intimate’ or ‘sensitive’, e.g. to avoid embarrassment or preserve dignity).

Challenges to Traditional Theories

There are many who are sceptical about whether privacy rights can feasibly be protected in the modern age. Michael Froomkin argues that, in light of the rapid growth of privacy-destroying technologies, it is increasingly unclear whether privacy can be protected at a bearable cost, or whether we are approaching an era of zero informational privacy, sometimes referred to as a “dataveillance” world. There are indeed many privacy-threatening technologies: CCTV surveillance, smartphone apps, social media platforms such as Facebook, and smart homes. These technologies pose privacy threats that have rarely been considered before because they involve unprecedented data collection and aggregation. This raises the question of whether we should be prepared to trade off some of our privacy rights for the convenience of being able to use these new technologies.

Others challenge the traditional theories of privacy on the basis that they are elusive and value-driven. Since privacy means different things to different people, and each individual has different expectations as to what ought to be considered private, privacy is often viewed as incapable of a single agreed-upon theory or definition. The Law Reform Commission, for its own part, has stated that ‘it is difficult, if not impossible, to define the parameters of the right to privacy in precise terms.’ For example, to many people, privacy is important because it means respect for human autonomy. Others argue that there is a psychological need for privacy. On the other hand, many believe openness and transparency are more important than individual privacy because these values help to facilitate democracy.

New ‘Privacy Skeptic’ Theories

Not everyone supports attempts to protect privacy by law or to limit surveillance by law. A number of new theories have been put forth by so-called ‘privacy skeptics’, who are sceptical of privacy and privacy law for economic, practical or moral reasons.

Privacy as economic inefficiency

One of the main arguments of privacy skeptics is that privacy rights are economically inefficient. For example, in an influential article called The Right of Privacy, Richard Posner stated that an unfortunate consequence of information privacy is that it allows people to conceal personal facts about themselves in order to mislead others or misrepresent their character. Posner argued that other people, including government institutions, have a legitimate interest in unmasking that misrepresentation. However, while there is some merit to this theory, it is generally accepted that a balancing exercise is required: there comes a point at which an invasion of privacy goes too far and becomes unjustifiable.

Technological defeatism

As I alluded to earlier, many people argue that, whatever the merits of privacy as a value, it is futile to attempt to protect it in the face of the rapid technological developments we are experiencing. This theory is known as ‘technological defeatism’, and is reflected in the famous sound bite of Scott McNealy, then CEO of Sun Microsystems, who said ‘You have zero privacy anyway. Get over it.’

However, a solution commonly put forward to technological defeatism is the ‘if you can’t beat them, join them’ approach of maximising the surveillance of all those who have control over data and surveillance within society. This is also known as ‘watching the watchers.’ It could involve, for example, monitoring and policing the people who collect our data. In Australia, the Office of the Australian Information Commissioner oversees organisations and agencies by conducting investigations, handling complaints and enforcing decisions when a privacy breach has occurred.

Privacy Issues

Let’s now consider some topical privacy issues. Privacy issues have been rife in recent years, from Cloud computing and the dark web to controversial data breach incidents such as the Ashley Madison hack.

Cloud Computing and the Dark Web

Statistics about the growing use of cloud services and the lack of visibility into sensitive information in the cloud indicate that the cloud is likely to result in more damaging or costly data breaches. A Netskope study conducted in 2016 surveyed 643 IT and IT security professionals in the US and Canada who were familiar with their company’s use of cloud services. 85% of respondents said that their on-premises security is equal to or stronger than their cloud security, and most respondents admitted that their organisation’s use of cloud resources diminishes its ability to protect confidential or sensitive information. The survey also indicated that 60% of enterprises don’t scan their cloud services for malware, 57% of enterprises have cloud malware, and 34% don’t even know it. This highlights the privacy and security threat that can be posed by using cloud-based software.

A widely reported data breach incident in the cloud occurred in relation to cloud storage service Dropbox. Dropbox is a file hosting service where users can store and synchronise their photos, documents and videos. Last year, following a widely publicised data breach, details of more than 68 million user accounts were reportedly leaked online. The data was posted on the “dark web,” and, dangerously, records of email addresses and hashed passwords were spread widely across the Internet. The Dropbox dump is just one of a string of high-profile data breaches. In 2016, a hacker was reportedly looking to sell 117 million email and password combinations from a 2012 LinkedIn breach on the dark web, and in the same year a hacker claimed to be selling 655,000 alleged patient healthcare records on the dark web, containing information such as social security numbers, addresses, and insurance details.
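The mention of “hashed passwords” matters here: whether a leaked dump is exploitable depends heavily on how the passwords were hashed. As a minimal illustration (not Dropbox’s actual implementation), the sketch below uses Python’s standard-library PBKDF2 to show the two properties a password store needs: a unique per-user salt and a deliberately slow, iterated hash, so that a leaked database cannot be cracked cheaply offline. The function names and iteration count are my own choices for the example.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash; the salt is stored alongside the digest."""
    salt = os.urandom(16)  # unique per user, defeats precomputed "rainbow" tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 200_000) -> bool:
    """Re-derive the hash from the login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

A fast hash such as plain SHA-1 fails this test: attackers can try billions of guesses per second against a leaked dump, which is why breaches of weakly hashed password databases are treated as so much more serious.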

In another incident, a hacker attacked a company called Code Spaces. Code Spaces was not a well-known company and its hack didn’t affect millions of people, but it is an interesting case study because it is an example of a company put completely out of business by a single cloud security incident. In short, a hacker compromised Code Spaces’ Amazon Web Services account and demanded a ransom. When the company declined, the hacker started destroying the company’s resources until there was barely anything left, effectively putting the company out of business altogether.

Ashley Madison data breach

Breaches of privacy can often have ethical implications and can involve very sensitive and damaging information, as evident in the controversial Ashley Madison data breach case. In 2015, a group calling itself “The Impact Team” stole the user data of Ashley Madison, a commercial website that was developed to facilitate discreet, extramarital affairs. The group copied personal information about the site’s user base and threatened to release users’ names and personally identifying information if Ashley Madison was not immediately shut down. Ultimately, the group leaked more than 25 gigabytes of company data online, including records of real names, home addresses, search histories and credit card transaction records, and many users were publicly humiliated and shamed for engaging in extramarital affairs.

The Australian Privacy Commissioner, Timothy Pilgrim, and the Privacy Commissioner of Canada, Daniel Therrien, opened a joint investigation into the breach, and found that Ashley Madison suffered the data breach as a result of inadequate security safeguards.

According to the findings, the security framework of ALM (Avid Life Media, the company behind Ashley Madison) lacked the following elements:

  • documented information security policies or practices, including appropriate training, resourcing and management; and
  • an explicit risk management process, including periodic and proactive assessments of privacy threats and evaluations of security practices, to ensure that ALM’s security arrangements were, and remained, fit for purpose.

Statistics on Data Breaches

Australia leads the Asia Pacific region for data breaches according to security indexes.

The Cost of Data Breach Study: Global Analysis, which covered eight countries (the US, UK, France, Germany, Italy, India, Japan and Australia), found that Australian companies experienced the highest average number of breached records and faced the second-highest detection and escalation costs.

Chatbots

Chatbots may be exposed to, and collect, a vast amount of personal data and other commercial information in the course of interacting with Internet users. Data privacy policies for chatbots therefore need to be kept up to date; it must be clear where data is collected and where it will be processed; and there must be internal policies governing the extent of a chatbot’s permitted activities and what data it is permitted to collect. There is also a risk that chatbots can go ‘rogue’, and there have been cases of chatbots extracting personal data and bank account information from users. An example of a chatbot that went rogue is Microsoft’s ‘Tay’, which hit the headlines when it started posting offensive tweets, swearing, and making racist and inflammatory political remarks.
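One concrete way to implement the internal data-handling policies described above is to redact personal data from chatbot conversations before they are logged or processed. The sketch below is a simplified, hypothetical example using regular expressions; a production system would use a dedicated PII-detection service rather than two hand-written patterns, and the pattern names and `redact` function are my own for illustration.

```python
import re

# Hypothetical policy: strip email addresses and payment-card-like numbers
# from any user message before it is stored in the chatbot's logs.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(message: str) -> str:
    """Replace detected personal data with labelled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[{label} redacted]", message)
    return message

print(redact("My card is 4111 1111 1111 1111 and my email is jo@example.com"))
```

Redaction at the point of collection limits what a ‘rogue’ bot, or an attacker who compromises its logs, could ever extract, which is the underlying privacy-by-design idea.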

As can be seen, privacy issues are rife in contemporary society. It is more important than ever that we understand the value of privacy conceptually, so that we are able to define the boundaries that we expect organisations not to cross when it comes to the collection, use and disclosure of our personal information. However, the question of where that line ought to be drawn will perhaps always be the subject of considerable controversy, whether from privacy skeptics, privacy traditionalists or technological defeatists. To borrow the words of academic Raymond Wacks, ‘an acceptable theory of privacy remains elusive.’
