Featured

The Mandatory Data Breach Laws in Australia One Year On

The mandatory data breach notification laws have now been in force for over 12 months. While quite convoluted and amorphous in some respects, the regime obliges APP entities to notify affected individuals and the Australian Information Commissioner where there are reasonable grounds to believe that an eligible data breach has occurred affecting personal information, credit information or tax file numbers. APP entities that contravene their notification obligations could be fined up to $1.8 million.

The Commissioner has recently released the Notifiable Data Breaches Scheme 12-month Insights Report. In this blog post I will briefly recap how the mandatory data breach regime operates, before discussing key insights from the Commissioner’s report.

The Mandatory Data Breach Regime

What is an ‘eligible data breach’?

When an entity believes on reasonable grounds that there has been an eligible data breach affecting personal information, health records, credit information or tax file numbers, it will be required to report it to the Australian Information Commissioner and the people whose information has been lost or stolen. An entity must also give a notification if it is directed to do so by the Commissioner.

An eligible data breach happens if:

  • (a) there is unauthorised access to, unauthorised disclosure of, or loss of, personal information held by an entity; and
  • (b) the access, disclosure or loss is likely to result in serious harm to any of the individuals to whom the information relates.

An APP entity is required to assess whether there has been an eligible data breach, having regard to matters such as the kind and sensitivity of information disclosed, relevant security measures, the persons who have obtained or could obtain the information, and the nature of the harm caused.

How does notification occur?

The APP entity will have to describe the breach and the kind or kinds of information concerned to affected individuals, and make recommendations about what the individuals should do.

APP entities that contravene the relevant notification obligations may be fined up to $1.8 million.

One Year On

The Insights Report examines the first four quarters of statistics from the scheme, and shows that:

  • 964 eligible data breaches were notified to affected individuals and the OAIC from 1 April 2018 to 31 March 2019;
  • 60 per cent of breaches were traced back to malicious or criminal attacks;
  • The leading cause of data breaches during the 12-month period was phishing (people tricked into revealing information such as passwords) causing 153 breaches;
  • More than a third of all notifiable data breaches were directly due to human error;
  • That includes personal information being emailed to the wrong recipient, which caused 97 data breaches, or one in ten;
  • The remaining 5 per cent of all notifiable data breaches involved system faults;
  • 168 voluntary notifications were also received by the OAIC, where the reporting threshold or ‘serious harm’ test was not met, or the entity was not regulated under the Privacy Act.

The Commissioner has provided a media statement regarding key findings:

“Data breaches involving personal information may be prevented through effective training and enhanced systems, analysis of the first 12 months of mandatory notifications reveals.”

“Releasing the report at the start of Privacy Awareness Week in Sydney today, Australian Information Commissioner and Privacy Commissioner Angelene Falk called on regulated entities to heed its lessons.”

“By understanding the causes of notifiable data breaches, business and other regulated entities can take reasonable steps to prevent them,” Ms Falk told the Privacy Awareness Week Business Breakfast this morning.

“Our report shows a clear trend towards the human factor in data breaches — so training and supporting your people and improving processes and technology are critical to keeping customers’ personal information safe.

“After more than 12 months in operation, entities should now be well equipped to meet their obligations under the scheme, and take proactive measures to prevent breaches of personal information.”

“The requirement to notify individuals of eligible data breaches goes to the core of what should underpin good privacy practice for any entity — transparency and accountability.”

The Road to Legal Uncertainty: Autonomous Vehicles and the Law

A hallmark of the twenty-first century artificial intelligence renaissance is the autonomous vehicle (AV), an automotive advancement that has pushed scientific frontiers and is poised to transform the transport industry. Though the technology is still in its infancy, it is conceivable that AVs will set the wheels in motion for a mobility revolution and will pave the road to increased productivity, efficiency and safety. However, there are significant liability risks and uncertainties associated with AVs, particularly in relation to collision liability, the Queensland Road Rules, and use of data created by AVs. To ameliorate this uncertainty, Queensland should implement a no-fault insurance system and amend the law to resolve AV data ownership issues and clarify liability under the Road Rules.

AUTONOMOUS VEHICLES

A The Technology of AVs

AVs are an example of weak artificial intelligence[1] characterised by autonomy, reactivity, goal-centeredness and temporal continuousness.[2] An AV’s computer processes input via cameras, GPS, radar and lidar in order to understand the AV’s surrounding environment and make driving decisions.[3] This is achieved through machine-learning algorithms including deep neural networks, convolutional neural networks, regression, pattern recognition, clustering and decision matrix algorithms.[4] Generally, vehicles are classified under one of five levels of automation, ranging from no automation, through conditional automation, to full automation.[5] Buses, trucks, emergency vehicles and planes are all potential types of AVs,[6] but self-driving cars provide the greatest scope for discussion.

B Benefits of AVs

AVs are significantly less susceptible to everyday human driving errors[7] and may therefore reduce road fatalities, injuries and lawsuits associated with negligent driving.[8] Furthermore, AVs offer the potential to alleviate traffic congestion[9] and maximise fuel efficiency.[10] For example, one study found that self-driving cars could improve highway capacity by up to 273%.[11] AVs may also provide social and economic benefits: 70% of Australians support self-driving cars,[12] and it is predicted that AVs will save the US economy $1.3 trillion per year through productivity gains and accident avoidance savings.[13]

C Why Consider AV Liability Risks?

It is imperative that vehicle users, manufacturers and other interested parties are acutely aware of the liability risks associated with AVs. These risks are starkly illustrated by some recent collision incidents. On 7 December 2017, a General Motors self-driving car collided with a motorcyclist who was attempting to overtake it, resulting in a lawsuit.[14] In March 2018, an Uber self-driving car in autonomous mode killed a woman crossing a dark road in Arizona.[15] These incidents raise significant questions regarding the imposition of criminal liability upon AV users and manufacturers[16] and the appropriate negligence and product defect standards that should apply.[17]

In addition, the Australian Competition and Consumer Commission has been steadfast in its commitment to the enforcement of product safety and consumer guarantees in the automotive industry.[18] The Product Safety website has recorded over 3900 transport product safety recalls in its history, and there has been a consistent year-by-year increase in recalls of this type over the past decade.[19] Recently, the automotive industry has also faced scrutiny in the wake of a mandatory Takata airbag recall.[20] Accordingly, retailers and manufacturers of AVs may face substantial risks of prosecution if they do not comply with mandatory product safety standards or consumer guarantees.

COLLISION LIABILITY

A Attribution of Liability

A key issue to determine in evaluating collision liability risks is who will be held predominantly responsible for an accident. This warrants a consideration of competing policy arguments in relation to the attribution of liability to relevant parties.[21]

1 AVs as Legal Agents

Many academics have postulated that AVs can be considered genuine legal agents and can therefore be held legally responsible for their own actions.[22] This contention is grounded in utilitarian concerns and is predicated upon the notion that no metaphysical barriers restrict the attribution of agency to autonomous machines.[23] However, the law does not currently afford legal agency to AVs and to impose liability would ‘venture into new ground.’[24] Moreover, it may be difficult conceptually to regard AVs as legal agents,[25] and full legal autonomy arguably requires moral choice.[26] Therefore, it is improbable that AVs will be held liable as legal entities in their own right in the foreseeable future.

2 Manufacturers and Users

More feasibly, attribution of liability issues will turn upon the tension between holding manufacturers responsible for crashes and imposing strict or fault-based liability upon AV users. On the one hand, it could be argued that car manufacturers should be held mainly responsible for any crash caused by an autonomous vehicle that they designed.[27] This would ensure that manufacturers do not economise on safety and is consistent with two overarching goals of ‘liability law’, namely minimising accidents and compensating victims.[28] In a press release, vehicle manufacturer Volvo supported this position, stating that it would accept full liability for its cars when they are in autonomous mode.[29]

On the other hand, many academics posit that imposing an excessive liability burden on manufacturers may stifle innovation and technological advancement.[30] On this basis, it is contended that AV users should observe a duty to pay attention,[31] returning liability to the driver where they had the ability to intervene to prevent the accident.[32] Some academics even suggest that AV users should be strictly liable because they assumed the risk of using an AV.[33] If taken too far, however, imposing liability upon users of AVs would be equally problematic. Morally, it may be a form of defamation to blame a person for their inattention if they had no real chance to intervene.[34] Further, manufacturers should not be disincentivised from making marginal safety improvements to AVs to avoid liability.[35] The law will therefore need to strike a balance in attributing responsibility. This balance will likely be achieved by apportioning liability based on ‘whether a human driver or the AV system was mainly operating the vehicle at the time of loss.’[36]

B Liability Risks

In the event of a collision, aggrieved parties may seek redress under the common law and Australian Consumer Law (ACL), thus exposing AV manufacturers, owners and occupants to significant liability risks.

1 Negligence Claim against Occupant or Owner

By virtue of the Motor Accident Insurance Act 1994 (Qld), Queensland operates a common law fault-based compulsory third party (CTP) insurance scheme. Under this scheme, an injured third party can claim for damages if they can establish negligence against an owner or driver of a motor vehicle. Liability risks may therefore arise for CTP Insurers of AV owners and occupants, subject to any full or partial contributions from parties such as the vehicle manufacturer.[37]

One possible cause of action is a negligence claim against the CTP Insurer of an AV occupant who is found in breach of a duty to pay attention.[38] For only partially automated AVs, one would reasonably anticipate that legal responsibility would reside with a person sitting in the driver’s seat who is able to resume control or is prompted to do so. However, it is likely that liability risks resulting from human inattention will devolve to manufacturers as technology shifts towards full AV autonomy.[39]

If an injury arose out of negligent AV maintenance, a claim may also be available against the vehicle owner or CTP Insurer.[40] This would be based on “a failure to service, maintain or upgrade the vehicle or its software in line with manufacturer instructions.”[41]

2 Claims against Manufacturers and Suppliers

Researchers have surmised that widespread usage of AVs will lead to increased manufacturer and retailer liability.[42] This is a strong possibility as plaintiffs in AV collisions could pursue a negligence claim, or seek an ACL remedy for defective products,[43] breach of consumer guarantees[44] or misleading or deceptive conduct.[45]

In relation to negligence, a manufacturer owes a duty to take reasonable care to avoid injury being suffered by AV users and reasonably foreseeable bystanders,[46] including road users and passengers.[47] To determine whether a manufacturer has exercised reasonable care, the Court will consider the state of technical and scientific knowledge available at the time of manufacture or distribution.[48]

Similarly, the ACL confers statutory rights of action on persons who suffer injury or loss caused by a manufacturer’s defective goods.[49] In defining ‘defect,’ the ACL adopts an objective test[50] based on consumer expectations.[51] As with negligence at common law, the safety defect provisions afford an exception based on the ‘state of scientific or technical knowledge’ at the time of supply of the goods.[52]

3 Uncertainty

Perhaps the greatest risk for potential legal actors is that collision liability is markedly uncertain, due to the amorphous legal tests embedded in the law and the complexities associated with attributing liability. Most problematically, standards at common law and under the ACL are unsuited to an AV context.[53] The ACL consumer expectation test has been criticised as inapplicable to complex products with which consumers are not very familiar.[54] Similarly, application of the ‘state of scientific or technical knowledge’ test to AVs is riddled with uncertainty in a field experiencing rapid technological progress. Professor Bryant Walker Smith notes that thinking about product defects ‘in terms of the decisions that [a vehicle] makes’ is anomalous.[55] Furthermore, there is uncertainty about whether to compare the driving standard of an AV to that of a human driver or another AV.[56] This uncertainty in the law may be compounded by complexities in attributing liability where there are joint liability or contributory negligence issues,[57] such as where a vehicle manufacturer claims against designers of AV software.[58]

OTHER RISKS

A Privacy

There is a strong impetus for insurers and manufacturers to consider potential privacy issues pertaining to AVs, due to possible risks of hacking, vehicle modification and unauthorised use of and access to data.[59] Most relevantly, the Privacy Act provides that where a manufacturer or insurer collects personal information about an individual for a particular purpose, they will be prohibited from using or disclosing that information for a secondary purpose.[60] This principle is subject to exceptions such as consent.[61] Relevant personal information for the purposes of the Privacy Act could include driver information, location data, or systems operation data.[62]

The use of event-recording data created by AVs is a key issue for manufacturers and insurers, as it could serve to clarify liability issues or improve an AV model’s future performance.[63] To diminish the risk of breaching the Privacy Act, manufacturers and insurers could obtain broad consents for use of an AV owner’s data,[64] and should limit use of information for a secondary purpose where possible.[65]

B Road Rules[66]

A final issue to consider is the uncertain liability risks for AV occupants under Queensland’s Road Rules.[67] Under the Road Rules, ‘a driver must not drive a vehicle unless the driver has proper control of the vehicle.’[68] It is also an offence for a person to drive without due care and attention.[69]

‘Driving’ has been defined in the case law as ‘causing a vehicle to move … down a road, and controlling the handlebars and brakes.’[70] The courts have held that pushing a vehicle with one hand on the steering wheel is not considered to be driving.[71] This presumably means the more passive act of occupying an AV would be insufficient, however further clarification is needed. Additionally, guidance is required as to how definitions such as ‘proper control’ and ‘vehicle’ will be applied in the context of AVs. In most States and Territories, police interpret ‘proper control’ to mean ‘having one hand on the steering wheel,’[72] but it is questionable how the term will be applied to AV usage.

SUGGESTED RECOMMENDATIONS

It is recommended that Queensland adopt a no-fault insurance system, in order to alleviate uncertainties and complexities associated with collision incidents, such as attribution of liability issues.[73] This reform would also safeguard individual drivers, stimulate marketplace innovation and facilitate quicker provision of compensation to victims.[74] Another recommendation is to clarify AV data ownership and the rights of manufacturers, insurers and law enforcement to access event-recording data.[75] Finally, traffic liability should be clarified for AV operators, and terms such as ‘proper control’ and ‘driving’ in the Queensland Road Rules should be amended to provide greater certainty in the AV context.[76]


[1] Selmer Bringsjord and Bettina Schimanski, ‘What is Artificial Intelligence? Psychometric AI as an Answer’ (2003) 18 Eighteenth International Joint Conference on Artificial Intelligence 887, 892; Michael Negnevitsky, Artificial Intelligence: A Guide to Intelligent Systems (Pearson Education Limited, 2nd ed, 2005) 18.

[2] Bartosz Brożek, Jaap Hage and Bipin Indurkhya, ‘Introduction to the special issue on machine law’ (2017) 25(3) Artificial Intelligence and Law 251, 252.

[3] Tom Standage, ‘How Does a Self-driving Car Work?’, The Economist (online), 12 May 2015 <https://www.economist.com/blogs/economist-explains/2013/04/economist-explains-how-self-driving-car-works-driverless> visited 27 March 2018; Bill Robertson, ‘How Do Self-Driving Cars Work?’ (2017) 54(9) Science and Children 72, 73.

[4] Ashish Sukhadeve, How Machine Learning Will Drive Autonomous Vehicles (26 May 2017) Datafloq <https://datafloq.com/read/machine-learning-drive-autonomous-vehicles/3152> visited 27 March 2018.

[5] Owen Hayford et al, Driving into the Future: regulating driverless vehicles in Australia (19 September 2016) Clayton Utz <https://www.claytonutz.com/articledocuments/178/Clayton-Utz-Driving-into-the-future-regulating-driverless-vehicles-2016.pdf.aspx?Embed=Y> 7, visited 25 March 2018.

[6] Lisa Collingwood, ‘Privacy Implications and Liability Issues of Autonomous Vehicles’ (2017) 26(1) Information and Communications Technology Law 32, 32.

[7] Umar Zakir Abdul Hamid et al, ‘Current Collision Mitigation Technologies for Advanced Driver Assistance Systems’ (2016) 6(2) Perintis eJournal 78, 78.

[8] Ravi Shanker et al, ‘Autonomous Cars: Self-Driving the New Auto Industry Paradigm’ (Blue Paper, Morgan Stanley, 6 November 2013) <https://orfe.princeton.edu/~alaink/SmartDrivingCars/PDFs/Nov2013MORGAN-STANLEY-BLUE-PAPER-AUTONOMOUS-CARS%EF%BC%9A-SELF-DRIVING-THE-NEW-AUTO-INDUSTRY-PARADIGM.pdf> visited 22 March 2018; see also: Hamid et al, above n 7, 78.

[9] David K Gibson, ‘Can we banish the phantom traffic jam?’, BBC (online), 28 April 2016 <http://www.bbc.com/autos/story/20160428-how-ai-will-solve-traffic-part-one> visited 27 March 2018; Shanker et al, above n 8.

[10] John Miller, Self-Driving Car Technology’s Benefits, Potential Risks and Solutions (20 August 2014) The Energy Collective <http://www.theenergycollective.com/jemiller_ep/464721/self-driving-car-technology-s-benefits-potential-risks-and-solutions> visited 22 March 2018.

[11] Patcharinee Tientrakool, Ya-Chi Ho and Nicholas Maxemchuk, ‘Highway Capacity Benefits from Using Vehicle-to-Vehicle Communication and Sensors for Collision Avoidance’ (Paper presented at Vehicular Technology Conference, San Francisco, 5 September 2011).

[12] Australian Driverless Vehicle Initiative, Submission to House of Representatives Standing Committee on Industry, Innovation, Science and Resources, Parliament of Australia, Inquiry into Social Issues relating to land-based driverless vehicles in Australia, February 2017, 9.

[13] Shanker et al, above n 8.

[14] Jon Fingas, ‘GM faces lawsuit over self-driving car collision’, Engadget (online), 28 January 2018 <https://www.engadget.com/2018/01/28/gm-faces-lawsuit-over-self-driving-car-collision/> visited 22 March 2018.

[15] Greg Miskelly, ‘Uber suspends self-driving car tests after vehicle hits and kills woman crossing the street in Arizona’, ABC News (online), 20 March 2018 <http://www.abc.net.au/news/2018-03-20/uber-suspends-self-driving-car-tests-after-fatal-crash/9565586> visited 22 March 2018; see also: Sam Levin, ‘Video released of Uber self-driving crash that killed woman in Arizona’, The Guardian (online), 22 March 2018 <https://www.theguardian.com/technology/2018/mar/22/video-released-of-uber-self-driving-crash-that-killed-woman-in-arizona> visited 22 March 2018.

[16] Carolyn Said, ‘Exclusive: Tempe police chief says early probe shows no fault by Uber’, San Francisco Chronicle (online), 19 March 2018 <https://www.sfchronicle.com/business/article/Exclusive-Tempe-police-chief-says-early-probe-12765481.php> visited 22 March 2018.

[17] Ethan Baron, ‘Blame Game: Self-Driving Car Crash highlights tricky legal question’, The Mercury News (online), 23 January 2018 <https://www.mercurynews.com/2018/01/23/motorcyclist-hit-by-self-driving-car-in-s-f-sues-general-motors/> visited 22 March 2018.

[18] See, eg, Australian Competition and Consumer Commission, New Car Retailing Industry – a market study by the ACCC (14 December 2017) Australian Competition and Consumer Commission <https://www.accc.gov.au/system/files/New%20car%20retailing%20industry%20final%20report_0.pdf> visited 26 March 2018; see also: Australian Competition and Consumer Commission, Compliance and Enforcement Policy (20 February 2018) Australian Competition and Consumer Commission <https://www.accc.gov.au/system/files/D18-20423%20Enf%20-%20Admin%20Other%20-%20CLEAN%20VERSION%20final%20draft%20Combined%20Complia…%20%5Bfinal.%5D.pdf> visited 26 March 2018.

[19] Product Safety Australia, Browse all Transport Recalls (20 March 2018) Product Safety Australia <https://www.productsafety.gov.au/recalls/browse-all-recalls?f%5B0%5D=field_accc_psa_product_category%3A4792> visited 20 March 2018.

[20] The Hon Michael Sukkar MP, Explanatory Statement (28 February 2018) Product Safety Australia <https://www.productsafety.gov.au/system/files/Attachment%20B%20-%20Explanatory%20Statement%20Takata.pdf> visited 20 March 2018.

[21] See, eg, Chris Nichols, ‘Liability Could be Roadblock for Driverless Cars’, San Diego Union Tribune (online), 30 October 2013 <http://www.sandiegouniontribune.com/news/sdut-liability-driverless-car-transovation-google-2013oct30-story.html> visited 23 March 2018.

[22] See, eg, Brożek, Hage and Indurkhya, above n 2, 252.

[23] Jaap Hage, ‘Theoretical foundations for the responsibility of autonomous agents’ (2017) 25(3) Artificial Intelligence and Law 255, 271.

[24] Said, above n 16.

[25] Bartosz Brożek and Marek Jakubiec, ‘On the legal responsibility of autonomous machines’ (2017) 25(3) Artificial Intelligence and Law 293, 304.

[26] Frodo Podschwadek, ‘Do androids dream of normative endorsement?’ (2017) 25(3) Artificial Intelligence and Law 325, 339.

[27] Alexander Hevelke and Julian Nida-Rumelin, ‘Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis’ (2015) 21(3) Science and Engineering Ethics 619, 620.

[28] Maurice Schellekens, ‘Self-driving cars and the chilling effect of liability law’ (2015) 31 Computer Law and Security Review 506, 509.

[29] Volvo, ‘US urged to establish nationwide Federal guidelines for autonomous driving’ (Media Release, 7 October 2015) 1 <https://www.media.volvocars.com/global/en-gb/media/pressreleases/167975/us-urged-to-establish-nationwide-federal-guidelines-for-autonomous-driving> visited 28 March 2018.

[30] Gary Marchant and Rachel Lindor, ‘The Coming Collision Between Autonomous Vehicles and the Liability System’ (2012) 52 Santa Clara Law Review 1321, 1335-1336.

[31] Hevelke and Nida-Rumelin, above n 27, 619.

[32] Jeffrey Gurney, Sue My Car Not Me: Products Liability and Accidents Involving Autonomous Vehicles (JD Thesis, University of South Carolina, 2014) 247 <http://illinoisjltp.com/journal/wp-content/uploads/2013/12/Gurney.pdf> visited 28 March 2018.

[33] Sophia H. Duffy and Jamie Patrick Hopkins, ‘Sit, Stay, Drive: The Future of Autonomous Car Liability’ (2013) 16 Science and Technology Law Review 453, 453-455; Hevelke and Nida-Rumelin, above n 27, 626.

[34] Hevelke and Nida-Rumelin, above n 27, 619.

[35] Marchant and Lindor, above n 30, 1321.

[36] Munich RE, Autonomous Vehicles – Consideration for Personal and Commercial Lines Insurers (28 March 2016) Munich RE <https://www.munichre.com/site/mram-mobile/get/documents_E1725865033/mram/assetpool.mr_america/PDFs/3_Publications/Autonomous_Vehicles.pdf> visited 20 March 2018.

[37] Owen Hayford et al, above n 5, 17.

[38] Henry Silvester and Jacinta Daher, The unstoppable drive of automated vehicles – likely impacts on CTP Insurers (30 May 2017) Barry.Nilsson. Lawyers <https://www.bnlaw.com.au/page/Insights/Insurance_Alerts/Compulsory_Third_Party/The_Unstoppable_Drive_of_Automated_Vehicles_-_Likely_Impacts_on_CTP_Insurers/> visited 24 March 2018.

[39] Owen Hayford et al, above n 5, 2.

[40] Silvester and Daher, above n 38.

[41] Taylor Wessing, ‘Who’s in the driving seat? Driverless cars, liability and insurance’ (2017) Lexology 1, 1.

[42] James M. Anderson et al, Autonomous Vehicle Technology: A Guide for Policy Makers (RAND Corporation, 1st ed, 2016) xxii-xxiii.

[43] Competition and Consumer Act 2010 (Cth) sch 2 pt 3-5 div 1.

[44] Ibid pt 3-2, div 1. See particularly ss 54, 55, 60 and 61.

[45] Ibid pt 2-1.

[46] The Law Society of New South Wales, Submission to National Transport Commission, Regulatory barriers to more automated road and rail vehicles, 6 February 2017, 4.

[47] Owen Hayford et al, above n 5, 20.

[48] Ibid.

[49] Competition and Consumer Act 2010 (Cth) sch 2 ss 9, 138-139, 142(a)(c); The Law Society of New South Wales, above n 46, 4.

[50] Carey-Hazell v Getz Bros & Co Pty Ltd [2004] FCA 853 [186] (Kiefel J); Glendale Chemical Products Pty Ltd v ACCC (1999) 90 FCR 40, 47.

[51] Competition and Consumer Act 2010 (Cth) sch 2 s 9(1).

[52] Competition and Consumer Act 2010 (Cth) sch 2 s 142.

[53] Henry Prakken, ‘On the problem of making autonomous vehicles conform to traffic law’ (2017) 25(3) Artificial Intelligence and Law 341.

[54] Carolyn Sappideen and Prue Vines, Product Liability (28 October 2010) Thomson Reuters <https://legal.thomsonreuters.com.au/product/AU/files/720502336/chapter_23.pdf> visited 27 March 2018.

[55] Baron, above n 17.

[56] Schellekens, above n 28, 510.

[57] The Law Society of New South Wales, above n 46, 4; see also: Sappideen and Vines, above n 54.

[58] Silvester and Daher, above n 38.

[59] See generally National Transport Commission, ‘Regulatory reforms of automated vehicles’ (Policy Paper, National Transport Commission Australia, November 2016) <http://www.ntc.gov.au/Media/Reports/(32685218-7895-0E7C-ECF6-551177684E27).pdf> visited 25 March 2018; The Law Society of New South Wales, above n 46, 5; Christopher Dolan, ‘Self-Driving Cars and the Bumpy Road Ahead’ (2016) 1 American Association for Justice Trial Magazine 1; House of Representatives Standing Committee on Industry, Innovation, Science and Resources, Parliament of Australia, Social Issues Relating to Land-Based Automated Vehicles in Australia (2017) 30.

[60] Privacy Act 1988 (Cth) sch 1 APP 6.1; see also: Owen Hayford et al, above n 5, 29.

[61] Privacy Act 1988 (Cth) sch 1 APP 6.1-6.4.

[62] House of Representatives Standing Committee on Industry, Innovation, Science and Resources, above n 59, 17.

[63] Andrew Garza, “Look Ma, No Hands!” Wrinkles and Wrecks in the Age of Autonomous Vehicles (JD Thesis, University of Connecticut, 2009) 611-613 <http://www.boyleshaughnessy.com/Collateral/Documents/English-US/Garza%20No%20Hands.pdf> visited 28 March 2018; House of Representatives Standing Committee on Industry, Innovation, Science and Resources, above n 59, 16-17; Owen Hayford et al, above n 5, 29; National Transport Commission, above n 59, 63.

[64] Owen Hayford et al, above n 5, 29.

[65] Cara Bloom et al, ‘Self-driving cars and data collection: Privacy Perceptions of Networked Autonomous Vehicles’ (2017) 13 Proceedings of the Thirteenth Symposium on Usable Privacy and Security 357, 357; Senator Edward J. Markey, ‘Tracking and Hacking: Security and Privacy Gaps Put American Drivers at Risk’ (Report, Markey Senate, 6 February 2015) 10.

[66] For a comprehensive overview of potential traffic law and criminal law liability in Queensland, see Kieran Tranter, ‘The Challenges of Autonomous Motor Vehicles for Queensland Road and Criminal Laws’ (2016) 16(2) QUT Law Review 59.

[67] Transport Operations (Road Use Management—Road Rules) Regulation 2009 (Qld).

[68] Ibid s 297(1).

[69] Transport Operations (Road Use Management) Act 1995 (Qld) s 83.

[70] McNaughtan v Garland [1979] Qd R 240; Wallace v Major [1946] KB 473.

[71] Ibid.

[72] National Transport Commission, above n 59, 33.

[73] Shanker et al, above n 8, 7.

[74] See, eg, Anderson et al, above n 42.

[75] Owen Hayford et al, above n 5, 28; National Transport Commission, above n 59, 65.

[76] Tranter, above n 66, 81.

Theories and Realities of Privacy Law: An Overview

There is no universally agreed-upon theory of privacy; rather, there are a number of competing theories and perspectives as to what privacy actually is and what value we as a society ought to place on an individual’s right to privacy. In this blog post, I will provide a brief overview of the most common or interesting theories concerning privacy law and will then discuss well-known privacy incidents that have occurred in recent years.

Traditional Theories: What is Privacy?

Bygrave identifies four influential theories of privacy that have provided the foundation for work in privacy jurisprudence over the years.

  • Privacy ‘in terms of non-interference’ (e.g. Warren and Brandeis’ formulation of ‘the right to be let alone’, which emerged in 1890 from a concern that government, the press and other institutions had begun to invade previously inaccessible aspects of personal activity; this theory is strongly concerned with individual freedom);
  • Privacy ‘in terms of degree of access to a person’ (e.g. Ruth Gavison’s ‘condition of limited accessibility’ involving (i) secrecy (‘the extent to which we are known to others’); (ii) solitude (‘the extent to which others have physical access to us’) and (iii) anonymity (‘the extent to which we are the subject of others’ attention’));
  • Privacy ‘in terms of information control’ (e.g. Alan Westin’s ‘privacy is the claim of individuals, groups or institutions to determine when, how and to what extent information about them is communicated to others’; or the German Constitutional Court’s ‘right of informational self-determination’);
  • Privacy by relating it ‘exclusively to those aspects of persons’ lives that are “intimate” and/or “sensitive”’ (e.g. Julie Inness’ limitation of privacy to those aspects of people’s lives that are ‘intimate’ or ‘sensitive’, for instance to avoid embarrassment or preserve dignity).

Challenges to Traditional Theories

There are many who are sceptical about whether privacy rights can feasibly be protected in the modern age. Michael Froomkin argues that in light of the rapid growth of privacy-destroying technologies, it is increasingly unclear whether privacy can be protected at a bearable cost, or whether we are approaching an era of zero informational privacy, or what is commonly referred to as a “dataveillance” world. There are indeed many privacy-threatening technologies: CCTV surveillance, smartphone apps, Facebook, and smart homes. These new technologies pose a number of privacy threats that have not really been considered before because they involve unprecedented data collection and aggregation. This raises the question of whether we should be prepared to trade off some of our privacy rights for the convenience of being able to use these new technologies.

Others challenge the traditional theories of privacy on the basis that they are elusive and value-driven. Since privacy means different things to different people, and each individual has different expectations as to what ought to be considered private, privacy is often viewed as incapable of one agreed upon theory or definition. The Law Reform Commission, for its part, has stated that ‘it is difficult, if not impossible, to define the parameters of the right to privacy in precise terms.’ For example, to many people, privacy is important because it means respect for human autonomy. Others argue that there is a psychological need for privacy. On the other hand, many believe openness and transparency are more important than individual privacy because these values help to facilitate democracy.

New ‘Privacy Skeptic’ Theories

Not everyone supports attempts to protect privacy, or to limit surveillance, by law. A number of new theories have been put forth by so-called ‘privacy skeptics’ who are cynical about privacy and privacy law for economic, practical or moral reasons.

Privacy as economic inefficiency

One of the main arguments of privacy skeptics is that privacy rights are economically inefficient. For example, in an influential article called The Right to Privacy, Richard Posner stated that an unfortunate consequence of information privacy is that it allows people to conceal personal facts about themselves in order to mislead others or misrepresent their character. Posner argued that other people, including government institutions, have a legitimate interest in unmasking that misrepresentation. While there is some credence to this theory, it is generally well accepted that a balancing exercise is required, and there comes a point where an invasion of privacy goes too far and becomes unjustifiable.

Technological defeatism

As I alluded to earlier, many people argue that, whatever the merits of privacy as a value, it is futile to attempt to protect it in the face of the rapid technological developments we are experiencing. This theory is known as ‘technological defeatism,’ and is reflected in the famous sound bite of former Sun Microsystems CEO Scott McNealy, who said ‘You have zero privacy. Get over it.’

However, a solution commonly put forward in response to technological defeatism is the ‘if you can’t beat them, join them’ approach of maximising the surveillance of all those who have control over data and surveillance within society. This is also known as ‘watching the watchers.’ It could involve, for example, monitoring and policing the people who collect our data. In Australia, the Office of the Australian Information Commissioner oversees organisations and agencies by conducting investigations, handling complaints and enforcing decisions when a privacy breach has occurred.

Privacy Issues

Let’s now consider some topical privacy issues. Privacy issues have been rife in recent years, from Cloud computing and the dark web to controversial data breach incidents such as the Ashley Madison hack.

Cloud Computing and the Dark Web

Statistics about the growing use of cloud services and the lack of visibility into sensitive information in the cloud indicate that the cloud is likely to result in more damaging or costly data breaches. A Netskope study conducted in 2016 surveyed 643 IT and IT security professionals in the US and Canada who were familiar with their company’s use of cloud services. 85% of respondents said that their on-premises security is as secure as or more secure than the cloud, and most respondents admitted that their organisation’s use of cloud resources diminishes its ability to protect confidential or sensitive information. The survey also indicated that 60% of enterprises don’t scan their cloud services for malware, 57% of enterprises have cloud malware, and 34% don’t even know it. This highlights the privacy and security threat that can be posed by using cloud-based software.

A widely reported data breach incident in the cloud occurred in relation to cloud storage service Dropbox. Dropbox is a file hosting service where users can store and synchronise their photos, documents and videos. Last year, following a widely publicised data breach, details of more than 68 million user accounts were reportedly leaked online. The data was posted on the “dark web,” and, dangerously, records of email addresses and hashed passwords were leaked widely across the Internet. The Dropbox dump is just one of a string of high-profile data breaches. In 2015, a hacker was reportedly looking to sell 117 million passwords from a LinkedIn breach on the dark web, and last year, a hacker claimed to be selling 655,000 alleged patient healthcare records on the dark web, containing information such as social security numbers, addresses, and insurance details.
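Incidents like these show that ‘hashed’ does not mean safe: where a fast hashing algorithm is used without a per-user salt, leaked hashes can often be reversed offline by simply guessing. A minimal sketch of such a dictionary attack, using entirely hypothetical accounts and unsalted SHA-1 purely for illustration (real dumps are far larger, and some may use salted or slower algorithms that resist this approach):

```python
import hashlib

# Hypothetical leaked records: email -> unsalted SHA-1 password hash.
leaked = {
    "alice@example.com": hashlib.sha1(b"sunshine").hexdigest(),
    "bob@example.com": hashlib.sha1(b"letmein").hexdigest(),
}

# A tiny wordlist of common passwords; attackers use lists of millions.
wordlist = ["password", "123456", "sunshine", "letmein"]

def crack(leaked_hashes, candidates):
    # Hash every candidate once, then look each leaked hash up in the table.
    table = {hashlib.sha1(pw.encode()).hexdigest(): pw for pw in candidates}
    return {email: table[h] for email, h in leaked_hashes.items() if h in table}

print(crack(leaked, wordlist))
# Both demo accounts are recovered almost instantly.
```

Because the candidate hashes can be precomputed and reused across every leaked account, weak hashing effectively converts a ‘hashed password’ leak into a plaintext password leak for any user with a common password.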

In another incident, a hacker attacked a company called Code Spaces. Code Spaces was not a well-known company and its hack didn’t affect millions of people, but it is an interesting case study because it is an example of a company put completely out of business by a single cloud security incident. In short, a hacker compromised Code Spaces’ Amazon Web Services account and demanded a ransom. When the company declined, the hacker started destroying the company’s resources until there was barely anything left, effectively putting the company out of business altogether.

Ashley Madison data breach

Breaches of privacy can often have ethical implications and can involve very sensitive and damaging information, as evident in the controversial Ashley Madison data breach case. In 2015, a group calling itself “The Impact Team” stole the user data of Ashley Madison, a commercial website that was developed to facilitate discreet, extramarital affairs. The group copied personal information about the site’s user base and threatened to release users’ names and personally identifying information if Ashley Madison was not immediately shut down. Ultimately, the group leaked more than 25 gigabytes of company data online, including records of real names, home addresses, search histories and credit card transaction records, and many users were publicly humiliated and shamed for engaging in extramarital affairs.

The Australian Privacy Commissioner, Timothy Pilgrim, and the Privacy Commissioner of Canada, Daniel Therrien, opened a joint investigation into the breach, and found that Ashley Madison had been the target of a data breach as a result of inadequate security safeguards.

According to the findings, the security framework of Avid Life Media (ALM), Ashley Madison’s parent company, lacked the following elements:

  • documented information security policies or practices, including appropriate training, resourcing and management; and
  • an explicit risk management process, including periodic and proactive assessments of privacy threats and evaluations of security practices, to ensure ALM’s security arrangements were, and remained, fit for purpose.

Statistics on Data Breaches

Australia leads the Asia Pacific region for data breaches according to security indexes.

The Cost of Data Breach Global Analysis Study, covering eight countries (the US, UK, France, Germany, Italy, India, Japan and Australia), found that Australian companies experienced the highest average number of breached records and faced the second highest detection and escalation costs.

Chatbots

Chatbots may be exposed to, and collect, a vast amount of personal data and other commercial information in the course of interacting with Internet users. Data privacy policies for chatbots therefore need to be kept up to date; it must be clear where data is collected and where it will be processed; and there must be internal policies that govern the extent of the chatbot’s permitted activities and what data it is permitted to collect. There is also a risk that chatbots can go ‘rogue,’ and there have been cases of chatbots extracting personal data and bank account information from users. One example of a chatbot that went rogue was Microsoft’s ‘Tay,’ which hit the headlines when it started posting offensive tweets, swearing, and making racist and inflammatory political remarks.
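To make the idea of an internal data-collection policy concrete, here is a minimal, hypothetical sketch (all field names invented) of how a chatbot’s permitted data collection could be enforced in code, by redacting anything outside an allow-list before the conversation is stored:

```python
# Hypothetical internal policy: the only fields this bot may retain.
ALLOWED_FIELDS = {"first_name", "order_number", "preferred_language"}

def redact(collected: dict) -> dict:
    """Keep only the fields the policy permits the bot to retain."""
    return {k: v for k, v in collected.items() if k in ALLOWED_FIELDS}

# A user may volunteer sensitive details the bot never asked for.
session = {
    "first_name": "Sam",
    "order_number": "A-1043",
    "bank_account": "123-456",  # must never be retained or logged
}

print(redact(session))
# {'first_name': 'Sam', 'order_number': 'A-1043'}
```

Enforcing the policy at the logging layer, rather than relying on the conversational model not to ask for sensitive data, means that even unsolicited disclosures by users are discarded rather than held by the entity.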

As can be seen, privacy issues are rife in contemporary society. It is more important than ever that we understand the value of privacy conceptually, so that we are able to define the boundaries that we expect organisations not to cross when it comes to the collection, use and disclosure of our personal information. However, the question of where that line ought to be drawn will perhaps always be the subject of considerable controversy, whether from privacy skeptics, privacy traditionalists or technological defeatists. To borrow the words of academic Raymond Wacks, ‘an acceptable theory of privacy remains elusive.’


Mandatory Data Breach Notification Regime: A Comprehensive Overview

Following the passage of a new mandatory data breach notification bill through Parliament on 13 February 2017, many Australian businesses will soon need to notify the Office of the Australian Information Commissioner and potentially affected individuals of “eligible data breaches.” This brings Australian law into line with developments internationally, with jurisdictions in Europe and the US having enacted mandatory data breach notification laws. In the wake of this new law, organisations that are required to comply with the Privacy Act should consider taking preparatory compliance measures such as adopting a data breach response plan and improving training, systems and practices.

The Mandatory Data Breach Notification Laws in Brief

The new regime imposes an obligation upon an APP entity to notify affected individuals and the Australian Information Commissioner of suspected eligible data breaches affecting personal information, credit information or tax file numbers, provided that there are reasonable grounds for this belief. An eligible data breach arises where there is unauthorised access to, unauthorised disclosure of, or loss of this information and the relevant transgression is likely to cause serious harm to affected individuals (s 26WE(2)). APP entities that contravene their notification obligations could be fined up to $1.8 million (s 26WK(3)). There are several common-sense exceptions embedded in the new law, including a remedial action exception, a law enforcement exception, declarations by the Commissioner, and inconsistency with secrecy provisions.

What is an ‘eligible data breach?’

When an entity believes on reasonable grounds that there has been an eligible data breach affecting personal information, health records, credit information or tax file numbers, it will be required to report it to the Australian Information Commissioner and the people whose information has been lost or stolen: s 26WK. An entity must also give a notification if it is directed to do so by the Commissioner: s 26WR.

An eligible data breach happens if (s 26WE(2)):

  • (a) there is unauthorised access to, unauthorised disclosure of, or loss of, personal information held by an entity; and
  • (b) the access, disclosure or loss is likely to result in serious harm to any of the individuals to whom the information relates.

An APP entity is required to assess whether there has been an eligible data breach, having regard to matters such as the kind and sensitivity of information disclosed, relevant security measures, the persons who have obtained or could obtain the information, and the nature of the harm caused: s 26WG.

How does notification occur?

The APP entity will have to describe the breach and the kind or kinds of information concerned to affected individuals, and make recommendations about what the individuals should do: s 26WR(4).

Individuals who contravene the relevant notification obligations could be fined up to $360,000, and bodies corporate up to $1.8 million: s 26WK(3).

A Supplement to the APPs

It is envisaged that the new scheme is to operate in a supplementary fashion alongside existing APP obligations, thereby ensuring coherence with current privacy laws and reinforcing compliance. Most notably, the new law builds upon the bedrock of existing privacy law obligations under APP 11.1. APP 11.1 provides that an APP entity holding personal information must take reasonable steps to protect said information from misuse, interference and loss, and from unauthorised access, modification or disclosure. A parallel can be drawn between this existing terminology and the wording of the new mandatory data breach notification laws, which refers to ‘unauthorised access to,’ ‘unauthorised disclosure of’ or ‘loss of’ personal information causing serious harm: s 26WE(2).

Given that the new legislation effectively borrows pre-existing terms from APP 11, the Commissioner-issued APP Guidelines which define these terms provide some guidance as to how the new law will likely be interpreted.

In particular:

  • ‘Unauthorised access’ to personal information occurs when personal information held by an APP entity is accessed by an individual or entity that is not permitted to do so. According to the APP Guidelines, unauthorised access could be by an employee or independent contractor of the entity, or by an external third party.
  • ‘Loss’ of personal information covers accidental or inadvertent loss of information held by an APP entity, whether said information is lost physically or electronically.
  • ‘Unauthorised disclosure’ of personal information occurs when an APP entity makes personal information accessible to others outside the entity and releases that information from its control in an impermissible manner under the Privacy Act. While disclosure is not defined in the Privacy Act, the APP Guidelines suggest that the relevant release may be a proactive or accidental release, a release in response to a request, or an unauthorised release by an employee. In determining whether release was justified under the Privacy Act, regard should be had to whether APP 6 has been complied with. Effectively, under APP 6.1, an APP entity can only disclose information for the purpose for which it was collected, unless an exception applies such as consent, a permitted general or health situation, or an enforcement related activity exception.

However, while this framework provides guidance in substantiating the relevant terminology, ultimately the new reforms impose obligations that never existed under APP 11. Even if an APP entity is fully compliant with its existing obligations under APP 11, it may nevertheless become liable under the amendments: the new law applies in any case where there has been unauthorised access, unauthorised disclosure or loss of personal information likely to occasion serious harm, within the meaning of the Act, to affected individuals. It appears to be immaterial whether an eligible data breach arose from unavoidable human error, or whether the breach could have been anticipated in advance.

It is likely that in the course of investigating non-compliance with mandatory data breach notification laws, a number of APP obligations will incidentally become relevant. In addition to APP 11, there is a strong potential for an interrelationship between the mandatory data breach laws and APPs 6 and 12.

It is likely that APP 6 will often arise where the Commissioner investigates a breach of the mandatory data breach notification laws. Breach of APP 6.1 occurs where an APP entity holds personal information about an individual that was collected for a particular purpose, and the entity uses or discloses the information for another purpose. There is significant potential for overlap between APP 6 and the mandatory data breach notification laws, as an unauthorised disclosure will breach both obligations. Furthermore, both APP 6 and the new law confer an exception where the APP entity believes the use or disclosure of the information is reasonably necessary for law enforcement related activities.

Under APP 12.1, an APP entity must give an individual access to the personal information it holds about them. However, an APP entity needs to take precautions not to provide information relating to the personal information of other individuals, or it could be held liable for unauthorised access under the mandatory data breach laws. There could also be a breach of other APPs in this situation, such as APP 11.1 or APP 6.1. Complying with APP 12.1 by giving individuals access to their personal information when they request it could also help APP entities comply with the mandatory data breach laws, as individuals may seek to correct irrelevant, incorrect or misleading information under APP 13. By having information like this corrected, the risk of a data breach occurring that is likely to cause serious harm could be reduced.

Overall the mandatory data breach laws have been designed to work in conjunction with the existing array of privacy obligations. This signals that the new laws are intended to empower the Commissioner to broadly investigate potential privacy infringements, and were not designed to be standalone provisions to consider in isolation from existing privacy requirements.

Remedial Action Exception

APP entities would be well advised to proactively revise their privacy policies and data security practices, policies and systems, to ensure compliance with APP 11.1 and mitigate the risk of an eligible data breach. The new law provides a ‘remedial action’ exception that rewards swift and effective data breach responses. This applies where an APP entity has taken action before serious harm is sustained by the individuals to whom the personal information relates, such that it is objectively unlikely that those individuals will suffer harm: s 26WF. The implication of this provision is that APP entities should consider monitoring systems and adopting a formal data breach plan to strengthen their position to respond swiftly to data breach threats.

Disclosure by Overseas Entities

In certain circumstances, the amending legislation also renders an APP entity accountable for the eligible data breach of an overseas third party to whom personal information has been disclosed. The new laws state that where APP 8.1 applied to a disclosure, the mandatory notification laws will have effect as if the personal information were held by the relevant APP entity. This requirement (s 26WC), known as the ‘deemed holding of information’ provision, means that an APP entity will be required to notify affected individuals and the OAIC where the third party recipient entity has committed an eligible data breach.

Enhancing the Commissioner’s Power to Investigate

The ultimate effect of the laws is to afford the Commissioner greater scope to investigate a wide suite of potential breaches, including of the APPs and the mandatory data breach notification rules. The Commissioner is empowered to investigate upon complaint, or may commence an own motion investigation, as will be discussed below.

Under the new regime there are two broad triggers that result in an ‘interference with the privacy of an individual’ (s 13(4)), which provides the Commissioner with an avenue to pursue an investigation.

The first trigger occurs where there are reasonable grounds and an APP entity has failed to:

  • assess a suspected eligible data breach (s 26WH(2));
  • prepare a notification statement and give a copy to the Commissioner (s 26WK(2)); or
  • notify affected individuals (s 26WL(3)).

The second trigger arises where the Commissioner, believing on reasonable grounds that an eligible data breach has occurred, directs an entity to notify affected individuals: s 26WR(1). Before directing an entity to provide notification, the Commissioner must invite the entity to make a submission in relation to the proposed direction: s 26WR(3). Failure to comply with the direction as soon as practicable after it is given amounts to an interference with the privacy of an individual under s 13(4).

Own Motion Investigations

Under the Privacy Act, the Australian Information Commissioner, on the Commissioner’s own initiative, is empowered to investigate an act or practice if there could be a breach of the privacy of an individual, or if the Commissioner thinks it is desirable that the act or practice be investigated: s 40(2). This will clearly occur where one of the two triggers above is satisfied so that the entity’s conduct amounts to an ‘interference with the privacy of an individual.’ Thus, an interference will open up an avenue for the Commissioner to carry out an own motion investigation and exercise their broad investigatory and enforcement powers.

Complaints by Individuals

The Commissioner could also investigate a suspected data breach where an individual makes a complaint to the Commissioner under s 36 (or a representative complaint under s 38): s 40(1). The Commissioner may decide not to investigate in certain circumstances: for example, where the complaint is frivolous or vexatious, the act or practice is not an interference with the privacy of an individual, or an investigation is not warranted: s 41. To decide whether the Commissioner has the power to investigate the complaint, preliminary inquiries can be made of the respondent or any other person: s 42.

The Commissioner is entitled to carry out an investigation in such manner as he or she sees fit. In the course of an investigation, the Commissioner has a range of powers, including the power to examine witnesses (s 45) and to compel production of documents (s 47), and may discover potential APP breaches along the way. The Commissioner is therefore well equipped under the new laws to examine privacy law transgressions broadly, including existing obligations under the APPs.

Enforcement

Where a complaint is made by an individual, the Commissioner may make a determination dismissing the complaint, or declaring that an individual’s privacy has been interfered with and specifying steps (including compensatory measures) to be taken by the entity (s 52(1)). Similar enforcement powers, including the power to order compensation, are available where the Commissioner launches an own motion investigation (s 52(1A)). The Commissioner may also seek civil penalties of up to 2,000 penalty units, or $1.8 million, against a non-complying entity if the interference with privacy is serious or repeated: s 13G(b). A final enforcement power available to the Commissioner is to accept an enforceable undertaking under s 33E. In short, there is a range of tools at the Commissioner’s disposal to enforce compliance with the mandatory data breach laws and the APP obligations more broadly.

Key Takeaways

  • Consider adopting a robust data breach response plan, implementing better policies and practices, and ensuring better staff training. This will help to mitigate risks in relation to data breaches, and could also bring your company within exceptions such as the ‘remedial action’ exception.
  • Consider existing guidance from the OAIC to work out the potential scope of future obligations under the new legislation. The OAIC also previously operated a voluntary data breach notification scheme and has published resources online to assist APP entities in strengthening their data breach prevention and management practices.
  • APP entities should revisit their information sharing agreements in light of the ‘deemed holding’ provision. Furthermore, APP entities should consider inserting obligations into contracts with overseas information processors requiring notification where there has been a serious data breach.


The Future of Australian Privacy Law: The Introduction of a Statutory Tort?

In Victoria Park Racing v Taylor, the majority of the High Court held that Australian courts could not provide protection ‘on the mere ground of invasion of privacy,’ and accordingly, development of civil remedies for privacy invasion was ‘largely stillborn’ in succeeding years. However, in ABC v Lenah Game Meats, the High Court evinced a newfound willingness to consider the development of a cause of action in privacy. In particular, Gummow and Hayne JJ indicated that ‘Victoria Park does not stand in the path of … such a cause of action,’ Gleeson CJ asserted that ‘the law should be more astute’ in protecting privacy interests, and Callinan J opined that ‘the time is ripe’ for consideration of an invasion of privacy tort.

It could be argued that since Lenah, Australian courts have afforded privacy protection to some degree: two intermediate-level court decisions have recognised invasion of privacy torts, and the scope of the breach of confidence doctrine has expanded to provide extensive relief against the disclosure of private information. However, as I will discuss, ultimately the significant gaps, issues and uncertainty associated with existing protections are problematic. The judiciary has not been sufficiently bold in the development of a cause of action in privacy. It follows that the appropriate avenue to develop an Australian cause of action in privacy is through the introduction of a statutory tort.

BOLDNESS OF AUSTRALIAN COURTS POST-LENAH

Development of a Privacy Tort

In Grosse v Purvis, a woman endured years of intentional harassment, stalking and abusive phone calls from a former lover. Consequently, she sustained post-traumatic stress disorder and her capacity to work was substantially undermined. In this case, Skoien J made passing comments to the effect that a ‘bold step’ should be taken to recognise ‘a civil action for damages based on the actionable right of an individual person to privacy’. To this end, His Honour proposed that a wrongful intrusion tort should be available where a willed act of a defendant intrudes upon the privacy or seclusion of a plaintiff in a manner that would be considered highly offensive to a reasonable person of ordinary sensibilities, and thereby causes the plaintiff mental, emotional or psychological detriment or hinders their freedom. In many respects, this test bears resemblance to that of the intrusion upon seclusion tort present in the US. Skoien J’s wrongful intrusion tort, if affirmed by appellate courts, would constitute a marked development in the law as it allows aggrieved parties to seek redress for invasion of privacy via a standalone tortious cause of action.

Similarly, the consideration of a tort of wrongful disclosure of private information in Doe v ABC could be regarded as an intrepid step in the development of a cause of action in privacy. This case involved the publication of details about a rape victim and her assailant, contrary to provisions of the Victorian Judicial Proceedings Reports Act 1958. Having regard to the ‘rapidly growing trend’ apparent in Lenah, Grosse and UK authorities, Hampel J took the ‘next incremental step in the development of protection against … breach of privacy.’ Her Honour’s model for wrongful disclosure of private information requires private information, unjustified publication and an absence of any overarching public interest in disclosure. This suggested test is relatively consistent not only with similar actions that have arisen in the UK and New Zealand, but also with the US tort of ‘publicity given to private life.’ Hampel J’s recognition of a tort of wrongful disclosure of private information reinforces the argument that some lower-level Australian courts have exhibited boldness in developing a cause of action in privacy.

However, while privacy torts have evidently been recognised in two lower-level Australian cases, no appellate court has corroborated the existence of a privacy tort. Furthermore, the position with respect to the existence of a common law tort of privacy in the inferior courts is similarly shrouded in uncertainty: the matter has been described in cases as ‘arguable,’ ‘a little unclear,’ and subject to ‘further development in the law.’ In other cases, judges have expressly indicated that a generalised tort of privacy is ‘not yet recognised in Australia’ or that the weight of authority explicitly controverts such a proposition. This suggests that the obiter dicta comments in Doe and Grosse are disputable at the very least. As such, development of a cause of action through tort law has arguably been manifestly inadequate and uncertain.

Breach of Confidence

Consideration of the equitable doctrine of breach of confidence raises the question of whether the courts can adequately safeguard privacy interests incrementally via pre-existing actions, without formulating a general overarching tort. In short, the elements of breach of confidence include confidential information, disclosure in circumstances importing an obligation of confidence, and unauthorised use of the information causing detriment. Breach of confidence has evolved from its initial emphasis on confidential relationships to embrace concepts largely predicated upon privacy, such as human autonomy and dignity. Accordingly, the doctrine now encompasses non-confidential private information, and no longer requires a pre-existing confidential relationship. Moreover, the equitable action of breach of confidence provides extensive remedies for the purposes of privacy protection. In Wilson v Ferguson, substantial compensation and an injunction were available against a defendant who published explicit ‘revenge porn’ footage of his ex-girlfriend on Facebook. Furthermore, the breach of confidence doctrine may offer a remedy where tort law cannot. For instance, in Giller v Procopets damages were awarded on the basis of mental distress incurred after an ex-boyfriend distributed private sex-tapes.

However, the breach of confidence doctrine in Australia has not yet been subject to ‘a sustained and deliberate transformation into an action for breach of privacy,’ as occurred with the breach of confidence action in the UK. In this sense, Australian courts have arguably not been as bold or transparent as the UK courts in developing a cause of action for privacy via the breach of confidence route. A cause of action in breach of confidence also does not accommodate intrusions upon personal space, thereby creating a gap in protection for non-informational invasions.

In addition, many commentators suggest that incremental development of privacy protection through breach of confidence and other existing actions is problematic as it may lead to a strained piecemeal-like approach. In essence, actions such as breach of confidence were initially crafted to protect non-privacy interests. For the court to depart from the traditional bases of these actions could lead to obscure and fragmented law as well as unprincipled development of the law. In turn, this may generate uncertainty, thus making it difficult for individuals to organise their affairs.

Therefore, in light of the significant gaps, issues and uncertainty surrounding existing common law privacy protections, the courts have evidently not been sufficiently bold in developing an appropriate cause of action in privacy.

APPROPRIATE LEGAL PATH FOR CAUSE OF ACTION IN PRIVACY

Ultimately, the most promising legal avenue for the development of an Australian cause of action in privacy would be by enacting a statutory tort. Considering the uncertainty surrounding the status of a privacy tort within Australia, and that appellate courts have not yet accepted the obiter dicta in Grosse and Doe, statutory intervention is nothing short of essential for the purpose of safeguarding privacy rights. The development of a privacy tort as an action in its own right is to be preferred over fragmentation and straining of existing common law actions.

Justification for Statutory Cause of Action

Certainty/Clarity
Proponents of the common law approach commonly assert that the concept of privacy is inherently vague and imprecise, and that judges would still be required to balance evidence and make judgments on a case-by-case basis, so clarity would not necessarily be improved. However, a clear and detailed statutory framework could be provided to guide the courts. For instance, the legislation could incorporate a list of the types of invasions and remedies it captures, thus clarifying the scope of privacy protection. This would enhance certainty and clarity to a significant degree.

Practicality
Another issue raised by opponents of a statutory privacy tort is that its implementation might be impractical, as it could undermine law enforcement and security operations or clash with freedom of speech and freedom of the press. While there is some credence to this viewpoint, such objections could readily be addressed by defences that take into account countervailing public interest considerations.

Likewise, it is also difficult to argue that a statutory cause of action would be impractical simply because it would be difficult to enact uniformly nationwide. To the contrary, enactment of a privacy tort by statute would likely be quicker and more efficient than common law development as Parliament would not be required to expand the law incrementally and would not be bound by precedent.

Additionally, introduction of a cause of action by elected parliamentarians would be more democratic compared to judicial development.

Unpredictability
Finally, the contention that a statutory tort may give rise to unintended consequences or become outdated is largely unjustifiable. Flexible guidelines that are adaptable to technological and social change, and are subject to a degree of judicial discretion, would surely mitigate such a concern.

The Specifics of the Statutory Tort

In order to avert inconsistency and ‘forum shopping’ between jurisdictions, it is essential that the statutory tort of privacy be introduced uniformly nationwide. The tort should only be applicable to natural persons and not corporations, consistent with the majority view in Lenah and ‘individual notions of autonomy, dignity and freedom.’

It is imperative that the tort encapsulates two separate limbs such that a claim can be based upon either wrongful intrusion or wrongful publication of private information. Having these two limbs would encourage courts to further advance the tortious developments that were foreshadowed in Doe and Grosse, while also ensuring that the cause of action is not unduly vague or imprecise and that courts can attach separate conditions to different circumstances.

The best approach would be for the two limbs to be formulated having regard to a model propounded by Professor Des Butler, absent the requirement that the intrusion or publicity be considered highly offensive to a person of reasonable sensibilities. The limb of unreasonable intrusion upon privacy would be established if there is an intentional intrusion upon another person in circumstances where there is a reasonable expectation of privacy. The limb relating to disclosure of private facts would be satisfied if there is a reasonable expectation of privacy and the plaintiff suffers emotional distress, embarrassment or humiliation from the publication. In not imposing a stringent threshold requirement of objectively high offensiveness, this approach aligns more closely with the tests adopted in Doe and Giller than with Grosse. Ultimately, the absence of such a requirement is well-founded: 'highly offensive' is an inherently vague concept, overlaps with 'reasonable expectation of privacy' and fails to cover cases where conduct lacks a personal dimension.

It should also be noted that under this approach, proof of fault is only required for the unreasonable intrusion upon privacy limb of the tort, pursuant to ALRC, VLRC and NSWLRC recommendations. This coincides with the approach in Doe in that it affords an action to plaintiffs even where publication is negligent rather than wilful. If Jane Doe had been denied an action in that case for lack of wilful breach this would have ‘severely curtailed the protection for privacy that the law should provide for.’

As regards defences, it is recommended that the VLRC approach be followed insofar as it affords defences of public interest, privilege, fair comment and consent. This would effectively broaden the scope of potential liability, shifting the burden to the defendant to establish that the interference was warranted for reasons such as public interest or consent. Finally, pursuant to ALRC and NSWLRC recommendations, courts should be empowered to grant the remedy most appropriate in the circumstances, such as damages, an account of profits, injunctions, apology or correction orders, or a declaration, thus permitting flexibility.

Ultimately, the statutory tort proposed, if implemented in the manner suggested, would be likely to substantially broaden and enhance the scope of Australian privacy protection that is afforded to aggrieved individuals.
