Privacy reform edges forward
An ongoing review of Australia’s core information privacy laws took another step forward in late September with the release of the Government’s response to the ‘Privacy Act Review Report’. The report, born out of the ACCC’s Digital Platforms Inquiry and produced by the Attorney-General’s Department (AGD) following multiple rounds of public consultation, recommended a suite of reforms to bring the law into ‘the digital age’.
The review and reform process has been long, and it’s not over yet. While the Government has embraced the vast majority of AGD’s recommendations, much of its agreement is ‘in-principle’ - subject to ‘further engagement with regulated entities and a comprehensive impact analysis to ensure the right balance can be struck’ - processes likely to go well into next year. This post, half primer, half opinion, will quickly canvass some of the main takeaways from the response.
Clarifying Concepts
To the relief of many, the reforms will likely include tweaks to some of the key definitions in the legislation to remove ambiguities which have disturbed privacy commentators for years.
For one, the concept of ‘personal information’, which was complicated by an unduly narrow construction by the Administrative Appeals Tribunal and the Full Federal Court,[1] will be honed by amendments to undo that narrow reading, along with regulatory guidance clarifying when a person is ‘reasonably identifiable’. The Government has taken the sensible view that someone can be ‘reasonably identifiable’ when they are distinguishable even if their identity is unknown. That’s important because data which is used in ways which affect our behaviours and lives (say, data which is harnessed by recommendation systems to target content at us) may not technically reveal our identity. Antoinette Rouvroy sums up the issue well, pointing out that ‘the issue is that today power lies less in identifying people than it does in modelling their possible behaviour, but collectively.’ Having just come to grips with the erosion of the identifiable/non-identifiable dichotomy in an environment shaped by recommender and predictive systems, we’ve barely scratched the surface of the implications of generative AI applications for these constructs.
Another definitional change, to the term ‘collection’, will go some way toward ensuring the legislation encompasses data processing for AI. The Government has accepted the need to clarify that the concept takes in collection by creation, specifically inference and generation - crucial to ensuring that new and existing rules about how personal data can be gathered extend to the outputs of predictive and generative AI systems.
Paradigm Shifts
In addition to small (but significant) drafting tweaks, the Government has also embraced (‘in-principle’) two major paradigm shifts advocated extensively during consultations.
First, an explicit recognition of the public interest which inheres in privacy.
Second, a partial shift in emphasis from individuals’ responsibility to ‘self-manage’ their own privacy towards ‘organisational accountability’, which would mean ‘the proactive mitigation of privacy-related risks’ and the building of ‘community trust in the entity as a responsible steward of personal information’.
Recognising the public dimension of privacy deviates from the traditional framing of privacy in the Act as an individual interest. That framing underpins the ‘self-management’ model of privacy protection in the legislation: as the primary beneficiaries of privacy protection, individuals are cast as the primary stewards of their own privacy, and are given the tools to manage it through various mechanisms for notice and consent. As many, many commentators have argued, this model ignores the complicated, constant and incomprehensible flows of data which underpin digital life.
So, the Government has taken an important step by acknowledging the overwhelming message that information self-management isn't working in the data-intensive digital world. While amending the objects of the Act will have some effect on the statutory interpretation of its operative clauses, how this philosophical turn manifests in the Privacy Principles will be what really counts.
A Substantive Fairness Requirement
Which brings us to what is, in my view, one of the most consequential impending reforms to the Act.
The Government has agreed (in-principle) that the Act should include a new requirement that collections, uses and disclosures of personal information be ‘fair and reasonable in the circumstances’. The introduction of an overarching fairness and reasonableness requirement would bring the Act roughly in line with similar standards in the EU, Canada, and other jurisdictions. The EU’s GDPR requires that personal data only be collected for ‘legitimate’ purposes, which may be business interests and may even be trivial, but must be balanced against ‘the interests or fundamental rights and freedoms of the data subject…’. Canada’s PIPEDA requires that ‘an organisation may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances’.
Under the current Privacy Act, once personal information has been collected, an organisation may use and disclose it for the purpose for which it was originally collected, a related secondary purpose, or another purpose to which the individual consents. While the collection itself must be by fair and lawful means, the subsequent use and disclosure need not be fair.
What does this mean in practice? Let’s consider a now notorious example to illustrate. Clearview AI, purveyor of facial recognition software, built a database of face images using a web crawler to scrape publicly available data from the internet. Clearview’s clients could upload images of individuals to the system, which would then be matched against Clearview’s database. Incorporated in Delaware, the company offered its services abroad, including in Australia, and scraped images of Australians, including from Australian servers. The federal privacy regulator got wind of the operation and issued a decision that Clearview had breached several Privacy Principles. The regulator could say little about the ways and purposes for which the images had been used: once the images were collected, Clearview was basically free under the Privacy Act to use and disclose them for its original collection purpose, which at one stage included providing facial recognition to private companies, including banks, retailers, large event venues, casinos, fitness centres, and others. But the regulator could take issue with the manner of collection. APP 3.5 requires that collections be by ‘lawful and fair means’, and the regulator found Clearview’s covert collection was not. This finding was subsequently rejected by the AAT.
After acknowledging the difficulty of judging fairness, the AAT reasoned that
Before the applicant can collect an image a person has to have decided to place the image on the internet without any access restrictions. In circumstances where a person has done that, I am not satisfied that it can be said that it is unfair to then collect that image using a web crawler.
That images of people are posted on the ‘public internet’ all the time without their knowledge, and that many post images of themselves in one context without expecting those images to be scraped into a vast facial recognition database, did not factor into the analysis. The AAT did note that the collection might have been unfair if it breached limitations in, say, LinkedIn’s terms of service, but ‘cease and desist’ letters without further legal action from LinkedIn were not enough to persuade the Tribunal that Clearview’s covert scraping was anything other than above board.
The case turns on its head the assumption of many privacy commentators and practitioners that APP 3.5, though limited, at least prevents the surreptitious collection of personal information. The AAT’s judgement hews to a common but outmoded notion that the measure of privacy one can reasonably expect rapidly declines once something enters ‘the public’ (whatever that means). Courts around the world often construct reasonable expectations of privacy by reference to contemporary social expectations and common practice, notions which fail to account for how people habituate to once-shocking practices over time, irrespective of whether the threat of individual or societal harm diminishes at the same rate.
Will a new ‘fair and reasonable’ requirement, which cannot be overridden even by consent, constrain this kind of activity? Will it be an effective antidote to the privacy threats posed by the rapidly proliferating predictive and generative AI systems increasingly embedded in our day-to-day interactions with public and private entities?
It will depend on the contours of the requirement, which are to be mapped by judges and the federal privacy regulator over time. Like the Canadian provision, the Australian proposal explicitly requires a contextual analysis, providing that purposes must be fair ‘in the circumstances’. It will be important for that analysis to be shaped by an awareness of the effects of privacy fatigue, time and cognitive constraints, and economic or social imperatives which drive participation in data-intensive transactions and in turn may lead to incongruence between individual behaviour and privacy expectations.
It should also be shaped by an understanding of the power disparities and structural inequities underpinning data extraction and infrastructure across different contexts. Whether the concepts of ‘fairness’ and ‘reasonableness’, weighed down as they are by liberal individualism, can be stretched to recognise such dynamics seems doubtful. Work outside privacy law reform offers richer concepts and frameworks. Indigenous Data Sovereignty frameworks, such as the CARE Principles, require that data use collectively benefit Indigenous communities and support self-determination. Some theorists suggest imposing duties of loyalty, care and confidentiality, or fiduciary standards, on those who collect information.
Large Gaps in Protection Will Persist
A notable exception to the Government’s broad agreement (either unconditional or in-principle) to the AGD’s proposed reforms relates to the political exemptions. The Government noted but did not agree to the AGD’s recommendation to remove the exemptions for political parties from the Act, as well as the exemptions for sitting representatives for activities related to political processes. As I’ve written before, the shaky justifications for the political exemptions have become increasingly untenable with the rise of data-intensive political campaigning, writ large by various privacy scandals during recent political contests. The stated rationale for the exemptions – ‘to encourage freedom of political communication and enhance the operation of the electoral and political process in Australia’ – ignores the vital public, specifically democratic, interests which are served by protecting voter privacy.
[1] Specifically, in the Grubb litigation, the AAT and the Court found that the question of whether information is ‘personal information’ involves a two-step inquiry: the information must be ‘about’ an individual and, if so, it must identify or reasonably identify them. In Grubb, certain metadata related to journalist Ben Grubb’s mobile service held by the telco giant Telstra was not ‘about’ him; it was about the service. The upshot is to potentially exclude great swathes of digital metadata from the scope of the Act - in the context of adtech, for example, this might include click-through rates and conversion rates.