
New Privacy Law Guidance about AI Highlights the Need for a Cautious Approach

10 min read
14 November 2024

Key Takeaways

  • Recent guidance on Australian privacy laws in the context of AI systems shows that there are many complex issues, and careful controls are necessary to protect businesses from fines and reputational damage.
  • Businesses using AI to make decisions, or as part of important or high-risk work, should be especially careful and should consider blanket prohibitions.
  • Even seemingly innocent uses (e.g., using AI systems to take meeting notes, or using an AI chatbot to talk to customers) are high-risk activities that should be undertaken carefully (if at all).

Businesses should take careful note of the risks of using artificial intelligence (AI) and should implement controls appropriate to their business to ensure that any use of AI is carefully managed or prohibited altogether.

A breach of Australian privacy law (for example, the Privacy Act 1988 (the Act)) may result in significant fines or reputational damage. Because AI has only recently been commercialised, best practice has not yet had time to develop, and businesses should be especially careful about their compliance when using AI systems.

Ensuring sufficient controls (or ensuring prohibition) is especially important where the AI is exposed to personal information, makes decisions for a business (e.g., reviewing and sorting resumes), or performs high-impact work (e.g., drafting court material).

As a general comment, it should also be understood that AI systems are often wrong and that their output should be thoroughly fact-checked to confirm its accuracy.

Best practice will be to not input personal information (especially not sensitive information) into publicly available AI tools, or indeed any AI system unless appropriate safeguards and restrictions are in place.

OAIC Guidance

The Office of the Australian Information Commissioner (OAIC) has issued guidance regarding the deployment of AI systems within an organisation subject to the Privacy Act (APP Entity) to provide a product or service, particularly in the context of generative AI (OAIC AI Guidance).

This guidance is crucial as AI systems are highly complicated and carry numerous complex privacy risks.

We urge all businesses contemplating the use of AI to read the OAIC’s AI Guidance in detail, and particularly to note the included checklists, before undertaking any use of an AI system.

This article does not propose to summarise or repeat the OAIC’s AI Guidance in detail; however, a number of key takeaways should be emphasised.

Due Diligence

Prior to using any AI system, due diligence will be critical. You must understand:

  1. the terms and conditions for the use of the AI system;
  2. how the AI system has been trained and what information it was trained on;
  3. how the AI system will treat the information included in prompts (e.g., is it used to train the AI in future, is it saved locally or remotely);
  4. whether any information included in a prompt will be accessible by the publisher of the system (if so, use of the AI system may constitute a disclosure of personal information, which is subject to additional rules beyond those applying to a use of personal information);
  5. how the AI system is protected from data breaches; and
  6. whether there have been previous data breaches.

You should regularly check and confirm whether any changes occur in respect of the above during the use of the AI system.

Use of Personal and Sensitive Information in AI Systems

Your privacy policy must clearly state how your business uses AI. In some circumstances (e.g., where an AI system is used to take meeting notes), this will likely be insufficient on its own, and the meeting participants should be given an opportunity to opt out.

APP Entities are required by Australian Privacy Principle 6 to only use or disclose personal information for a particular purpose if the information was obtained for that purpose. There are exceptions that permit a use or disclosure for a secondary purpose (e.g., if consent from the individual was obtained). One such exception is where the individual would reasonably expect the APP Entity to use or disclose the information for that secondary purpose, if that purpose is related to the primary purpose (or directly related if the information is sensitive information).

The OAIC Guidance relevantly provides that “[i]f your organisation cannot clearly establish that a secondary use for an AI-related purpose was within reasonable expectations and related to the primary purpose, to avoid regulatory risk you should seek consent for that use and/or offer individuals a meaningful and informed ability to opt-out. Importantly, you should only use or disclose the minimum amount of personal information sufficient for the secondary purpose.”

Controls

Prior to using an AI system, a business should consider the worst case scenario, as some AI systems are black boxes and their “reasoning” cannot be extracted and examined. For example, the OAIC AI Guidance notes that using AI in recruitment could discriminate against candidates as a result of biases in the system. For this reason, any commercial use of an AI system should include sufficient controls to analyse and manage the risks associated with the black box nature of the software.

These controls are discussed in further detail below in our commentary on a recent report by the Office of the Victorian Information Commissioner (OVIC).

Businesses which permit the internal use of AI should perform a privacy impact assessment, implement an AI policy containing express requirements for the use of AI systems, and undertake regular staff training on the use of AI.

Generation of Personal Information

You should be aware that AI systems are trained on a wide range of information, which means they are capable of generating personal information. The OAIC AI Guidance references an example where workplace psychosocial hazard training was partially created with AI, and the AI generated a real situation using the real names of the persons involved (who were involved in an ongoing court matter at the time). This event may constitute a collection of personal information under the Act, and the information would then need to be treated as unsolicited personal information.

Meeting minute making

While seemingly innocuous, the risks of using an AI system to record a meeting are substantial – meetings can veer off topic, in which case any personal and sensitive information discussed may well be information the business is not permitted to collect. In that case, that information should be erased or de-identified. Without proper systems in place, this can easily be overlooked.
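By way of illustration only, the short sketch below (in Python, with hypothetical patterns chosen for this example) shows one simple way a business might automatically redact obvious identifiers, such as email addresses and phone numbers, from a meeting transcript before it is stored or shared. It is not a complete de-identification solution and does not catch names or contextual details, so it should be treated as a starting point rather than a compliance measure.

```python
import re

# Illustrative patterns only: simple email addresses and Australian-style
# phone numbers. Real de-identification needs far broader coverage.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"(?:\+61|0)[2-478](?:[ -]?\d){8}"),
}

def redact(transcript: str) -> str:
    """Replace matched identifiers with a placeholder label."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[REDACTED {label.upper()}]", transcript)
    return transcript

# Hypothetical usage:
print(redact("Call Jane on 0412 345 678 or email jane@example.com"))
```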

AI and images

You also need to be aware that any images generated by an AI may copy part (or all) of an image it was trained on. Such generated images may reproduce personal or sensitive information and may breach copyright laws.

Uploading images to AI systems should generally be avoided even where no personal or sensitive information is apparent, as the image may contain metadata, or other embedded information, sufficient to identify a location or an individual.
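As an illustration of one possible safeguard, the short sketch below (in Python, assuming the Pillow imaging library; the file names are hypothetical) re-saves an image from its pixel data only, which drops embedded EXIF metadata such as GPS coordinates before the image is shared or uploaded anywhere. This is a sketch of a single control, not a substitute for the broader assessment discussed above.

```python
from PIL import Image  # assumes the Pillow library is installed

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image from its pixel data only, dropping EXIF/GPS metadata."""
    with Image.open(src_path) as img:
        pixels = list(img.getdata())           # copy raw pixel values
        clean = Image.new(img.mode, img.size)  # a fresh image carries no metadata
        clean.putdata(pixels)
        clean.save(dst_path)

# Hypothetical file names for illustration:
strip_metadata("site_photo.jpg", "site_photo_clean.jpg")
```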

Chatbots

Our view is that any business seeking to use an AI chatbot should seek legal advice beforehand, as such activity may result in the improper collection of personal and sensitive information. Chatbots also raise particular risks regarding Australian Privacy Principle 10 (ensuring the accuracy of personal information collected) and Australian Privacy Principle 3, which requires that, unless it is unreasonable or impracticable to do so, personal information be collected directly from the individual.

OVIC decision

A Deputy Commissioner of the OVIC recently conducted an investigation into the use of ChatGPT by a child protection worker (the Worker) employed in the Victorian Department of Families, Fairness and Housing (DFFH).

This investigation provides an illustrative example of what controls may or may not be sufficient to guard against the risks of using AI systems.

Conduct

In this example, the Worker used ChatGPT to assist in the drafting of a protection application report, which is submitted to the Children’s Court to assist the court in deciding whether a child needs protection.

The Worker’s use of ChatGPT was plainly inappropriate and dangerous, as “the Protection Application Report mistakenly described a child’s doll, that was used by the child’s father for sexual purposes, as a mitigating factor, in that the parents had provided the child with “age appropriate toys””.[1]

Of some interest are the nine factors identified by the DFFH in their investigation which indicated ChatGPT involvement:[2]

  1. sophisticated language;
  2. overly positive descriptors;
  3. inaccurate information;
  4. unusual content;
  5. unusual terminology;
  6. unusual reference to legal intervention;
  7. unusual Child Protection intervention;
  8. nonsensical references; and
  9. American spelling and/or phrasing.

Any business that, as part of its AI controls, audits work for evidence of AI use, should take note of these examples.

Breach

It was determined that this conduct constituted a breach of Information Privacy Principles 3.1 and 4.1.

Information Privacy Principle 3.1

An organisation must take reasonable steps to make sure that the personal information it collects, uses or discloses is accurate, complete and up to date.

Information Privacy Principle 4.1

An organisation must take reasonable steps to protect the personal information it holds from misuse and loss and from unauthorised access, modification or disclosure.

Controls

The DFFH had the following controls in place at the time of the conduct:

  1. “an acceptable Use of Technology Policy;
  2. eLearning modules on privacy awareness and security awareness;
  3. the DFFH values;
  4. the VPS code of conduct;
  5. Human Rights legislation and associated eLearning module;
  6. communications to leadership and management by way of three education sessions in May 2023 that referred to data security, privacy and other risks associated with GenAI.”[3]

The OVIC decided that these controls were insufficient and that there was a need to train all employees, not only management staff.[4]

Since the conduct took place, the DFFH has created specific “Generative Artificial Intelligence Guidance” (which has been circulated on several occasions to all DFFH staff), which includes two “critical rules”:

  1. “Employees should be able to explain, justify and take ownership of their advice and decisions;”[5] and
  2. “Employees should assume that any information they input into public GenAI tools could become public. They must not input anything that could reveal classified, personal or otherwise sensitive information.”[6]

However, the report noted that:

  1. “DFFH has almost no visibility on how GenAI tools are being used by staff. It has no way of ascertaining whether personal information is being entered into GenAI tools and how GenAI-generated content is being applied. Further, as is always the case with policy and guidance, there is no way of guaranteeing that all staff will properly read, understand, and apply these.”[7]
  2. “The risks of harm from using GenAI tools are too great to be managed by policy and guidance alone. At present, there are insufficient controls in place regarding staff access to GenAI tools coupled with a lack of assurance capabilities to verify that such use is appropriate. In other words, these controls are insufficient to prevent a re-occurrence of incidents like the PA Report incident.”[8]

Decision

The OVIC decided to issue a compliance notice, with 6 specified actions required (some of which DFFH can apply to amend), including:

  1. DFFH must direct child protection staff to not use any generative AI tools as part of their duties;
  2. DFFH must block access to 15 specified generative AI tools between 5 November 2024 and 5 November 2026;
  3. DFFH must between 5 November 2024 and 5 November 2026 “implement and maintain a program to regularly scan for web-based or external” generative AI tools similar to those directed to be blocked; and
  4. “DFFH must implement and maintain controls to prevent Child Protection staff from using Microsoft365 Copilot” between 5 November 2024 and 5 November 2026.[9]

Takeaway

Businesses which handle important or high-risk personal information should be on notice that they may not be able to implement sufficient controls around AI systems to prevent breaches of Australian privacy laws, and should consider blanket prohibitions to avoid fines or reputational damage.

Hillhouse Legal Partners can assist if you have any questions about treatment of personal or sensitive information, you require the preparation of a privacy policy, or you have experienced a data breach. Feel free to reach out to John Davies, Lawyer or Craig Hong, Director to discuss.


[1] Office of the Victorian Information Commissioner, Investigation into the use of ChatGPT by a Child Protection Worker, available: https://ovic.vic.gov.au/wp-content/uploads/2024/09/DFFH-ChatGPT-investigation-report-20240924.pdf p5.

[2] Ibid p21.

[3] Ibid p23.

[4] Ibid p24–25.

[5] Ibid p26.

[6] Ibid p26.

[7] Ibid p28.

[8] Ibid p28.

[9] Ibid p29-30.


The information in this blog is intended only to provide a general overview and has not been prepared with a view to any particular situation or set of circumstances. It is not intended to be comprehensive nor does it constitute legal advice. While we attempt to ensure the information is current and accurate we do not guarantee its currency and accuracy. You should seek legal or other professional advice before acting or relying on any of the information in this blog as it may not be appropriate for your individual circumstances.
