Article
30 October 2024
Privacy Risks of AI Systems: OAIC Guidance and the OVIC’s ChatGPT Investigation
Businesses should take careful note of the risks of using artificial intelligence (AI) and implement controls appropriate to their business, whether those controls govern careful use or prohibit use altogether.
A breach of Australian privacy law (for example, the Privacy Act 1988 (Cth) (the Act)) may result in significant fines or reputational damage. Because AI has been commercialised only recently, best practice compliance steps have not yet had time to develop, and businesses should be especially careful when using AI systems.
Sufficient controls (or outright prohibition) are especially important where the AI system is exposed to personal information, makes decisions for a business (e.g., reviewing and sorting resumes), or performs high-stakes work (e.g., drafting court material).
As a general comment, it should also be understood that AI systems are often wrong, and their output should be thoroughly fact-checked to confirm accuracy.
Best practice is not to input personal information (and especially not sensitive information) into publicly available AI tools, or indeed any AI system, unless appropriate safeguards and restrictions are in place.
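By way of illustration only, the following is a minimal Python sketch of the kind of safeguard a business might place in front of a public AI tool. The patterns and names are our own assumptions, not a complete personal-information detector.

```python
import re

# Illustrative patterns only -- a real deployment would use a vetted
# PII-detection tool rather than hand-rolled regexes like these.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b0\d(?:[ -]?\d){8}\b"),  # Australian number shape
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Gate applied before any text leaves the business for an external AI tool.
prompt = "Summarise this complaint from jane@example.com (ph 0412 345 678)."
print(redact(prompt))
# -> Summarise this complaint from [EMAIL REDACTED] (ph [PHONE REDACTED]).
```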
OAIC Guidance
The Office of the Australian Information Commissioner (OAIC) has issued guidance regarding the deployment of AI systems within an organisation subject to the Privacy Act (an APP Entity) to provide a product or service, particularly in the context of generative AI (the OAIC AI Guidance).
This guidance is crucial as AI systems are highly complicated and carry numerous complex privacy risks.
We urge all businesses contemplating the use of AI to read the OAIC’s AI Guidance in detail, and in particular the included checklists, before using any AI system.
This article does not propose to summarise or repeat the OAIC’s AI Guidance in detail; however, a number of key takeaways should be emphasised.
Due Diligence
Prior to using any AI system, due diligence is critical. You must understand:
You should regularly check and confirm whether any of the above changes during your use of the AI system.
Use of Personal and Sensitive Information in AI Systems
Your privacy policy must clearly state how your business uses AI. In some circumstances (e.g., where an AI system is used to take meeting notes), this alone will likely be insufficient, and the meeting participants should be given an opportunity to opt out.
APP Entities are required by Australian Privacy Principle 6 to use or disclose personal information only for the particular purpose for which it was obtained. There are exceptions that permit use or disclosure for a secondary purpose (e.g., where consent from the individual has been obtained). One such exception applies where the individual would reasonably expect the APP Entity to use or disclose the information for that secondary purpose, and the secondary purpose is related to the primary purpose (or directly related, if the information is sensitive information).
The OAIC Guidance relevantly provides that “[i]f your organisation cannot clearly establish that a secondary use for an AI-related purpose was within reasonable expectations and related to the primary purpose, to avoid regulatory risk you should seek consent for that use and/or offer individuals a meaningful and informed ability to opt-out. Importantly, you should only use or disclose the minimum amount of personal information sufficient for the secondary purpose.”
Controls
Prior to using an AI system, a business should consider the worst-case scenario, as some AI systems are black boxes whose “reasoning” cannot be extracted and examined. For example, the OAIC AI Guidance notes that using AI in recruitment could discriminate against candidates as a result of bias in the system. For this reason, any commercial use of an AI system should include sufficient controls to analyse and manage the risks associated with the black-box nature of the software.
These controls are discussed in further detail below in our commentary on a recent report by the Office of the Victorian Information Commissioner (OVIC).
Businesses which permit the internal use of AI should perform a privacy impact assessment, implement an AI policy containing express requirements for the use of AI systems, and undertake regular staff training on the use of AI.
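As a minimal sketch of one such control (assuming a Python-based workflow; the function names and log format are our own assumptions), a business might log every AI interaction so that black-box outputs can later be audited:

```python
import datetime
import json

def logged_ai_call(call_ai, prompt: str, user: str,
                   log_path: str = "ai_audit.jsonl") -> str:
    """Send a prompt to an AI system and keep an auditable record of it."""
    response = call_ai(prompt)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
    }
    # Append one JSON record per interaction for later review.
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return response

# Stand-in model used purely for illustration.
print(logged_ai_call(lambda p: "stub response", "Draft a polite reply...", user="hr-01"))
```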
Generation of Personal Information
You should be aware that AI systems are trained on a wide range of information, which means they are capable of generating personal information. The OAIC AI Guidance references an example in which workplace psychosocial hazard training was partially created with AI, and the AI generated a real situation using the real names of the persons involved (who were involved in an ongoing court matter at the time). Such an event may constitute a collection of personal information under the Act, and the information would need to be treated accordingly as unsolicited personal information.
Meeting Minutes
While seemingly innocuous, the risks of using an AI system to record a meeting are substantial. Meetings can veer off topic, in which case any personal and sensitive information discussed may well be information the business is not permitted to collect. That information should then be erased or de-identified, a step that is easily overlooked without proper systems in place.
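As a purely illustrative sketch, one such system is a retention gate: AI-generated minutes are not stored until a human reviewer confirms the clean-up has occurred. Everything below is our own assumption, not a prescribed process.

```python
def store_minutes(minutes: str, reviewer_confirmed_clean: bool) -> None:
    """Retain AI-generated minutes only after human review."""
    if not reviewer_confirmed_clean:
        raise PermissionError(
            "Minutes not stored: a reviewer must first confirm that any "
            "off-topic personal or sensitive information has been erased "
            "or de-identified."
        )
    with open("minutes.txt", "w", encoding="utf-8") as fh:
        fh.write(minutes)

store_minutes("Agenda item 1: ...", reviewer_confirmed_clean=True)
```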
AI and Images
You also need to be aware that any image generated by an AI may copy part (or all) of an image it was trained on. Such generated images may reproduce personal or sensitive information and may breach copyright laws.
Uploading images to AI systems should generally be avoided even where no personal or sensitive information is apparent, as the image may contain metadata, or other information sufficient to identify a location or an individual.
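As an illustration, the following minimal sketch (assuming the Python Pillow imaging library) re-saves an image from its raw pixel data, dropping embedded metadata such as GPS coordinates before any upload. It is simplified, may not handle every image format, and does nothing about identifying details visible in the image itself.

```python
from PIL import Image  # assumes the Pillow library: pip install Pillow

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image from raw pixel data, dropping embedded metadata
    (EXIF GPS coordinates, device identifiers, timestamps, etc.)."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

strip_metadata("photo.jpg", "photo_clean.jpg")
```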
Chatbots
Our view is that any business seeking to use an AI chatbot should seek legal advice beforehand, as such activity may result in the improper collection of personal and sensitive information. Chatbots also raise particular risks regarding Australian Privacy Principle 10 (which requires reasonable steps to ensure the accuracy of personal information collected) and Australian Privacy Principle 3 (which requires that, unless it is unreasonable or impracticable to do so, personal information be collected from the individual directly).
OVIC Investigation
A Deputy Commissioner of the OVIC recently conducted an investigation into the use of ChatGPT by a child protection worker (the Worker) employed in the Victorian Department of Families, Fairness and Housing (DFFH).
The investigation is an illustrative example of what controls may or may not be sufficient to guard against the risks of using AI systems.
Conduct
In this case, the Worker used ChatGPT to assist in drafting a protection application report, which is submitted to the Children’s Court to assist the court in deciding whether a child needs protection.
The Worker’s use of ChatGPT was plainly inappropriate and dangerous, as “the Protection Application Report mistakenly described a child’s doll, that was used by the child’s father for sexual purposes, as a mitigating factor, in that the parents had provided the child with “age appropriate toys””.[1]
Of some interest are the nine factors identified by the DFFH in its investigation which indicated ChatGPT involvement:[2]
Any business that, as part of its AI controls, audits work for evidence of AI use should take note of these examples.
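As a purely hypothetical illustration of such an audit, a business might begin with a simple scan for tell-tale phrasing, flagging any hits for human review. The indicator phrases below are placeholders of our own invention, not the factors identified by the DFFH (which are set out in the OVIC report).

```python
from pathlib import Path

# Placeholder indicators only -- NOT the nine DFFH factors.
INDICATORS = [
    "as an ai language model",
    "in conclusion, it is important to note",
]

def audit_document(path: Path) -> list[str]:
    """Return any indicator phrases found in the document's text."""
    text = path.read_text(encoding="utf-8").lower()
    return [phrase for phrase in INDICATORS if phrase in text]

for doc in sorted(Path("reports").glob("*.txt")):
    hits = audit_document(doc)
    if hits:
        print(f"{doc.name}: flag for human review ({', '.join(hits)})")
```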
Breach
It was determined that this conduct constituted a breach of Information Privacy Principles 3.1 and 4.1.
Information Privacy Principle 3.1
An organisation must take reasonable steps to make sure that the personal information it collects, uses or discloses is accurate, complete and up to date.
Information Privacy Principle 4.1
An organisation must take reasonable steps to protect the personal information it holds from misuse and loss and from unauthorised access, modification or disclosure.
Controls
The DFFH had the following controls in place at the time of the conduct:
The OVIC decided that these controls were insufficient and that there was a need to train all employees, not only management staff.[4]
Since the conduct took place, the DFFH has created specific “Generative Artificial Intelligence Guidance” (which has been circulated to all DFFH staff on several occasions), which includes two “critical rules”:
However, the report noted that:
Decision
The OVIC decided to issue a compliance notice specifying six required actions (some of which the DFFH may apply to amend), including:
Takeaway
Businesses which handle important or high-risk personal information should be on notice that they may not be able to implement sufficient controls around AI systems to prevent breaches of Australian privacy laws, and should consider blanket prohibitions to avoid fines or reputational damage.
Hillhouse Legal Partners can assist if you have any questions about the treatment of personal or sensitive information, you require the preparation of a privacy policy, or you have experienced a data breach. Feel free to reach out to John Davies, Lawyer, or Craig Hong, Director, to discuss.
[1] Office of the Victorian Information Commissioner, Investigation into the use of ChatGPT by a Child Protection Worker (24 September 2024) p 5, available at: https://ovic.vic.gov.au/wp-content/uploads/2024/09/DFFH-ChatGPT-investigation-report-20240924.pdf.
[2] Ibid p 21.
[3] Ibid p 23.
[4] Ibid pp 24-25.
[5] Ibid p 26.
[6] Ibid p 26.
[7] Ibid p 28.
[8] Ibid p 28.
[9] Ibid pp 29-30.
The information in this blog is intended only to provide a general overview and has not been prepared with a view to any particular situation or set of circumstances. It is not intended to be comprehensive nor does it constitute legal advice. While we attempt to ensure the information is current and accurate we do not guarantee its currency and accuracy. You should seek legal or other professional advice before acting or relying on any of the information in this blog as it may not be appropriate for your individual circumstances.