Artificial Intelligence has arrived in schools – what you can do to manage the risk

AI is already a central part of everyday life. When your call is answered by a chatbot (“press ONE for reception”), that’s AI. When your streaming service recommends a new TV series for you, that’s AI. When your car suggests you avoid the freeway to shave ten minutes off your travel time? Yes, that’s AI too. These kinds of AI operate largely in the background – and most of the time we probably don’t even notice that the AI is there, helping us. With the arrival of generative AI, however, the use of AI more generally has rightly come under scrutiny, requiring schools and other organisations to balance the risks and opportunities inherent in an emerging technology with ever-changing capabilities.

There is no doubt that AI has arrived, and it’s not going anywhere. Your staff and students are already using AI in everyday school life – sometimes without knowing it, and at other times knowingly but without adequate safeguards and guidelines.

Should I be worried?

As with any tool, AI can be used for good, or as a weapon to cause harm. Just last week, a Victorian school student was reportedly expelled for using AI to generate ‘deepfake’ nude images of female students in grades 9–12. A deepfake is an image that takes a person’s face (for example) and places it on a completely unrelated body. The result is an extremely convincing fake image of a person in a situation that never occurred, often engaging in explicit or otherwise harmful activities. The technology can also create very convincing fake videos – recent news reports suggest that an aggrieved school employee in the United States used AI to frame a school principal as having made racist remarks. The kind of AI that ‘creates’ content like this is called generative AI. Generative AI can create images, write recipes and poetry, and even compose music. One of the scary things about generative AI is that practically anyone can use it: it is readily available and requires no special skills or training. Young people are increasingly accessing generative AI, and being harmed by others using it, in the school environment.

AI can, on the other hand, be used to help you streamline your school’s business operations, support student learning and draft documents by providing useful feedback and suggesting improvements. However, even when used with good intentions, AI can pose risks to you and your organisation. Schools have a duty of care to protect students from reasonably foreseeable harm. If the proper safeguards are not in place, the use of AI in your school could place your staff’s and students’ personal information and safety at risk, as well as expose your school to regulatory sanctions or even legal action.

How can AI help me? What are the risks?

AI needs to be fed information in order to generate output. For example, AI can’t make a deepfake image without being given somebody’s photos to use and manipulate. It can’t write an article without being told what the article is about and the kind of language to use (we assure you, however, that this article is 100% human-authored!). This ‘information in, information out’ process is where a lot (but not all) of the risk arises. AI is hungry for information, and you can never be sure what will happen to information once you feed it into the system. Some of the key risks that can arise from AI use are outlined below.

Risk 1: Data breaches and non-compliance with privacy obligations

Let’s say you want AI to help you generate a new and more efficient way to prioritise student enrolment offers. To do this, you feed the AI existing student applications and the current rules your school uses to prioritise offers. The AI might then sort through the information you fed it and suggest a new way to organise or use that information to your benefit – wonderful! However, unbeknownst to you, you have just ‘disclosed’ (for the purposes of Australian Privacy Principle 6, or APP 6) students’ personal information to the internet at large. AI systems could also be used to track students’ online activities, infer sensitive information, or make predictions about their future behaviours or outcomes – scenarios that could infringe students’ privacy rights and autonomy. The second you enter information into an AI system, you lose control of that information. It is now ‘on the internet’ forever.

This inadvertent disclosure of information could constitute a Notifiable Data Breach and attract regulatory sanctions from the Office of the Australian Information Commissioner (OAIC), particularly as enrolment applications generally include information about the health and the religious, cultural or racial identities of prospective students. It could also lead to serious harm to the people whose information has been disclosed – not to mention reputational damage to your school.

Risk 2: Loss of intellectual property and inadvertent copyright breaches

Imagine that you would like to use AI to improve your school’s suite of policies and procedures. AI could potentially give useful feedback on how to streamline and otherwise improve your operational documents. However, as with the above example, you can only get out of AI what you put into it. In order for AI to assess and critique your policies, it needs to read them. Once you have given that commercial information to the AI, it may use it to ‘learn’ and to respond to other users’ questions. Perhaps your school’s valuable intellectual property will be used by the AI the next time anyone asks it to write a policy. If you don’t want to share your commercial-in-confidence material or intellectual property with the entire internet, beware of feeding it into the AI.

The risk also runs in the other direction. Imagine the AI suggests you rewrite your procedures a certain way and you implement that suggestion. Then, to your horror, your school is sued for using another organisation’s intellectual property without consent. It turns out the AI had given you a copy of someone else’s documents to use, and you had no idea. Without knowing it, your school has infringed somebody else’s copyright.

Risk 3: Teaching, learning and academic dishonesty

AI can be used in schools to personalise learning, provide real-time feedback and create immersive educational experiences. It can be used to develop teaching and assessment tasks and streamline assessment and grading. Be wary, however, of relying too much on AI to help you grade and assess student work (or to perform any other tasks involving students). Although AI could help you to save time and create engaging materials, keep in mind:

  • AI can exhibit strong biases. This is because AI can only work from what it has been trained on – and it is well known that gender, socio-economic and racial biases are built into much of what appears on the internet. If you are not conscious of these biases in the AI, your school could be exposed to a discrimination claim.
  • It is also well known that AI can be used by students to cheat on assessment tasks. For example, students can ask a tool like ChatGPT to write a 700-word essay on the literary importance of a book. The quality isn’t always great (AI ‘hallucinates’ – see below), but it can often be enough for students to achieve pass marks, even at university.
  • AI can ‘hallucinate’. AI is currently considered unsuitable for research because it can make up whatever it doesn’t know. Students need to be taught the difference between research based on primary-source evidence and AI systems that scan for correlation or opinion and present it as fact.

What should your school do to minimise risks?

There are countless benefits and risks associated with using AI. Below are some tips to help you prepare for, respond to and use AI, while minimising the risk to your school and community:

  • Introduce safeguards:
    • Use privacy impact assessments suitable for the schools sector to ensure a privacy-by-design approach to any projects or work that may involve the use of AI.
    • Demand transparency: Schools and ed-tech companies should clearly communicate to students, parents, and educators what data is being collected, how it is being used, and who it is being shared with. They should also provide options for opting out of data collection where possible.
  • Staff training on AI, its capabilities and how to avoid data breaches:
    • Whitelist any AI tools you have approved (e.g. for report writing) and any that are used only within your closed system (illustrated in the sketch after this list).
  • Provide guidance:
    • Establish clear guidelines for data handling, specifying the types of data that can be fed into AI systems and ensuring that only de-identified or otherwise relevant and necessary information is used (the sketch after this list shows one simple way to approach this).
    • Promote a culture of privacy awareness: Encourage the whole school community to safeguard students’ personal information, and foster a sense of responsibility and accountability among educators.
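
To make the whitelisting and data-handling points concrete, below is a minimal, illustrative Python sketch of the kind of gate a school’s IT team might place between staff and an external AI tool. Everything in it is an assumption for illustration: the approved tool names, the redaction patterns and the submit_to_ai function are all invented, and simple patterns like these are no substitute for proper de-identification or a privacy impact assessment.

    import re

    # Hypothetical allowlist of approved AI tools (invented names for illustration).
    APPROVED_AI_TOOLS = {"report-writing-assistant", "policy-review-assistant"}

    # Illustrative patterns for obvious personal identifiers. Real de-identification
    # requires far more care than a handful of regular expressions.
    REDACTION_PATTERNS = [
        (re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"), "[EMAIL REDACTED]"),
        (re.compile(r"(?:\+?61|0)4\d{2}[ -]?\d{3}[ -]?\d{3}\b"), "[PHONE REDACTED]"),
        (re.compile(r"\b(?:student id|enrolment no\.?)\s*:?\s*\d+\b", re.IGNORECASE),
         "[ID REDACTED]"),
    ]

    def deidentify(text: str) -> str:
        """Strip obvious personal identifiers before text leaves the school's systems."""
        for pattern, replacement in REDACTION_PATTERNS:
            text = pattern.sub(replacement, text)
        return text

    def submit_to_ai(tool_name: str, text: str) -> str:
        """Refuse unapproved tools, and only ever pass on de-identified text."""
        if tool_name not in APPROVED_AI_TOOLS:
            raise PermissionError(f"{tool_name!r} is not an approved AI tool")
        return deidentify(text)  # in practice, this is what would be sent to the tool

    if __name__ == "__main__":
        prompt = "Draft a report comment. Student ID: 104233, parent email jane@example.com."
        print(submit_to_ai("report-writing-assistant", prompt))
        # -> Draft a report comment. [ID REDACTED], parent email [EMAIL REDACTED].

The point is structural rather than technical: approved tools are named explicitly, and nothing reaches an external system without first passing through a de-identification step.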

How we can help

Our Education team is in demand for up-to-date, informative and entertaining staff PD on privacy matters, including AI. We can prepare all policies and procedures and assist organisations to maintain best practice in privacy and data protection. If you have existing policies, our team can assist with reviewing and updating them to ensure your organisation continues to mitigate the risks posed by new technologies – you might be surprised by the gaps that exist! We can also provide tailored, interactive training to your organisation on your obligations under the Australian Privacy Principles and other regulatory schemes in your jurisdiction.

Contact us

Please contact us for more detailed and tailored help.


Disclaimer: This article provides general information only and is not intended to constitute legal advice. You should seek legal advice regarding the application of the law to you or your organisation.
