09 May 2024
Cybersecurity and AI at schools with Kirra Pendergast
In this webinar, you will
- develop enhanced familiarity with the cyber safety implications associated with emerging technologies in schools
- effectively navigate the complexities of managing AI-related risks in the school setting as leaders and teachers
- engage with key strategies for fostering a safe and responsible integration of AI in educational practices
Participants will actively engage with expert-guided strategies and acquire practical skills and insights to navigate the potential risks, challenges and opportunities presented by the use of artificial intelligence in schools.
About Kirra Pendergast
Kirra Pendergast is the Founder and CEO of Safe on Social, the premier privately-owned provider of online trust and safety education and consulting services globally. Kirra is a sought-after public speaker and international media commentator on online safety, cyberbullying, and the responsible, safe and ethical management of AI use in education.
Length: 1:24:35
Download
Transcript
ANAM JAVED:
Hi, everybody. Given that it's 4:00, I think we might get started. My name is Anam Javed, and I'm the master teacher in residence for technologies within the Teaching Excellence Division here at the Academy. We welcome you to today's presentation, which is part of the Academy's thought leadership series.
We also appreciate and recognize all the work that you do for your students, for your colleagues, and for your communities to provide teaching and learning of the highest quality. And so we do thank you for privileging this time to attend this professional learning session at the end of what we imagine is a very busy day. We'd also like to acknowledge, on behalf of the Academy, the traditional owners and custodians of the many lands that we are on today. I'm on Wurundjeri land of the Kulin nation, land that was never ceded. I pay my respects to Aboriginal and Torres Strait Islander elders, past and present, and I extend this respect to Aboriginal colleagues who are here with us today.
On behalf of the Academy, we recognize the continuing connection to land, waters, and community. It's my honor now to introduce our guest this afternoon, Kirra Pendergast, who will discuss cyber security and artificial intelligence use in schools. It would also be very appropriate for me to give Kirra a big shout-out and to provide a brief bio for Kirra as well. Kirra is the founder of Safe on Social, an online safety agency. Kirra's journey from the early days of the internet to the current era of artificial intelligence has made her a leading consultant, speaker, and educator in online safety education, including on cyberbullying and the safe and responsible management of AI use in education. Kirra has a background in IT, business consulting, and cyber security and has provided strategic advice to a wide range of government departments, premiers, ministers, and executive directors, as well as consulted to organizations ranging from Fortune 500 companies to small businesses. Kirra advocates for a mindful use of technology and aims to empower individuals to navigate technology with confidence and awareness.
At this point, it would be very prudent to mention that we provided the attendees a Slido link a few weeks in advance, and Kirra very kindly has prepared a presentation that addresses many of the questions posed through that Slido. We're going to accept questions once Kirra has concluded her presentation, and we ask you to hold on to these questions till then, because I've got a strong hunch that Kirra will be addressing many of them anyway. Kirra, it's now my pleasure to hand over to you. Thank you so much for being with us today.
KIRRA PENDERGAST:
Thank you, Anam. Thank you all for having me. Can you hear me OK? Just triple-checking. What we're talking about today may seem pretty overwhelming at first, but we are recording this, so you will have access to it if you need to go back. And at the end of the session, you will have my email address if any other questions come up way after the event as well. I'm always happy to help where I can. I've worked with more than 800 schools across Australia over the last ten years, and this is coming up every single time I speak. I just completed a nine-week speaking tour, I'm about to come back for another one, and it is top of mind.
So we're going to get into the detail of what I think we need to discuss, which is the underlying cybersecurity when it comes to AI. For my whole career, I've used the analogy that we need to build our houses to lock-up before we put anything of value in them. And a lot of people are already using ChatGPT, everything through to Gamma, which is like a PowerPoint slide developer, and a multitude of other programs.
And before we get too far into it, we need to pull it back, detect the patterns, and learn from what's happened in the past, and that's what I'm going to cover off today, as well as most of the things that came up in the questions that you all kindly provided. So first and foremost, we need to understand what we're dealing with here. Artificial intelligence is not new. In fact, it's been around since the 1950s. What came at us really, really quickly 18 months ago, and that's already gone at lightning speed when you think about it, is what we call generative AI, which is your GPTs and things. So we have four major types of AI. Large language models do text, Q&A, all of the things that ChatGPT does. Machine learning is the big algorithms that drive things like social media platforms. Deep learning is used in speech recognition, so every time you talk to Siri or Alexa or something like that. And then there's natural language processing, which is like Google Translate and all of those things we're seeing and using in 2024.
So we do use this daily. There's absolutely nothing to be scared of if we get it right. OK. So there's a whole heap of things that we're going to talk about today with regards to where I think you should start teaching about this while we're preparing for all sorts of policy to come through. But we need to understand where we use it every day so we can start to demystify it, because there are a lot of parents in particular that are (INAUDIBLE) and paranoid, reading all of these atrocious things about how this is going to take over the world and all of that sort of stuff. So we need to dull all of that down a bit and replace it with factual evidence on how we use this every single day. So smartphones and mobile devices, any kind of virtual assistant like Siri and Alexa, as I mentioned. Social media, online shopping: we use it every single day and we get recommended products. And in fact, if you're a social media user, you've probably been recommended things, right down to a piece of clothing, based on where you're located at that particular time and the cameras that are on the roof all talking together through all of these big algorithms.
This is why we get ads for things we were just talking about the other day. That's artificial intelligence at work. Satnav, GPS, streaming services like Netflix, home automation and smart devices, all of those Google Nests, and the light systems, and everything you can get now. Fraud detection in banking, healthcare diagnostics: AI has been wonderful for healthcare. And being someone that's been in the technology industry for 32 years, I've seen patterns come like this. And what we're seeing now, and this is going to make your minds go, is that whilst we're learning quickly about AI use, we need to keep one eye on what's happening with quantum computing, because that's where we're going next. And for the next generation of students coming through schools, you can think about this: kids in Kindie, probably by the time they're in year seven or eight, will be using quantum computing in the classroom and could literally find a cure for cancer with the amount of computing power that's there. We just don't know.
So we need to keep one eye on what's coming next. Obviously, this is used in all sorts of safety features and things as well. So with all of that comes a lot of challenges and opportunities. And we need to think about preparing our students for their future, not our past. I started my career about 12 months before the internet was on the desktop.
So it was used in education in universities and it was used by defense, but it wasn't being used by the everyday person. And back then we had the same kind of debates that are happening, I should say, right now, which are around ethical use. Is this cheating? It's going to destroy education as we know it. And in history, we can see that happened with books and things as well. So we need to think about not so much our personal opinions on this, but how we prepare the young people that we teach on how to do this better, for the sake of humankind and also for them, preparing them for a work-ready kind of space that is going to be steeped in AI, because it already is.
There's a lot of firms that are using it right now, so we need to proactively educate staff across the board: admin, executives, leaders, teachers, the whole lot. We all need to be on the same page and really put down a very strong foundation that's steeped in risk management before we go crazy rolling everything out in a classroom. So, common misconceptions. AI is not going to replace human workers as we know it, especially in the education space, because of the human capability here and the way that we teach; there's empathy and all of those things that aren't there, and won't be there for a long time, with the technology that we're talking about. So we need to remember there's a lot of misconceptions. There's a lot of people that are fearful, and I get it, it's scary when we start using new technology, but hopefully by the end of this, you're way more confident on how you can move forward in a secure way while mitigating the risk as well. So Australia already has AI ethics principles, and a lot of people don't know that they exist, because they've been around for quite some time.
But those are around the technology side, if you were a tech developer, to promote safer, more reliable and fairer use of AI, mitigate the risks and, of course, put those ethical standards in place. But what is fairly new is the framework for education, which I'm going to talk about in a minute. But first and foremost, what I'm teaching when it comes to school use is that, as I said earlier, while we're waiting for the different state governments to put together a cohesive policy and make some decisions about how this can be used in education, we need to start teaching AI ethics. That is the absolute best start, so students understand right and wrong on how to use this technology. And that would be the absolute baseline and best place that you could start: doing research, teaching about those kinds of things in the classroom, teaching staff, leading schools with ethics and AI, and making it very public to your community how you're doing this. So AI ethics involves transparency, bias, and fairness.
Privacy, security, and accountability are the major ones. The one that we don't often think about is sustainability. This technology consumes enormous amounts of energy, and I have to say that a lot of senior students that I've presented this topic to are very, very interested in this and want to learn more about it. So there are some really interesting ways that we can teach that as well. Grasping the impact on human potential and job losses. And as I said, preparing students for their future, not our past. It will kill off, for lack of a better term, some job roles. We're already starting to see that with people that work in certain fields, like managing things and copywriting and stuff like that; they need to pivot and change. And so we need to be looking at that with any kind of career advice. Digital literacy is actually on the increase, and we're seeing literacy levels increase all around the world (INAUDIBLE), all over Europe. I'm involved in big groups all over the US, and they're all saying that they're seeing an increase in literacy because of the young people using AI technology and the way that you have to work with it.
I hate the term prompting, because it's a large language model; we should be talking to this thing like it is a human, rather than trying to be like a prompt engineer and all of these other things that are being touted all over the place. We just need to talk to it. But because we have to be really targeted about what we're saying to any kind of AI technology, literacy levels are actually increasing, because you have to really think about what you're saying to get the answer that you need. AI should be inclusive; unfortunately, there's still some bias, which we'll discuss a bit more in a moment, but we need to be having those big discussions as we're educating people around AI ethics (INAUDIBLE) absolute baseline. When I use the analogy, build your house to lock-up, this is literally the foundation. When you lay that cement slab, it needs to be AI ethics, and then everything else builds from there. So the framework and design principles that were released by the Australian federal government in late January or early February are these.
It's built around six different principles with 25 guiding statements attached to them. And what I did for this presentation was a bit of gap analysis of what they've stated and where I think it might need a little bit more work, or a little bit more thinking collectively by those of us that educate in this field. The first of the six is teaching and learning. So we have to think about impact and instruction. Teacher expertise: we're using AI to support teachers. Critical thinking: enhancing that creativity this kind of technology can bring without restricting human thought. Learning design: clearly outlining the use of AI in any student's work, including assessments. And then also academic integrity: there are ways, and at the end I will teach you how to think about assessing things differently. OK, so we need to look at all of these across that teaching and learning component. The current state, though, is that it's being used really sporadically. As I said, a lot of people aren't using it; a lot were using it, and then they thought the government banned it.
So they're using it at home and not at school, and it's a bit of a shemozzle. So that impact and instruction is really important. There can be limited exposure to AI technology at schools between teachers. So bring things up in staff meetings, what people are learning, all of those things, and share that teacher expertise of what they've found works and doesn't, especially with teachers that are, like, nearing the end of their career or, like a lot that I've been working with in PD sessions, are completely app-fatigued, as we call it, because after Covid they had to learn so many new things to be able to teach that they're like, nah, I just can't, I don't want to do it anymore. So there might be some little things that pique their interest that you can work through in staff sessions. So, critical thinking and learning design are really important and are being massively underutilized, as I have seen across the continent. And academic integrity with policy writing is a lot of the work that I'm doing for schools at the moment, though not so much overarching policy, more guidelines that point to whatever policy may come down the track, to make sure that it's being used ethically.
Human and social well-being. This is about the mental health side of things, because what can happen here is kids can get really overwhelmed, and for especially anxious students, anxious teachers, people that have got too much on, this can hyper-accelerate that. The internet, when it first came on, was like walking into a room with 10,000 televisions on, and you wanted to watch every single channel at the same time. We're finding that happen again, so we need to think about that. We need to think about human and worker rights, including autonomy and dignity, as part of the framework. But the current state is that awareness and consideration of AI's impact on well-being and diversity is either completely absent or completely inconsistent. Not all AI tools prioritize user well-being. There's overreliance and misuse. There's reinforcement of biases. So we need to think about how we educate around those things as well. As you can see, there is a lot to take on, and I'm only two principles into this, so it needs to be well planned, well thought out, and constant education is the best way to move forward.
Transparency is another spoke in the framework. So informing the school community about the impact of AI use, whether the school is using it or not, needs to be communicated with parents. You might want to have a parent information evening so they can come in and understand how you're using it, when you're using it, and what sort of impact that has on their student, making sure that they understand how it works, and eliminating that fear because the risk is being mitigated and you are showing publicly how you are mitigating that risk. The transparency regarding AI use in education varies widely. OK. So what we're seeing is some are being left completely uninformed. As a personal experience: my niece, my brother's daughter, who's 14, I subscribed her to ChatGPT-4, even when it was still just in its fledgling state, and said, learn it. Whereas another mother flat out said no to her child, who was the same age. So one comes from a technology family, and the other one comes from a completely overwhelmed and scared side of that.
I will disclose now that we're both from Byron Bay, so you can imagine the difference in opinions that come out of an area like that. But it just proves that we need to be transparent. We need to educate beyond that fear to make this a worthwhile exercise for everybody, and to get the community on the same page. Fairness is another big one. Looking at minimizing opportunities for discrimination, cultural and intellectual property issues, all of those things, is another spoke. But again, the impact of AI technology can be really uneven. So we need to really think about discrimination, cultural responsibilities, all of those things. Because originally, if you typed something in and said, write me the history of a particular location in Australia, it was only writing the white history. Now that has been corrected over and over again, and you can do that by simply writing: incorrect, please tell me the Indigenous history of this location. Then it starts to learn from us as well. So we need to make sure that we're constantly checking things like that.
Accountability: maintaining human control in decision-making here, monitoring AI, and managing those risks and opportunities, as you'll hear me say over and over again. So the gap analysis there obviously is establishing those clear lines of accountability, ensuring the reliability of AI, and enabling any kind of community feedback, whether that is from a student cohort, parents, staff, or leaders that come together as a group across a region. But the big one here, and my area of expertise, is around privacy, security, and safety, and upholding privacy and data rights in accordance with regulation, thinking about where we need to seek consent. What I'm doing with a lot of schools that I work with at the moment is updating all of their permission to publish forms. So right down to that level, we need to start thinking about how this impacts members of staff, students, and the wider community. With the onslaught of deepfakes and things, which I'll talk about in a moment, and AI increasing the risk of sextortion, we need to be onto it around how we take photos and publish photos on Facebook pages for the school, on the school websites and things like that.
We need to think about using anything free. OK, so anything free to use can cause an issue, because you immediately become the product. So you need to make sure that you've got sign-off from parents about different products, and that you've checked the privacy and data retention section of any contract for any app that you're signing your school up to use. We need to think about that before feeding data into any of these, because we could be breaching student data privacy. We can be breaching student copyright by feeding their work into a cheat checker, for example, that hasn't been signed off on. So we need to do some research around that.
Cybersecurity and resilience: making sure there are robust security measures in place for AI tools and data, and that's a conversation that needs to be happening with school IT departments, and ensuring compliance with copyright law when using cheat checkers and things like that. So, the current state on this one is we have a whole heap of different things going on here, 'cause what I see happen all the time is that when new technology comes out and everyone goes, wow, and jumps on board, we'll often tick a box that says accept terms and conditions of use without fully reading them.
So, we need to understand how that data is being handled and highlight any gaps in privacy protection, like where the data is being stored and how it is being used. So, we have to read those terms and conditions. But with the most major one that we're hearing about at the moment, which is ChatGPT, just last week they abolished any need to sign up for an account. So, ChatGPT is effectively competing directly, head on, with Google as a search engine as well. So, up until last week and still, under what's called the Children's Online Privacy Protection Act out of the US, COPPA compliance means that people need to be over the age of 13 to use a lot of these tools that can be collecting some kind of personal data of a child and using that to sell advertising and things like that. So, the tick in the box to accept terms and conditions always has in it, yes, I am over the age of 13, but that's when you sign up for an account and download the application and use it. OK. So, what ChatGPT, or OpenAI, have done now is they've opened it up.
So, you don't need to actually have an account. So, this opens up a whole new plethora of risk for young people that don't know what they're doing and don't understand the ethics, getting onto this app and using it however they like. So, we need to train them around digital literacy and the online safety side of it. You know, as I've been saying to the team, I feel like I've been hurtled back to the mid-90s, when we were just starting to educate about antivirus and firewalls and things like that. And we've seen these massive waves of technology repeat over the years: mobile phones, then the internet, and then smartphones, and then social media, and now AI. And we're still not ahead of it when it comes to online safety and cyber security. A lot of people think that they know a lot about this, but I can guarantee you, as someone that works with schools day in, day out, it changes every day, and unless you have a deep technology background, you are never going to be fully on top of how it moves, because it's very, very hard to detect what's coming if you don't understand the patterns of the past.
This is why, in all of my cyber safety talks now, I've got a firm eye on quantum computing and am already discussing what's coming next. So, we need to constantly be looking forward to make sure that's all wrapped up in policy and regulation, because that is the biggest thing we can do here to be protecting student data and privacy ongoing. Then there's monitoring and supporting what's going on; partnerships and collaboration, talking with tech companies, even if you've got the app, having tech companies present on how they're doing this so students learn from it; and then access and equity. The digital divide in Australia is huge already. Australia is a little bit behind the rest of the world on how we're educating around AI, but we also have big issues when it comes to connectivity. Some students, where they live, don't have access to anything other than maybe Starlink from a satellite, if they're lucky, and the way AI consumes data means they're missing out. So, a lot of the time, the only place they're going to get access to technology that they may need in the workplace is at school.
So, we need to keep an eye on that digital divide and how we do that better as well. So, this is where policies and safety come into play: what people share online aligning with regulations, prevention of online bullying and harassment, respecting other people's privacy. The way to teach a child about consent with all of this is to always ask permission before you share their photos. Even if the child's parent has signed a permission to publish form, always ask the child first. Just a little sideline, because I'm extremely passionate about this: if you are taking photos of children at your school, or staff at your school, to publish onto the school's Facebook page, just rethink that. We need to update those permission to publish forms in an age of AI. So, for example, if there's a photo being taken because the child has won something at a sports carnival or a certificate, just make sure now that you slightly obscure their face, so that when they're holding up the certificate for the photo, it's across part of their face and their face can't be cropped and used in a deepfake, for example.
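As one practical illustration of that advice, here is a minimal sketch, assuming Python with the Pillow imaging library, of blurring part of a face before a photo is published. The file names and crop coordinates are hypothetical; framing the certificate across the face when the photo is taken achieves the same thing with no code.

```python
# Minimal sketch: blur a rectangular region (e.g. part of a face) before
# publishing a school photo. Requires the Pillow library (pip install pillow).
# The file names and box coordinates below are hypothetical examples.
from PIL import Image, ImageFilter

def blur_region(src: str, box: tuple[int, int, int, int], dest: str) -> None:
    """Blur the pixels inside box = (left, upper, right, lower) and save a copy."""
    img = Image.open(src)
    blurred = img.crop(box).filter(ImageFilter.GaussianBlur(radius=12))
    img.paste(blurred, box)  # paste the blurred patch back over the original
    img.save(dest)

# Hypothetical face location in a sports-carnival photo.
blur_region("carnival.jpg", (220, 80, 360, 240), "carnival_published.jpg")
```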
OK. Teachers, you may wanna go back and ask if they are OK with their photo being published on a school website or on the page. And, you know, looking excited or animated, or looking down to the side, glasses, a hat, all of those things will help protect people a bit more. So, we really need to think about the policies and safeties in schools and make sure that's educated and ripples out across the community as well. The reason that I do that is because all of the big social media apps are in a race to have a synthetic relationship with us. And of course, they are targeting children, because kids are going to get used to using this, it's going to become second nature, and they are going to be the CEOs of tomorrow. So, the first case that came out was within the app Snapchat. 'My AI' is rebranded ChatGPT, where you can ask it questions. And if you're as old as me, you might remember a magazine called Dolly Magazine, where you used to write in and ask Dolly, and they would respond in the next issue.
So, this is like Dolly Doctor, for those of us that remember that. And you can ask it questions that you might be, you know, a bit awkward about asking anybody else, especially your parents, and it will respond. But what it's doing is gathering a whole heap of information. I'm on an anti-vaping taskforce with New South Wales Police and Health, and one of the things that comes up all the time is how kids buy vapes through Snapchat from random people. And if, as an example, they asked 'My AI', where do I buy a vape, we need to remind them that we don't know how this information is going to be used in the future, and if that information is sold off to a health insurer or a life insurer, it might mean that they don't get health insurance in the future. We just flat out don't know.
So, we need to think about the risks in this. We need to teach healthy interaction, making sure they consider all the privacy options and that they know the only way you can turn off 'My AI' in Snapchat is if you become a Snapchat Plus subscriber.
So, that flat out tells us: if you're paying Snapchat $7 US per month, you can turn this off; if you're using free Snapchat, you cannot. So, it's absolutely 100% data farming of that next level of students and young people playing and talking to this all day long within Snapchat. Synthetic relationships and deepfakes are going nuclear with the onset of AI. And this little video that I'm just going to play in the corner here has no volume, but it shows you what Facebook are doing. This hasn't been released in Australia yet, but it's coming. (VIDEO PLAYS) The first of their AI chatbots is a complete replica of Kendall Jenner. This isn't her, this is an AI version of her called Billie on the Facebook app. So, you can have conversations with her around what restaurant to go to, where to do yoga when you're in New York, anything you like, and you become her friend. OK. So, this kind of interaction is drawing young people and old people in to have a conversation, but as you can see there on the screen, she's following 33 other people.
Now, these are all other AI chatbots, the first within Facebook, I should say, and Instagram. The first one is Bru, which is the American footballer Tom Brady, that you can ask about sport. The next one that came up, which you can ask about healthy lifestyle and stuff like that, is called Sally, and as you can see, that's Sam Kerr, the captain of the Matildas. So, these are all designed to draw people in to get more data that they can then sell off to advertisers and make another billion dollars a minute like they do. So, understanding how AI contributes to all of this, deepfakes and video, is really important. And there you have Paris Hilton being Amber, teaching you how to be a private detective, stalk your ex-boyfriend; all of those kinds of things are coming at pace. So, just to show you how this looks with an average real person: this is 54-year-old me on the left, and this is deepfake AI technology. Just by clicking a few filters, I can make myself look 20, then five, and then 15, and then everything in between.
By the end of it, I look like some kind of supermodel. It's quite ridiculous, but, you know, something really funny happened when I was working in Hong Kong in late January. A boy in year four kept putting up his hand saying, miss, miss, I can tell it's fake. And I said, how, mate? And he said, your neck. And he was absolutely right, because by the end of this, I looked like a supermodel in the face, but my neck is still very much a 54-year-old woman's. I did remind him never to say that to a woman again in his life, but the point was valid: you can always spot a fake. AI doesn't get eyes right, it doesn't get hands right yet, but it's coming and it's getting better. This one in the middle here is a deepfake of me at the London quantum (INAUDIBLE). All I did was lean over, and this Korean company took a little snippet and deepfaked me to start talking. Yeah, so they told me that if I paid them a lot of money, they could then turn me into a chatbot. So, if you went to safeonsocial.com and typed in a question, I would answer you.
So, it's quite crazy how this is all advancing. This last one over here, again, proper me down the bottom, teenage me at the top; it's a TikTok filter, and we're seeing more and more and more of this coming. So, this is obviously what's being used behind a lot of bullying in schools, where kids are deepfaking each other. There's all sorts of things that we need to nip in the bud very, very quickly when it comes to this. So, just clicking through that, I could go on about deepfake sextortion and all of the risks involved in that for at least an hour. So, if anyone has any questions about that, you can get in touch with me directly, but there's a lot of it happening. So, I just wanted to run through some best practice here. So, advantages of GenAI in the classroom, of course, are around personalised learning. People like me, I'm dyslexic, you know, I find it very, very difficult in some circumstances to, like, fill in forms, stuff like that. So, I've been using ChatGPT since day one. I can talk, as you can tell, for hours on topics.
So, I will often speak into dictation on my phone, then put it into ChatGPT and say, please correct the flow and the grammar for this document, and then it puts out something that's legible. So, it's not just me rambling on. But if I then went and ran that through a cheat checker, it would say 100% written by GPT, when it was actually 100% written by me and edited slightly by ChatGPT. So, that's how we can get around these personalised learning issues when it comes to cheat checkers, and a lot of the big universities around the world are literally throwing cheat checkers in the bin. I worked with the European Digital Education Network in Dublin last year, and a woman from the Russell Group of universities, which is Oxford, Cambridge, all of that, explained exactly why they're changing the way they assess: because there are so many flaws in cheat checker technology for people like me that are using it for personalised learning. Obviously, there's time and resource savings, and there's preparing students for the future.
This is fantastic, and I probably shouldn't say this, but I'm going to be very frank and I'm going to say it. These tools are fabulous for those emails that you really, really wanna send but you really can't. So, you have a squeaky parent that keeps coming at you and coming at you and coming at you, and you just wanna write something to say, maybe not so politely, go away! If you wrote that into ChatGPT, for example, and then wrote underneath, rewrite in a positive, professional and polite tone, it would rewrite that whole email into something that is sendable. OK? So, there's multiple ways that you can use this technology. Drawbacks, of course: privacy, which will continue to be an issue; overreliance; that widening of the digital divide; the potential for bias. So, hallucinations are another big thing here. This is a tool, it is not a replacement for anything at all. We need to use it as a tool, because unless you know what you are talking about, it can come up with what we call hallucinations, which means it will completely make things up.
So, these tools are trained to April 2023. Now, they're galloping ahead, so it's probably out to around September now, but they are always lagging behind; they're never completely up to date. Two US lawyers found out the hard way when they cited precedents that they'd asked ChatGPT for; those precedents didn't exist, and they were made to look like complete fools and ended up all over the US media. So, you need to know what you're talking about and use this as a tool, not as a source of information, and that's the basic thing that we need to teach students straight away.
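To make the dictate-then-tidy and email-rewrite workflows above concrete, here is a minimal sketch assuming the OpenAI Python SDK; the model name and prompt wording are illustrative assumptions, and the same idea works in the ChatGPT web interface with no code at all.

```python
# Minimal sketch of the "correct the flow and grammar" workflow described
# above. Assumes the OpenAI Python SDK (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def polish(raw_text: str, tone: str = "positive, professional and polite") -> str:
    """Ask the model to tidy flow and grammar without adding new content."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system",
             "content": f"Correct the flow and grammar of the user's text and "
                        f"rewrite it in a {tone} tone. Do not add new facts."},
            {"role": "user", "content": raw_text},
        ],
    )
    return response.choices[0].message.content

# Example: soften a blunt draft reply before sending it.
print(polish("I have answered this question three times already. Stop emailing me."))
```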
So, documenting AI tool usage: this is like a how-to from here. What we need to think about here is putting some things in place within your school to make sure that, before policy is handed down, you have a really good list of what everyone's using. So, make sure all of your staff hand a detailed list of what they're actually using to the leader, or whoever the leaders nominate, so you've got a good overview.
That self-assessment can then extend to things like: yes, I have checked the data and privacy and this is where it's held; yes, I have trialled this and this is where the hallucinations happen; yes, I understand the patterns without using a cheat checker. All of those things that we can put ourselves through as a really thorough self-assessment, and then clearly articulate the purpose behind why they need to use that particular app before it's brought on board. So, ensuring transparency and consent around privacy and data protection, and getting consent from parents and students about data collected on them by these tools, how it's used and how it's shared, is the most important thing of the whole lot to build that transparency and trust. So, promoting digital literacy and ethics across the board, as I've said multiple times, is what we need to be constantly thinking about when it comes to AI in schools, and using it as a tool to enhance human creativity rather than replace it.
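As a sketch of what that register and self-assessment might look like in practice, the snippet below records each tool against the questions listed above and writes the result to a CSV overview for leaders. The field names and the example entry are hypothetical, not drawn from any official framework.

```python
# Hypothetical AI-tool register: one record per tool, answering the
# self-assessment questions described above, saved as a CSV overview.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AIToolRecord:
    tool: str                    # name of the app or model
    used_by: str                 # staff member or team using it
    purpose: str                 # clearly articulated reason for using it
    data_stored_where: str       # answer from the privacy/data-retention check
    privacy_terms_checked: bool  # "yes, I have checked the data and privacy"
    hallucinations_noted: str    # "this is where the hallucinations happen"
    parent_consent_needed: bool  # does use require parent/student consent?

def save_register(records: list[AIToolRecord], path: str = "ai_tool_register.csv") -> None:
    """Write the register to CSV so leaders have one overview of what is in use."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AIToolRecord)])
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)

# Hypothetical example entry.
save_register([
    AIToolRecord("ChatGPT", "Year 9 English team", "drafting exemplar texts",
                 "vendor servers overseas, per terms of use", True,
                 "invents citations on niche topics", True),
])
```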
So, maintaining human oversight and that human responsibility to monitor and evaluate is another key point to this. Supporting equity and access, making sure it's accessible and inclusive for all students with disabilities and with specialist learning needs, is fabulous for kids who are dyslexic or have dysgraphia, and for English as a second language, you can translate. I've worked in schools in Western Sydney that are predominantly Arabic-speaking. I had a whole heap of Iraqi mothers turn up to one of my presentations with their sons so that their sons could translate what I was saying, and I'd already written all of my cheat sheets for them in Arabic. I've worked with Kurmanji speakers in Armidale and done the same thing. So from an English as a second language perspective, this is phenomenal and really does, you know, provide that equity and access that we need across the board. Continuous professional development and teacher expertise go a long way here. In your team, you would have some teachers that are using this way more than others.
So, that collaboration and sharing in staff sessions is really important, even if it's only 15 minutes every other week; it is somewhere that they can talk about what works, what doesn't, and how they're using certain things. So, where to from here? As I mentioned at the very beginning of this presentation, we are currently in a space that is exactly like we were with the internet in the early 90s. We didn't know cloud computing was coming, we didn't know that we'd have Uber, Netflix or anything like that. I was still fighting with my dad over, you know, whether he was watching the football or I was watching Countdown, and I never expected to be in a career like this. So, we need to continuously have one eye on what's coming next. Managing the short and the long at the same time is what's navigated me through my technical career the whole time. Look at what's happening right now, be right across all the risks with AI right now, but please, please keep one eye on where we're headed with quantum computing in particular, so we make sure that we're ahead of that technological shift ongoing.
So, this will be crucial in the future. AI tools will transform the way that you work, and there's a lot of policy that needs to be changed. Start with the small: it's like when you're paying off a whole heap of debt, you get rid of that small credit card debt and stuff like that first before you start looking at the big stuff.
Regularly assess the tools you're using for their effectiveness, share learning with your colleagues, and learn to detect the patterns that pop up in ChatGPT and things that your students are using, rather than defaulting back to those faulty cheat checkers. You can tell: it always says things like 'in the digital realm' and 'moreover', and there's a massive overuse of semicolons. Once you use it more and more often, you can absolutely see the patterns. So, familiarise yourself with all the different types of tools that are available, and think about new ways of assessing: project-based learning, engaging students in programs that require the use of generative AI and actually talking about how they're going to do that, what prompts did they use?
Citing their sources at the bottom, digital portfolios, peer review and collaboration. I've written this all out into a cheat sheet for you. Make sure that they do presentations and demonstrations so you know that they understand the learning and that they haven't just gone cut-and-paste out of whatever they're using; they're even using 'My AI' in Snapchat to do work. Interactive quizzes, reflection papers, big creative projects, and ethical debates and discussions. A lot of people, and I hate to say it because I love writing, but a lot of people around the world are calling generative AI the death of the essay. So, it's really important that we start looking at how we can potentially assess things differently in the future. (VIDEO ENDS) That's it from me for this part of the presentation. There are my contact details there, which I'm not sure if we're gonna leave on the screen or you want me to get rid of while we have the Q&A. Anam, how would you like to do this?
ANAM JAVED:
I think, Kirra, it's worthwhile leaving them on the screen, because some of our attendees might be screenshotting or vigorously noting them down, and I think we might leave them on while you answer the questions that have come from our Slido as well. So what I might do, team, is: I've got a few questions for Kirra from Slido, but I'm also going to now open up the chat box function to our attendees who are joining us live at this moment. So feel free to put down any questions in the chat. We are going to be sort of guided by how much time we have remaining. And, Kirra, whenever you're ready, I'm gonna throw you straight into the Slido questions, if that's OK.
KIRRA PENDERGAST:
OK. Yep.
ANAM JAVED:
So, Kirra, we had a chat about this prior, but we just wanted to get your broader sort of thoughts and views, and encouragement I guess, for teachers who are grappling with the concepts of cheating and plagiarism when it comes to generative artificial intelligence. What are your thoughts on that? And sort of general advice around cheating and plagiarism?
KIRRA PENDERGAST:
It's always going to be a risk. As you know, they can plagiarize off the internet, out of books, all of those things. Again, this is just another tool for that. So what it's doing is presenting different opportunities for plagiarism and cheating. But learn to detect those patterns, which become blatantly clear the more you use this technology.
I've trialled maybe 15 to 20 different forms of large language models, like ChatGPT, Gemini by Google, even, you know, social media apps like Blaze that write for you and things like that. And the same patterns come across, 'cause it's often OpenAI rebranded and rebranded and rebranded. So you're always going to see 'moreover' and 'realm' and certain words being used over and over and over. And I think getting to learn those is the best way to detect that. And get ahead of it. So saying to students, look, we understand that you're using this, but please cite it as a source, and document the questions that you asked it as prompts so that we can discuss it in the classroom and your friends can learn from it as well, I think is the best way to handle that.
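As a rough sketch of that pattern-spotting, the snippet below counts a few tell-tale words and semicolon density in a piece of text. The word list and thresholds are illustrative guesses, and as the discussion of cheat checkers above makes clear, a heuristic like this produces false positives; it is a discussion aid, not a detector.

```python
# Rough heuristic sketch: count tell-tale words ("moreover", "realm", etc.)
# and semicolon density. The word list is illustrative, not a reliable detector.
import re

TELL_TALE = {"moreover", "realm", "delve", "furthermore", "tapestry"}

def stylistic_flags(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    hits = {w: words.count(w) for w in TELL_TALE if w in words}
    sentences = max(1, len(re.split(r"[.!?]+", text)) - 1)
    return {
        "tell_tale_words": hits,
        "semicolons_per_sentence": round(text.count(";") / sentences, 2),
    }

sample = ("Moreover, in the digital realm, learners delve deeply; "
          "engagement grows; moreover, outcomes improve.")
print(stylistic_flags(sample))
# {'tell_tale_words': {'moreover': 2, 'realm': 1, 'delve': 1}, 'semicolons_per_sentence': 2.0}
```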
ANAM JAVED:
Excellent. Thank you, Kirra. We've got another sort of hands-on question from the school world at the moment, which is, you know, a really pressing concern.
And the question is around school leaders or anyone with a position of responsibility around the implementation of AI tools. What are your ideas and thoughts on creating a culture of transparency and accountability around the use of AI technologies within a school? Are there any steps teachers can take to sort of kick-start this work?
KIRRA PENDERGAST:
100%. I would suggest, number one, that you form a little task force within the school, and it's up to them to be driving the change that is rolling through the school, so that they can do different PD courses and things that are needed and be continuously disseminating that information to other staff members. And that they work with people that are experts in policy development, because at the moment, you need to have guidelines that are going to point to any policy. So, like some of the biggest schools that I work with, I've written guidelines around AI usage, on where to check the data privacy, you know, ticking all those boxes I was talking about before. Have you done the research about this? Having a team within the school that's responsible for that, so that when the government comes along and says, this is the policy, you're already 90% prepared for it, is really, really important, because we need to get ahead of this. There's a lot of lessons that we didn't learn when it came to social media-wide use, and schools ended up having to carry the brunt of that and (INAUDIBLE) school, 'cause I'm sure every single person on this call would nod that parents don't understand duty of care.
They don't understand where duty of care ends and parenting starts. And you had to pick that up as educators and look after their children's digital interactions because they keep running into school going, blah, this is happening. Blah, this has happened and you've got to fix it and all of that. Well, no you don't. So we need to get ahead of that potentially happening with AI technology use and all of the good things and the bad things that can come from it in equal measure.
ANAM JAVED:
Excellent. Thank you for that, Kirra. Getting sort of a bit more specific with the Slido questions here. It's a broad question, but also specific in that it asks, you know, what is the balance? So obviously we don't want teachers heading down the rabbit hole of just becoming completely risk averse and, you know, putting a blanket ban on the use of AI. What is your recommendation around sort of the use of generative AI apps, et cetera? You know, you talked about the pitfalls of using a free app and essentially handing your data over and being used as a product. Are you able to sort of guide us in terms of where a safe but also a creative approach lies in terms of AI use?
KIRRA PENDERGAST:
Yeah. Absolutely. Look, I think sticking with the bigger (INAUDIBLE), for want of a better term, at the moment, like the ChatGPTs, Google Gemini, if it comes through in Victoria, I'm not sure where it's at with the government at the moment, EduChat, through Microsoft, Bing, all of those, the bigger end of town. Stick with those rather than, like... you know, I love this book more than anything, 'The AI Classroom'. I don't know, can you see that? It's hard because of the background. It has a whole heap of different apps in it, but a lot of them are little. So if you're using one of those little apps, like Gamma or something, to write a PowerPoint presentation for you or something like that, just make sure that there's no student data or staff data or anything put into those, and that you are not taking the risk that it could have malware in it and attack the corporate network. So do it on a personal device that's not connected to the school network; those kinds of, you know, initial risk management, basic sort of stuff.
So it's just one layer back from taking down the corporate network if there's something dodgy there, if you know what I mean. So make sure it's very, very isolated. There's nothing wrong with experimenting and finding out what's out there; just be very, very cautious if you're doing it on the school network. I would say what (INAUDIBLE) do, what (INAUDIBLE) would do, what a lot of independent schools are doing, is blocking access to certain things for that reason alone. So then think about it: if you're using it from your mobile device, you know, does that connect to your banking? All of those things. So really checking in on the terms and conditions of use would be the first thing. Always remember, from the student side of things, (INAUDIBLE) doesn't work. So we need to think about how we teach them how to do that safer in the classroom. And I think eliminating the curiosity by using it in class as soon as possible, so they're not going off and doing things in the wild, so to speak, will make a really big, big difference to that.
ANAM JAVED:
Excellent. Thank you, Kirra. I've also shared a link to the book that you held up not long ago in the chat. Referring to the chat function, just reminding our attendees that that is open for you to use to pose questions as well. But Kirra, a question that I feel quite strongly about personally is the existing inequities in education, and you talked a lot about, you know, the benefits of using AI to address that. But there's a question in the Slido about, you know, reliance on AI potentially exacerbating socioeconomic inequities in schools that perhaps may not have widespread laptop access, or a community that is not perhaps internet savvy. Are you able to comment on that and what school leaders and teachers should be aware of?
KIRRA PENDERGAST:
100% I am. I work with schools, the smallest school I work with has nine students. The largest school I work with has 3,700. And it's a very, very wealthy American school in Hong Kong. So you can imagine I've seen every different aspect of it. What we need to be thinking about is how a school can bridge that gap and play that role (INAUDIBLE) every student equally.
Because a lot of students that I work with... this is why I've been, to date, fairly anti phone ban. I'm very pro off-and-away, but I'm very anti phone ban, because in some of the schools I've worked in, the kids have only got access to the internet through their mum or dad's phone to do work, you know, and things like that. So there's all of those things that we need to take into consideration. So when it comes to that widening of the digital divide, getting access at school to this kind of technology is really important, because it depends on where a child lives. Like, as I said before, I'm from Byron Bay. In the Byron Bay hinterland, just to drive from Byron to Lismore, my mobile phone will drop out five times.
When I work with (INAUDIBLE) masters at Kyogle High, more than 50% of the students at that school have no internet connectivity at home, so school is where they can access this kind of thing. So I think we need to have a big think about, you know, what we're doing with BYOD devices, and potentially going back to being able to issue student devices, so we have more control over what's actually on them and how they're used. It's a very, very big topic, Anam. I could literally talk about that alone for an hour from the breadth of experience I have with working with (INAUDIBLE). So there's so many different intricacies with that one. So I'm happy to take that offline if anyone wants to have a bigger discussion about it.
ANAM JAVED:
Fabulous, Kirra. Thank you for that. We've got two questions coming in, directly to me and through the chat as well. So I'll go to the one that's been privately chatted (INAUDIBLE) to me. Can you clarify a bit more, Kirra: if we have students who use apps in schools, and this is a good one, are we assuming duty of care in that situation? Which is different to social media; that sort of remains more in the scope of families allowing kids to hop onto Snapchat and Facebook. But it's a really interesting one if we're using, you know, AI in the classroom, and, you know, imagine a secondary classroom of 25 students; we can't potentially hover over every shoulder.
KIRRA PENDERGAST:
That's correct.
ANAM JAVED:
What are your thoughts on the duty of care there?
KIRRA PENDERGAST:
Oh, I have a lot of (INAUDIBLE). This is why I use the analogy of building a house to lock-up, and getting the ethics, the security and privacy side of it, and the policy guidelines right before we introduce it into the classroom. Because even though the department or the federal government have come out, you know, Jason Clare, the Federal Minister for Education, was very adamant in January that Australia will be using this in the classroom.
(INAUDIBLE) a lot of the state premiers had said, absolutely hold off until we get the policy right. Now, I had a bit of one foot in each camp, because while I think kids have got to have access to this, and teachers need to be using this so they can understand the tools, we absolutely have to nail down that policy and the guidelines per school. So when I first started working in cyber security, for example, and then when I started my first cyber safety education company way back in 2009, the first project I did, which is a good analogy for this, was for Metro South Health in Queensland, where I wrote the education policy and guidelines strategy for 12,000 staff across that department.
And then we did everything right down to induction training on how they could use social media, when they could use social media, all of those things in the context of being an employee at that (INAUDIBLE). So I think what we need to be looking at here, from a student perspective and from a school's duty of care point of view, is that you have ticked every box. You can say, well, we've taught them the ethics, we've notified the parents, we've changed the photo permission to publish forms. We've done everything that we possibly can to keep the child safe when they're using this technology at school. And if they do something outside of that, then you have (INAUDIBLE) from a due diligence point of view. This is why it's really important to get that right, above and beyond what may be coming: to start getting those guidelines in place now, getting self-assessments done so that you know everything that's being used across the school, so that if something did happen, you are 90% prepared for it until the overarching policy comes in.
It's a really important thing, and it's something that I talk about non-stop. And, you know, I'm gonna say it: with my work and all of that history in policy and stuff like that, this is where it becomes very different coming from a cyber security space compared to an ex-teacher, ex-police officer, ex-somebody teaching this stuff. You've got to have a big background knowledge of how it needs to be built from the ground up, to make sure everything is safe and secure and aligned with what may be coming and with duty of care, to keep a teacher or a school out of trouble and to keep a student or a staff member safe all at the same time. Yeah, that's the difference.
ANAM JAVED:
Brilliant, Kirra, thank you for the very in-depth response. But, you know, to further flesh out your response to that, there's another question. It's a great question, actually; we were having a discussion at the Academy about this as well. So you would know that ChatGPT, sort of in their terms and conditions, require parental consent for account creation for students aged between 13 and 18. When it comes to that parental consent, can you talk about, you know, where schools' duty of care lies? Are we expecting schools to send out a paper form for parents to fill out, and then it's a free-for-all and the school uses it, or verbal consent? What are your recommendations around that?
KIRRA PENDERGAST:
100%. Get parents to sign off. It's exactly the same as getting a permission to publish. 100% make it clear that the school will not be allowing students to use it under the age of 13. OK? If they go off and do it because it's open now and they don't actually need an account, that's like them going off and using Snapchat before they turn 13: you've got zero control over that. But again, that's a very important part to put in any guidelines that go out to the parent community, in your school charter and things like that, explaining it. And have a couple of parent nights; you know, one of the schools that I work with in Wangaratta made it part of their tech agreement that students and parents had to come and listen to me speak. Yeah? So then 600 of them turned up to one of these sessions, and the parents actually physically picked up the form so the child could get their Chromebook at that session. So you can put those sorts of things in place. You know, have one of your teachers coached by someone like me or something like that, so that you can then have these parent sessions throughout the year, so that they are informed. Then if a child goes off and uses it and they're under the age of 13, or they do something abhorrent using the technology, as a school, you've ticked all of the boxes from a due diligence, duty of care point of view. It's really important.
It was a great question. Yeah.
ANAM JAVED:
Fabulous, Kirra. You know, there's lots of learning in your response for me as well; that's really helpful. You've given us some really sort of concrete actions we can take as educators. Speaking of concrete actions, I know, Kirra, you and I talked about this earlier, but there's a question from Chris in the chat about whether you have any advice on policies. Do apps exist that can deal with, the question says, challenging students that teachers suspect are using AI to complete tasks? Is there a magic lamp that exists that can catch those submissions?
KIRRA PENDERGAST:
Yeah, look, there are, but they've got false positives all the time. Yeah? So there are things like Turnitin. You know, the bigger, well-known ones will pick up a percentage of how much has been generated by AI, for example. But then you've got to look at the inclusivity side of it, the (INAUDIBLE) side of it. And people like me, as I explained before: you know, if you've got a child who is dyslexic or something like that, and they've used one of the tools, even Grammarly, to correct grammar and stuff like that, that's AI, you know, so it's gonna pick that up and say that it was written by AI when it hasn't been. So I guess spot quizzes and things like that. If you have a year seven child who has been struggling with writing an essay and all of a sudden they literally submit something that you know is way beyond them, have really open discussions with them. Say, look, we don't mind if you used it; we just need to know that you've used it, because we need to understand that you know the topic, and hit them with a couple of questions, you know, so that you can see that they understand the topic.
Because when they get to their final year of school, a lot of that is still written exams. So if they don't (INAUDIBLE), they're gonna fail, epically. So we need to make sure that they understand it. So use it yourself to start to see the patterns that pop up: certain words, the structure of paragraphs, the flow of it. It becomes abundantly clear. I can spot it from 40,000 feet. In fact, I spot it most days of the week when a lot of people in my sector are writing stuff and posting it as LinkedIn posts. That was written by AI; that was written by AI, you know. So you can see it. You really can. You don't think you will, but literally, it'll take you a week and you'll start to detect the patterns.
ANAM JAVED:
Brilliant answer, Kirra. And, you know, you've really touched on teacher agency and knowledge of students in your response as well. Ultimately, so much of that power lies with us as educators.
And, you know, you talked about the death of the essay as well. Perhaps AI will be the final death knell for the essay in the assessment process. But there's a question from Rachel in the chat, Kirra. It's a pretty broad question, and a very important one. Can you talk about the types of classroom activities or assessments that schools could and should use with AI apps? Perhaps some best-practice examples, off the top of your head, of things teachers should be using AI to create?
KIRRA PENDERGAST:
Like from a project point of view or...
ANAM JAVED:
Yeah. Project, a task, anything for students that we could use AI to create.
KIRRA PENDERGAST:
Oh, yeah, absolutely. So, like, throw out a question about something and then ask, do you think this is true or not? How do you think it came up with that answer? What do you think it would have drawn from? Do you think it got its information from social media? Do you think it got its information from Wikipedia, or from a reliable source? Do you think it's influenced by current affairs? You know, asking it random questions. What is the bias? I did this with a class in Hong Kong last February, where I asked year 12 to have it write an ode to Agatha Christie in the tone of Barack Obama, to see what it generated. They were laughing at it, and then they're like, I'm going to write something about this in the tone of the Ninja Turtles. And they were looking at it, and you start to see the patterns, and you can describe the patterns to them, and it becomes this really interactive, curious class. So, doing things like that, where they're actually interacting with it, but it's the teacher showing them examples, and you start to see those patterns, is really, really good.
Detecting bias: in another session, we were trying to come up with different, really subtle ways that bias could come through. It was a group of year nines. So I said, watch this, and asked them for a random question to type in. This really awkward, shy year nine boy put his hand up and said, ask it: how do I ask a girl out on a date? So I put that into My AI on my Snapchat, and because I'm in a heterosexual relationship, Snapchat's immediate response was, it's always good to try new things. Being able to discuss that level of bias with kids was really interesting to them. And then they started asking heaps more questions. So it can become a really interactive, interesting classroom activity just by dissecting the answers that it's giving them.
ANAM JAVED:
Fabulous, Kirra. And, you know, adding to your answer, I know Rachel from our work together, and AI tools have given me some great podcasting templates as well, on the structure and the sorts of questions to pose. So, Rachel, you can add that to your toolkit...
KIRRA PENDERGAST:
And lesson plans.
ANAM JAVED:
And lesson plans exactly.
KIRRA PENDERGAST:
Lesson planning is fabulous. You get them done in about three minutes, if you know the content, obviously, and you check that it's linked to the right curriculum code. So you can say, write this lesson plan, align it to ACARA, write it in, you know, Japanese or whatever you want, and then boom, boom, boom, boom. You can also create your own GPT within ChatGPT and train it. It's like training your own little monster. Yeah. So you can train it over and over again. I've got one called My PA that books all of my travel.
So I just plug in that I'm going to need to be here and here and here and here, and it comes back with boom, boom, boom, boom, because I've linked it to Expedia.
So there's all sorts of things that you can do with these tools. Don't be scared of them. You're not going to break it.
ANAM JAVED:
Wonderful, Kirra. Fabulous. There's also great interaction in the chat, where Danita said they did catch an essay which had the word 'realm' in it, just as you said, a word the student wouldn't normally use. So, yeah, absolutely. Again, teachers knowing their students, knowing their content, and links to curriculum all coming in handy.
Chris has another really great question, coming from a leadership and also a comms and marketing perspective. So, Kirra, earlier you urged us to be cautious in publishing student faces and photos in school marketing content. And Chris's question is that marketing is very important in schools to maintain enrollments. What can you recommend in terms of best-practice models where we can and do use student photos and images, but also avoid, as you said, the creation of deepfakes, etc.?
KIRRA PENDERGAST:
Yeah. So, the reason I brought that up is a school that I work with in Queensland, I work with them now, I didn't then, had a group of 14-year-old girls' photos lifted off their Facebook page, deepfaked into lingerie shots, and sold all over the dark web. Something like that will absolutely throw your marketing in the bin, as you can imagine. So there's marketing and there's marketing, and the child has to be front and center. Go out to the parent community and make it very public that the reason you publish photos that look this way is to protect children in an age of deepfakes and AI technology. You can still use photos, don't get me wrong, and you can still use social media and share all of that joy that parents love to see, but make it more oriented on the task. So if it's a sports carnival, go big and wide with a whole heap of kids, so you can't actually focus in on one little face. Make sure photos are taken from the side or over the shoulder when students are actually doing a project, like an art project or something like that.
Or, as I said, if they've won a certificate, have them hold it up so it partially obscures their face, or they go, yay! and their hand crosses their face a little bit, so you can still see it's them if you're their parent. But as a school you've ticked all of the duty of care boxes to the best of your ability, and you've made it difficult for that child's face to be cropped and put onto the body of somebody else or manipulated in any way. You know, people ask me about this. I actually had to write it on my LinkedIn profile, because people are like, why in the hell would someone with a professional career like yours have a photo of yourself with a squished-up face biting an apple? I was even asked that by the head of LinkedIn Learning last week. And I said, because I'm always telling principals and C-suite level people that I work with to make sure they change their photos on LinkedIn. If you're sitting there with a passport-style corporate shot, anyone can lift it. Eating an apple is blocking my face, so no one can lift it and create an account that looks exactly like me, or duplicate my face and deepfake it in any way.
There was a man in Mexico who transferred 39 million USD to a company because he thought his boss had signed off on an acquisition. It was a deepfake of his boss. So we can't really be too paranoid here, and I think the more we can do to obscure faces, the better. People like their privacy, too, so keep constantly reminding parents that if they're taking photos on site at school, to make sure it's up close and personal of their own child, with nobody else in the background. You know, my other life is very different to this one. I've been a music photographer for 30 years. I do Bluesfest, and Splendour when it was around, and all of those in Byron, and the number of times I've been on stage taking a photo of the crowd and there are people holding their phones up like this in front of their faces, you cannot count it. Some just don't like it. So the other thing is to always ask the child as well, even if you have their parents' permission:
Do you like this photo? Are you OK with us sharing it on our social media page? It's a simple way to teach them about consent as well. That's another one I could go on about for hours.
ANAM JAVED:
Fabulous, Kirra. Thank you again for a very detailed and very helpful response. I'll also remind the audience that this is why we've kept Kirra's contact details on the screen. If you do have very specific school-related questions, please do reach out to Kirra directly. Kirra, we've got a private question that just pinged in. Chris says thank you, by the way, for your very detailed response earlier.
KIRRA PENDERGAST:
You're welcome.
ANAM JAVED:
We do have a private question pinged in that asks: in light of your response earlier about ChatGPT consent, which has really been taken on board by the crew here, would you consider it safer to use school-based emails to log students in, so @education.gov, or does that actually create more problems? Would you recommend a separation of identity? What sort of emails should we be using to sign in to these apps?
KIRRA PENDERGAST:
Look, personally, and this goes way back to my cybersecurity days, I would like to see a separate email that's also attached to the school, specifically for these kinds of products. Yeah. So the student has their .edu.gov address, and they also have, say, a Kirrachatgpt.edu.au address. Those sorts of things I think will be handled very quickly by the Victorian government, by Catholic education, by independent schools and so on. What they're doing in New South Wales and South Australia, with the deal that they've struck with Microsoft, is that it's ring-fenced. Everything educational that goes into EduChat in those states stays in those states. So the sovereignty and the security of the data have been put front and center. So in situations like that, if Victoria goes down the same path, it may not be an issue.
OK, that will be sorted. But, you know, we constantly have to remind students not to use their school email addresses for anything that is not approved by the department.
Again, I've worked with a school that faced a $20 million ransom demand because a student had used their school-based email address to sign up for an unapproved platform. So we need to make sure that's kept in check with your school's IT department, and that they're involved in any policy-making process, so they're up to date with the latest levels of cyber security to counter that kind of thing. So it's a great question, but it's one that's in a bit of a state of flux, and very dependent on whether you are a state school or Catholic education, and on that overarching policy as it comes through. It's a very good question that everyone should be asking of the cybersecurity specialists on site at their school.
ANAM JAVED:
Excellent. Thank you, Kirra. This might be our final question, but I'm going to take it because I think it's a good one. It's good learning for us all. It's a bit out of left field, but the question is about schools increasingly buying or renting virtual reality or augmented reality equipment. Does that fit into the realm of AI? Are there cyber safety concerns there, with virtual reality headsets and the like plugging into apps? What are your thoughts on that sort of equipment coming into schools as well?
KIRRA PENDERGAST:
There's a lot of that coming in. There's a lot of that kind of learning happening in STEM and the like, and again, it comes back to what the app is. Have you done the due diligence on the app? Where's the data being stored? Asking all of those kinds of questions. You know, again, I like to use a lot of personal anecdotes from where I've actually tested this stuff. I bought my profoundly autistic nephew a VR headset for his birthday. He loves gaming, he absolutely loves it. And I was getting quite concerned about him sitting and gaming for hours and hours and hours. But then my brother sent me a video the other week of Hendrix in the lounge room and walking around everywhere in the back garden, actually exercising while he was fighting off people in games. There are real, big benefits to that. There's a lot of great technology coming through that VR space, a lot of really immersive experiences with cinema: you know, they can literally lie on the floor and watch a movie projected in 3D off the ceiling.
There's all sorts of incredible stuff coming. And we've got to remember that analogy I used about preparing students for their future, not our past. Depending on what career they go into, this could be something that sparks them becoming a developer in that sector, or a learning designer, or a professional gamer, or a music producer in the gaming industry. There's a lot of great stuff that can come from it. So again, it's about that analogy of building their house to lock up: making sure they understand the risk when it comes to cyber security and cyber safety, and mitigating as much of that risk as possible. And with any of the things we've talked about today, if using it makes anyone feel uncomfortable, or they see something that makes them feel uncomfortable,
or they're a victim of sextortion or something like that, we need to have created a safe space for them to speak up, because often they won't tell their parents. And as we all know, especially with students... I learned this when I was a young mum, when my son stacked it down the stairs and split his face open.
A friend who was staying with us at the time was a paramedic, and he kept saying, sweetheart, if he's screaming, he's OK.
It's when they go silent that we need to be worried, and we need to wrap that principle around all of this. As long as we're talking about it, people have a safe space to speak about the concerns that they have.
We can fix it. It's when people don't talk and retreat that it becomes an issue. So making sure you apply that to any new application that's used in your school is a really good way to start. It'll be great for some kids. It will be terrible for kids who are anxious or might feel a bit claustrophobic. So we have to look at it from a multitude of different angles.
ANAM JAVED:
Brilliant, Kirra. Thank you for taking that question, that was really helpful. Kirra, given the very diverse array of questions you've fielded, what we might do is give you the floor again to provide some closing comments on the topic of cyber safety and AI, because it's really important that you have that final say. What is the final message that you'd like to leave us with today, in light of your wonderful presentation?
KIRRA PENDERGAST:
The importance of continuous learning, because this is not going to stop. You know, as you heard from my bio, I've been in tech for 32 years, and I deal with something new every single day. Yesterday, I was on the phone to Bloomberg Businessweek in New York for two hours, talking about a big exposé they're doing on an app called Roblox, which, you know, is a metaverse-style app. I'm dealing with massive child safety issues on that every single day of the week, and it's changing at pace. As teachers and school leaders, you would know full well that with younger generations coming through as parents, what we're starting to find now is they think they know a lot about technology because they're the first Myspace generation. So they don't show up; they go, oh yeah, I know about that. Well, no, they don't, and it changes constantly. And if someone with a career as deep as mine in technology is learning things every single day of the week, that proves nobody can afford to drop the ball on this.
We need to be continuously learning, or working with people who can continuously educate us and keep us at the front of what is happening. That's the absolute baseline. Just don't be scared of tech; just keep an eye on it. You know, even something as simple as setting up a Google Alert for 'AI in education', so anything with a headline in the news on that comes to you as a feed of what's going on. Little things like that can make a huge difference.
ANAM JAVED:
Excellent. Thank you so much, Kirra. I'm really pleased that you finished off with that call to action, particularly about us being lifelong learners, but also with a really solid reminder that, rather than waiting for AI to go away, it's important to accept that it's not going anywhere. It's been with us all along, as you mentioned, and it's only going to become more prevalent and entrenched. So we might as well take it as a learning opportunity. Really appreciate you providing us with that lens. Kirra, on behalf of the Academy, we are so, so grateful to have you presenting from overseas today and sharing your knowledge on such an important, pressing and current topic. Thank you so much for joining us. I did want to mention, Kirra, that we've got some really lovely feedback coming in the chat, privately and publicly as well.
Thank you, Kirra, that was very useful. Thank you, that was fascinating and very interesting and important. So I wanted to share that with you. And to everyone who signed in, we wanted to mention that there will be some cheat sheets from Kirra circulated after this webinar on a weekly basis.
So keep an eye on your inbox, particularly your 'other' or junk mail folder as well. We don't want you to miss out on these cheat sheets that Kirra has compiled on the different topics covered today. On that note, we've got Kirra's details in the chat, still on the screen deliberately. Please take a screenshot or just copy and paste what's in the chat. What's coming up now in the chat is a link to our remaining thought leadership series, and I'm just going to do a plug here for the Academy. We've got a session coming up next, also on AI, but through a future-of-education, student agency and creativity lens. That is going to be delivered by our international guest, Ronald Beghetto. After that, the thought leadership series continues with Tasneem Chopra OAM on cultural safety, Doctor Helen Kelly on teacher burnout, and Bruce Armstrong on leading schools through times of disruption. Some of that disruption can be caused by AI, as covered today.
So please keep tuning in to our thought leadership series; the link is in the chat. But what we're going to do now, team, is give our teachers, educators and school leaders five minutes back in their day. Thank you once again, everyone, for joining us for the Academy's thought leadership series today, this session on cyber safety and the safe use of AI in schools. Have a great evening, everyone, and Kirra, thank you again for waking up very early in the morning in Europe to join us for this webinar. Thanks, everyone.
KIRRA PENDERGAST:
Pleasure. I'll be back in Australia on Friday. I hope it's not too cold. See you later.
ANAM JAVED:
Excellent. See you later, Kirra.
KIRRA PENDERGAST:
Bye.
ANAM JAVED:
Bye, everyone.