next game-changer. The concept of Artificial Intelligence (or AI) has been around for many years, but it is only recently that the general public has been able to interact and engage with AI tools in an easy and accessible way. We now have access to AI tools that can create weird and wonderful pictures, language models that can write 10,000-word essays, and chatbots that want nothing more than to talk to us and see how our day is going. To say that the possibilities are endless might be more accurate now than it has ever been before.
What’s even more fascinating is the uptake of these AI tools. As an example, ChatGPT (the AI-based language model) only launched in November 2022 and has already reached a user base of over 100 million, reportedly setting a record for the fastest-growing user base of all time. With figures like this, it is clear that AI is making considerable waves in the world!
But what has the response been? AI has already attracted its fair share of praise, awe, consideration and concern. Whenever an innovation arrives in the world, there are always questions about how it will impact the future and what it will mean for the systems and behaviours we have grown accustomed to for so long. As with all new technologies, there are concerns and considerations to take into account, so we must approach this vast technology with an open mind, embrace its potential, learn as much as we can, and, as always, work towards its responsible use with safety and security in mind.

If your school is looking to address the AI phenomenon with students, there are a few considerations worth discussing. Remember, AI is constantly evolving, and as online safety professionals in the field, we are keeping an eye on new developments as they come in. For now, though, consider the following:

AI is astoundingly impressive, but as we have seen, it isn’t perfect. If students are hoping to use these tools to create content, it is important to reiterate how essential their involvement is as humans. Many of these platforms openly state that they have limitations, which can include producing content that is harmful, inaccurate, biased or offensive. Research and knowledge are still key components; human involvement must never be lost. Encourage students to act as the authority and not to assume flawless results. Remind them that any content they publish from an AI tool will be seen by others, so they need to take responsibility for ensuring it is appropriate and safe.

Many of the behaviours associated with using AI tools are things we already teach. We educate around why we shouldn’t Google something harmful or inappropriate; the same critical thinking must apply to AI. Some restrictions may be in place to prevent harmful instructions or media, but we need to acknowledge that the technology is young and that appropriate, effective safeguards may not be in place yet. As we’ve said before, talking through online experiences with family members or trusted adults can help students navigate these spaces with more reassurance and support. If a child then witnesses or experiences an AI tool providing harmful material, they will have more confidence to speak to a trusted adult. There is also no obligation to engage with an AI tool, so ensure that students feel empowered to set appropriate boundaries and to step away if they feel uncomfortable.

While the potential seems limitless, that doesn’t mean we have to compromise our personal information in the process. Students may be talking with online chatbots or experimenting with language models, so opportunities may arise where there is a temptation to input personal information. In the same way we educate around keeping our data secure, the same principles must apply here. It is always worth considering what reason we have to hand over our personal information, and if we wouldn’t give something to a stranger, why would we give it to an AI? It is also important for users to be aware of the data protection settings within these tools, so they can understand how their data and information will be used and shared by the developers. If a student is unable to understand this information, it would be a positive step for teachers to familiarise themselves with what is involved so they can explain it to their own communities in a more approachable way.
Until we fully understand how any data we submit to a generative AI tool is used, it is important that we follow legal guidance on the use of personal data belonging to others, such as the Data Protection Act and GDPR.

Due to the realistic nature of AI-generated content, it can be easy to believe that we are interacting with a human. This has unfortunately opened up more opportunities for scamming through emails or online messages. The basics of cyber security still apply, though, and vigilance is key to spotting a potential scam. More information on how to spot and recover from an attack can be found here.

As AI continues to develop, it is useful to acknowledge its impact and start bringing these discussions into the classroom. However AI ends up altering the education sector, there is no doubt that students will continue to engage with these tools; curiosity always comes into play when something big hits the mainstream. As with most of the technology that young people access, student safety must remain a priority, so let’s support them along the journey.