AI and the Evolution of Cyberbullying in Schools

 

by Maya Kareer | March 20, 2025

 
Reviewed for accuracy by Jen Sobieski, MSEd


[Image: Outline of a human head with letters exiting its mouth, implying speech.]

Today, Generative AI is right at our fingertips, inspiring the same sense of wonder the internet once did. With this new technology, however, comes an even greater responsibility for both students and teachers to stay as informed as possible.

 

How Can Generative AI Be Harmful?

 

Generative AI is a tool that uses computer algorithms to synthesize pictures, videos, writing, and other forms of media by sifting through and analyzing large amounts of data. Once generated, these outputs are considered unique, even when they closely resemble earlier works.

 

While these Generative AI tools can be very useful for innovation, new conversations, and new ideas, they are like any other form of technology: only as good as the people using them.

 

Although Generative AI is still in its early stages, it grows more powerful and more knowledgeable about our world every day, and it can become yet another online platform for hate speech, harassment, privacy violations, and other criminal activity.

 

One of the risks of social media, and of putting public or private details of your life on the internet, is that you cannot control where they end up. The same problem exists with Generative AI, but on a much larger scale. Cyberbullies in and out of the classroom can use a victim's social media, personal information, or anything else they gather online to instantly create new media, including harmful messages. This makes bullying faster and far more personal.

 

Bullies can easily create fake online identities or accounts to deceive people for amusement, monetary gain, or other harmful reasons. Chatbots and other conversational agents can make even young children feel as though they are talking to a real person online, disrupting their relationships with others. With all of Generative AI's new uses and offshoots, staying informed is the best way to help students protect themselves and to protect school communities overall.

 

Problems for Schools

 

[Image: Students in a futuristic classroom looking at a depiction of the globe.]

Generative AI has posed, and will continue to pose, serious threats, bringing school bullying to new and frightening levels. For one thing, the direct and indirect harms of AI are still very hard to measure. Because deepfakes, AI-generated media in which a person is digitally altered to appear as someone else, so easily go undetected, bystanders may not even know that bullying has occurred. The bully can then get away scot-free while permanently tarnishing an innocent child's reputation with fabricated information.

 

This makes it all the more important for teachers, educators, and administrators to stay alert to new or suspicious content that may pop up on school websites and other platforms, and to fact-check it constantly against unbiased sources. Even then, vigilance alone cannot confirm that deepfakes or other harmful content have been fully deleted from other students' phones or from the internet as a whole.

 

Additionally, educators only know as much about their students as schools or the students themselves share. It can therefore be hard to grasp the potentially devastating effects when students are targeted by people who manipulate their personal information in harmful ways that never touch the school itself.

 

With this in mind, solving AI-driven cyberbullying is not just about building better software for a generation; it is also about building a stronger culture around the data that software learns from. AI is a societal issue, and the approaches we use to address it matter as much as the results we hope for. There is no single easy fix.

 

How to Help  

 

That said, there are still practical steps educators can take to minimize the effects of AI's drawbacks. Many mainstream AI tools attempt to block the creation of offensive or inappropriate content, but other sites may still allow such material if people go looking for it. This is a challenge for everyone involved, but staying as alert as possible to potential harm and reporting problematic content to the original platform can help.

 

These challenges have left many parents feeling helpless and have even raised questions about how to redesign the classroom. As with any brand-new technology, teachers should stay vigilant, rely on what they know of their students, and use common sense in the classroom. Building trust with students goes a long way toward making sure AI is used appropriately, and AI can be a strong educational tool when digital literacy is practiced. Here are some of the best ways to support digital literacy in the classroom:

 

  • Use it for critical thinking: discuss the limitations of generated material and how to use AI for creative problem solving

 

  • Use it for ethics: talk with your students about privacy concerns and appropriate ways of using public and personal information that may or may not be theirs for the taking

 

  • Use it for research: part of the benefit of Generative AI is how many sources it can draw on almost instantly. It is important to teach your students how to properly converse with AI as a learning tool

 

After implementing these digital literacy steps in the classroom, students will not only better understand how this technology works, but also gain a clearer sense of the future of learning and of technology's interaction with the world.

 

We can also learn, to an extent, how human intelligence parallels AI. Much as the human brain shows plasticity, we can help AI improve its learning systems over time to reflect the multi-faceted nature of information processing. The biases present in AI-generated content can even be used to teach students about subjectivity versus objectivity and how to become stronger decision makers. With the growth mindset this technology demands of us, AI can help build better leaders from the classroom to the workplace and throughout the rest of students' lives.

 

How Frenalytics Can Use AI to Stop Cyberbullying

[Image: A person's outline emerging from a computer screen, surrounded by thought bubbles.]

Frenalytics’ user-centered software allows students to personalize their education to meet their individual needs, and it uses students’ past feedback and experiences to highlight the best future content for their learning. AI can strengthen this approach through improved translation, text-to-speech, and other key features that make education more inclusive for all users.

 

To combat cyberbullying head-on, Frenalytics can pair AI with its educational software to track the student performance and experiences that may shape how students use technology. AI can give educators a unique form of monitoring, helping them gauge how to teach a class or where their students may need more knowledge, expertise, or support in digital literacy.

 

Engage Your Mind with Frenalytics™: sign up for a free trial, or call (516) 399-7170 to learn more.

 

About Frenalytics

FrenalyticsEDU is an award-winning, automated, real-time progress monitoring and personalized learning platform designed specifically for ELL students and students with IEPs and 504s.

Our program provides standards-aligned resources across English Language Arts (ELA), Math, Reading, SEL, and life skills in a gamified, user-friendly environment.

 

Want to see how Frenalytics helps patients and students live more independent lives?
Click here to create a free account, or give us a call at (516) 399-7170.

 

Try Frenalytics for free!

 

About The Author

 

Maya Kareer is a sophomore at Boston College studying Applied Psychology, Data Science, and Management. While working with Frenalytics, Maya has been able to further research how AI can both help and hinder educational progress in all aspects of the classroom. Through her own writing, she hopes to use her research to increase AI awareness among educators, while explaining how Frenalytics has already aimed to achieve this goal. Maya also loves listening to music, playing the piano, and baking with family and friends.

 

Contact Maya: maya@frenalytics.com
