Post 5: Role of technology in combating cyberbullying

Use of technology to monitor and prevent cyberbullying

1. Social media platforms and AI

AI-powered detection: Platforms such as Facebook, Instagram and Twitter use algorithms to identify and flag potentially harmful content, such as hate speech, threats and bullying messages.

User reporting: Users can report abusive content directly to the platform, which can then take action, such as removing the post or blocking the user.

Filtering and moderation tools: These platforms often offer tools to filter out unwanted content and limit interactions with specific users.
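As a rough illustration of how automated flagging might work (real platforms use far more sophisticated machine-learning models trained on large datasets), a minimal keyword-based filter could look like the sketch below. The word list and function name are invented for the example:

```python
import re

# Hypothetical pattern list; real systems use trained models, not fixed lists.
FLAGGED_PATTERNS = [r"\bidiot\b", r"\bloser\b", r"\bnobody likes you\b"]

def flag_message(text: str) -> bool:
    """Return True if the message matches any flagged pattern."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in FLAGGED_PATTERNS)

print(flag_message("You are such a loser"))   # True
print(flag_message("See you at practice!"))   # False
```

A fixed word list like this is easy to evade (misspellings, slang, sarcasm), which is exactly why platforms have moved toward AI-based detection.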

2. Parental control apps

Monitoring and filtering: Parental control apps allow parents to track their children's online activity, monitor messages, and filter inappropriate content.

Time limits: These apps can also set time limits for screen usage, helping to prevent excessive exposure to online harassment.
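The time-limit feature boils down to simple arithmetic on usage time. A toy sketch of how an app might enforce a daily allowance (the function names and the two-hour limit are made up for illustration):

```python
from datetime import timedelta

DAILY_LIMIT = timedelta(hours=2)  # hypothetical allowance set by a parent

def screen_time_remaining(used_today: timedelta) -> timedelta:
    """Return how much screen time is left today (never negative)."""
    return max(DAILY_LIMIT - used_today, timedelta(0))

def is_blocked(used_today: timedelta) -> bool:
    """True once the child has used up today's allowance."""
    return screen_time_remaining(used_today) == timedelta(0)
```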

Educational tools and resources

Books and articles:

"Cyberbullying: A Parent's Guide" by Michele Borba

"It's Not Your Fault: How to Help Your Child Overcome Bullying" by Carolynn Cowan

"The Parents' Guide to Cyberbullying" by Susannah Fox and Justin Reich

Workshops and training:

Local schools and community centres often offer workshops and training sessions on cyberbullying prevention.

Online courses and webinars are also available.

Online resources:

The National Bullying Prevention Center: https://www.pacer.org/BULLYING/


The potential for AI and machine learning

Much like offline abusive behaviour, online bullying, harassment and abuse continue to pose a significant problem for children and adults alike. Some evidence suggests that their prevalence increased during the COVID-19 lockdowns (Keating et al., 2020; Lobe et al., 2021; Milosevic, Laffan, et al., 2021). While parents/guardians, schools/educators and governments have an important role to play in addressing all forms of bullying, online platforms such as social media, games and direct/private messaging services are also key actors in this process, and they are struggling to find ways to moderate bullying behaviour more effectively (Gillespie, 2018; Milosevic, 2018). Moderation refers to examining content reported to online platforms in order to assess (a) whether it violates the platform's policy and (b) whether, as violative content, it should be removed. Abuse, harassment, cyberbullying and hate speech typically constitute breaches of platform policy.

The vastness of content shared on platforms makes it impossible to rely on human moderation alone (not to mention the psychological damage that moderation work can entail for moderators, given the sensitive and emotionally heavy nature of the content itself; see Roberts, 2019). Recent years have therefore seen a steady increase in research on algorithmic techniques intended to help automate moderation, such as natural language processing (NLP) and machine and deep learning (hereafter artificial intelligence or AI) (Gorwa et al., 2020; Gillespie et al., 2020; Vidgen & Derczynski, 2020). These techniques would allow more effective triaging of content for human moderation and would also enable greater reliance on proactive moderation. Unlike reactive moderation, where a user reports a piece of content that is then processed automatically or investigated by human moderators, proactive moderation uses these techniques to detect policy violations before users report them. Some large platforms, such as Facebook/Meta, Twitter and Google, already publish the percentage of such cases that were detected and actioned before being reported. However, it is not always clear what such actioning of content entails or how it is done, and scholars, the media and governments alike have voiced concerns about the lack of transparency and accountability of online platforms.
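To make the triaging idea concrete, a hypothetical moderation pipeline might route each piece of content according to a classifier's toxicity score: high-confidence violations are actioned automatically, borderline cases are queued for human review, and the rest pass through. Everything below (the thresholds, the function names and the stand-in scoring function) is invented for illustration; a real system would use a trained model:

```python
def toxicity_score(text: str) -> float:
    """Stand-in for a trained classifier: returns a score in [0, 1].
    Here we just count occurrences of a few hypothetical flagged words."""
    flagged = {"idiot", "loser", "hate"}
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(w.strip(".,!?") in flagged for w in words)
    return min(1.0, hits / len(words) * 5)

def triage(text: str, remove_at: float = 0.8, review_at: float = 0.3) -> str:
    """Route content: auto-remove, send to human review, or allow."""
    score = toxicity_score(text)
    if score >= remove_at:
        return "auto-remove"
    if score >= review_at:
        return "human-review"
    return "allow"
```

The key design point is the middle band: rather than a single yes/no threshold, uncertain cases are escalated to humans, which is how automated triaging can reduce moderator workload without fully automating removal decisions.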
Recent activities of the Office of the eSafety Commissioner in Australia (Australian Government, n.d.), along with legislative developments in Europe such as the Online Safety Bill in the UK (Gov.UK, 2021) and the Online Safety and Media Regulation Bill (OSMR) in Ireland (Government of Ireland, 2022), promise greater scrutiny by enabling governments to examine company activity through audits and company-implemented codes of conduct. Understanding companies' work will become even more important as they expand into virtual reality and robotics and make greater use of wearables and other "smart", internet-connected devices such as toys, virtual assistants and home appliances.

https://link.springer.com/article/10.1007/s10462-023-10553-w









