Post 5: Role of technology in combating cyberbullying
Use of technology to monitor and prevent cyberbullying
1. Social media platforms and AI
2. Parental control apps
Educational tools and resources
Books and articles:
Workshops and trainings:
Online resources:
The potential for AI and machine learning
Much like offline abusive behaviours, online bullying, harassment and abuse continue to pose a significant problem for children and adults alike, and some evidence suggests that their prevalence increased during the COVID-19 lockdowns (Keating et al., 2020; Lobe et al., 2021; Milosevic, Laffan, et al., 2021). While parents/guardians, schools/educators and governments have an important role to play in addressing all forms of bullying, online platforms (social media, games and direct/private messaging services, among others) are also key actors in this process, and they are struggling to find ways to moderate bullying behaviours more effectively (Gillespie, 2018; Milosevic, 2018). Moderation refers to examining content reported to a platform in order to assess (a) whether it violates the platform's policy and (b) whether, as violative content, it should be removed. Abuse, harassment, cyberbullying and hate speech typically constitute breaches of platform policy.
The sheer volume of content shared on platforms makes it impossible to rely on human moderation alone (not to mention the psychological harm that moderation work can inflict on moderators, given the sensitive and emotionally heavy nature of the content itself; see Roberts, 2019). Recent years have therefore seen a steady increase in research on leveraging algorithmic techniques to help automate moderation, such as natural language processing (NLP) and machine and deep learning (hereafter artificial intelligence, or AI) (Gorwa et al., 2020; Gillespie et al., 2020; Vidgen & Derczynski, 2020). Such techniques would allow more effective triaging of content for human moderation, and they would also enable greater reliance on proactive moderation. Unlike reactive moderation, where a user reports a piece of content that is then processed automatically or investigated by human moderators, proactive moderation uses these techniques to detect instances of policy violation automatically, before any user reports them. Some large platforms, such as Facebook/Meta, Twitter and Google, already publish the percentage of cases detected and actioned before they were reported. However, it is not always clear what such actioning of content entails or how it is done, and scholars, the media and governments alike have voiced concerns about the lack of transparency and accountability of online platforms.
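The reactive-versus-proactive distinction above can be made concrete with a minimal sketch. Everything here is an illustrative assumption, not any platform's real system: the names (`score_toxicity`, `triage`, `REVIEW_THRESHOLD`) and the crude keyword lexicon are stand-ins for the trained NLP/ML classifiers that production systems actually use. The point is only the routing logic: high-confidence violations are actioned automatically, uncertain or user-reported content is triaged to human moderators, and scoring runs even when no report has been filed (the proactive path).

```python
# Toy sketch of moderation triage, assuming a hypothetical keyword-based
# scorer in place of a real NLP/ML classifier.

ABUSIVE_TERMS = {"loser", "idiot", "ugly"}  # placeholder lexicon, not a real model
REVIEW_THRESHOLD = 0.5                      # illustrative cut-off for auto-action


def score_toxicity(text: str) -> float:
    """Return a crude 0-1 'toxicity' score: the fraction of flagged words."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in ABUSIVE_TERMS)
    return hits / len(words)


def triage(text: str, reported: bool) -> str:
    """Route content; scoring runs even without a user report (proactive path)."""
    score = score_toxicity(text)
    if score >= REVIEW_THRESHOLD:
        return "auto-action"    # clear violation, actioned before any report
    if reported or score > 0.0:
        return "human-review"   # uncertain or user-reported cases go to humans
    return "no-action"
```

For example, `triage("loser idiot", reported=False)` is actioned proactively, while a borderline post such as `triage("you are a loser", reported=False)` is routed to human review rather than removed outright, which is the triaging role the text describes.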
Recent activities of the Office of the eSafety Commissioner in Australia (Australian Government, n.d.), and legislative developments in Europe such as the Online Safety Bill in the UK (Gov.UK, 2021) and the Online Safety and Media Regulation (OSMR) Bill in Ireland (Government of Ireland, 2022), promise greater scrutiny by enabling governments to examine company activity through audits and through companies' implementation of codes of conduct. Understanding companies' work will only become more important as they expand into virtual reality and robotics, and as wearables and other "smart" (internet-connected) devices, such as toys, virtual assistants and home appliances, see greater use.