By Nadia Zaifulizan
As technology and connectivity advance, more people are aware of the need to set boundaries around acceptable online behaviour. Internet culture rewards popular content regardless of its ethics or morality, and this has sparked social catastrophes in which online behaviour spilled over into cruel, real-life outcomes. Now social media platforms are taking steps to define acceptable online behaviour. While some attempts are reportedly slow-moving and inadequate, others offer better, fresher alternatives for setting these necessary standards. We look at some of the attempts made by these platforms to gauge how much has changed since the rise of popular internet culture.
Facebook
Facebook has struggled with content moderation for years, with data privacy issues on the side.
Facebook has 2.41 billion users. Each week, more than 10 million pieces of user-generated content worldwide require review and moderation. Moderation is a huge ongoing burden for Facebook because of the difficulty of making decisions that serve the best interests of such a diverse worldwide user base.
The standards it set as guidelines for content moderation cover topics such as violence and criminal behaviour, safety, objectionable content, integrity and authenticity, and respect for intellectual property. The effort is immense, but improvements to the standards have come more slowly than the pace at which content proliferates online. In 2016, there were allegedly 1,340 “credible terrorist threat escalations” on Facebook awaiting moderation, and those were only the ones reported. Terrorism, hate speech, disinformation, and shared footage of crimes are rampant in the platform’s user-generated content. This year, the Christchurch massacre was live-streamed on Facebook, broadcasting a shooting and killing spree to the world to promote the shooter’s propaganda. People are on Facebook not only to connect, but also to gain popularity and mass followings, which in many cases are then used to influence perceptions, actions, or outcomes.
Recently, Facebook has been exploring hiding Like counts. This can be read as an attempt to shift the emphasis from popularity, as encouraged by the number of Likes, towards the value of the content itself. Facebook is also introducing new rules for political advertisements to ensure transparency, and strengthening platform security ahead of upcoming elections.
Twitter
Twitter has 139 million daily active users, and 38% of abusive content is flagged by automated technology rather than by user reports. Last July, Twitter announced that it would rely on human moderators to determine whether language dehumanizes others on the basis of religion. This is a step towards acknowledging that automated technology still needs further training to grasp the context and sentiment behind a message or post, whereas humans bring years of context and understanding to gauging the same words.
Twitter has its own AI that searches out derogatory terms, but flagged posts are then reviewed and moderated by a human. Twitter has also been experimenting with features that filter abusive messages, and exploring a function to hide abusive replies.
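To make that division of labour concrete, here is a minimal Python sketch of such a two-stage pipeline: an automated pass flags posts containing terms from a watchlist, and a human moderator makes the final call. All the names here (DEROGATORY_TERMS, ReviewQueue, scan) are hypothetical placeholders, and real systems use trained language models rather than simple keyword matching.

```python
# Minimal sketch: automated flagging feeding a human review queue.
# Names are hypothetical; this is not Twitter's actual system.
import re
from collections import deque
from dataclasses import dataclass, field

DEROGATORY_TERMS = {"slur1", "slur2"}  # placeholder watchlist for illustration

@dataclass
class Post:
    post_id: int
    text: str

@dataclass
class ReviewQueue:
    """Holds automatically flagged posts until a human moderator rules on them."""
    pending: deque = field(default_factory=deque)

    def enqueue(self, post, matched):
        self.pending.append((post, matched))

    def review_next(self, decide):
        # `decide` stands in for the human moderator's judgment: True = remove.
        post, matched = self.pending.popleft()
        if decide(post, matched):
            print(f"Post {post.post_id} removed (matched: {matched})")
        else:
            print(f"Post {post.post_id} kept after human review")

def scan(post, queue):
    """Automated pass: flag posts containing watchlisted terms for human review."""
    words = re.findall(r"[a-z']+", post.text.lower())
    matched = [w for w in words if w in DEROGATORY_TERMS]
    if matched:
        queue.enqueue(post, matched)

queue = ReviewQueue()
scan(Post(1, "an innocuous post"), queue)
scan(Post(2, "a post containing slur1"), queue)
while queue.pending:
    queue.review_next(lambda post, matched: True)  # stand-in for a human decision
```

The point of the structure is that automation only narrows the haystack; the removal decision itself stays with a person who has the context the model lacks.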
Alternatively, some third-party volunteers created the Twitter Blocklist, a shared list of accounts they deemed harassers. Built by users for other users, it lets anyone block the listed accounts to protect their personal Twitter space from harassment.
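Applying a shared blocklist is straightforward in practice. Below is a hedged sketch using the tweepy library (v3.x) against Twitter's v1.1 blocks endpoint; the credentials and the blocklist.txt file (one screen name per line) are stand-ins for illustration, not part of any official tool.

```python
# Sketch: import a community-maintained blocklist, assuming tweepy v3.x.
import tweepy

# Placeholder credentials; a real script would load these from config.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Hypothetical file: one screen name per line.
with open("blocklist.txt") as f:
    screen_names = [line.strip() for line in f if line.strip()]

for name in screen_names:
    try:
        api.create_block(screen_name=name)  # block the account for this user
        print(f"Blocked @{name}")
    except tweepy.TweepError as err:
        print(f"Could not block @{name}: {err}")
```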
YouTube
YouTube’s Harassment and Cyberbullying Policy addresses content or behavior intended to maliciously harass, threaten, or bully people. It applies to content posted by users and includes content that incites others to harass or threaten individuals on or off YouTube. YouTube implements a three-strikes system: three strikes within the same 90-day period result in permanent removal of the user’s YouTube channel.
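As a rough illustration of that rolling 90-day window, here is a minimal Python sketch of a strike counter; the Channel class and its behaviour are hypothetical, not YouTube's actual implementation.

```python
# Sketch of the three-strikes rule: a channel is terminated once it
# accrues 3 strikes within the same 90-day window. Hypothetical code.
from datetime import datetime, timedelta

STRIKE_LIMIT = 3
WINDOW = timedelta(days=90)

class Channel:
    def __init__(self, name):
        self.name = name
        self.strikes = []       # timestamps of active strikes
        self.terminated = False

    def add_strike(self, when):
        if self.terminated:
            return
        # Keep only strikes that still fall inside the rolling 90-day window.
        self.strikes = [t for t in self.strikes if when - t <= WINDOW]
        self.strikes.append(when)
        if len(self.strikes) >= STRIKE_LIMIT:
            self.terminated = True
            print(f"{self.name}: 3rd strike within 90 days, channel removed")

channel = Channel("example_channel")
channel.add_strike(datetime(2019, 1, 10))
channel.add_strike(datetime(2019, 2, 20))
channel.add_strike(datetime(2019, 3, 15))  # third strike within 90 days
```

The key design point is that strikes expire out of the window rather than accumulating forever, so only sustained violations within 90 days trigger removal.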
Although YouTube’s policy clearly spells out its anti-bullying and anti-harassment rules, its enforcement has drawn much criticism. Alex Jones’s YouTube channel gained traction and a massive number of subscribers and viewers before it was finally removed last year, after many families and civilians became targets of bullying and harassment driven by the hoaxes disseminated via his InfoWars videos and other online content. The damage was traumatizing.
What This Means
Social media platforms are continuously exposed to the risks of user abuse and manipulation. Setting standards and rigorously enforcing them in content moderation is crucial to ensuring that users are protected, public safety is preserved, and society becomes more aware of what acceptable online conduct looks like. Many platforms are still making major improvements to their content moderation, and while we look forward to those improvements, we can also do our part by raising awareness of these issues.
For more society-focused tech updates, subscribe to our newsletter and follow us on Twitter!