In the wake of the Christchurch attacks, there’s naturally been a focus on the role of social media in spreading extremism.
I think it’s good that we talk directly about the impact that social media has in events such as this. Yet, I am worried about the use of blunt technical solutions — particularly the banning of users/groups — as a way to deal with this issue.
There is a debate to have here about free speech and censorship, and it is an important one. But let’s put this aside for the moment.
One of the main issues I have with censorship-based approaches is more practical. Recently, we’ve seen social media companies become more active in banning far-right speakers and activists, culminating in particular in the banning of Alex Jones and InfoWars from a range of platforms in 2018.
There are two sides to the debate over what that achieved. There is good evidence this move severely dented the impact of InfoWars, with Jones’ reach reducing significantly. On that measure it could easily be seen as a massive success.
However, there is a flipside. In far-right circles, distrust of social media companies is becoming increasingly prevalent. Figures such as Milo Yiannopoulos frequently argue that social media companies are suppressing the right and have a liberal bias. Moves like these feed far-right figures’ distrust of large institutions, and of society as a whole, deepening feelings of social isolation and the sense that they are under siege.
With this, a range of other platforms are starting to pop up. These include the far-right Twitter alternative Gab and the no-censorship platform Voat, amongst others. Some groups have set up forums on their own websites and/or servers. For example, after being quarantined by Reddit in September last year, the subreddit r/TheRedPill (more in the space of the Manosphere) now pushes users directly to its own forum, trp.red (this already existed before the quarantine but is now more of a focus). Similar things happened when the Incel subreddit was banned in 2017.
There are a number of challenges that come with these shifts. Firstly, these spaces are far less moderated than sites such as Twitter and Facebook, and in particular face far less social, political and legal pressure than these sites to remove abuse and violent language. This means that individual instances of abuse and violent rhetoric are rarely if ever moderated out on these spaces (Facebook and Twitter also have major issues with this, but it’s not as bad).
Moreover, these spaces are focused primarily, or even entirely, on far-right discussion. Evidence suggests that online spaces such as these can accelerate radicalisation, as users work to ‘one up’ each other to be the purest and most radical person in the space (this is true across the political spectrum). With literally no outside voices able to jump in, disagree with, or challenge these views, this process can easily be sped up.
This is the trade-off. While bans from large social media platforms may reduce the overall reach of particular users and groups, they do not get rid of those ideas entirely. Instead they can push them to the fringes of the web, places that are unmoderated and more likely to lead to further extremism.
The point here is that, even if you put aside questions of free speech and the role that social media giants should play in regulating it, technological solutions are tricky. When it comes to the web, people are quick to adapt, learn and route around restrictions.
This does not mean the social media giants cannot take action. YouTube, for example, has been criticised because its recommendation algorithms are known to push people towards more extreme content the longer they stay on the site. While YouTube has said it has taken action on this, there is more that can be done here.
We are fooling ourselves if we think that social media is either the primary cause of far-right extremism or, in turn, the primary solution to it. The causes are social, economic and political, and to deal with these issues we have to deal with those realities. Technical solutions are nothing but a band-aid, one that is unlikely to stop another massacre from happening in the future.