Censorship online: who needs evidence?

Originally published on spiked.

The internet is made up of hardcore pornography; videos of fighting, bullying and rape; and websites that glorify extreme diets, self-harm and suicide. Or at least that’s the impression you could easily be left with after reading an alarm-ridden report just published by a UK parliamentary committee. And that means further support for the idea of controls on what we can and cannot view, all in the name of protecting children.

Harmful Content on the Internet and in Video Games, a report by the Commons Culture, Media and Sport Committee, was published last Thursday. The committee’s report draws on an earlier report for the UK government authored by popular clinical psychologist, TV pundit and presenter Dr Tanya Byron, published in March. The Byron Report concluded that ‘[C]hildren and young people need to be empowered to keep themselves safe’.

In what amounts to a child-centred approach to understanding the impact of technology on children, Byron recommended setting up yet another regulatory body, called the UK Council for Child Internet Safety. The government has already agreed to do this before the end of the year. The council’s remit will be to work with internet service providers (ISPs) and industry to place the interests of children at the forefront of how the games industry and myriad website publishers must rate, monitor and, in some cases, censor their content.

The consensus is that parents can no longer be trusted to deal with the various hurdles that our risk-averse society has created. In the absence of parental skills, bodies like the new council will help alert parents to the potential dangers when children happen to stray online without any supervision.

News of all this has caused some protest, but only amongst those who produce games and websites. The booming computer games industry argues that it has already put in place all the necessary checks and balances to regulate games. They insist their own standard, Pan European Games Information (PEGI), is good enough for the job.

On this, Byron’s report fudged the issue. She thought a combination of PEGI and the British Board of Film Classification (BBFC) ratings would do the job, arguing that the BBFC and PEGI could put their stickers on the front and back respectively of each game’s packaging. The select committee’s recommendation, on the other hand, is to extend the remit of the BBFC to include computer games.

But regulating the games industry is just one part of the select committee’s focus. They also warn that children regularly stray online unsupervised, especially to websites like YouTube and various other social networking websites. What particularly worries the committee is that these websites are full of content uploaded by all kinds of people about any subject of their choosing. And in the case of social networking sites like MySpace and Facebook, they worry that children are inadvertently putting themselves at risk by posting information about themselves online.

As a consequence, fingers are being pointed at website owners including Google (which owns YouTube) because the committee argues that they are not doing enough to protect children from a mass of inappropriate content. The committee believes big service providers like YouTube should be more proactive in reviewing material, more efficient in removing it if it is unsuitable, and better at flagging it up with a label where necessary.

The problem with websites, and the internet in general, is that they are very hard to regulate. Websites like YouTube thrive on massive amounts of content that is constantly being uploaded by thousands of people every day. Google has said that trying to regulate all of this content (some estimate as much as 10 hours’ worth is uploaded every minute) makes the task of censorship nearly impossible.

As a result, the debate around the select committee report has narrowly focused on who should be regulating whom, while completely ignoring the major assumption behind this discussion: that the internet is causing children harm. Indeed, no one seems to be challenging the misconstrued evidence about why children and their parents need help in dealing with the internet’s content.

In fact, self-regulation and censorship are already happening. The government-endorsed Internet Watch Foundation, set up in 1996, aims to ensure that all ISPs and mobile operators remove any offensive or illegal content that they might inadvertently host.

The Byron Report and the new select committee report raise the bar of internet regulation. But the central claim that the internet causes children harm is not backed up with any serious evidence. Likewise, the focus on the internet’s ‘dark side’ is also unfounded. The obsession with protecting children is opportunist and a convenient means to deflect criticism of the proposed regulation of content; critics are simply told that we must err on the side of caution. The available research offers no conclusive proof either way that the internet is doing irreparable harm to children. As the select committee admits, there is ‘still no clear evidence of a causal link between activity or behaviour portrayed on-screen and subsequent behaviour by the person who viewed it’.

There is nothing new about using the vulnerable to justify restrictions on what can be viewed, particularly those who are regarded as lacking the maturity or capacity to understand what is being shown (which has always included children and those with special needs, but would once have included women, too). What is new about the select committee report is that it uses the language of risk to bypass the need for evidence of harm or offence; this ‘you can never be too sure’ outlook will always trump the ambiguity of the research to date. The cause of protecting children conveniently makes sense when it is, as the committee says, ‘based on the probability of risk’. As the committee declares, ‘incontrovertible evidence of harm is not necessarily required in order to justify a restriction of access to certain types of content in any medium’.

Not only is the new report blasé about the lack of evidence to support its conclusion that this new media content can be harmful; the committee cannot even define what is meant by ‘harmful content’: ‘The definition of what is “harmful” is not hard and fast: for one 10-year-old, a scene will seem very real and disturbing, whereas another will be able apparently to dismiss it or treat it as fantasy…’

But while there is little evidence being presented on how and why the internet is a threat to children, once the spectre of children being at risk is raised, everyone closes ranks. Yet again, the internet provides the perfect prism through which to discuss the culpability of adults as being unfit or ill-equipped to bring up children.

We should be extremely suspicious whenever politicians, campaigners and ‘experts’ play the children card. Almost any kind of restriction can be justified if the young are supposedly at risk. Amidst all this panic, we need to draw the opposite conclusions to the select committee report and demonstrate why the internet should be left alone. While the internet still remains relatively uncensored and unregulated, it causes us to act like adults in how we deal with it, and in how we supervise others, including our children. However, if this latest set of proposals gets through, it will mean allowing the authorities to decide paternalistically what we can watch or play. In the name of protecting children, we will all be treated as children.