
OFCOM Must Not Be Involved In Regulation Of Social Media

[TALKING POINT]

by Chauncey Tinker – 12 Feb 2020

Today we see news articles stating that the UK Government has decided to appoint OFCOM as the regulator for social media. We were warned this was coming in the Conservative Manifesto and in the Queen's Speech. A quote from the manifesto:

We will legislate to make the UK the safest place in the world to be online – protecting children from online abuse and harms, protecting the most vulnerable from accessing harmful content, and ensuring there is no safe space for terrorists to hide online – but at the same time defending freedom of expression and in particular recognising and defending the invaluable role of a free press.

From ITV:

My first thought was that this particular organization has a terrible track record in determining what is harmful and what is not. Quite recently they decided to take no action in the case of Jo Brand's despicable "joke", which in my opinion constituted direct incitement to violence against Nigel Farage. From the Express:

Quote:

"I’m thinking why bother with a milkshake when you could get some battery acid."

In my opinion, only "direct and credible" incitement should be prosecuted by the police. This terrible "joke" falls short of what I have termed "direct and credible" incitement, and so in my view it would have been excessive to prosecute her for it (principally because Jo Brand is a comedian and the "joke" was uttered on a program that is at least supposed to be a comedy program, i.e. it is not supposed to be taken seriously). However, I certainly think it should have been the end of Brand's career as a public "entertainer". From the Telegraph:

My greater concern, though, is that OFCOM's involvement in social media will lead to the censorship of views that need to be heard. As usually happens when such legislation is introduced, the government claims that there will be "safeguards" to protect "freedom of expression". Unfortunately, insufficient thought has been given to the vitally important question of what exactly constitutes "freedom of expression", and what exactly the limitations on it should be. Also as usual, the idea to legislate is a knee-jerk response to real-life cases that triggered a "something must be done" kind of reaction. For example, the following case has been mentioned. From the Telegraph:

In this case it seems that the girl was suffering from depression, and her father suspected that she was being exposed to images of self harm and encouraged to commit suicide by malicious people online. While I have the greatest sympathy with the parents in such cases, the hard truth is that any child could potentially be exposed to such material if they went looking for it (most don't), and so surely the more productive response is to ask why a child feels so depressed in the first place that they are drawn to such content, and to address the problem at its source.

We should always remember that "online harms" are not physical in nature, but the wording of the government response seems almost designed to make us think the opposite is true. From the UK government:

Cyberbullying has been shown to have psychological and emotional impact. In a large survey of young people who had been cyberbullied, 37% had developed depression and 26% had suicidal thoughts. These figures are higher than corresponding statistics for ‘offline’ bullying, indicating the increased potential for harm.

The particular question of keeping children "safe" online is high on the agenda here, and many people sympathize with the government's objectives in that respect. However, the risk is that in the attempt to keep children "safe", adults will be shielded from difficult realities which they really need to be thinking about, nowhere more so than in the particular case of terrorist content.

Since first joining social media platforms just a few years ago, I have seen images that I would really rather not have seen, including graphic video footage of people being murdered. Sometimes commentary which could easily be termed "radicalizing" has accompanied such footage, but I have never been even slightly tempted to commit any violent acts as a result of it. Often though, such content has caused me to think more acutely about the serious problems we face in the modern world.

One of the examples of "terrorist content" given in the "Online Harms White Paper - Initial consultation response" was video footage of an attack in a mosque that was shared online. Quote:

The tragic recent events in New Zealand show just how quickly horrific terrorist and extremist content can spread online.

Note that it seems that exactly what will result from this consultation has not been decided yet:

as set out above we intend to publish interim codes of practice on how to tackle online terrorist and Child Sexual Exploitation and Abuse (CSEA) content and activity in the coming months.
...
Terrorist propaganda and vile online child sexual abuse destroy lives and tear families and communities apart. We cannot allow these harmful behaviours and content to undermine the significant benefits that the digital revolution can offer.

Exactly what constitutes "terrorist propaganda" is obviously subjective. Consider the example of the prosecution of Marine Le Pen in France; she had been trying to alert the French people to the danger posed by Islamic State sympathizers coming into her country. From the Daily Mail:

The legal action could easily be seen as a politically motivated attempt to silence an opponent of mass immigration. Marine Le Pen was not inciting violence against anyone; if she had been, then no doubt she would have been prosecuted for that specific crime.

If it is really necessary to shield children from such content, then wouldn't it be better simply to lean on the social media companies to persuade them to limit access to over-18s, say with an age verification process? According to the Twitter Terms of Service, for example, there is currently a requirement that users must be at least 13:

Perhaps it is better in any case that children are exposed to such realities; after all, they need to be prepared for the dangers that lurk in the real world. Since the likelihood is that the problem of Islamic terrorism will increase in line with the ever-growing numbers of Muslims in the West, shouldn't the children of the West be made aware of a growing threat that will likely be even bigger by the time they reach maturity? What do you think? Please leave your thoughts in the comments section below.

No debate on this subject would be complete without considering the question of child abuse imagery. However, such imagery is already illegal, and so posting it is a criminal act covered by existing legislation. A quote from the above ITV article about OFCOM's proposed involvement in online "safety":

But the latest proposal could extend the remit of the regulator further with the Government white paper looking to regulate "illegal activity and content to behaviours which are harmful but not necessarily illegal".

Again, this sounds rather like an entirely movable goalpost of an objective that could come to include just about anything at all - "harmful but not necessarily illegal". Surely we already have the legislation we need in this area?

Another huge problem with trying to police social media is the sheer volume of content involved. OFCOM was originally given the task of protecting viewers and listeners from "harmful or offensive material", which is obviously an entirely subjective phrase. However, at that time the challenge was limited to a handful of TV channels and radio programs. Social media is a very different kind of communication medium: the content is posted by millions of individuals and is consequently highly unpredictable. From the OFCOM website:

What we do

We make sure:

viewers and listeners are protected from harmful or offensive material on TV, radio and on-demand;

When, for example, Twitter is hosting hundreds of millions of tweets per day, any attempt at moderating this volume of content effectively and objectively is completely hopeless, especially with such a subjective guideline. Currently social media platforms police themselves, as we all know in a very lopsided way. I happen to think this is entirely their right, and that it is up to those who disagree with the censorship on these platforms to create alternative platforms.

The great danger, though, is that once a government regulator starts to lean on these platforms to discourage certain content, this will in all likelihood lead to the platforms going over the top and censoring more than the government intended they should. When subjected to such pressure, and given their track record, the currently most popular platforms will more than likely increase the censorship in line with their owners' political and religious biases as well.

Thankfully there are increasing numbers of alternative platforms, including telegram.org, gab.com, medium.com, steemit.com, mewe.com and others. The idea that OFCOM will be able to regulate this growing number of competitors effectively and in a purely objective way seems even less plausible to me, particularly as many of them have been created precisely by people who are concerned about online censorship, and they allow greater freedom of expression than is currently permitted on the most popular platforms.

What's more, the scope of the legislation may not even stop at what we usually think of as social media platforms, but may extend to all online forums, perhaps even including our own forum here. I have to wonder whether the budget of OFCOM is going to grow exponentially as a result of this increased remit (government-funded organizations do have a tendency to grow). A hint at the scale of the ambition is seen in the government response:

Analysis so far suggests that fewer than 5% of UK businesses will be in scope of this regulatory framework. The ‘duty of care’ will only apply to companies that facilitate the sharing of user generated content, for example through comments, forums or video sharing. Just because a business has a social media presence, does not mean it will be in scope of the regulation. Business to business services, which provide virtual infrastructure to businesses for storing and sharing content, will not have requirements placed on them.

Fewer than 5% of UK businesses? That's still potentially an awful lot of businesses, and what about all the websites that are run on a not-for-profit basis?

Finally there is the question of penalties. What will happen when online content providers don't comply with the regulations? Another quote from the government response:

The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content

3. Services in scope of the regulation will need to ensure that illegal content is removed expeditiously and that the risk of it appearing is minimised by effective systems. Reflecting the threat to national security and the physical safety of children, companies will be required to take particularly robust action to tackle terrorist content and online child sexual exploitation and abuse.

So what will happen if they don't? A section further down titled "Enforcement" informs us that they haven't decided yet; the decision will be made in the spring.

CONCLUSION

In conclusion, at the very most I would argue that, as far as protecting children online goes, a more sensible approach would simply be to restrict social media access to adults only, using age verification, although as stated above I can see arguments even against going that far. Again, child abuse imagery is already illegal, and so I cannot see any reason for new levels of censorship that go beyond what is already against the law.

My own view, which I have expressed before, is that speech which does not constitute "direct and credible" incitement to violence should not be prosecuted, and neither should governments use their powerful influence beyond the law courts to silence mere opinions of members of the public that do not cross this threshold.

OFCOM was originally supposed to be an independent regulator, and it was supposed to be limited to regulating public broadcasters with huge reach and influence, not to "regulate" the general public, who finally have a significant voice thanks largely to social media. As far as OFCOM's existing responsibility to oversee content on television and radio goes, it seems reasonable to me that comments such as Jo Brand's incitement to throw battery acid at political figures should lead to some action, but OFCOM have shown clear political bias in that regard, and so the idea that they are an "independent" regulator is laughable in any case.

Overall, therefore, I do not want to see OFCOM involved in the regulation of social media in any way, shape or form. Instead I believe the government should focus its efforts on reforming the laws that currently limit freedom of expression; they are a convoluted mess that goes far beyond what I consider to be reasonable limits. Once that task is accomplished, it should focus on re-educating the police, according to the reformed laws, in how to police what is actually against the law, rather than creating an ever greyer area in which government tries to influence the opinions of members of the public through a state regulator that is independent in all but name. The Home Secretary surely has far more pressing matters to worry about than content posted online, such as securing our borders and countering actual acts of terrorism, which seem to be increasingly frequent at the moment.

[Footnote: Ironically my own views are being suppressed (not banned outright) on Twitter apparently because I have been labelled as a "crypto-fascist" by some half-witted far-left extremist. Consequently it is unlikely many people will get to read my above measured thoughts on the important subject of protecting freedom of expression on social media. Please link to and share our articles as widely as possible to help us communicate the uncomfortable truths of the modern world to as many people as possible.]


RELATED POST:

My proposal for where the limits should be:


ELSEWHERE ON THE WEB:

From the Press Gazette:

Some information about recent decisions made by OFCOM (note that there is abundant evidence here that this organization can't even produce a decent website):


Note – Please note that we are dependent on the veracity of news reports provided by other news sources. It is not our intention to knowingly quote “Fake News” (except where stated as satire), but we do sometimes quote the BBC when other sources are unavailable.

What do you think? Please leave a comment below.

Please feel free to share this article on social media sites.
