The new Act was given Royal Assent on 26th October 2023, amidst much political trumpeting that it would make the online environment in Britain much safer, particularly for parents and children. When an Act is politically charged, its aims lofty, and the penalties for breach very severe, the laudable aims sometimes prove illusory, and history consigns it to the status of an ambitious failure that was not properly thought through. In this case, however, the Act has been planned and debated for six years, since 2017. It seeks to tackle some of the largest global corporations in the world, which have been playing the free speech defence card for too long. The result is a long and complicated Act that is difficult to understand.
Lack of Internet Regulation
One of the problems with the Internet is that it was not planned. It just emerged, arguably in 1983, as a means of communication between academic bodies in the USA. If we had known then what this new thing would turn into, a great deal of structure, rules, and security would have been put in place to stop it growing, like Topsy, almost out of control. As a parent, and a former foster parent, I have witnessed first-hand the harm that children can do to each other online. At the time, it was mainly a Facebook issue, but now there are many more dangerous and unregulated platforms for children to choose from. It must be a worrying time to be a parent.
For a long time we have needed an internet referee, and the British Government has attempted to create one, with far-reaching new powers, in the form of Ofcom. The Act is a much needed device, which tries to place an obligation on providers of user-to-user services with a substantial number of users in the United Kingdom (no matter where in the world they are based) to remove illegal content and to protect children from “legal but harmful” content. It also requires online platforms to introduce rigorous age-verification systems. Whether children will find a way round any restriction remains to be seen.
New Rights of Action?
As a lawyer who spends his life trying to give a voice to the victims of abuse, I am interested in whether the new Act gives new rights to pursue claims against social media platforms on behalf of children who have been psychologically damaged by illegal content leading to self-harm or, at worst, suicide. Distraught parents could, of course, bring an action against the parents of the cyberbully or, more likely, against an adult pretending to be a child. Personal actions are, however, fraught with difficulty because of the lack of assets to seize or, more likely, the emotional content of the allegations. Just as the employer of an abuser is liable for what the abusing employee does in a children’s home, a social media platform should be liable for illegal content, or “legal but harmful” content, that leads to catastrophic harm to a child.
The new Act is peppered with numerous “duties of care”, which leads any lawyer to believe that it is creating new rights of action for breach of statutory duty. The duties are controversially far-reaching, even obliging platforms to look behind encrypted data. This has led the likes of Meta to threaten to remove WhatsApp from the UK because the technology does not exist to look behind encryption. Ofcom has headed off the threat by promising not to intervene unless and until the technology exists. It does, however, provoke a worrying fear that even private messaging could be monitored by technology companies, with the resulting possibility of hackers stealing data and selling it to the highest bidder.
Exemptions?
Curiously, however, Section 2 confirms that the Act does not apply to email, SMS, and MMS messages, unlike services such as Facebook Messenger and WhatsApp. Why the exemption exists is unclear, and it creates a strange anomaly. The only cyberbullying case that we at ACAL (the Association of Child Abuse Lawyers) are aware of involved the grooming of a child by a teacher, who used SMS text messages to persuade her to engage in sexual abuse. Would gentle grooming amount to “legal but harmful” content? Probably. If done via standard text messaging, however, it would not be covered by the Act. Targeting the primary cause of the abuse is more morally justified but, in practice, more fraught with difficulty.
Codes of Practice
In order to examine what steps social media platforms, or more accurately “user-to-user services”, must take to police the communications and content on their sites, one has to look to the Codes of Practice. The codes are not yet in force, because Ofcom is tasked with consulting on them and agreeing their content with the platforms. So, arguably, before action could be taken against a platform under the Act, the codes must be in force and a breach of the Act must have taken place. Common sense, however, could stand in the place of formal codes where a platform has failed to take action that it plainly should have taken.
What constitutes illegal content?
So what is defined as illegal content? The Act lists it in Schedule 7. It includes:
- Assisting suicide
- Threats to kill
- Harassment, fear, or provocation of violence
- Drugs and psychoactive substances
- Firearms and other weapons
- Assisting illegal immigration
- Sexual exploitation and images
- Proceeds of crime
- Fraud
Summary
So how “safe” is the Act? Undeniably it creates more protection for children and responds to the many tragedies that the Internet has produced, such as groups promoting suicide and self-harm. It also attempts to police cyberbullying and the use of social media to recruit vulnerable young adults into terrorism. The Act does not only target social media platforms; it also creates new criminal offences which the police can use against illegal users of the internet. It has been welcomed by children’s charities, and rightly so. Whether it will avoid unintended consequences, for example the banning of perfectly legal content because of an overly strict interpretation of “legal but harmful”, remains to be seen.
Certainly Ofcom is given a new set of sharper teeth: the power to investigate, and to fine up to £18 million or 10% of turnover, whichever is greater. Whether it will have sufficient resources to fulfil its new, wider obligations will be a question of funding.
The Act is a good example of the difficulty of trying to regulate an industry that is broad and complex in its make-up. Just as deciding which human rights prevail involves a balancing exercise between one right and another, so a balance needs to be struck between the wrongs of damaging content and the right to free speech.
by Peter Garsden, Solicitor. For help on any aspect of this article, or, indeed, any aspect of abuse cases, please contact us by filling in our form.