Social media platforms that fail to deliver on promises to block sexist and racist content face substantial fines under government changes to the Online Safety Bill announced on Monday.
Under the new approach, social media sites such as Facebook and Twitter must also give users the option to avoid content that is harmful but does not constitute a criminal offence. This could include racism, misogyny, or the glorification of eating disorders.
Ofcom, the communications regulator, will have the power to impose fines of up to 10% of global turnover on companies for breaking the law. Facebook parent company Meta recorded revenue of $118bn (£99bn) last year.
However, a harmful communications offence was removed from the legislation after criticism from Tory MPs that it amounted to legislating for "hurt feelings".
Ministers removed the provision regulating "legal but harmful" material – such as offensive content that does not constitute a criminal offence – and will instead require platforms to apply their own terms and conditions.
If those terms explicitly prohibit content that falls below the criminal threshold – such as certain forms of abuse – then Ofcom will have the power to ensure platforms enforce them adequately.
In another adjustment to the bill, big tech companies must offer users a way to avoid harmful content on their platforms, even if it is legal, through methods that may include content moderation or warning screens. Examples of such content include material that is abusive or incites hatred on the basis of race, ethnicity, religion, disability, gender, gender reassignment or sexual orientation.
However, companies will not be able to remove content or ban a user unless the circumstances for doing so are clearly defined in the terms of service. Users should also be offered a right of recourse to protect against arbitrary removal of content or account bans.
The revival of the much-delayed attempt to rein in tech companies comes as Meta was fined 265 million euros on Monday for a breach of data protection law after the personal data of more than 500 million people was compromised.
The bill, which returns to parliament on December 5 after being suspended in July, also contains new provisions on the protection of children. Overall, the legislation imposes a duty of care on tech companies to protect children from harmful content, and the updated bill now includes provisions such as requiring social media companies to publish assessments of the dangers their sites present to children. Sites that impose age limits – which for most major social media sites is 13 – will need to state in their terms of service how they enforce them.
Culture secretary Michelle Donelan said an unregulated social media industry had "damaged our children for far too long". She added: "I will be bringing back to Parliament enhanced online safety laws, which will allow parents to see and act on the dangers the sites pose to young people. It is also free from any threat that tech companies or future governments might use the laws as a licence to censor legitimate opinions."
Shadow culture secretary Lucy Powell said the government had “bowed to vested interests” by removing the legal but harmful provision.
“The removal of ‘legal but harmful’ gives abusers a free pass and takes the public for a ride. This is a major weakening, not strengthening, of the bill,” she said.
“The government bowed to vested interests at the expense of user and consumer safety.”
Changes to the bill were made in the face of warnings from Tory MPs and some campaign groups that an earlier version would encourage tech companies to censor excessively and stifle free speech.
Kemi Badenoch, trade secretary and former Conservative leadership candidate, criticised the bill in July, saying: "We shouldn't be legislating for hurt feelings."
Her comments alluded to the harmful communications offence contained in the bill, which made it an offence to send a social media message intended to cause "psychological harm, amounting to at least serious distress". This has now been dropped, and the government will no longer repeal parts of two laws – the Malicious Communications Act and the Communications Act – that it was meant to replace.
Other changes to the bill include criminalising incitement to self-harm, a change introduced after the inquest into the death of 14-year-old Molly Russell, who died after viewing large amounts of harmful material on Instagram and Pinterest in 2017. Under the bill, which applies to all companies that host user-generated content, as well as search engines, tech companies must tackle illegal content such as child sexual abuse images and terrorist material.