The government is due to table its long-awaited online security bill in parliament on Thursday.
The bill aims to tackle a wide range of harmful online content, including cyberbullying, pornography and material promoting self-harm.
Social networks could be fined or banned for not removing harmful content, and their bosses could be jailed for non-compliance.
Labour said the delays to the bill mean disinformation is on the rise in the UK.
The bill’s regulator, Ofcom, will have the power to request information from companies, and executives who fail to comply could face up to two years in prison within two months of the bill coming into force.
Senior managers would also face criminal liability if they destroyed evidence, failed to attend an Ofcom interview, provided false information or otherwise obstructed the regulator when it entered their offices.
Any business breaking the rules would face a fine of up to 10% of its revenue, while non-compliant websites could be blocked outright.
Culture Secretary Nadine Dorries said the bill meant tech firms wouldn’t have to “check their own homework.”
“Tech companies have not been held accountable when damage, abuse and criminal behavior raged on their platforms,” she said.
One of the new aspects of the bill is the introduction of a “right to complain” for people who feel their social media posts have been unfairly taken down.
Large social media companies must assess the risk of legal but harmful content that adults may encounter on their services, set out how they will manage it, and enforce those terms consistently.
Definitions of this legal but harmful content will be set out in secondary legislation, but possible examples include material promoting self-harm, eating disorders or harassment.
It has taken some time for the legislation to reach the stage where a bill is now due to be presented to Parliament.
Here’s a timeline of how it unfolded:
An Online Harms White Paper was first presented in April 2019 by the Conservative government – then led by Theresa May.
It proposed a unified regulatory framework to address a range of harms.
At its core, there was a duty of care for internet companies to combat harmful content, with an independent regulator (later known as Ofcom) set up to monitor and ensure compliance.
While children’s charities such as the NSPCC welcomed the move, others felt the term ‘harm’ was not well defined.
Privacy organizations like the Open Rights Group warned that the bill could endanger freedom of expression.
The name had changed to the Online Safety Bill when a draft version was included in the Queen’s Speech in May last year and published the following day.
Two months later, a joint committee of MPs and members of the House of Lords was set up to examine the content.
Key recommendations released in December 2021 included:
- all pornographic sites should have a duty to prevent children from accessing them
- individual users should be able to complain to an ombudsman if platforms fail to meet their obligations
- tech companies should appoint a safety controller
- scams and fraud – such as fake advertising – should be covered
- the bill should address not only content but also the “potentially harmful effects of algorithms”
The Law Commission also proposed creating a number of new criminal offenses, including:
- promoting or inciting violence against women, or based on gender or disability
- knowingly distributing seriously harmful misinformation
- cyber-flashing – sending unsolicited nude images
- epilepsy trolling – deliberately sending flashing images to people with epilepsy
There have been a number of legislative changes in recent months.
In February, the Department for Digital, Culture, Media and Sport (DCMS) announced it would expand the law to cover additional offences, including revenge porn, hate crime, fraud, the sale of illegal drugs or weapons, the promotion or facilitation of suicide, people smuggling and sexual exploitation.
Other recommendations related to cyber-flashing, prank calls, encouragement or support for self-harm, and epilepsy trolling would be considered.
A few days later, the DCMS announced that porn websites would be legally required to verify the age of their users.
And this month, the government said social media sites and search engines would be forced to stamp out investment scams and romance fraud on their platforms.
Social media platforms would also have a new legal obligation to prevent paid fraudulent advertisements from appearing on their services.
Martin Lewis, the founder of website MoneySavingExpert, whose face is often used in fake ads, said he was “grateful that the government listened to me and other campaigners” and included fraud in the legislation.
Labour’s shadow culture secretary, Lucy Powell, said the delays to the bill had “allowed the Russian regime’s disinformation to spread like wildfire online”.
She added: “Other groups have observed and learned their tactics, with Covid conspiracy theories undermining public health and climate deniers jeopardizing our future.”
One of the biggest debates has centered around online anonymity. Some argued a crackdown on the use of anonymous accounts should have been included in the bill.
Others note that legislation requires many changes but does not always provide solutions. For example, companies will be left to decide how best to comply with the new age verification rules.
Campaigners from a range of organizations including Demos, Carnegie UK and Full Fact said: “Rather than trying to ban or take down any potentially harmful content, the bill must protect freedom of expression by tackling the business models of the major tech platforms, which are designed to amplify sensational and extreme content to large numbers of people.”
Others argued that the bill is unlikely to live up to expectations.
Jim Killock, executive director of the Open Rights Group, said: “The fact that the bill is constantly changing its content after four years of debate should tell everyone it’s a mess and will likely be a bitter disappointment in practice.”