The UK Online Safety Bill received Royal Assent in October 2023 and is now officially the Online Safety Act, one of the most significant internet laws in recent UK history. Its goal is to create a safer online environment, particularly for children and young people, by enforcing strict rules on platforms and services that host user-generated content. Enforcement is being rolled out in phases through 2024–2025 under Ofcom’s oversight.
The Core of the Online Safety Bill
The UK Online Safety Bill, first introduced in 2021 and passed in October 2023, has now become the Online Safety Act. It is widely regarded as the most ambitious UK attempt to date to regulate harmful online content and enforce stronger protections for internet users, particularly children and vulnerable groups.
The law was drafted in response to growing concerns about how unregulated digital platforms expose young people to explicit, harmful, or illegal material. By placing new duties on tech companies, the Act seeks to establish a safer, more accountable online environment, marking one of the most significant overhauls of UK internet regulation in recent decades.
Goals of the UK Online Safety Bill
The Online Safety Act builds on earlier efforts, including the Digital Economy Act 2017, which introduced online age verification requirements that were never brought into force. Unlike its predecessor, the new Act has a broader scope and stronger enforcement mechanisms, making it one of the most far-reaching internet safety laws in the UK.
Its core goals include:
- Protecting children from harmful content
A central aim is ensuring that minors cannot access harmful or age-inappropriate material, particularly pornography, which the Act designates as “primary priority content” harmful to children. To achieve this, platforms must implement rigorous age assurance and verification systems capable of reliably preventing under-18s from encountering such content.
- Reducing exposure to illegal material
Platforms are required to swiftly detect and remove content linked to terrorism, child sexual exploitation and abuse (CSEA), animal cruelty, illegal drug and weapon sales, and posts encouraging suicide or self-harm. By targeting this material, the Act seeks to prevent online activity that could cause significant harm to users, especially children and vulnerable groups.
- Imposing a duty of care on platforms
The Act introduces a new duty of care for online services, obliging them to assess and mitigate risks, put in place proactive safety measures, and publish transparent reports on how they protect users. This legal duty goes beyond voluntary codes of practice and establishes binding obligations on tech companies.
- Strengthening protections against online abuse
The Act expands the law around the non-consensual sharing of intimate images, including deepfakes and manipulated media. This provides stronger legal recourse for victims and reflects a growing recognition that such content can cause severe and lasting harm.
Together, these provisions represent a fundamental shift in responsibility: instead of leaving safety entirely to individual users or parents, the Act places accountability squarely on the platforms that host and distribute harmful content, ensuring they cannot ignore or downplay their role in online risks.
Main Provisions of the Online Safety Act
The Online Safety Act applies to a wide range of online services that allow user interaction or host user-generated content. This means its scope is not limited to the biggest social networks, but also covers messaging platforms, search engines, adult-content websites, and even smaller community-based services. The law makes clear that both UK-based and international companies serving UK users must comply.
The key provisions include:
- Mandatory age verification and assurance
Online platforms hosting adult content must implement robust age-checking systems to prevent minors from accessing material that is harmful to them. This may involve document checks, facial age estimation, or reusable digital IDs.
- Risk assessments and mitigation duties
Platforms and search engines are required to regularly assess the risks of illegal and harmful content on their services and put in place effective measures to reduce those risks. This shifts the burden of responsibility from users to the platforms themselves.
- Duty of care for harmful and illegal material
Companies must act swiftly to remove illegal content such as child sexual exploitation, terrorism, and extreme violence. They are also expected to limit access to “legal but harmful” content that could negatively affect children, such as self-harm or eating disorder promotion.
- Strengthened protection against abuse
The Act expands the law against non-consensual intimate image sharing and explicitly covers deepfake pornography, giving victims stronger legal remedies.
- Regulatory oversight by Ofcom
Ofcom is empowered to issue codes of practice, investigate non-compliance, and impose penalties. Its role goes beyond guidance: it has the authority to levy significant fines, block access to non-compliant services, and, in serious cases, pursue criminal liability for company leaders.
By combining age verification, proactive risk management, and regulatory oversight, the Act represents a comprehensive framework for online safety that significantly raises the accountability standards for digital platforms.
Timeline of the UK Online Safety Act
The Online Safety Act went through several years of development before becoming law:
- 2021 – The Online Safety Bill was introduced in Parliament, sparking wide debate about internet safety, free expression, and platform responsibility.
- October 2023 – The Bill was passed by Parliament and received Royal Assent, formally becoming the Online Safety Act 2023.
- 2024–2025 – Enforcement is being phased in gradually: Ofcom is publishing detailed codes of practice, consulting with stakeholders, and rolling out its oversight regime. Companies are given time to adapt, but significant penalties apply to those that fail to comply once each phase takes effect.
This phased approach is designed to balance urgency with practicality, giving companies time to implement new safety systems while ensuring the protections promised by the Act are not delayed indefinitely.
Enforcement and Penalties
Enforcement of the Online Safety Act is carried out by Ofcom, the UK’s communications regulator, which has been given some of the most powerful oversight responsibilities in its history. Ofcom not only sets codes of practice for companies to follow but also has authority to investigate, demand information, and take corrective action against platforms that fail to meet their obligations.
The penalties for non-compliance are severe, reflecting the UK government’s determination to see the Act enforced:
- Financial penalties – Companies can face fines of up to £18 million or 10% of their annual global turnover, whichever is higher (a short illustrative calculation follows this list). For the largest tech companies, this could amount to billions of pounds.
- Service restrictions – Ofcom has the power to block access to non-compliant platforms and services in the UK, effectively cutting them off from the British market.
- Criminal liability – In the most serious cases, senior managers and executives may face criminal charges if they repeatedly and willfully fail to comply with the law.
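To make the “whichever is higher” rule concrete, here is a minimal sketch of the cap calculation in Python; the turnover figures are invented purely for illustration:

```python
# Illustrative only: the Online Safety Act caps fines at £18 million or
# 10% of annual global turnover, whichever is higher.
FIXED_CAP_GBP = 18_000_000   # fixed statutory component of the cap
TURNOVER_SHARE = 0.10        # 10% of annual global turnover

def max_fine(annual_global_turnover_gbp: float) -> float:
    """Return the maximum fine available for a given turnover."""
    return max(FIXED_CAP_GBP, TURNOVER_SHARE * annual_global_turnover_gbp)

# A mid-sized platform: 10% of £50m is £5m, so the £18m figure applies.
print(f"£{max_fine(50_000_000):,.0f}")        # £18,000,000
# A very large platform: 10% of £100bn turnover dominates the fixed cap.
print(f"£{max_fine(100_000_000_000):,.0f}")   # £10,000,000,000
```

Note that this is only the statutory ceiling; the fine actually imposed in any given case is set by Ofcom and may be far lower.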
This tough enforcement framework is intended to push companies to treat online safety with the same seriousness as financial compliance or data protection, creating a culture of accountability across the tech sector.
Who Is Affected by the UK Online Safety Act?
The scope of the Act is deliberately broad, covering almost any digital service that allows user interaction or user-generated content. This ensures that safety protections extend beyond just mainstream social media giants and apply to smaller or niche services as well.
Entities that fall under the Act include:
- Social media platforms – Facebook, Instagram, TikTok, X (Twitter), and similar platforms with large user bases.
- Messaging apps – Services such as WhatsApp, Signal, and Telegram, especially where encryption and user safety issues intersect.
- Search engines – Google, Bing, and others must ensure harmful or illegal content is not easily accessible through their results.
- Adult-content sites – Pornographic websites are explicitly targeted for mandatory age verification measures.
- Any service with user-generated content or community features – This includes forums, review sites, gaming platforms, and even smaller community apps where users can interact or upload content.
Importantly, the Act applies to both UK-based companies and foreign platforms that provide services to UK users. This global reach ensures that international tech firms cannot avoid compliance simply because they are headquartered outside the UK.
How Age Verification Works Under the Online Safety Act
A central feature of the Online Safety Act is its requirement for companies to implement effective age verification and assurance measures. These are designed to ensure that children and young people cannot access adult content or other harmful material online. Unlike previous laws, the Act does not prescribe one single method but instead sets a duty of outcome: platforms must be able to demonstrate that their chosen system reliably prevents underage access.
The Act recognizes multiple approaches, including both traditional verification and newer age assurance technologies (a simplified flow combining two of them is sketched after this list):
- Document checks – Users may be asked to upload a government-issued ID (such as a passport or driver’s licence) alongside a biometric selfie to confirm the match. This method is highly secure and already used in many KYC (Know Your Customer) systems.
- Reusable digital IDs – Some providers offer pre-verified digital identity credentials that can be linked to accounts. These allow users to prove their age without repeatedly sharing sensitive documents across different platforms.
- Facial age estimation – AI-driven technology can estimate a user’s age range based on a selfie, without requiring an ID document. This method helps reduce friction, as most users can be onboarded quickly, with ID checks only requested if there is doubt about the result.
- Other assurance methods – In some cases, services may use payment card checks, mobile operator data, or third-party verification services as additional layers of assurance.
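To illustrate how a tiered flow might combine these methods, here is a minimal sketch assuming a hypothetical facial age estimation service with a document-check fallback. Every function name and threshold here is invented for illustration; a real service would typically call an accredited third-party age assurance provider rather than implement this itself.

```python
from dataclasses import dataclass
from typing import Optional

ADULT_AGE = 18
CONFIDENCE_BUFFER = 5  # only trust estimates comfortably above 18 (assumed policy)

@dataclass
class AgeCheckResult:
    is_adult: bool
    method: str  # which assurance method produced the decision

def estimate_age_from_selfie(selfie: bytes) -> float:
    """Hypothetical AI facial age estimation call (placeholder)."""
    raise NotImplementedError("call an age estimation provider here")

def verify_document(document: bytes, selfie: bytes) -> int:
    """Hypothetical KYC-style ID check with biometric match (placeholder)."""
    raise NotImplementedError("call a document verification provider here")

def check_age(selfie: bytes, document: Optional[bytes] = None) -> AgeCheckResult:
    estimated = estimate_age_from_selfie(selfie)
    if estimated >= ADULT_AGE + CONFIDENCE_BUFFER:
        # Clearly an adult: no document needed, minimising the data collected.
        return AgeCheckResult(is_adult=True, method="facial_estimation")
    if document is None:
        # Inconclusive estimate: escalate and ask the user for an ID document.
        return AgeCheckResult(is_adult=False, method="document_required")
    verified_age = verify_document(document, selfie)
    return AgeCheckResult(is_adult=verified_age >= ADULT_AGE, method="document_check")
```

The design mirrors the low-friction pattern described above: most users clear the fast facial estimation step, and only borderline cases are asked for documents.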
Crucially, these measures are not just about blocking access to adult websites. They are part of a broader child safety framework, intended to prevent under-18s from being exposed to pornography, graphic violence, gambling, or other harmful online environments. At the same time, regulators emphasize that verification must be implemented in a way that protects privacy and data security, with strong safeguards to prevent misuse of biometric or identity data.
By requiring robust, privacy-conscious age assurance, the Act seeks to strike a balance between protecting children online and maintaining user trust in how their personal information is handled.
Criticism and Support for the Online Safety Bill
The Bill has received robust support from child safety advocates. Organizations such as the National Society for the Prevention of Cruelty to Children (NSPCC) have hailed it as a necessary measure to protect vulnerable young users from the dangers of the internet.
However, the Bill has also sparked significant debate among technology experts and privacy advocates. Critics argue that the stringent age verification measures and content removal requirements could infringe on privacy and free speech, and there are concerns about the potential misuse of biometric data and the broader implications of such invasive verification processes. On top of this, many tech companies, especially those providing user-to-user services, argue that the Bill makes them unfairly liable for the content on their sites.
Balancing Safety and Freedom
The introduction of the Online Safety Bill represents a critical moment in the ongoing effort to create a safer online environment for children. While the Bill aims to address genuine and pressing concerns, it also raises important questions about privacy, data security, and the limits of government regulation.
As the UK moves forward with implementing this legislation, finding a balance between protecting young internet users and preserving the fundamental rights of privacy and free expression will be paramount. Continuous dialogue between policymakers, technology companies, and civil society will be essential to ensure that the Online Safety Bill achieves its intended goals without unintended consequences.
In the end, the success of the Bill will depend on its ability to adapt to the rapidly evolving digital landscape while maintaining a steadfast commitment to the safety and well-being of children in the UK.