Global Legislators Begin to Look at On-Device Solutions to Stop Online Harm to Children
Why Governments Are Finally Serious About Stopping Harm Before It Starts
At SafeToNet, we’ve spent years talking about how to make the digital world safer for children. Right now, we’re seeing a turning point in how governments and legislators think about online safety — especially when it comes to tackling child sexual abuse material (CSAM) and other harmful content before it ever reaches a child’s screen.
This isn’t about reacting after the fact. It’s about prevention — stopping harm from happening in the first place.
The UK: Leading With Prevention
In December 2025, Jess Phillips, the UK’s Minister for Safeguarding, made waves with the government’s Violence Against Women and Girls (VAWG) strategy. In her announcement, she stated clearly that:
“Preventing the harm from happening in the first place is key…”
She also pointed to government plans to work with tech companies on solutions that go beyond traditional content removal, including nudity-detection filters on smartphones and the expansion of technologies already being rolled out by British safety tech companies like SafeToNet.
This represents a pivotal shift. The UK government is already looking beyond platform responsibility toward device-level prevention, which aligns directly with what HarmBlock does — stopping harmful content from being seen, shared, or stored on the device itself.
That’s not just talk: it signals policy momentum heading toward concrete regulatory requirements.
In fact, the UK’s wider online safety framework — built around the Online Safety Act 2023 — already gives regulators broad powers to address harmful and illegal content online and to enforce duties of care on tech firms.
A Landmark Amendment in the House of Lords
In the UK Parliament, Lord Nash recently proposed an amendment to the Police and Crime Bill that would require smartphones sold for use by children to include technology capable of blocking the consumption, sharing, and storage of CSAM.
This moves the conversation firmly from “industry best practice” toward a regulatory requirement.
Across the Atlantic: Conversations in the U.S.
In the United States, conversations are evolving as well. While the focus has traditionally been on enforcing reporting obligations and strengthening law enforcement access to evidence, child safety advocates are increasingly pushing for proactive solutions.
For example, International Justice Mission (IJM) — a global anti-CSAM NGO — has publicly urged policymakers to adopt on-device CSAM prevention requirements across technologies. They argue that embedding safety features directly into the devices people use every day will accelerate progress toward eliminating CSAM.
At the legislative level, bipartisan efforts such as the EARN IT Act and the STOP CSAM Act aim to tighten platform accountability and strengthen the tools available to law enforcement. However, these efforts still tend to center on content reporting and investigation rather than prevention at the source.
So far, federal action in the U.S. has not mandated on-device harmful content prevention — but the policy conversation around how to protect children online is increasingly pointing in that direction.
Australia: Bold Moves, but a Different Focus
Australia has taken a bold regulatory step that has captured global attention: the Online Safety Amendment (Social Media Minimum Age) Bill 2024, which introduced a world-first law banning social media access for under-16s.
Complementing this, the existing Online Safety Act 2021 empowers the eSafety Commissioner to take action against severe online abuse, including CSAM, and to remove or restrict harmful material across services.
This shows how governments can be different yet aligned in purpose. Australia is working to reduce the entry points children have to harmful content, while the UK is beginning to demand solutions that block it from ever appearing on the device.
Both goals matter, but a key lesson is emerging: access restrictions alone aren’t enough; they need to be paired with technology that prevents harm at the point where content is created or consumed.
Europe: Balancing Safety and Rights
In the European Union, the policy environment is evolving as well. The European Parliament and Commission have been working on regulations to combat CSAM and other online abuse while striving to protect privacy and security rights.
Proposed EU rules would require platforms to conduct risk assessments and mitigation efforts without mandating deep scanning or backdoors that could undermine encryption. There is also growing experimentation with age verification and age assurance initiatives across EU member states under the Digital Services Act (DSA).
This underscores a central theme of global digital policy today: how do we protect children without eroding fundamental rights? Smart, on-device prevention technology offers a compelling answer — precisely because it can operate without compromising encryption or privacy.
Why On-Device Safeguarding Matters Most
Across these global initiatives, one thing is clear: more regulation is coming. But if legislation is to truly stop harmful content from existing in the first place — rather than merely removing it after the damage is done — policymakers must anchor their solutions in technology that prevents harm before it reaches social feeds, chats, or storage folders.
That’s where on-device safeguarding tools truly shine:
They stop harmful content before it ever reaches a child’s eyes, unlike reactive moderation systems.
They respect privacy by keeping detection on the device, not in the cloud.
They complement legislative goals in the UK, U.S., Australia, and beyond by delivering real, measurable protection.
The message is simple but powerful: keeping children safe online means building safety into the devices they use — not just the platforms they visit.