UK Online Safety Act

The Online Safety Act 2023 (the Act) is a new set of laws in the UK designed to protect children and adults online. It places a range of new duties on social media companies and search services, making them more responsible for user safety on their platforms.
Key Points of the Online Safety Act:
• Purpose: The Act’s primary goal is to ensure that service providers implement systems and processes to reduce the risk that their services are used for illegal activity and to take down illegal content when it appears. For children, it introduces strong protections, requiring platforms to prevent access to harmful and age-inappropriate content and to provide clear reporting mechanisms for problems. For adult users, it aims to enhance transparency about potentially harmful content and offer more control over the content they see.
• Regulator: Ofcom is designated as the independent regulator for online safety. It is responsible for setting out the steps providers must take to fulfil their safety duties through codes of practice and has a broad range of powers to assess and enforce compliance.
• Scope: The Act applies to search services and services that allow users to post content online or interact with each other. This includes social media services, cloud storage and sharing sites, video-sharing platforms, online forums, dating services, and instant messaging services. The Act applies to services even if they are based outside the UK, provided they have links to the UK, such as a significant number of UK users, targeting the UK market, or posing a material risk of significant harm to UK users.
• Proportionality: Safety duties are proportionate to factors like the risk of harm, and the size and capacity of each provider, ensuring that smaller services are not required to take the same actions as large corporations. Ofcom and providers must also consider users’ rights when fulfilling these duties.
New Obligations to Comply:
The Act introduces several key obligations for in-scope service providers, which are being implemented in phases by Ofcom:
1. Duties about Illegal Content (Now in effect):
◦ Risk Assessment: In-scope service providers were required to complete their assessments of the risk of illegal content appearing on their service by 16 March 2025.
◦ Proactive Measures: Platforms must implement measures to reduce the risk that their services are used to commit illegal offences and put in place systems for removing illegal content when it does appear. This includes taking proactive steps against “priority offences”, which cover the most serious and prevalent illegal content and activity.
◦ Removal of Other Illegal Content: Platforms must also remove any other illegal content where there is an individual victim, when they are made aware of it by users or by other means.
◦ Prevention by Design: Obligations are not just about removal; platforms need to consider their site design to reduce the likelihood of criminal activity.
◦ Types of Illegal Content: This includes content related to child sexual abuse, extreme sexual violence, extreme pornography, fraud, inciting violence, terrorism, intimate image abuse, and more.
◦ Enforcement: Ofcom has been able to enforce the illegal content regime since 17 March 2025.
2. Duties about Content Harmful to Children (Phased implementation):
◦ Age Assurance for Pornography: As of 17 January 2025, platforms publishing their own pornographic content (Part 5 services) must introduce robust age checks that meet Ofcom’s guidance to prevent children from accessing online pornography. Simply requiring a user to confirm they are not a minor is not sufficient.
◦ Children’s Access Assessment: In-scope service providers had until 16 April 2025 to carry out a children’s access assessment to determine if their service is likely to be accessed by children.
◦ Risk Assessments for Harm to Children: Services likely to be accessed by children have until 24 July 2025 to complete their children’s risk assessment, based on guidance published by Ofcom on 24 April 2025.
◦ Protection from Harmful Content: Companies likely to be accessed by children must take steps to protect them from harmful content and behaviour, even if it’s not illegal.
◦ Categories of Harmful Content for Children:
▪ Primary Priority Content (children must be prevented from accessing): Pornography; content encouraging, promoting, or providing instructions for self-harm, eating disorders, or suicide.
▪ Priority Content (children should be given age-appropriate access): Bullying, abusive or hateful content, content depicting or encouraging serious violence/injury, dangerous stunts/challenges, or encouraging ingestion/inhalation/exposure to harmful substances.
◦ Consistent Age Limit Enforcement: Social media companies must enforce their age limits consistently and specify in their terms of service the measures used to enforce these limits.
◦ Child Safety Regime Fully in Effect: The child safety regime will be fully in effect by Summer 2025.
3. Duties for Categorised Services (Larger platforms, later implementation):
◦ Categorisation: The Act created categories of service (Category 1, 2A, and 2B) with thresholds laid via secondary legislation on 16 December 2024.
◦ Register and Further Codes: Ofcom will publish a register of categorised services in Summer 2025 and consult on further codes of practice and guidance for these additional duties by early 2026.
◦ Adult User Control (for Category 1 services): These large user-to-user services will be required to offer adult users tools for greater control over the content they see and who they engage with. This includes tools to verify identity, reduce exposure to non-verified users, and prevent non-verified users from interacting with their content. They must also offer optional tools to reduce exposure to certain types of legal content, such as content that encourages suicide, self-harm, eating disorders, or is abusive/hate content.
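The user-empowerment duties above lend themselves to a simple illustration. The following Python sketch shows one way a Category 1 service might wire up the optional verification-based filters; the User and Post types, field names, and functions are assumptions made for the example and are not specified by the Act or Ofcom’s codes.

```python
# Minimal sketch (assumed data model) of the optional adult user tools:
# hide content from non-verified users and block their interactions.
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    identity_verified: bool           # outcome of the platform's identity verification tool
    hide_non_verified_content: bool   # optional filter the adult user can switch on
    block_non_verified_replies: bool  # prevent non-verified users interacting with their content

@dataclass
class Post:
    author: User
    text: str

def visible_feed(viewer: User, feed: list[Post]) -> list[Post]:
    """Drop posts by non-verified authors if the viewer has enabled that tool."""
    if not viewer.hide_non_verified_content:
        return feed
    return [post for post in feed if post.author.identity_verified]

def may_reply(commenter: User, original_author: User) -> bool:
    """Block replies from non-verified users where the author has opted in."""
    return commenter.identity_verified or not original_author.block_non_verified_replies
```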
4. Tackling Harmful Algorithms:
◦ Providers must specifically consider how algorithms could impact users’ exposure to illegal content and children’s exposure to harmful content as part of their risk assessments.
◦ They must then take steps to mitigate identified risks, considering platform design, functionalities, and algorithms (a short illustrative sketch follows this list).
◦ The Act clarifies that harm can arise from how content is disseminated, e.g., an algorithm repeatedly pushing content to a child.
◦ Categorised services will need to publish annual transparency reports, including information on the algorithms they use and their effect on users.
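As a rough illustration of the mitigation step referred to above, the sketch below filters a child account’s recommendations to exclude flagged harmful categories and to avoid repeatedly pushing the same item. The labels, threshold, and data model are assumptions for the example, not anything prescribed by the Act or Ofcom’s codes.

```python
# Hypothetical mitigation: exclude harmful categories for child accounts and
# cap how often the recommender re-surfaces the same item.
from collections import Counter

HARMFUL_TO_CHILDREN = {"pornography", "self_harm", "eating_disorder", "suicide"}

def filter_recommendations(candidates: list[dict], user_is_child: bool,
                           seen_counts: Counter, max_repeats: int = 1) -> list[dict]:
    """Return only candidates that are safe to push to this user."""
    safe = []
    for item in candidates:
        labels = set(item.get("labels", []))
        if user_is_child and labels & HARMFUL_TO_CHILDREN:
            continue  # exclude flagged categories entirely for children
        if seen_counts[item["id"]] >= max_repeats:
            continue  # avoid repeatedly pushing the same item to the user
        seen_counts[item["id"]] += 1
        safe.append(item)
    return safe
```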
New Criminal Offences: The Act introduced new criminal offences that came into effect on 31 January 2024 and apply directly to the individuals who commit them. These include encouraging or assisting serious self-harm, cyberflashing, sending false communications intended to cause non-trivial harm, threatening communications, intimate image abuse, and epilepsy trolling.
Enforcement and Penalties:
• Ofcom has powers to take action against non-compliant companies.
• Fines: Companies can be fined up to £18 million or 10 percent of their qualifying worldwide revenue, whichever is greater (a worked example follows this list).
• Criminal Action against Senior Managers: Criminal action can be taken against senior managers who fail to ensure companies follow information requests from Ofcom, or if they are at fault for a provider’s failure to comply with enforcement notices related to child safety duties or child sexual abuse and exploitation.
• Site Blocking: In extreme cases, with court agreement, Ofcom can require payment providers, advertisers, and internet service providers to stop working with a site, preventing it from generating money or being accessed from the UK.
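As a quick worked example of the fine calculation described above (the greater of £18 million and 10 percent of qualifying worldwide revenue), using made-up revenue figures:

```python
# Toy illustration of the penalty ceiling; revenue figures are invented.
def max_fine_gbp(qualifying_worldwide_revenue: float) -> float:
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue)

print(max_fine_gbp(50_000_000))     # the £18 million floor applies (10% would be £5m)
print(max_fine_gbp(2_000_000_000))  # 10% of revenue (£200m) exceeds the floor
```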
Australia Encryption Law
The Australian Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (AA Act) is a significant and polarising piece of legislation designed to address the challenges faced by law enforcement and intelligence agencies in accessing encrypted communications and information, often referred to as the ‘going dark’ problem. The Act aims to make technology companies more accountable for assisting agencies in deciphering encrypted data.
Here are the key points and new obligations to comply with under Australia’s encryption law:
Purpose and Context
• The AA Act seeks to compel technology companies to provide assistance to law enforcement and intelligence agencies to circumvent or overcome encryption.
• It is considered one of the strongest legislatively enacted responses to encryption challenges by a Western democracy since the “Crypto Wars” of the 1990s.
• While some view it as a draconian “anti-encryption” law that could enable “surveillance backdoors,” its creators assert it does not allow for the creation of decryption capabilities or backdoors.
New Obligations: The Industry Assistance Regime
The Act introduces an “Industry Assistance” regime under Part 15 of the Telecommunications Act 1997 (Cth), allowing certain agency heads to compel assistance from various actors in the telecommunications supply chain, including traditional providers, hardware manufacturers, software developers, and “over the top” (OTT) messaging applications. This assistance is facilitated through three types of instruments:
• Technical Assistance Request (TAR): A voluntary request for technical assistance.
• Technical Assistance Notice (TAN): A mandatory notice requiring a provider to give technical assistance that they are already capable of providing.
• Technical Capability Notice (TCN): A mandatory notice requiring a provider to create or implement a new technical capability to enable assistance.
These “notices” (TANs and TCNs) can only be issued for:
• Enforcement of Australian criminal law, specifically for “serious offences” punishable by a maximum of three or more years’ imprisonment.
• Assisting with the enforcement of foreign criminal law related to “serious foreign offences”.
• Safeguarding national security.
The types of assistance that can be compelled (“listed acts or things”) are extensive, including providing technical information (like source code), installing software, facilitating access to software, and assisting with testing, modification, or development of technology or capability. This means providers can be compelled to help agencies bypass their own privacy and security measures.
International Significance
• The AA Act has an extraterritorial reach, applying to providers even if they are outside Australia, provided they develop, supply, or update software “likely to be connected to a telecommunications network in Australia”.
• Notices can be issued to foreign companies for foreign offences, even without a physical presence in Australia, if they meet the definition of a “designated communications provider”.
• There is a potential for capability sharing with other jurisdictions, especially Five Eyes Alliance partners, although the Act itself does not explicitly provide for assistance with foreign intelligence efforts.
The Systemic Weakness/Vulnerability Limitation (Section 317ZG)
A core part of the Act, Section 317ZG, is designed to balance public safety with privacy and cybersecurity concerns. It prohibits agencies from requesting or requiring a provider to “implement or build a systemic weakness or systemic vulnerability” into a form of electronic protection.
• Purpose: To prevent the mandatory insertion of “encryption backdoors” or “security backdoors” on a wholesale basis, ostensibly precluding measures like “key escrow” or the US “Clipper Chip”.
• What it covers:
◦ It prohibits implementing or building a new decryption capability in relation to electronic protection. This implies any decryption capability is considered systemic.
◦ It prohibits actions that would render systemic methods of authentication or encryption less effective. This means providers cannot be forced to weaken their encryption (e.g., using shorter key lengths or weaker algorithms).
◦ It aims to protect the security of information held by any other person (i.e., non-targets), even if a weakness is selectively introduced to target technologies. This is intended to prevent a weakness created for one target from jeopardising others’ security.
• What it might not cover: The limitation primarily applies to what a provider can be forced to build or implement into their electronic protection. It may not restrain:
◦ Actions by agencies themselves (or their private contractors).
◦ The provision of information (e.g., source code) that agencies could use to create vulnerabilities without further provider cooperation.
◦ The creation of a systemic weakness if it is not “built or implemented into a form of electronic protection” directly, though subsections 317ZG(2) and (3) expand this.
◦ A weakness or vulnerability that is an inevitable component of a newly introduced capability may not automatically trigger the limitation unless it impacts other users or constitutes a systemic weakness under the Act’s definitions.
Interpretative Challenges and Criticisms
• Complexity and Ambiguity: Section 317ZG is notably confusing due to its substantive and procedural complexity, its “web of inclusions, exclusions, and exclusions to exclusions,” and last-minute amendments. This has led to scepticism about its effectiveness.
• “Class of Technology”: The undefined term “whole class of technology” is critical for determining if a weakness is “systemic.” A narrow interpretation could allow agencies to justify measures affecting many users by defining the “class” at a higher level of abstraction than the specific affected product. However, the Act’s purpose suggests an interpretation that protects innocent third parties.
• Procedural Hurdles for Providers:
◦ Challenges to notices, particularly TCNs, can involve an assessment by a technical expert and a retired judge, but their findings are not binding on the Attorney-General.
◦ The 28-day time limit for requesting an assessment can be waived in cases of urgency.
◦ While ex post judicial review is available, the Act expressly excludes review under the Administrative Decisions (Judicial Review) Act 1977 (Cth).
◦ Decisions are administrative, limiting judicial review to errors of law, not merits.
• Cybersecurity Concerns: Despite the limitations, security professionals remain concerned that the types of activities not limited by Section 317ZG could still compromise users’ overall cybersecurity; for instance, targeted exploits at “end-points” (e.g., user devices) rather than interception in transit remain possible.
• Economic Harm and Trust: The Act’s complexity has been linked to significant economic harm for the Australian digital business sector and has undermined public trust and confidence.
• Industry Response: The AA Act may incentivise providers to strengthen encryption methods to make their products more resistant to compelled assistance. Conversely, it might create a “chilling effect” where providers avoid offering products that preclude agency access to plaintext, impacting individual privacy and cybersecurity.
In essence, while Section 317ZG nominally prevents agencies from mandating wholesale encryption backdoors, its convoluted nature and the extensive powers remaining outside its direct limitation mean that cybersecurity professionals and industry stakeholders remain highly concerned about its practical implications and the potential for unintended negative consequences.
US SCREEN Act

The Shielding Children’s Retinas from Egregious Exposure on the Net Act, or SCREEN Act (S.737), was introduced in the Senate on 26 February 2025. Its primary aim is to protect minors from accessing online pornographic content by requiring certain interactive computer services to adopt and operate technology verification measures.
Key Findings and Sense of Congress leading to the Act:
• Previous legislative efforts to shield children from online pornography (like the Communications Decency Act and Child Online Protection Act) were struck down by the Supreme Court, which found them not to be the least restrictive means, despite recognising a “compelling government interest”.
• The Supreme Court had suggested “blocking and filtering software” as an alternative, but this technology has since proven ineffective: studies show it fails to block a significant number of pornography sites and that children can easily bypass it.
• Furthermore, only 39% of parents use such software for their minors, leaving 61% of children with restrictions only at school or libraries.
• Online pornography exposure is widespread among minors (estimated 80% for ages 12-17, with 54% actively seeking it), and the internet is the most common source.
• Exposure to online pornography has been linked to severe psychological effects in minors, including anxiety, addiction, low self-esteem, body image disorders, and an increase in problematic sexual activity.
• Congress reaffirms its “compelling government interest” in protecting the physical and psychological well-being of minors from “indecent” content.
• The Act asserts that requiring interactive computer services in the business of creating, hosting, or making available pornographic content to implement technological age verification is now the least restrictive means to achieve this compelling interest, given the evolution of cost-efficient and narrowly operable age verification technology.
Definitions:
• “Minor”: has the meaning given in section 2256 of title 18, United States Code.
• “Covered platform”: An interactive computer service engaged in interstate or foreign commerce, or purposefully availing itself of the U.S. market, whose regular course of trade or business is to create, host, or make available content “harmful to minors” with the objective of earning a profit. This applies regardless of whether this is their sole income or principal business.
• “Harmful to minors”: Refers to visual depictions that, taken as a whole and with respect to minors, appeal to prurient interest in nudity, sex, or excretion; depict sexual acts in a patently offensive way for minors; and lack serious literary, artistic, political, or scientific value for minors. It also includes “obscene” content or “child pornography”.
• “Technology verification measure”: Technology that uses a system or process to determine whether it is more likely than not that a user is a minor and to prevent minors from accessing any content on a covered platform.
• “Technology verification measure data”: Information collected solely for the purpose of age verification, identifying or reasonably linkable to an individual or device.
New Obligations to Comply:
Beginning 1 year after the date of enactment, a covered platform shall adopt and utilise technology verification measures to ensure:
• Users of the platform are not minors.
• Minors are prevented from accessing any content on the platform that is harmful to minors.
Requirements for Age Verification Measures:
• Platforms must use a technology verification measure to verify a user’s age.
• Requiring a user to confirm they are not a minor is NOT sufficient.
• Platforms must publicly make available the verification process they employ.
• The technology verification measure must be applied to the IP addresses (including known VPN IP addresses) of all users, unless the platform determines the user is not in the United States (see the sketch after this list).
• Platforms may choose their specific technology verification measures provided they meet the requirements and prohibit minor access.
• Platforms may contract with third parties for these measures, but this does not relieve them of their obligations or liability.
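A minimal sketch of the gating logic implied by the requirements above, assuming a geolocation lookup and a session flag set by whatever technology verification measure the platform adopts; the names, the placeholder VPN range, and the session structure are illustrative only and do not come from the bill.

```python
# Hypothetical age-gate: apply the verification measure to all apparently US
# users (including known VPN exit addresses); a self-declared checkbox is not
# treated as sufficient.
from ipaddress import ip_address, ip_network

KNOWN_VPN_RANGES = [ip_network("203.0.113.0/24")]  # placeholder documentation range

def appears_us_based(ip: str, country_lookup) -> bool:
    addr = ip_address(ip)
    if any(addr in net for net in KNOWN_VPN_RANGES):
        return True  # treat known VPN exit addresses as potentially US-based
    return country_lookup(ip) == "US"

def may_serve_restricted_content(ip: str, session: dict, country_lookup) -> bool:
    if not appears_us_based(ip, country_lookup):
        return True  # the measure is only required for users in the United States
    # A simple "I am not a minor" confirmation is expressly insufficient;
    # access requires a completed technology verification measure.
    return session.get("age_verified_by_measure", False)
```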
Technology Verification Measure Data Security:
• Covered platforms must establish, implement, and maintain reasonable data security to protect the confidentiality, integrity, and accessibility of technology verification measure data and to prevent unauthorised access to it.
• This data must be retained for no longer than reasonably necessary for verification or to demonstrate compliance.
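The retention rule above could be enforced with a routine along these lines; the 90-day window, table name, and schema are assumptions chosen for the example, since the bill only requires retention for no longer than reasonably necessary.

```python
# Hypothetical retention job for technology verification measure data.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # assumed "reasonably necessary" window

def purge_expired_verification_data(conn: sqlite3.Connection) -> int:
    """Delete verification records older than the retention window."""
    cutoff = (datetime.now(timezone.utc) - RETENTION).isoformat()
    cur = conn.execute(
        "DELETE FROM verification_records WHERE verified_at < ?", (cutoff,)
    )
    conn.commit()
    return cur.rowcount  # number of expired records removed
```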
Commission Requirements (FTC):
• The Federal Trade Commission (FTC) is required to conduct regular audits of covered platforms to ensure compliance with the age verification requirements.
• The FTC must make audit terms and processes public.
• The FTC will issue guidance to assist covered platforms in complying with these requirements no later than 180 days after enactment. However, this guidance does not confer rights or bind the Commission, and enforcement actions must allege a specific violation of the Act.
Enforcement:
• A violation of the age verification requirements is treated as an unfair or deceptive act or practice under the Federal Trade Commission Act.
• The FTC will enforce the Act using its existing powers, jurisdiction, and duties as defined by the FTC Act.
GAO Report:
• The Comptroller General of the United States must submit a report to Congress no later than 2 years after compliance is required. This report will analyse the effectiveness of the measures, compliance rates, data security, behavioural/economic/psychological/societal effects, and provide recommendations for enforcement and legislative improvements.
US Kids Online Safety Act

The Kids Online Safety Act (S.1748), introduced in the Senate on 14 May 2025, aims to protect the safety of children on the internet. The Act defines a “child” as an individual under the age of 13 and a “minor” as an individual under the age of 17.
The legislation primarily imposes new obligations on “covered platforms”, which include online platforms, online video games, messaging applications, and video streaming services that connect to the internet and are used, or are reasonably likely to be used, by a minor. There are specific exceptions, such as common carrier services, broadband internet access services, email services, certain teleconferencing/video conferencing services, wireless messaging services not linked to an online platform, non-profit organisations, educational institutions, libraries, news/sports websites, business-to-business software, VPNs, and government entities.
Here are the new key obligations for covered platforms:
• Duty of Care (Section 102):
◦ Covered platforms must exercise reasonable care in the creation and implementation of any “design feature” to prevent and mitigate reasonably foreseeable harms to minors. A “design feature” includes anything that encourages or increases frequency, time spent, or activity of minors, such as infinite scrolling, auto-play, rewards, notifications, personalised features, in-game purchases, or appearance-altering filters.
◦ Foreseeable harms include, but are not limited to, eating disorders, substance use disorders, suicidal behaviours, depressive and anxiety disorders related to compulsive usage, patterns of compulsive usage, severe physical violence or online harassment, sexual exploitation and abuse of minors, and the distribution, sale, or use of narcotic drugs, tobacco, cannabis, gambling, or alcohol. Financial harms from unfair or deceptive acts or practices are also covered.
◦ This duty of care does not require preventing minors from deliberately searching for or requesting content, or accessing resources for harm prevention. It also explicitly states that no government entity can enforce this subsection based on viewpoint of users’ speech or expression protected by the First Amendment.
• Safeguards for Minors (Section 103(a)):
◦ Platforms must provide readily accessible and easy-to-use safeguards for minors. These include:
▪ Limiting the ability of other users or visitors to communicate with the minor.
▪ Preventing others from viewing the minor’s personal data, especially restricting public access.
▪ By default, limiting design features that encourage compulsive usage, such as infinite scrolling, auto-play, rewards for time spent, and notifications.
▪ Providing control over “personalized recommendation systems,” including a prominently displayed option to opt out (while still allowing chronological content) and an option to limit types or categories of recommendations. A “personalized recommendation system” uses personal data of users to suggest or rank content.
▪ Restricting the sharing of the minor’s geolocation and providing notice about geolocation tracking.
▪ Providing an easy-to-use option for minors to limit time spent on the platform.
◦ For users the platform “knows” (actual or objectively implied knowledge) are minors, the default setting for any safeguard must be the most protective level of control over privacy and safety, unless a parent enables otherwise.
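The default-settings rule above can be pictured as follows; the setting names and values are invented for the sketch, since the Act does not prescribe a particular schema, only that known minors start at the most protective level unless a parent relaxes it.

```python
# Hypothetical most-protective defaults for accounts known to belong to minors.
DEFAULTS_FOR_KNOWN_MINORS = {
    "who_can_message": "no_one",            # limit others' ability to communicate with the minor
    "profile_visibility": "private",        # restrict public access to personal data
    "autoplay": False,                      # limit compulsive-usage design features
    "infinite_scroll": False,
    "notifications": "off",
    "personalized_recommendations": False,  # opt-out applied by default
    "share_geolocation": False,
}

def account_settings(is_known_minor: bool, parent_overrides: dict | None = None) -> dict:
    """Return the starting settings for an account."""
    if not is_known_minor:
        return {}  # adult accounts keep the platform's ordinary defaults
    settings = dict(DEFAULTS_FOR_KNOWN_MINORS)
    settings.update(parent_overrides or {})  # only a parent may relax the defaults
    return settings
```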
• Parental Tools (Section 103(b)):
◦ Platforms must provide readily accessible and easy-to-use parental tools for parents of known minors. These tools must include:
▪ The ability for parents to manage a minor’s privacy and account settings, including viewing and, for a “child” (under 13), changing and controlling these settings.
▪ The ability to restrict purchases and financial transactions by the minor.
▪ The ability to view total time spent and restrict time spent on the platform by the minor.
◦ Platforms must provide clear and conspicuous notice to a minor when parental tools are in effect.
◦ For users the platform knows are “children” (under 13), these parental tools must be enabled by default, unless the parent previously opted out of existing compliant tools.
• Reporting Mechanism (Section 103(c)):
◦ Platforms must provide an easy-to-use means for users and visitors to submit reports of harms to a minor.
◦ They must also provide a specific electronic point of contact for these matters.
◦ Platforms must confirm receipt of such reports and provide a substantive response within 10 days for large platforms (>10 million monthly US users) or 21 days for smaller platforms, or as promptly as needed for imminent threats to a minor’s safety.
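The response-time rule above reduces to a small calculation; the function below is a sketch of that rule only (platform size threshold of 10 million monthly US users, 10 or 21 days, immediate handling for imminent threats).

```python
# Sketch of the substantive-response deadline rule.
from datetime import date, timedelta

def response_deadline(received: date, monthly_us_users: int,
                      imminent_threat: bool = False) -> date:
    if imminent_threat:
        return received  # respond as promptly as needed to address the threat
    days = 10 if monthly_us_users > 10_000_000 else 21
    return received + timedelta(days=days)
```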
• Prohibition on Advertising Illegal Products (Section 103(d)):
◦ A covered platform shall not facilitate the advertising of narcotic drugs, cannabis products, tobacco products, gambling, or alcohol to an individual that the platform knows is a minor.
• Dark Patterns Prohibition (Section 103(e)):
◦ It is unlawful to design, embed, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice with respect to safeguards or parental tools.
• Disclosure (Section 104):
◦ Platforms must provide clear, conspicuous, and easy-to-understand notice of their policies and practices regarding safeguards for minors, how to access safeguards and parental tools, and information on personalized recommendation systems, prior to a known minor’s registration or purchase.
◦ For a known “child” (under 13), verifiable parental consent must be obtained for parental tools and safeguards, aligning with Children’s Online Privacy Protection Act (COPPA) requirements.
◦ Terms and conditions must clearly explain how personalized recommendation systems use minor’s personal data and options to opt out or control them.
◦ Clear labels and information, including endorsements, must be provided to minors regarding advertisements.
◦ Platforms must provide comprehensive information in a prominent location for minors and parents about their policies, practices, and how to access safeguards and parental tools. Disclosures should be available in the same languages as the product/service where practicable.
• Transparency (Section 105):
◦ Large platforms (those with more than 10 million monthly active users in the US that predominantly provide forums for user-generated content) must issue a public report at least annually, based on an independent, third-party audit.
◦ These reports must include: an assessment of minor access, commercial interests for minors, data on minor users (numbers, time spent, languages), reports received (disaggregated by language), assessment of safeguards and compliance, evaluation of safeguard efficacy, description of design features that increase use, collection/processing of personal data for recommendations, and mitigation measures taken.
◦ The audit process requires consultation with parents and youth experts, consideration of minor experiences (including reports and law enforcement information), research, industry best practices, and indicia of age beyond self-declaration. Platforms must fully cooperate with auditors.
◦ Public reports must safeguard user privacy by presenting data in a de-identified and aggregated format.
• Market Research (Section 106):
◦ Covered platforms cannot conduct market or product-focused research on a user they know is a “child” (under 13).
◦ They cannot conduct such research on any “minor” (under 17) unless they obtain verifiable parental consent.
• Filter Bubble Transparency (Title II, Section 202):
◦ Beginning one year after the Act’s enactment, it will be unlawful to operate an online platform that uses an “opaque algorithm” without complying with specific requirements.
◦ An “opaque algorithm” is defined as an algorithmic ranking system that uses user-specific data not expressly provided by the user to determine content selection or prominence.
◦ New obligations for platforms using opaque algorithms include:
▪ Providing clear and conspicuous notice that the platform uses an opaque algorithm, presented when the user first interacts with it.
▪ Including detailed notice in the terms and conditions (updated for material changes) about the algorithm’s features, how user-specific data is collected or inferred, options for users to opt out or modify their profile, and quantities the algorithm is designed to optimize.
▪ Enabling users to easily switch between the opaque algorithm and an “input-transparent algorithm”. An “input-transparent algorithm” does not use user-specific data unless expressly provided by the user for that purpose (e.g., search terms, saved preferences, followed profiles), but excludes browsing history, previous geographical locations, and inferences about the user (a short sketch of the two modes follows this list).
▪ Prohibiting differential pricing or denial of service based on a user’s choice to use an input-transparent algorithm.
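The sketch below contrasts the two modes in a toy ranking function: the input-transparent path uses only signals the user expressly provided, while the opaque path may also draw on inferred data. The profile fields and scoring are invented for the example and are not defined by the Act.

```python
# Toy ranking: "input-transparent" mode uses only user-supplied signals;
# "opaque" mode may also use inferred interests.
def input_transparent_rank(items: list[dict], expressed: dict) -> list[dict]:
    """Rank using only expressly provided data (search terms, followed accounts)."""
    followed = set(expressed.get("followed_accounts", []))
    query = expressed.get("search_terms", "").lower()
    def score(item):
        return (item["author"] in followed, query in item["title"].lower())
    return sorted(items, key=score, reverse=True)

def rank_feed(items: list[dict], profile: dict, use_opaque: bool) -> list[dict]:
    if not use_opaque:
        return input_transparent_rank(items, profile.get("expressed", {}))
    # Opaque path: may additionally use inferred interests, history, etc.
    inferred = set(profile.get("inferred_interests", []))
    return sorted(items, key=lambda i: len(inferred & set(i.get("topics", []))),
                  reverse=True)
```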
• Enforcement (Section 109):
◦ Violations of the Act are treated as unfair or deceptive acts or practices by the Federal Trade Commission (FTC), which will enforce the title with its existing powers.
◦ State attorneys general can also bring civil actions to enjoin practices, enforce compliance, or obtain damages/restitution for violations of Sections 103, 104, or 105. They must notify the FTC before filing, unless it’s not feasible. However, violations of Section 102 (duty of care) cannot form the basis of liability in actions brought by a State attorney general under State law.
Most provisions of the Act will take effect 18 months after the date of enactment.