SAFER framework for digital brand safety
The SAFER framework and toolkit (member-only content) provide essential guidance for government communicators to ensure digital brand safety when placing paid advertising and organic content across online platforms.
Introduction
Government communications engage millions of people every year in vital campaigns, including public health messages during pandemics or flooding, recruitment for public services, and voter registration information.
We are committed to upholding the highest standards of safety and suitability when delivering government communications. This means ensuring our messages appear only in digital environments that are free from harmful, illegal or inappropriate content, that protect individuals and minority groups from harm, and that align with Civil Service values of integrity, honesty, objectivity and impartiality. It also means demonstrating responsible stewardship of public money by choosing platforms and publishers that provide good value for taxpayers whilst maintaining the trust that is essential for effective democratic communication.
Since launching the original SAFE framework in 2020, the digital landscape has evolved significantly with the growth of social media platforms and the introduction of the Online Safety Act 2023, which brings new requirements for content moderation, transparency and accountability across digital spaces.
As we now embrace an ‘always-on, digital-first’ approach to our campaigns, it’s essential that we update our framework to reflect these developments. Recent developments in online channels and platforms, shifts in public digital behaviours, and the emergence of generative artificial intelligence have transformed how we communicate.
The extended SAFER framework adds a fifth principle to the original four, establishing five core principles for assessing digital environments. It helps us safeguard both paid advertising and organic content across digital environments, protecting the government brand and reputation while ensuring messages are trusted by the public.
By applying these updated standards, we maintain effective reach and impact for our campaigns while demonstrating responsible use of taxpayers’ money in an increasingly complex digital landscape, building public trust in government communications and raising standards across the wider industry.
Summary of changes
From SAFE to SAFER
The most significant update to the framework is the addition of a fifth principle – ‘R’ for ‘Risk and responsibilities’. This expansion, together with some refinements to the existing principles, provides a more comprehensive approach to assessing digital environments and ensuring brand safety across government communications.
The 5 SAFER principles are now:
- Safety
- Appropriate environments
- Freedom of speech
- Ethics
- Risk and responsibilities
The four existing SAFE principles, as published in 2020, were ‘Safety and suitability’, ‘Advertising context’, ‘Freedom of speech’ and ‘Ethics’. For simplicity and clarity, ‘S’ for ‘Safety and suitability’ is now shortened to ‘Safety’. ‘Advertising context’ is now ‘Appropriate environments’ to encompass all digital communications, particularly organic content.
The updated framework also includes the following key changes:
- Widening the scope to cover both paid advertising and organic social media content, so the framework applies to more types of government communication and incorporates lessons learned from the Government Communications Horizon Review into Responsible Innovation (2023).
- Embedding suitability checks across all digital communication approaches and channels by renaming ‘A’ to ‘Appropriate environments’, so the framework now covers more than just paid-for advertising.
- Updating key definitions throughout the framework, especially in the ‘Safety’ principle, based on the Online Safety Act 2023 and recent policy changes.
- Strengthening the ‘Ethics’ principle to match the latest government strategies and resources launched since 2020, such as the Government Communications Framework for Ethical Innovation.
- Introducing a new approach to guidance on advertising with news publishers, informed by the launch of the Online Safety Act 2023.
- Providing clearer guidance on standards and best practice for working with content creators (also known as ‘Influencers’) and evaluating advertising tools.
- Adding new guidance on considering how platforms are funded when doing assessments.
- Creating more practical guidance for all agencies working with Government Communications on when and how to plan, buy and deliver digital communications (‘Risk and responsibilities’).
Scope and application
The SAFER framework is designed to be adaptable and flexible, accommodating the diverse range of government communications, campaigns, audiences, and digital environments, and it applies to both paid advertising and organic content.
To help communicators apply the framework in practice, we have developed the SAFER toolkit (member-only content), providing step-by-step assessment forms, checklists and worked examples to guide you through evaluating digital environments against the five SAFER principles. Whether you’re assessing a new platform for a campaign or reviewing an existing channel, the toolkit offers practical support tailored to different types of digital environments.
The term ‘digital environment’ broadly denotes any digital service that can host digital ads or messages, such as:
- Websites
- Apps
- Social media platforms (including ‘decentralised’ platforms, also known as the ‘Fediverse’)
- Online video channels, including video on demand (VOD)
- Digital audio channels (such as podcasts)
- Programmatic advertising networks and addressable advertising formats
- Virtual, augmented, and mixed reality platforms (VR/AR/XR)
- Digital in-game advertising (DIGA)
- Digital out-of-home (DOOH)
- Chatbots
- Private or business messaging services
The principles of the framework should be applied to established and emerging platforms, with new technologies assessed against the framework’s core values.
When deciding whether to use any platform or environment, we should consider three key questions:
- Does the activity represent value for money and justify the use of public funds?
- Does the digital environment meet our legal and ethical standards for digital safety, as set out in this framework?
- Are there any risks to data protection, information security or national security, as advised by the Information Commissioner’s Office or other government functions?
Our approach always balances these principles against the potential value of using a particular platform or environment.
Five SAFER principles
S – Safety
Safety checks are a cornerstone of our approach, ensuring government content doesn’t appear alongside, or appear to endorse, illegal activity, harmful behaviours, or content that causes serious offence to individuals or protected groups, whether deliberately or inadvertently.
Digital environments should have effective policies and mechanisms in place to ensure that government communications do not appear alongside unsafe content. Though not exhaustive, below are examples of unsuitable content:
- Content which promotes or incites hatred or intolerance against others on the basis of protected characteristics as defined under the Equality Act 2010.
- Content that is gratuitously offensive, indecent or obscene.
- Speech or conduct that constitutes bullying or harassment of those protected under the Equality Act 2010.
- Threats of, or incitement to, violence.
- Threats of, incitement to, or support for terrorist activity, as defined in the Terrorism Act 2000, and any enactment, endorsement or promotion of illegal activity.
- Content that features sex, drugs or violence.
- Content funded by political parties.
These checks align with the Online Safety Act 2023 guardrails and form the minimum required criteria for all government communications.
The Government Communication Service (GCS) team monitors the latest developments from regulators (such as the Office of Communications (Ofcom), the UK’s communications regulator) and other public bodies (such as the Equality and Human Rights Commission) to ensure use of digital environments for government communications remains in line with UK laws and regulations.
A – Appropriate environments
Once an environment has been established as safe, its suitability needs to be assessed to ensure that the environment is appropriate for the UK government. This includes considering the message of the campaign and the contexts that could cause reputational harm or embarrassment to the UK government – as well as other harms, such as widening inequalities or causing disproportionate harm to different groups within society.
As part of establishing the suitability and appropriateness of the digital environment, we accept there will be some categories of content that are not generally suitable for the UK government, while recognising the rights of other advertisers to communicate in these environments. As well as those already mentioned above, two other factors to consider for suitability are:
- Clear and named ownership of the platform or publisher.
- The context of the campaign. For example, is there satirical or political content that may not be suitable for the campaign to appear alongside? This consideration is less likely to apply with regard to news publishers.
The criteria for suitability and appropriateness consider the nature and seriousness of a campaign, and how well it fits with its surroundings. For example, we wouldn’t put a climate change campaign next to content that denies climate change, but we might put vaccine information next to anti-vaccine content to help counter misinformation.
The ‘Appropriate environments’ principle focuses on where our ads appear and how those places are funded. We need to make sure government messages appear in digital spaces that give good value for taxpayers’ money. This means using our advertising budget efficiently to reach our campaign goals while spending public money responsibly across the many different types of digital platforms now available (read more under ‘Scope and application’).
Funding
Government communications need to appear in the digital environments where the public are, in order to effectively deliver important information on government policies and public services. We must do so responsibly, so it is important we consider the implications of the different funding models which exist across the digital landscape.
Funding Content Creators (FCC)
Under these platforms and buying models, government advertising spend is shared, partially or in full, with content creators: for example, publishers in programmatic buying, or video creators on YouTube and Twitch.
When we advertise on platforms that pay content creators (like Twitch), we need to be extra careful. These platforms present the highest risk to our reputation because if our ads appear next to inappropriate content, our money goes directly to the creators of that content. FCC categories therefore need to be risk assessed, and in some cases excluded from campaigns (read more under ‘Content creators and influencers’).
Funding Platform Only (FPO)
These platforms and buying models do not share the advertising revenue with content creators, or do not have content creators, and instead optimise ad placement across their platform according to user behaviour, location, or time of day. The platforms also do not usually offer advertisers and agencies much control over the adjacent content or context in which the advertisements are placed, for example in social media ‘feeds’ or in digital out-of-home.
The FPO category comprises quite varied environments, each with their own considerations, which need to be suitably assessed. We will continue to work proactively to ensure brand safety controls and processes are implemented across these platforms to deliver the highest levels of brand safety for government communications.
Hybrid Funding Models (HFM)
Some platforms adopt a hybrid model, where the funding arrangements depend on the particular format used by an advertiser. For example, a platform may package up a range of formats for advertisers, where some formats fund content creators and others fund the platform only.
Where hybrid funding of this type is used, the platform’s monetisation policy should be considered, including whether it provides government communications with satisfactory protection against directly funding the creators of inappropriate content.
Content creators and influencers
Influencer marketing is a paid, credible, and valuable communication channel where we partner with individuals who have engaged online followings – including social media influencers, podcasters and experts. This approach can help us reach audiences traditional channels struggle to connect with, and should be considered alongside established online and offline channels.
When used effectively and transparently, influencers create personalised, engaging content for government communications. However, if not managed properly, this could damage public trust. All content produced with influencers must be trustworthy, accessible and provide value for taxpayers’ money.
In this rapidly evolving field, we must follow the Incorporated Society of British Advertisers’ (ISBA) Influencer Marketing Code of Conduct as well as ISBA Brand Safety guidance. Campaign teams should also review potential partners against the SAFER framework to ensure content won’t undermine trust in government messaging or cause reputational issues.
Campaign teams planning to work with influencers should read the Government Communications influencer due diligence guidance alongside this framework for detailed requirements on vetting potential partners.
Optimising user experience
It is important that UK government messaging appears in efficient, effective and trustworthy environments that provide value for taxpayers’ money, with the highest chances of being seen, heard, and acted upon by real people. A study by Newsworks and Magnetics found that adverts on high-quality websites are 74% more likeable than the same adverts on lower-quality sites.
UK government content and advertising must also appear in environments which promote good user experiences, make provisions to limit advertising ‘bombardment’ (where relevant), and which have a high quality of advertising content.
We remain committed to combating ad fraud and to the ambition of full viewability, working through our media partners. This involves looking at the ‘adverts to content’ ratio, the amount of ad ‘recycling’, and the active measures platforms have in place to monitor and minimise ad fraud.
When considering ‘advertising context’, we recommend several checks to ensure your government campaign provides the best possible user experience and delivers value for taxpayers. A positive advertising environment increases engagement and effectiveness by balancing visibility with respect for user experience. To achieve this, assess whether your ads appear in environments where:
- The number of ads and the ads-to-content ratio are appropriate – too many ads can disrupt user experience and reduce engagement. The guide is 60/40 ad to content for desktop and 40/60 across mobile devices. Additionally, we recommend no more than six adverts to a page for desktop and three for mobile (see the sketch after this list).
- Frequency management, low ad cycling, and no auto-reloads are in place to prevent users from feeling bombarded with repetitive content. Auto-reloads should not happen before 60 seconds.
- High levels of viewability ensure your ads are seen by real people across different devices.
- The digital environment has active measures to minimise ad fraud, with arrangements for refunds in place for advertisers.
These factors contribute to creating efficient, effective advertising that’s more likely to be noticed, trusted and acted upon by your target audience.
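To make these checks easier to apply consistently, below is a minimal Python sketch that encodes the thresholds from the list above as a simple placement check. The `PagePlacement` structure and its field names are illustrative assumptions, not part of any real ad-verification API, and the guide figures are interpreted here as the maximum share of a page given over to ads.

```python
from dataclasses import dataclass

# Thresholds taken from the guidance above; interpreted as the
# maximum share of a page given over to ads on each device type.
LIMITS = {
    "desktop": {"max_ad_share": 0.60, "max_ads_per_page": 6},
    "mobile":  {"max_ad_share": 0.40, "max_ads_per_page": 3},
}
MIN_AUTO_RELOAD_SECONDS = 60  # auto-reloads should not happen sooner


@dataclass
class PagePlacement:
    device: str                 # "desktop" or "mobile"
    ad_count: int               # number of ad slots on the page
    ad_share: float             # fraction of the page given to ads (0 to 1)
    auto_reload_seconds: float  # interval between ad reloads; 0 if none


def placement_concerns(page: PagePlacement) -> list:
    """Return a list of user-experience concerns for a page (empty if none)."""
    limits = LIMITS[page.device]
    concerns = []
    if page.ad_share > limits["max_ad_share"]:
        concerns.append(
            f"ad share {page.ad_share:.0%} exceeds the "
            f"{limits['max_ad_share']:.0%} guide for {page.device}"
        )
    if page.ad_count > limits["max_ads_per_page"]:
        concerns.append(
            f"{page.ad_count} ads exceeds the recommended "
            f"{limits['max_ads_per_page']} per page on {page.device}"
        )
    if 0 < page.auto_reload_seconds < MIN_AUTO_RELOAD_SECONDS:
        concerns.append("ads auto-reload before 60 seconds")
    return concerns


# Example: a mobile page with four ad slots covering half the page
print(placement_concerns(PagePlacement("mobile", 4, 0.50, 90)))
```

In practice, the inputs would come from your media agency’s reporting or a brand safety tool rather than being entered by hand; the value of a check like this is that the framework’s thresholds are written down once and applied the same way to every placement.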
F – Freedom of speech
The government encourages freedom of speech and high-quality journalism, both online and offline, for a healthy and flourishing democracy. Essential to the SAFER framework is safeguarding the rights and protections of individuals and minority groups online. User-generated content unfortunately has the potential to be abused, and such content can be harmful, violent and offensive.
To support respectful and constructive environments for public interaction, government communications must use digital environments which have clear policies on the following:
- Guidelines for user-generated content that set out appropriate standards and a code of conduct that protects individuals from harmful content.
- Active monitoring of user-generated content, and/or mechanisms for users to report inappropriate content.
- Demonstrable removal of content that breaches the guidelines.
- Clear policies around safeguarding and enforcement by platform or media providers (as per the Online Safety Act 2023) for both adults and children.
Under the Online Safety Act 2023, news publishers are largely exempt from the duties to moderate user-generated content on their own websites. Government should therefore advertise with news publishers that:
- Uphold the highest standards in editorial codes of conduct.
- Adhere to relevant regulatory standards such as those set by the Independent Press Standards Organisation (IPSO), or an alternative code of conduct that protects the rights of individuals and encourages ethical journalism.
- Have no history of publishing or broadcasting disinformation with the intent to deceive.
E – Ethics
Government communications should align with the Civil Service Code (integrity, honesty, objectivity and impartiality) and follow Government Publicity Conventions by being:
- Relevant to government responsibilities.
- Objective and explanatory, not biased or polemical.
- Not able to be misrepresented as party political.
- Conducted economically and appropriately, justifying costs as public expenditure.
To show we’re following best ethical practices, we need to make sure the digital spaces where we place government messages are fair, unbiased and provide good value for money.
Public trust in government communications depends on working with platforms and publishers that are open about how they operate. When choosing where to place your communications, look for places that have:
- Clear privacy policies and notices that people can easily find and understand.
- Safeguarding policies and protections for children, as required by the Online Safety Act 2023.
- Information about who owns and manages the platform.
- Systems to prevent fraud and offer refunds when it happens.
- Responsible data handling that follows UK General Data Protection Regulation (GDPR) rules and only collects what’s necessary.
- Careful use of algorithms to avoid unfair bias, especially with sensitive personal data.
R – Risk and responsibilities
All government communicators and their marketing, digital and media agencies must follow the standards in this framework when delivering both paid-for and organic content, and assess the possible risks of placing official communications in different digital environments.
All digital platforms where we communicate with the public need to keep up with changing laws that protect people from harmful content. This includes websites, social media, and newer virtual or experiential technologies (such as VR and AR). These platforms should respect people’s choices about how their data is used and work towards fully following data protection laws.
They should also follow rules set by regulators like the Office of Communications (Ofcom) and the Committee of Advertising Practice (CAP). These include using proper security measures, like Secure Sockets Layer (SSL) certificates for websites.
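As one concrete example of such a security check, the sketch below uses only Python’s standard library to confirm that a site presents a valid, trusted TLS (SSL) certificate and to report how many days remain before it expires. The hostname shown is just an example.

```python
import socket
import ssl
from datetime import datetime, timezone


def certificate_days_remaining(hostname: str, port: int = 443) -> int:
    """Return the days until a host's TLS certificate expires.

    Raises ssl.SSLCertVerificationError if the certificate fails default
    validation (untrusted issuer, hostname mismatch, already expired).
    """
    context = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days


print(certificate_days_remaining("www.gov.uk"))
```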
Digital platforms that sell advertising space should be open and responsible about how they operate. They need to protect children by following age-appropriate best practices, including using effective systems to verify users’ ages where needed. This is an important part of creating safe online environments where government communications can appear.
Working together
Our agencies and partners need to work with government to minimise risk and are therefore responsible for ensuring the following:
- Content and context monitoring and review: Ensuring that all digital environments are regularly monitored and reviewed for inappropriate or harmful content. This might also include the use of AI-driven tools that better determine the content and context that our ads appear in or alongside (a minimal screening check is sketched after this list).
- Contextual targeting: Reviewing past content of content creators and influencers against the safety and suitability criteria of this framework (‘Safety’ and ‘Appropriate environments’ principles), to ensure it will not prove problematic if associated with, or undermine trust in, government messaging.
- Using verification systems: Implementing robust verification systems where technically possible, such as third-party brand safety tools (for example DoubleVerify, Integral Ad Science) to ensure ads are being served in safe and suitable environments.
- Ad fraud: Minimising ad fraud across trading relationships as well as upholding the cyber security of their platforms.
- Data protection: Ensuring that all digital environments that are used for government communications adhere to relevant legal frameworks, including but not limited to the UK GDPR and Data Protection Acts. This includes maintaining clearly understandable privacy policies, and providing informed consent for users for any data collection activities.
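As a rough illustration of the ‘content and context monitoring’ responsibility above, the following sketch shows the general shape of an automated screening pass. The categories and terms are placeholders only; a production monitor would rely on platform brand-safety APIs or trained classifiers rather than keyword lists.

```python
import re

# Placeholder categories and terms, loosely echoing the 'Safety' principle.
# A real screen would use far richer signals than keyword matching.
UNSAFE_TERMS = {
    "hate or harassment": ["hate speech", "harassment"],
    "violence or terrorism": ["incitement to violence", "terrorist"],
    "adult content": ["explicit", "obscene"],
}


def screen_text(text: str) -> set:
    """Return the unsafe-content categories whose terms appear in the text."""
    lowered = text.lower()
    return {
        category
        for category, terms in UNSAFE_TERMS.items()
        if any(re.search(rf"\b{re.escape(term)}\b", lowered) for term in terms)
    }


# Example: screen a page description before approving an ad placement
print(screen_text("Channel discussion includes incitement to violence."))
# {'violence or terrorism'}
```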
Governance and assessment
Alongside this updated framework is a new SAFER toolkit (member-only content) where you will find more information to help with implementation.
Assessment forms provide a step-by-step risk assessment, allowing the contextual rationale for using an environment to be weighed against the identified risks; the form used will depend on the type of digital environment (for example, a website versus a social media platform). For more information about the assessment process and to request an assessment form, contact branding@cabinetoffice.gov.uk.
When there is a clear need to communicate using higher-risk platforms and websites, which may pose risks to brand safety and reputation, we must carefully assess the context and rationale against any identified risks using our Campaign Specific Process, as outlined in the SAFER toolkit (member-only content). This ensures we maintain both effective communication and appropriate safeguards when using higher-risk digital environments for government messaging.
Inclusion List
Government Communications operates an Inclusion List of digital environments that adhere to all brand safety principles of the SAFER framework. Government organisations can use this list to inform decisions when entering direct buys with publishers.
The government’s lead media agency on the Crown Commercial Service Framework (RM6123 until June 2026, and thereafter RM6364) is responsible for actively maintaining this list, working closely with the wider agency ecosystem in its application.
This includes assessing environments against the standards and criteria of the SAFER framework for paid activity, using the SAFER assessment forms, and securing sign-off from the Government Communication Service team for modifications to the list.
For organic content, government organisations can check with the Government Communication Service team (branding@cabinetoffice.gov.uk) whether an environment they want to use has already been assessed, then complete an assessment as needed and secure sign-off from the team.
There may be instances where platform-specific conditions are in place to ensure safe use for government advertising. The Government Communication Service team maintains the list of specific exemptions and communicates these to government organisations as needed.
Campaign-specific flexibility
The SAFER framework has been developed to provide core brand safety for all government communications. However, we also recognise the need for tailoring to specific campaign contexts and audiences to drive government outcomes. Where necessary, government communications and advertising can still run in environments that do not meet all channel-specific criteria, so long as a risk assessment has taken place.
The SAFER framework therefore incorporates a campaign-specific exemption process that allows government organisations to request changes and additions outside of the core Inclusion List for specific campaigns. Government organisations can fill out an exemption form, which will include an assessment of the relevant environment against the SAFER principles. This will then require sign-off from the Government Communication Service team.
For details of the exemption process and to request the exemption form, contact the Government Communication Service team at branding@cabinetoffice.gov.uk.
More information
For questions about the SAFER framework contact the Government Communication Service team at branding@cabinetoffice.gov.uk.
Related content:
- Influencer due diligence policy
- Generative AI policy
- Framework for Ethical Innovation
- Government Publicity Conventions
Other resources and references:
- Civil Service Code
- ISBA Influencer Marketing Code of Conduct
- ISBA Brand Safety guidance
- Crown Commercial Services (CCS) Media Services framework (RM6123) https://www.crowncommercial.gov.uk/agreements/RM6123
- Crown Commercial Services (CCS) Media and Creative Services framework (RM6364) https://www.crowncommercial.gov.uk/agreements/RM6364