Generative AI policy
Continuing Professional Development (CPD) points: 2
This policy sets out clear principles for all government communicators to follow in their use of AI within their organisations.
Introduction
Government Communications is committed to using generative AI responsibly to deliver better communications for the public. By upholding our public service values, we want to set the standard for excellence in government communication, and build trust and confidence.
As Government Communicators, we should all seize the benefits of generative AI and ensure we harness this technology responsibly, for the benefit of the public.
To help you understand and apply this policy effectively, the following definitions explain the key AI technologies covered in our principles:
Artificial Intelligence (AI): Computer systems designed to perform tasks that typically require human intelligence. These include, but are not limited to, visual perception, speech recognition, decision-making, language processing, pattern recognition, and predictive capabilities.
Generative AI: A category of artificial intelligence systems that can create new content including but not limited to text, images, audio, video, code, synthetic data, and virtual experiences in response to inputs or prompts.
Our principles for responsible adoption of generative AI
In Government Communications, we will:
- Always use generative AI in accordance with the latest official government guidance. For example: the AI Playbook for UK Government, the Introduction to AI Assurance, and the Framework for Ethical Innovation. This is in addition to the latest regulations and legislation in the UK. Please read the ‘Further guidance and resources’ at the end of this page for more information.
- Act responsibly in our use of generative AI by operating consistently in line with our values. This includes the Civil Service Code, the Government Publicity Conventions, and is underpinned by the values of Democracy, Rule of Law, and Individual Liberty. Read our Innovating with Impact Strategy for further information.
- Develop and undertake training on responsible use of generative AI, in particular around ensuring accuracy and inclusivity, and mitigating biases. For example, this could be ensuring all generated content can be reviewed by a ‘human-in-the-loop’ to mitigate potential biases against individuals or groups with protected characteristics or other personal attributes.
- Require that all our contracted and framework suppliers adhere to this policy on the responsible use of generative AI, and have safeguards in place to ensure this. Ultimately, our contracted and framework suppliers remain responsible for their use of the technology.
- Uphold factuality in our use of generative AI. This means not creating content that misleads, in addition to not altering or changing existing digital content in a way that removes or replaces its original meaning or messaging.
- Engage with appropriate government organisations, strategic suppliers, technology providers, and civil society, around significant developments in generative AI, and the implications for its use in government communications.
- Continue to review reputable research into the public’s attitudes towards generative AI and developments in AI capabilities, and consider how this policy should evolve in response.
Government communicators can use generative AI to create more effective and engaging content for the public, but it must be used ethically and responsibly.
The Framework for Ethical Innovation can help you decide when it is appropriate to use generative AI in your work.
Generative AI can be used to help with tasks such as, but not limited to:
- Generating text:
  - Generate first-draft texts or content to better reach our audiences.
  - Assess and tailor communications to make them inclusive, accessible, helpful and relevant. For example, this could include generating automatic subtitles and translations.
  - Create plain English summaries of complex policy documents with different versions tailored to a variety of audiences.
  - Inspire designers and content creators, and help them come up with new ideas quickly.
- Generating images:
  - Generate variations of campaign imagery optimised for different platforms and formats.
  - Produce region-specific visual content that resonates with local communities.
  - Create infographics that make complex government statistics more accessible.
  - Adapt existing visual content, such as resizing the aspect ratio of an existing image in order to fit different formats or screen sizes.
- Generating audio:
  - Generate natural-sounding voice narration in multiple accents and dialects.
  - Generate appropriate background sounds for video content.
  - Create audio versions of written materials for accessibility purposes.
- Generating video:
  - Create sign language interpretations using digital avatars.
  - Generate localised versions of national government communications campaign videos featuring relevant regional contexts.
  - Create video simulations or before-and-after visualisations of government initiatives.
  - Enhance the quality and fidelity of existing video or audio content.
- Brainstorming and ideation:
  - Quickly apply good practice from industry and Government Communications policies, guidance and frameworks to our work. For example, the Modern Communications Operating Model 3.0.
  - Explore problems or topics, for example through:
    - Offering diverse perspectives and opening up thinking on a topic.
    - Providing critical analysis of a topic or proposed approach.
    - Identifying previously unconsidered risks and threats associated with a topic.
  - Supporting qualitative and quantitative research and surveys at greater scale, potentially through the use of conversational AI tools to encourage more detailed survey responses.
Government Communications aims to build trust in our approach by acting transparently.
Therefore we will:
- Secure written consent from a human before their likeness is replicated using AI for the purposes of delivering government communications. In the limited number of cases we currently expect, a record of this will be made available to the interested public via an official channel, for example, listed on the Government Communications website, and included in the description and alt text of the content.
  - This is to ensure that legitimate government communications can be discerned from deepfakes or other misinformation and disinformation. Our aim is to mitigate any unintended consequences that may come from greater use of AI avatars in government communications.
- Carefully consider and declare our other uses of generative AI within government communications when doing so supports the key messages of a campaign.
- Clearly notify the public when they are interacting with an AI-powered service rather than a human. This will include to what extent, and for what purposes, an individual’s interactions may be logged or used. For example, using anonymised data on interactions to improve the quality of the service.
- Publish a log of changes to this policy on the Government Communications website. Generative AI is a fast-developing field, and our approach will evolve and adapt to keep in line with emerging technologies, risks, thought leadership, and official government guidance.
- Allow AI web crawlers in the robots.txt files of our campaign websites, to ensure trusted and authoritative information from official government sources and campaigns can be accurately surfaced to the public in AI-powered search overviews and similar technologies.
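As a sketch, the crawler commitment above might translate into a campaign site’s robots.txt along these lines. The user-agent tokens shown are illustrative examples of AI crawlers; a real deployment should use the tokens each provider currently publishes:

```
# Illustrative robots.txt for a campaign website.
# GPTBot and Google-Extended are example AI crawler tokens; check each
# provider's documentation for the tokens it currently uses.
User-agent: GPTBot
Allow: /

User-agent: Google-Extended
Allow: /

# All other crawlers may also index public content.
User-agent: *
Allow: /
```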
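The commitment above to record consent and label AI-replicated likenesses in a content item’s description and alt text could look like the following page markup. This is a hypothetical sketch: the file path and wording are illustrative, not prescribed by this policy.

```html
<!-- Hypothetical example of labelling an AI-generated avatar in alt text. -->
<img src="/campaigns/example/ai-avatar.jpg"
     alt="AI-generated digital avatar of the spokesperson, used with written
          consent recorded on the Government Communications website.">
```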
In Government Communications, we will not:
- Apply generative AI technologies where they conflict with our values or principles, for example, as set out in the Civil Service Code, the Government’s Publicity Conventions, or our guidance on propriety and ethics.
- Use generative AI to deliver communications to the public without human oversight. Human oversight is needed to uphold accuracy and inclusivity and to mitigate bias, and will be part of:
  - The production and review stages for content that will remain static. For example, this could include press releases, printed posters, and direct mail. In this scenario, human oversight includes the government communicator(s) creating the content, and the lead responsible for the communications activity.
  - The production, configuration, testing and evaluation of dynamically generated or interactive communications before they go live. For example, this could include chatbots, live conversational services, and services that dynamically generate digital advertising content. In this scenario, human oversight includes the technical team designing and developing the interactive communications, and the government communicator(s) responsible for the communications activity.
- Share any private, protected, or sensitive information with third-party AI providers without having appropriate data sharing and security agreements in place.
Further guidance and resources
Examples of guidance relevant to the use of generative AI (at the time of writing):
- Official guidance on using artificial intelligence in the public sector, the AI Playbook for UK Government and the Introduction to AI Assurance. These outline best practice for the public sector in the following core areas:
  - Importance of using the correct AI products to meet user needs
  - Ensuring use of AI is compliant with data protection laws
  - Meaningful human control at the right stages, often called ‘human-in-the-loop’
  - Importance of governance and risk assessment for AI projects
  - Planning and preparing for AI systems implementation
  - Understanding AI ethics and safety
- Official guidance on the Ethics, Transparency and Accountability Framework for Automated Decision-Making. This seven-point framework supports safe, sustainable and ethical use of automated or algorithmic decision-making systems through:
  - Testing to avoid any unintended outcomes or consequences
  - Delivering fair services for all of our users and citizens
  - Being clear who is responsible
  - Handling data safely and protecting citizens’ interests
  - Helping users and citizens understand how it impacts them
  - Ensuring legal compliance
  - Building something that is future proof
- The Government Digital Service (GDS) Data Ethics Framework for government. The Framework guides appropriate and responsible data use in government and the wider public sector. It helps public servants understand ethical considerations, address these within their projects, and encourages responsible innovation. It has three overarching principles, and outlines specific actions that should be taken for each:
  - Transparency
  - Accountability
  - Fairness
- The latest guidance from the Information Commissioner’s Office (ICO) on automated decision-making and profiling and how to use AI and personal data. The guidance outlines the considerations of the UK General Data Protection Regulation in this context, including:
  - Emphasising the principle of transparency, encouraging organisations to provide clear and understandable information about automated decision-making processes and the logic behind profiling activities.
  - Underscoring the importance of ensuring fairness in automated decisions and profiling, urging organisations to identify and address any biases or discrimination that may arise. It also emphasises accountability, requiring organisations to take responsibility for the impact of their automated systems.
  - Advising on how to handle AI and personal information, particularly around personal and special category information, in line with UK GDPR.
  - Advising organisations to practise data minimisation, collecting only the necessary information for automated decisions or profiling. Additionally, the guidance suggests conducting Privacy Impact Assessments (PIAs) to evaluate and mitigate potential risks associated with automated processes, ensuring compliance with data protection regulations.
- Official guidance on Human-centred approaches to scaling AI, which provides frameworks and tools to effectively scale and de-risk AI tools, considering the ‘people’ factor of technology acceptance.
- The Algorithmic Transparency Recording Standard (ATRS) as laid out by the Algorithmic Transparency Recording Standard Hub.
- The Incorporated Society of British Advertisers (ISBA) Advertising Industry Principles for the use of generative AI on creative advertising.
- Furthermore, Government Communications will follow developments in international copyright law as they emerge, and adhere to them.
Last updated: 1 August 2025
Updates include:
- Adding recent relevant publications
- Adding greater detail and specificity on the use of all generative AI formats and where AI-generated content should be labelled as such
- Adding a note that Government Communications will allow AI web crawlers, so that trusted government information can be surfaced in AI-powered search tools