RESIST 3: A quick reference guide
This quick reference guide provides practical tools and techniques to help government communicators identify, assess, and respond to misinformation, disinformation, and information threats using the RESIST 3 framework.
Contents:
- What is RESIST?
- Step 1: Recognise mis- and disinformation
- Step 2: Early warning
- Step 3: Situational insight
- Step 4: Impact analysis
- Step 5: Strategic communication
- Step 6: Tracking effectiveness
- Further information
What is RESIST?
The RESIST 3 framework supports government communicators to build resilience to information threats and helps improve the communications response. It promotes a consistent approach to tackling the threat by providing six steps to follow:
- Recognise mis- and disinformation
- Early warning
- Situational insight
- Impact analysis
- Strategic communication
- Tracking effectiveness
Step 1: Recognise mis- and disinformation
In this section, you will learn:
- How to spot false or misleading messages.
- How individual messages can connect to build harmful narratives.
- How to understand who is spreading these narratives and why.
- How to assess how serious a threat is.
Understand manipulated, false and misleading information
Manipulated, false, and misleading information is commonly referred to as mis-, dis-, and malinformation, or MDM for short:
- Misinformation: Verifiably false information shared without intent to mislead.
- Disinformation: Verifiably false information shared with intent to deceive.
- Malinformation: True or partially true information used misleadingly.
The important thing to remember about MDM is that, if spread by real people during authentic discussions, it is unlikely to be illegal. Lying, exaggerating, creating rumours, and twisting facts are all governed by freedom of speech, except in very specific cases. While most MDM is legal, things like threats, abuse and harassment, hate speech, and terrorist and extremist content can be illegal.
The creation of coordinated, inauthentic networks to spread MDM, while not necessarily illegal, can be an indicator of an organised effort to manipulate public opinion. Two terms relevant to these threats are:
- Information threats: Deliberate and sophisticated efforts to manipulate, harm, or coerce, often involving coordinated inauthentic networks.
- Foreign Information Manipulation and Interference (FIMI): Coordinated efforts by foreign state or non-state actors to disrupt a target country’s political processes and public opinion.
Establish the full picture
Narratives
Identify how messages contribute to larger narratives (for example, polarisation, social proof, grievances, conspiracies) using the FIRST indicators:
- Fabrication: Is there any manipulated content? For example, a forged document, a manipulated image, or a falsified citation.
- Identity: Does anything point to a disguised or misleading source, or false claims about someone else’s identity? For example, a fake social media account, claiming that a person or organisation is something they are not, or behaviour that doesn’t match the way the account presents itself.
- Rhetoric: Is there use of an aggravating tone or false arguments?
- Symbolism: Are data, issues or events exploited to achieve an unrelated communicative goal? For example, historical examples taken out of context, unconnected facts used to justify conspiracy theories, misuse of statistics, or conclusions that are far removed from what data reasonably supports.
- Technology: Do communication techniques exploit technological advantages in order to trick or mislead? For example, coordination between accounts, bots amplifying messages, or machine-generated text, audio and visual content.
Behaviour
Observe suspicious account behaviour that suggests inauthenticity or coordination, including:
- Trolling: The account exhibits behaviours that may suggest it isn’t acting in good faith, for example, engagement is often twisted into opportunities to troll. However, it is important to recognise that robust debate can sometimes appear aggressive without necessarily being insincere.
- Crowding out: The account seeks to intimidate, diminish, or ridicule people it does not agree with, for example, discussions often seek to make the situation so toxic that real people do not want to participate.
- Targeting vulnerable communities: The account seems to target a community that has grievances, or is vulnerable to certain types of messaging, for example, the effort seems deliberately provocative, exploits vulnerabilities, or seeks to add fuel to a fire.
- Doxing (sometimes spelled “doxxing”): Publishing someone’s private personal information online without their consent, usually with malicious intent. The account is quick to reveal others’ personal information; for example, during a discussion the account makes references to where somebody lives or works.
Scale and severity
The final step in this stage involves understanding how the messages, narratives, and behaviours fit together to create an impact. Impacts can include degrading the quality of debate, threatening public safety, or undermining national interests.
To evaluate how the various indicators you have collected fit together, consider using a matrix:
- Who is the actor? What have you been able to determine about the account’s behaviour and place within a network?
- What are their goals? Are they good faith participants in a discussion or do they seem to have a different agenda?
- What are their actions? Are they using their freedom of speech to exaggerate or lie, or are they crossing a line into more harmful activities?
- How are they doing it? Are they simply sharing opinions or do you see examples of coordination and more advanced manipulative capabilities?
- What are the effects of their actions? Is it poor quality debate, efforts to undermine a reputation, or a potential threat to public safety?
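The matrix above is qualitative, but teams that log many cases sometimes capture it in a structured form. The sketch below is illustrative only: the guide prescribes the five questions, not any data format, and all field values are invented examples.

```python
from dataclasses import dataclass

# Illustrative sketch: one record per assessed case, with one field per
# question in the matrix above. Field names and example values are
# assumptions, not part of the RESIST framework itself.

@dataclass
class ThreatAssessment:
    actor: str    # who is behind the activity, and their place in a network
    goals: str    # good-faith participation, or a different agenda?
    actions: str  # lawful exaggeration, or more harmful activity?
    methods: str  # simple opinion-sharing, or coordination and manipulation?
    effects: str  # poor-quality debate, reputational harm, or safety risk?

    def summary(self) -> str:
        """One-line summary suitable for a briefing or log entry."""
        return (f"Actor: {self.actor} | Goals: {self.goals} | "
                f"Actions: {self.actions} | Methods: {self.methods} | "
                f"Effects: {self.effects}")

case = ThreatAssessment(
    actor="anonymous account in a suspected coordinated network",
    goals="appears to push a grievance narrative",
    actions="exaggeration and rumour; no illegal content observed",
    methods="amplified by accounts posting identical text",
    effects="degrades quality of debate on a priority policy issue",
)
print(case.summary())
```

Keeping each case in the same five-part shape makes it easier to compare cases over time and to hand a consistent record to the later RESIST steps.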
Step 2: Early warning
This section will help you to understand:
- How to focus digital monitoring on risks.
- How to use digital monitoring to protect priority issues, tasks, and audiences.
Monitor trends
While you will likely already have regular access to some monitoring products, many of these will not focus specifically on the risks posed by MDM and information threats.
The existing monitoring you receive should give a good sense of your key audiences and influencers, and of the broader debates relating to your priority policy areas. At a minimum, you should use this monitoring to gain an understanding of:
- Digital debates taking place in relation to your organisation and its work.
- The main attitudes held by key influencers and audiences.
- How influencers and segmented audiences engage on digital platforms with your organisation and its work.
- Changes in trends over time.
More advanced tools can provide more tailored support to monitoring processes, including identifying AI-generated or altered text, images, video, and audio; the development of problematic narratives; and suspicious and/or harmful content.
Identify risk
Define and assess risks posed by MDM and information threats, considering their potential impact on various aspects of your work (for example, reputation, policy, public safety).
First, define the risk in a concise and precise way:
- Description of risk: What is the problem? Why is it important?
- Associated narrative(s): Describe the types of narratives that threaten this area of your work.
- Types of actors: What types of actors are likely to be involved in spreading the manipulated, false, or misleading information?
Second, define exactly how this risk could affect your work:
- Climate of debate: The risk can negatively affect societal debates about the issue, for example, through polarisation.
- Reputation: The risk can negatively affect the reputation of your organisation.
- Policy areas and goals: The risk can negatively affect the ability of your organisation to deliver upon its policy objectives.
- Ability to deliver services: The risk can negatively affect the ability of your organisation to deliver its critical services. This includes risk to staff safety.
- Audiences and stakeholders: The risk can negatively affect relationships with these groups, including vulnerable groups.
- Public safety, public health, and national security: The risk can negatively affect these areas.
Risk and impact assessments
The RESIST framework shows you how to use a matrix to systematically evaluate risks and their associated narratives and actors, then rank each risk by its potential impact from low to high.
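As a minimal sketch of that matrix in practice: the framework prescribes the columns (risk, associated narratives, likely actors, impact), but not any particular implementation, so the structure and all example entries below are assumptions for illustration.

```python
# Hypothetical risk matrix: each row records a defined risk, and the matrix
# is ranked from highest to lowest potential impact, as the guide describes.

IMPACT_ORDER = {"low": 0, "medium": 1, "high": 2}

risks = [
    {"risk": "false claims about service availability",
     "narratives": "the service is being withdrawn",
     "actors": "genuine but misinformed accounts",
     "impact": "medium"},
    {"risk": "forged statement attributed to a minister",
     "narratives": "government is hiding the truth",
     "actors": "coordinated inauthentic network",
     "impact": "high"},
    {"risk": "misused statistics in a policy debate",
     "narratives": "official figures cannot be trusted",
     "actors": "partisan commentators",
     "impact": "low"},
]

# Rank risks from highest to lowest potential impact.
ranked = sorted(risks, key=lambda r: IMPACT_ORDER[r["impact"]], reverse=True)
for r in ranked:
    print(f"[{r['impact'].upper()}] {r['risk']} (actors: {r['actors']})")
```

Ranking the matrix this way keeps attention on the highest-impact risks when deciding where to focus early warning monitoring.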
Step 3: Situational insight
This section will help you consider:
- What insight you need about information threats and how to use it to respond quickly.
- How to use the ABCDE framework to generate short briefings.
Turn monitoring into insight
Monitoring becomes valuable when it is turned into insight. Insight is a form of analysis that turns interesting data into actionable findings.
It answers the question, ‘So what?’.
Insight should be used to:
- Baseline/benchmark over time to show change.
- Identify emerging trends and provide early warning of threats.
- Understand how MDM is distributed to key audiences.
- Generate recommendations.
- Provide support for identifying audiences, as well as developing and targeting messages and campaigns.
Create insight products
Insight is usually presented in reports that are circulated daily, weekly or on an ad hoc basis depending on need. Much of the data can be drawn automatically from monitoring reports or dashboards. A good insight report can be as short as one or two pages: put the most important information at the top and get to the ‘So what?’ quickly.
An MDM insight product should at a minimum include:
- A top-line summary with a short commentary explaining the ‘So what?’ and setting out recommendations for action.
- Sections on priority themes and issues covering:
- Relevant outputs from your team on priority issues, for example a ministerial announcement.
- Examples of disinformation relating to these outputs, including where and how it is circulating.
- Key interactions and engagements, for example: is the disinformation being addressed organically? Is it being picked up by journalists and influencers, and if so, which ones?
- Trends and changes in attitudes (and influencers and audiences) over time (this can be combined with any polling data you have).
Create briefings using the ABCDE framework
The ‘ABCDE’ framework can help you write clear briefings for non-specialists and wider audiences. It can be used to structure longer analyses but is also useful for short briefings: even a few words for each letter of ABCDE can capture the essence of the threat and make it easily understood.
It covers:
- Actor: What kinds of actors are involved?
- Behaviour: What activities are exhibited?
- Content: What kinds of content are being created and distributed?
- Degree: How and to what extent is the content being spread?
- Effect: What is the overall impact of the case and whom does it affect?
For example:
‘ACTOR (such as accounts linked to a country) is exhibiting BEHAVIOUR (for example, coordinating bots) to spread CONTENT (for example, misleading narratives) to the DEGREE (for example, currently 500 inauthentic reposts and 1,000 authentic shares) with the EFFECT (such as disinformation that can cause risk to public safety).’
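The one-sentence format above can be produced mechanically from the five elements. The helper below is an illustrative sketch, not part of the framework; the function name and all example values are invented.

```python
# Hypothetical helper that assembles an ABCDE one-sentence briefing in the
# format shown above. The framework defines the five elements; this wording
# template is an assumption for illustration.

def abcde_briefing(actor: str, behaviour: str, content: str,
                   degree: str, effect: str) -> str:
    """Combine the five ABCDE elements into a single briefing sentence."""
    return (f"{actor} is exhibiting {behaviour} to spread {content} "
            f"to the degree of {degree}, with the effect that {effect}.")

briefing = abcde_briefing(
    actor="accounts linked to a foreign state",
    behaviour="coordinated bot amplification",
    content="misleading narratives about an election",
    degree="500 inauthentic reposts and 1,000 authentic shares",
    effect="disinformation risks undermining public safety",
)
print(briefing)
```

Even without tooling, drafting each element as a short phrase first and then joining them keeps briefings consistent across cases and authors.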
Step 4: Impact analysis
This section will help you answer the following questions:
- How do I prioritise?
- How do I escalate the most prioritised issues?
- How do I express uncertainty?
- How do I prepare to act?
Prioritise
Use structured analysis techniques to assess MDM and information threats, considering messages, narratives, and patterns of behaviour. Use the questions in the severity assessment table in the RESIST Framework to structure your thinking, and if possible express the likelihood of your assessments.
Assess the severity
Weigh up the type of information threat to assist in your prioritisation. Use the following questions to guide your assessment:
- Who is the actor? What can you determine about the account’s behaviour and place within a network?
- What are their goals? Are they good faith participants in a discussion or do they seem to have a different agenda?
- What are their actions? Are they using their freedom of speech to exaggerate or lie, or are they crossing a line into more harmful activities?
- How are they doing it? Are they simply sharing opinions or do you see examples of coordination and more advanced manipulative capabilities?
- What are the effects of their actions? Is it poor quality debate, or something more serious?
Escalate
Use these categories to help you decide whether to prioritise or escalate an issue:
- MDM (misinformation, disinformation, malinformation): This is usually lawful free speech and not considered an information threat unless it persistently stops you from doing business as usual. Persistent MDM should be added to your early warning monitoring, and you should consider some of the proactive strategic communication options.
- Harmful speech: Confrontational speech that encourages harassment or violence may be a domestic information threat, especially if you observe coordination of accounts. Only treat it as an information threat if there is a genuine risk of harm. In this case, notify other relevant government teams and escalate to policy level.
- FIMI (Foreign Information Manipulation and Interference): If you see signs of involvement of hostile foreign states or foreign threat actors, escalate to policy level and notify relevant government teams. Coordinate your strategic communication response, if appropriate.
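The three categories above can be read as a simple triage. The sketch below is hypothetical: the category names and escalation routes come from the guide, but the decision logic, flags, and wording are assumptions for illustration, not an official procedure.

```python
# Illustrative triage sketch over the escalation categories above.
# `persistent` and `risk_of_harm` are assumed flags set by the analyst.

def escalation_advice(category: str, persistent: bool = False,
                      risk_of_harm: bool = False) -> str:
    if category == "FIMI":
        # Signs of hostile foreign state involvement always escalate.
        return "Escalate to policy level and notify relevant government teams."
    if category == "harmful speech":
        if risk_of_harm:
            return ("Notify relevant government teams and escalate to "
                    "policy level.")
        return "Monitor; treat as an information threat only if harm is likely."
    if category == "MDM":
        if persistent:
            return ("Add to early warning monitoring and consider proactive "
                    "strategic communication options.")
        return "Usually lawful free speech; no escalation needed."
    raise ValueError(f"unknown category: {category}")

print(escalation_advice("MDM", persistent=True))
```

The point of the sketch is the ordering: category first, then aggravating factors, so that lawful-but-persistent MDM is monitored rather than escalated, while FIMI always goes to policy level.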
Confidence in your assessments
Use confidence indicators to provide more precise assessments:
- High confidence [H]: The evidence currently available is sufficient to reach a reasonable conclusion.
- Medium confidence [M]: It is possible to reach a reasonable conclusion based on the available evidence, but additional evidence could easily sway that conclusion.
- Low confidence [L]: There is some relevant evidence, but it is taken in isolation or without corroboration.
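Teams that record many assessments sometimes standardise these labels so that a briefing always carries its confidence indicator. This is a minimal sketch under that assumption; the bracketed [H]/[M]/[L] notation comes from the guide, while the helper itself is invented for illustration.

```python
# Illustrative sketch: attach a confidence indicator to an assessment
# statement, using the [H]/[M]/[L] notation described above.

CONFIDENCE = {
    "H": "High confidence: evidence sufficient for a reasonable conclusion.",
    "M": "Medium confidence: a reasonable conclusion is possible, but "
         "additional evidence could easily sway it.",
    "L": "Low confidence: some relevant evidence, but isolated or "
         "uncorroborated.",
}

def label_assessment(statement: str, confidence: str) -> str:
    """Append the confidence indicator to an assessment statement."""
    if confidence not in CONFIDENCE:
        raise ValueError("confidence must be 'H', 'M', or 'L'")
    return f"{statement} [{confidence}]"

print(label_assessment("The network shows signs of coordination", "M"))
```

Carrying the indicator with the statement, rather than in a separate note, means the caveat survives when the assessment is quoted in later briefings.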
Prepare to communicate
Once the previous steps are completed, you should be able to assign a priority level to the information threat. You may need to develop your own criteria for prioritising information threats based on your specific context. The principle is that the goal, impact, and reach should inform how urgently you prioritise the case.
Keep your assessment outcome-focused. The role of government is not to respond to every piece of manipulated, false or misleading information. You should not take on the role of arbiter of truth or moderator of public debate. A prioritised response is one in which there is a clear and compelling need to act, for example if there is a threat to national security or public safety.
Step 5: Strategic communication
This section will help you answer the following questions:
- What are the most important communication principles?
- What are proactive communication options?
- What are reactive communication options?
- How can I improve partnerships and capacity for communication?
Not all MDM has to be responded to. In many circumstances, public opinion will self-correct, and reacting may inadvertently amplify the MDM. Any public response to false or misleading information that your organisation decides to make should deliver the truth well told.
Proactive responses
A healthy information environment is dependent on people having access to accurate, reliable and timely information, which enables informed decision-making and democratic participation.
We can empower individuals to critically evaluate information and increase their resilience to manipulation through proactive communication responses, such as:
- Public information and raising awareness: Evidence-based content that ensures factual information is provided in the information space on priority policy issues or areas identified as higher risk to reduce the space for MDM.
- Public resilience building: Improving media literacy to empower individuals to fact check information, and review the source and its validity, to build resilience.
- Building trust in public communications: Adopting a transparent, timely, and whole-of-society approach to support citizens to consume evidence-based information.
- Counter-brand campaigning against threat actors or adversaries: Implementing a range of communication activities that seek to ensure a reputational cost to actors or adversaries who persistently spread false, misleading and harmful information.
Reactive responses
Where harmful risks to public safety or democracy are identified, a reactive response must be considered to minimise the impact. In a government context, any response must be developed in close collaboration with policy leads to ensure full alignment. Reactive communication responses may include:
- No communication action: Taking a strategic decision not to respond to MDM, while continuing real-time analysis.
- Debunking: Exposing and countering false or manipulated information by asserting factual information. This carries the risk of amplifying the original false narrative.
- Counter-narrative: Promoting a factual narrative without necessarily referring to the harmful narrative.
- Policy response communication: Using communications to emphasise policy levers; for example, explaining a decision to levy sanctions.
- Crisis communications: Delivering accurate, timely and trusted information to the public about unfolding events.
Capacity-building mechanisms
We must consider the mechanisms for delivering credible communications. These should be built and strengthened over the medium- to long-term by:
- Building relationships with trusted voices in the community.
- Coordinating and aligning with policy wherever possible.
- Adopting a whole-of-society approach, including private sector, media, civil society, and academia.
- Conducting detailed audience mapping that identifies segments vulnerable to mis- and disinformation.
- Identifying the most effective channels for each audience segment.
- Building public trust in institutions by delivering transparent, timely, evidence-based and inclusive communications.
- Adopting a strategic campaign mindset using frameworks like the Government Communications OASIS campaign planning framework.
Step 6: Tracking effectiveness
By the end of this section, you will be able to:
- Systematically evaluate communication response processes.
- Set clear communication objectives in line with policy aims and organisational priorities.
- Measure the effectiveness of communication activity.
Why should we evaluate and how do we do it?
Evaluation serves three broad purposes:
- Accountability: Demonstrate the value and impact of your interventions.
- Learning: Understand what works, what doesn’t, and why.
- Adaptation: Make real-time adjustments to improve effectiveness.
Tracking the effectiveness of communication responses to information threats has to go beyond simple metrics and analytics, and consider real-world outcomes and impacts. It has two distinct aspects: evaluating communications and evaluating the response process.
Evaluate communications
Set objectives
Common objectives of communication activity relating to information threats include:
- Reaching audiences vulnerable to specific mis/disinformation.
- Directing audiences to legitimate and credible sources.
- Increasing reach and engagement with accurate information.
- Building audience resilience and critical thinking abilities.
More information on setting objectives for communication activities can be found in the Government Communications OASIS guide to campaign planning.
Government Communications Evaluation Cycle
The Government Communications Evaluation Cycle is an agile framework that helps measure the effectiveness of communication activities and encourages a continuous process of assessment and learning.
Put the framework into practice
Once baselines are established, regular measurement is crucial. Where metrics show your strategy is not achieving desired results, you can make evidence-based decisions to modify your approach, adjust messaging, or reallocate resources to more effective channels, messages, or targeting.
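The baseline-then-measure loop above can be sketched simply. All metric names, values, and the tolerance threshold below are assumptions for illustration; the framework prescribes the practice of comparing against baselines, not any specific metrics or cut-offs.

```python
# Illustrative sketch: flag metrics that have fallen materially below an
# established baseline, prompting an evidence-based change of approach.

baseline = {"reach": 10_000, "engagement_rate": 0.05, "referrals_to_source": 300}
current = {"reach": 12_000, "engagement_rate": 0.03, "referrals_to_source": 320}

def flag_underperforming(baseline: dict, current: dict,
                         tolerance: float = 0.10) -> list:
    """Return metrics more than `tolerance` (e.g. 10%) below baseline."""
    flagged = []
    for name, base in baseline.items():
        if current[name] < base * (1 - tolerance):
            flagged.append(name)
    return flagged

# Here engagement_rate has fallen 40% below baseline, so it is flagged.
print(flag_underperforming(baseline, current))
```

A flagged metric is a prompt to revisit messaging, channels, or targeting, not a verdict on its own; qualitative context from the insight products in Step 3 should inform the actual adjustment.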
Evaluate response processes
Evaluation criteria should be determined in advance so that the efficiency and effectiveness of the process can be evaluated.
Further information
If you want to learn more about the RESIST 3 framework, or have a query regarding the information on this page, contact: disinfo-comms@cabinetoffice.gov.uk.