Recommendations for countering false information

Fake news refers to inaccurate reports about real or fictitious events, published in the media and/or on social networks, that mislead audiences about socially significant phenomena and incidents.
Fake content includes:
- Deliberately false information
- Partially accurate or distorted content
- Manipulatively presented messages
- Satirical materials
- Fake websites and accounts
- Technically altered content (deepfakes, edited or photoshopped images)
Sources of false information. Any channel can disseminate false information; most commonly, fake content spreads through:
- User-generated content on social networks
- Messaging platforms and chat groups
- Mass media outlets
Fake content categories by purpose:
- Emotional — designed to evoke specific audience reactions.
- Destabilizing — aimed at disrupting socio-political stability.
- Provocative — intended to incite offline/online actions.
- Profit-driven — motivated by financial gain.
- Defamatory — seeking to discredit targets and undermine trust.
- Unintentional — involuntary creation or distribution of false content.
Dangers of fake news:
- Potential to destabilize socio-political environments.
- Frequent incitement of irrational user behavior (both online and offline). Panic induced by unreliable information often leads to serious consequences.
- Creation of negative perceptions about individuals or organizations.
- Erosion of trust in credible media and scientific institutions.
- Rapid propagation. Technological advancements enable increasingly sophisticated deception methods that challenge emerging regulatory frameworks.
- Potential for material or physical harm.
- Reality distortion and public opinion manipulation, potentially influencing critical societal processes like elections.
Why do people believe fake news?
- Information bubbles and echo chambers — Algorithmically curated feeds surround users with agreeable sources, creating closed environments where dissenting information rarely appears.
- Low media literacy — Users’ inability to verify sources or identify manipulation techniques.
- Cognitive biases — Including confirmation bias (favoring information that aligns with existing beliefs) and intuitive judgment over factual analysis.
- Emotional manipulation — Fake content exploits fear, anger, or hope, overriding rational evaluation.
- Statistical manipulation — Misrepresented surveys and cherry-picked or distorted figures.
- False consensus effect — Fabricated polls, invented data trends, and “majority opinions” create artificial social proof (“If everyone believes it, it must be true”).
- Rapid dissemination — Content spreads faster than fact-checking occurs, while algorithms amplify sensational material, ensuring viral reach.
- Diminished critical thinking during crises — Heightened stress reduces analytical capacity.
- Advanced technologies — Generative AI produces fabricated articles and hyper-realistic deepfakes, while bot farms simulate false popularity.
Common disinformation tactics:
- False attribution — Misrepresenting statements as coming from authoritative sources to enhance credibility.
- Logical fallacies — Distorted context, manipulated visuals, and selectively fragmented facts.
- Pseudoscientific narratives — Conspiracy theories that embed deeply in vulnerable audiences’ worldviews, persisting for years.
- Repetition as validation — Frequent repetition transforms false claims into perceived truths.
- Fabrication — Forged documents, synthetic media, and spoofed government/media websites.
- Technical fakes — AI-generated images, deepfakes.
Fact-checking basics:
Information verification is the process of examining facts, statements, or claims circulating through media, social networks, and other channels. The primary purpose of fact-checking is to determine information reliability while identifying errors, false claims, or manipulations.
Key indicators of potentially false or distorted information include: factual inconsistencies, unverified sources, excessive emotional language, contradictions with authoritative references, and absence of specific details.
Particular attention should be paid to: incorrect event dating, irrelevant historical visuals, source credibility, clickbait headlines, grammatical errors, and manipulative phrasing patterns:
- Anonymous attribution (“renowned expert,” “scientifically proven”)
- Unverifiable sourcing (“an acquaintance,” “a friend of a friend,” “an insider in the special services”)
- Viral sharing demands (“forward to everyone you know”, “urgent – share now”)
- Revelation framing (“the hidden truth revealed”, “what they don’t want you to know”, “we’ve all been lied to”)
Essential verification elements:
- Account authenticity: creation date and verification status
- Content origin: original post versus repost
- Digital evidence: tags, geotags, photo metadata, publication timestamp
Note: These indicators serve as warning signs rather than definitive proof. Final determination requires complete verification.
Stages of fact-checking:
1. Information collection: gathering source materials, texts, and claims requiring verification.
2. Hypothesis formation: developing preliminary assumptions about potential verification outcomes.
3. Hypothesis testing: analyzing selected facts and searching for supporting or refuting evidence through reliable sources and independent research.
4. Final assessment: identifying all contradictions or inconsistencies in the information in order to reach a definitive conclusion about the claim’s veracity.
Fact-checking tools:
- Reverse image search: Services like TinEye and Search4faces let you upload an image and find matching copies online.
- Web archives: Services such as the Wayback Machine preserve copies of web pages at different points in time, letting you view previous versions of a website and spot changes or discrepancies (see the snapshot-lookup sketch after this list).
- Telegram bots: Quick OSINT and Getcontact provide access to verification databases.
- Browser extensions: The “IP Whois & Flags Chrome & Websites Rating” extension identifies server locations; RevEye performs reverse image searches.
- File metadata: EXIF data in images contains creation timestamps, author information, and location details. The ExifTool utility can extract this metadata to help verify the authenticity of images and other files (see the metadata sketch after this list).
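To make the web-archive check concrete, here is a minimal Python sketch that queries the Wayback Machine’s public availability API for the snapshot of a page closest to a given date. It assumes the third-party requests package is installed; the URL and date below are placeholders.

```python
import requests  # third-party: pip install requests

def closest_snapshot(url: str, timestamp: str = "") -> str | None:
    """Ask the Wayback Machine availability API for the archived copy
    of `url` closest to `timestamp` (YYYYMMDD); return its URL or None."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=10,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

# Compare the current page with how it looked around 1 January 2020.
print(closest_snapshot("example.com", "20200101"))
```

Comparing the returned snapshot with the live page quickly shows whether a quoted article was edited after the fact.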
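And a companion sketch for the metadata check: it shells out to the ExifTool command-line utility (which must be installed separately) and prints a few tags that are often decisive in verification; the file name is hypothetical.

```python
import json
import subprocess

def read_metadata(path: str) -> dict:
    """Run ExifTool on a file and return its metadata as a dict."""
    out = subprocess.run(
        ["exiftool", "-json", path],  # -json: machine-readable output
        capture_output=True, check=True, text=True,
    )
    return json.loads(out.stdout)[0]  # ExifTool emits a one-element JSON array

meta = read_metadata("photo.jpg")  # hypothetical file name
for tag in ("CreateDate", "Model", "GPSLatitude", "GPSLongitude"):
    print(tag, "->", meta.get(tag, "not present"))
```

A missing or implausible creation date, or GPS coordinates that contradict the claimed location, is a strong verification lead. Keep in mind that metadata can itself be stripped or forged, so treat it as one signal among several.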
How to verify a video?
1. Reverse image search: Capture video frames and analyze them through reverse image search tools (see the frame-extraction sketch after this list).
2. Context verification: Examine video descriptions for verification clues.
3. Detail inspection: Assess weather conditions, shadows, cuts, audio distortions, background noise, and locations.
4. Comment review: Check user discussions for potential verification leads.
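In support of step 1, here is a minimal sketch of frame extraction with OpenCV (the third-party opencv-python package): it saves one frame every few seconds, and the saved images can then be uploaded to TinEye, RevEye, or another reverse image search service. The clip name is a placeholder.

```python
import cv2  # third-party: pip install opencv-python

def grab_frames(video_path: str, every_n_seconds: float = 5.0) -> list[str]:
    """Save one frame per `every_n_seconds` of video; return the file names."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    saved, index = [], 0
    while True:
        # Seek to the next sampling point and try to decode a frame.
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(index * every_n_seconds * fps))
        ok, frame = cap.read()
        if not ok:  # past the end of the clip
            break
        name = f"frame_{index:03d}.jpg"
        cv2.imwrite(name, frame)
        saved.append(name)
        index += 1
    cap.release()
    return saved

print(grab_frames("suspicious_clip.mp4"))  # hypothetical file name
```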
For advanced deepfake detection methods, refer to the article “How to detect deepfakes on YouTube”.
Refutation
To select a response strategy, assess the situation against these criteria:
Fake content reach and audience:
- Estimated viewership numbers
- Target demographic (general public or specific group)
- Platform/media associations (major outlets or niche sources)
Available evidence:
- Existing proof to counter the fake news
- Availability of credible facts or expert opinions
- Potential for evidence-based refutation versus unsubstantiated denial
The Streisand Effect
This describes cases where an attempt to debunk false information inadvertently amplifies its spread.
- Risk of amplifying the fake through refutation attempts
- Decision whether public response is warranted
Refutation strategies:
1. Preemptive action — A preventive measure taken when a disinformation threat is identified: verify the warning signals and proactively publish confirmed facts.
2. Post-publication intervention — Disseminating authoritative corrections after fake news appears, notifying publishing editors about the false content, engaging in comment sections, and issuing rebuttals through credible channels.
3. Reputation rehabilitation — Extends beyond fact-correction to actively repair reputational harm caused by the disinformation.
For fake information with limited local reach, avoid amplifying it through widespread rebuttals to prevent the Streisand effect. Focus corrections within the original distribution channels or similar platforms.
When fake news achieves large-scale dissemination within a regional information space, it becomes critical to use every relevant available channel while maintaining rigorous standards in preparing the refutation content. The clearer and more appealing your message, the greater its potential virality. Standard press releases require specific adaptation for social media platforms.
Given that fake news frequently propagates through messaging apps, the optimal strategy involves distributing rebuttals through these same channels. Should access to closed messenger groups prove impossible, focus shifts to alternative platforms that effectively reach the target audience.
The “no response” strategy carries significant risks, as fake news often resurfaces.
Refutation formats:
Refutation post. Typically text-based and ideal for rapid response. Clearly state what happened and present the factual corrections. Avoid clickbait headlines — lead with the key facts. When debunking visual fakes (screenshots, documents), label them as “fake” without excessive markings: overuse of large “Fake” banners may reduce effectiveness through user desensitization, so a subtle watermark or small indicator suffices.
Fake-truth comparison. Visual side-by-side presentation contrasting false claims with verified facts. This format captures attention while naturally transitioning into detailed explanations.
Card series. Useful when a single response proves insufficient, requiring expanded factual context.
Video rebuttals. More engaging than traditional formats but time-intensive. Best for non-urgent cases requiring thorough treatment.
Performance indicators
How to measure the effectiveness of your refutation?
- Media metrics. Aim for refutation coverage that matches or exceeds the original reach of the fake news.
- Audience engagement. Successful refutations reduce fake news circulation. Lack of response indicates ineffective messaging. If receiving negative reactions, revise content packaging and strengthen arguments.
Response timelines
- Non-critical fake news: within 24 hours
- Panic-inducing fake news: immediate response (30-60 minutes)
It is essential to verify content before publishing it and to assess information critically rather than emotionally. Cultivate fact-checking skills and evaluate the effectiveness of your work.