Narrative Intelligence & Media Monitoring Glossary

A

ABCDE Framework — Camille François's typology for analysing influence operations: manipulative Actors, deceptive Behaviour, and harmful Content. Alexandre Alaphilippe added Distribution; James Pamment later proposed Effect. The starting vocabulary for any serious narrative-attack investigation.

Adaptive Propaganda — Propaganda that mutates in response to detection, debunking, or audience reaction — switching frames, vocabulary, or platforms while preserving the underlying objective. Defeats static keyword-based monitoring almost by design.

Adler Model — Repsense's proprietary impact-scoring model. Calculates how strongly a piece of content shifts brand reputation by combining content quality with sentiment direction. Named after Alfred Adler — sits alongside Bernays and Lippmann in the lineage.

Adversarial / Adversary Narratives — Storylines deployed by actors actively working against the target's interests — competitors, hostile states, ideological opponents. Distinguished from organic criticism by their structural signals and coordination patterns, not by topic.

AI Comprehension Summaries — Machine-generated synthesis that goes beyond keyword tagging to extract claims, sentiment, intent, and context from a piece of content. The shift from 'this article mentions X' to 'this article argues Y about X for reason Z'.

Alert — A real-time or scheduled notification triggered when new media coverage matches a tracked keyword, brand, or topic. The atomic unit of always-on monitoring — and the first thing crisis teams configure.

AMEC — The International Association for the Measurement and Evaluation of Communication. AMEC sets the global standards for PR measurement, most famously the Barcelona Principles, and runs the No-AVE pledge against vanity metrics.

Amplification — The process by which a piece of content spreads beyond its original audience — through shares, reposts, algorithmic boosts, or coordinated networks. Distinguishing organic amplification from inauthentic amplification is one of the hardest problems in modern media intelligence.

Amplification Chain — The sequence of accounts and platforms through which content is propagated — origin, secondary boosters, tertiary amplifiers. Mapping the chain reveals whether spread was organic or staged.

Amplification Structure — The architecture by which a piece of content is spread — who carries it, in what order, on which platforms, with what timing. Distinct from amplification volume, which only counts how much was spread, not how.

APT (Advanced Persistent Threat) — A well-resourced, often state-affiliated adversary that maintains long-term, low-visibility access to a target environment to steal information or shape outcomes. Borrowed from cybersecurity, the APT concept increasingly applies to information operations: persistent narrative campaigns rather than persistent network intrusions.

Astroturfing — Manufacturing fake grassroots support to make corporate, political, or ideological messaging look organic. The opposite of authentic public opinion — and a hallmark tactic of coordinated inauthentic behaviour.

Authenticity Intelligence — Analysis aimed at determining whether voices, accounts, and engagement are real or manufactured. The defensive complement to coordinated-inauthentic-behaviour detection — and the basis of any serious influencer vetting.

AVE (Advertising Value Equivalent) — The estimated cost of buying the equivalent space or airtime as advertising, used as a crude proxy for the value of earned coverage. Widely criticised; AMEC's Barcelona Principles formally reject AVE as a measure of PR value.

B

Barcelona Principles — A set of seven measurement standards established by AMEC, first agreed in Barcelona in 2010 and updated since. They insist that PR be evaluated on outcomes and impact, not on outputs like clip counts or AVE.

Beat Reporter — A journalist who covers a specific subject area — tech, healthcare, politics — over a long period. Building relationships with the right beat reporters is the bedrock of traditional media relations.

Behavioural Pattern Recognition — A detection approach that flags coordinated activity by its structural signatures — synchronised timing, content homogeneity, cross-platform staging — rather than by raw volume. Catches influence operations during their seeding phase, before they trip conventional volume-based alerts.

Boolean Query — A search expression using AND, OR, NOT, parentheses, and wildcards to define exactly what mentions a monitoring tool should capture. Bad booleans produce noisy data no analysis can rescue — clean booleans are the silent precondition for everything else.
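An illustrative pattern, with a placeholder brand and exclusion terms: ("Acme Motors" OR AcmeEV) AND (recall OR "battery fire" OR lawsuit) NOT (job OR hiring OR careers).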

Bot — A social media account run entirely by software, designed to post or engage automatically. In disinformation campaigns, bots are deployed to inflate trending topics, manufacture consensus, and create the illusion of public discussion.

Botnet — A coordinated network of bots run by a single operator, often numbering in the tens of thousands. Commercial botnets are sold as a service for amplification, attack, and ad fraud.

Brand Mention — Any reference to a brand or organisation in a piece of media content — print, broadcast, online, or social. The most basic unit of monitoring; raw mention counts say little without quality metrics layered on top.

Broadcast Monitoring — The tracking and analysis of brand or topic mentions on television and radio. Far harder than print or online monitoring because audio must be transcribed and matched against thousands of hours of programming.

Burst Detection — Algorithmic identification of sharp, anomalous concentrations of content within a short window. Captured as a single score representing the intensity of the spike relative to the baseline — and often the first signal that a viral moment, crisis, or coordinated push is underway.
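
A minimal burst-score sketch in Python (the 24-hour baseline window is an illustrative assumption, not any vendor's published method): the latest hour's mention count is compared against a rolling baseline.

    from statistics import mean, stdev

    def burst_score(hourly_counts, baseline_hours=24):
        """How many standard deviations the latest hour sits above the
        mean of the preceding baseline window."""
        baseline = hourly_counts[-(baseline_hours + 1):-1]
        sigma = stdev(baseline) or 1.0   # guard against a perfectly flat baseline
        return (hourly_counts[-1] - mean(baseline)) / sigma

    # e.g. a series hovering around 10-15 mentions per hour that jumps to 300
    # in the latest hour returns a score far above any sensible alert threshold.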

C

Cascade — The shape and speed at which a piece of content spreads through a network — measured by metrics like time-to-50%-reach, velocity, and source diversity. A cascade's structure reveals whether spread was organic, mainstream-driven, or coordinated.
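
A minimal Python sketch of a simplified cascade metric: the time for a cascade to accumulate half of its eventual shares, a crude proxy for time-to-50%-reach (the timestamp input is an illustrative assumption).

    def time_to_half_volume(share_timestamps):
        """Seconds from the first share to the share that crosses 50% of the
        cascade's eventual volume."""
        ts = sorted(share_timestamps)
        return ts[len(ts) // 2] - ts[0]

    # A cascade that reaches half its eventual volume in minutes rather than days
    # is either genuinely viral or being deliberately pushed.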

Cheapfake (Shallowfake) — Manipulated media made with simple editing tricks rather than AI — slowing down a clip, splicing footage, or misleading captions. Cheaper, faster, and often more effective than deepfakes.

CIB (Coordinated Inauthentic Behavior) — A term coined by Meta to describe groups of accounts working together to mislead audiences about who they are or what they're doing. Now used industry-wide as the operational signature of an influence operation.

Clip Count — The total number of stories, posts, or broadcast segments mentioning a brand in a period. The oldest output metric in PR — and on its own, a famously misleading one.

Clipping — A single captured mention from print, online, broadcast, or social. The term comes from the analogue practice of literally cutting articles out of newspapers — it has outlived its origin by decades.

Coded Language — Words and phrasing that signal a meaning beyond their literal sense to an in-group while remaining plausible to outsiders. The mechanism by which discriminatory or extremist content survives content moderation and slips into mainstream discourse.

Competitive Narrative — A storyline deployed by a market competitor to position their own offering — sometimes legitimately, sometimes through deniable amplification. The space where reputation management and competitive intelligence overlap.

Complete Picture of the Operation — A reconstruction that links every observable element of a campaign — origin, amplifiers, content variants, timing, intent — into a single coherent map. The end-state of operation identification, and the deliverable that turns scattered alerts into a defensible decision.

Content Seeding — The deliberate placement of narrative frames and talking points across a chosen set of channels before organic discussion of a topic begins. The opening move of most coordinated influence operations — visible in retrospect as a temporal gap between 'early voices' and the broader conversation.

Conventional Alert — A notification generated by legacy monitoring tools when volume, sentiment, or keyword thresholds are crossed. Useful for known knowns; blind to anything that doesn't fit the configured filter.

Conventional Monitoring — The established practice of tracking mentions, sentiment, and reach against pre-defined keywords. Adequate for stable information environments; increasingly inadequate against coordinated, video-native, and adaptive campaigns.

Coordinated Attack — A narrative attack executed by multiple actors operating in concert — same talking points, synchronised timing, cross-platform staging. The defining feature is structure, not volume; coordinated attacks can be small and still devastating.

Coordination Analysis — The systematic examination of accounts, timing, and content patterns to determine whether activity is organic or organised. The analytical discipline that turns raw coordination patterns into actionable attribution.

Coordination Evidence — The specific observable signals — synchronised timing, shared assets, identical phrasing — that demonstrate accounts are operating in concert. The raw material of operation identification.

Coordination Patterns — Recurring structural signals that distinguish coordinated activity from organic discussion — synchronised posting, shared phrasing, clustered timing, identical asset use. The fingerprint of an operation.

Counter-Messaging — Proactive communication designed to contest a hostile or false narrative directly. More effective when deployed early — once a narrative is entrenched, factual rebuttal struggles to dislodge it.

Crisis Communications — The discipline of managing stakeholder communication during a reputation-threatening event. Modern crisis comms is increasingly proactive — monitoring narrative pressure before a crisis breaks, not just reacting once it does.

Crisis Detection — Identifying that a reputational, narrative, or operational crisis is forming — ideally before it becomes one. The earlier the detection, the wider the response window.

Cross-Border Narrative — A storyline that travels across national or linguistic boundaries, often mutating to fit each new context while preserving its core frame. Cross-border narratives are the natural medium of modern influence operations.

Cross-Platform Amplification Chains — The sequence by which a single narrative is propagated across multiple platforms, with each platform handling a different stage of seeding, spreading, or normalisation. Tracking the chain is more diagnostic than tracking any single platform.

Cross-Platform Content Seeding — Planting the same narrative frames simultaneously across multiple platforms before any organic discussion exists. The most efficient form of pre-positioning, and the hardest for any single-platform monitoring tool to catch.

Cross-Platform Migration — The staged movement of a narrative between platforms — typically Telegram to TikTok to Facebook to mainstream news — with content reformatted for each environment. Synchronised migration is a strong indicator of coordinated activity rather than organic spread.

D

Dashboard — A digital interface that visualises monitoring metrics — volume, reach, sentiment, share of voice — in a single view. Good dashboards collapse a thousand mentions into one decision; bad ones do the opposite.

Deep Video Analysis — The application of computer-vision and audio-NLP techniques to extract claims, sentiment, named entities, and visual symbols from video content. Increasingly necessary as TikTok, YouTube and Reels become dominant channels for narrative formation.

Deep Web — The portion of the internet not indexed by mainstream search engines — paywalled databases, private forums, members-only communities. Distinct from the dark web, which requires special software to access. Where many narrative campaigns are seeded before they reach the surface.

Deepfake — Synthetic audio, image, or video generated by deep-learning models to convincingly impersonate a real person. From cloned voices to fabricated executive appearances, deepfakes have reshaped both narrative attacks and corporate fraud.

DISARM Framework — A framework for documenting the tactics, techniques, and procedures (TTPs) used in disinformation campaigns, modelled on the MITRE ATT&CK framework from cybersecurity. Lets analysts describe an influence operation in a shared, structured vocabulary.

Discourse Activation Point — An external event — a court ruling, a celebrity scandal, a terror attack — that triggers a sudden resurgence of a dormant narrative. The narrative was already present in the ecosystem; the event simply gave it a new occasion to surface.

Disinformation — False or misleading information created and spread deliberately to cause harm. The 'deliberate' part is what separates it from misinformation — disinformation has intent.

E

Earliest Amplification Stage — The window between first publication and first measurable spread — when intervention is cheapest and most effective. Predictive intelligence is built around detecting and acting in this window, before conventional volume-based monitoring would notice anything is happening.

Earned Media — Coverage a brand receives because journalists, creators, or audiences chose to talk about it — as opposed to Paid (ads) or Owned (the brand's own channels). The original product of public relations.

Echo Chamber — An information environment in which a person mostly encounters views that reinforce their existing beliefs. Algorithmically curated feeds make echo chambers easier to fall into and harder to notice.

Engagement Patterns — The shape and rhythm of how audiences interact with content — likes, shares, comments, dwell time — analysed for anomalies that suggest manipulation or coordination. Inauthentic engagement leaves distinctive patterns that authentic engagement does not.

Engagement Rate — The percentage of an audience that actively interacts with a piece of content — likes, comments, shares, replies — rather than passively viewing it. A higher-fidelity signal of resonance than impressions alone.
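A worked example with illustrative figures: 1,200 interactions on a post shown to an audience of 40,000 is an engagement rate of 1,200 ÷ 40,000 = 3%.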

Entity Recognition (NER) — The natural-language processing technique that identifies and classifies named entities — people, organisations, places, products — inside unstructured text. The technology that turns a stream of articles into a structured database of who and what.
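
A minimal extraction sketch using the open-source spaCy library (assumes its small English model has been installed; the sentence is an illustrative placeholder):

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Acme Corp's CEO Jane Doe met EU regulators in Brussels on Monday.")
    for ent in doc.ents:
        print(ent.text, ent.label_)   # e.g. "Acme Corp" ORG, "Jane Doe" PERSON, "Brussels" GPE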

External Propaganda — Propaganda directed at audiences outside the originating state's borders — RT, Sputnik, CGTN are canonical examples. Designed to shape foreign perceptions and decisions.

F

Fact-checking — The journalistic and computational discipline of verifying claims against evidence and rating their accuracy. Reactive by nature, fact-checking rarely catches up with viral falsehoods — but it sets the public record straight.

Filter Bubble — A term coined by Eli Pariser for the personalised information environment that algorithms create around each user. Closely related to echo chambers, but specifically about algorithmic curation rather than social self-selection.

FIMI (Foreign Information Manipulation and Interference) — The European Union's official term for hostile influence operations directed at EU citizens and institutions by foreign actors. Distinct from disinformation by its structural emphasis: FIMI is defined by behaviour and intent, not just content veracity.

Framing Analysis — A method for analysing how media positions a topic — what is made salient, what is omitted, what causal story is told. Robert Entman's classic four frame elements are problem definition, causal interpretation, moral evaluation, and treatment recommendation.

Full Information Environment — The complete set of channels, sources, formats, and languages through which a topic is being discussed — not just the slice covered by any one monitoring configuration. The honest baseline against which any partial view should be measured.

Full-Spectrum Narrative Monitoring — Coverage across text, video, audio, and image content; across mainstream, social, and fringe platforms; across multiple languages. Most legacy tools can deliver only one or two of these dimensions at a time; the combination is what makes coverage full-spectrum.

H

Hate Speech — Public expression that attacks or demeans a group based on protected characteristics — ethnicity, religion, gender, sexuality, disability. Definitions vary by jurisdiction and platform, which makes consistent detection one of the harder problems in content moderation.

Hostile Narratives — Storylines actively designed to damage a target's reputation, legitimacy, or operational freedom. Distinguished from organic criticism by intent and structure, not by topic.

HUMINT (Human Intelligence) — Intelligence collected through human sources — informants, defectors, interviews, undercover work. The oldest form of intelligence, and the one most resistant to automation. Narrative analysts increasingly draw on HUMINT-style sourcing through journalists, researchers, and community informants.

I

IMINT (Imagery Intelligence) — Intelligence derived from visual imagery — satellite photos, drone footage, surveillance video. In the narrative space, the closest equivalent is video-native intelligence: extracting meaning from the explosion of TikTok, Reels, and short-form video that text-based monitoring cannot read.

Impressions — The estimated number of times a piece of content was potentially seen, calculated as reach × frequency. A potential, not actual, count — impressions measure opportunities to see, not eyeballs that actually did.
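A worked example with illustrative figures: a reach of 40,000 unique readers at an average frequency of 1.5 gives 40,000 × 1.5 = 60,000 impressions.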

Influence Operation — The civilian-and-commercial framing of a coordinated effort to shape opinion through manipulated information. Same structural definition as an information operation, different vocabulary depending on whether the analyst sits in defence, journalism, or PR.

Influence Operations — Coordinated efforts to shape public opinion or behaviour through manipulated information, often by state or quasi-state actors. The umbrella under which disinformation, astroturfing, bots, and coordinated inauthentic behaviour all sit.

Influencer Vetting — Systematic background analysis of an influencer or content creator before partnership — examining audience authenticity, engagement quality, past controversies, and ties to coordinated networks. Reduces the risk that a brand collaboration accidentally amplifies a hostile or fraudulent voice.

Influential Share — A repost or amplification by an account whose own audience materially extends the reach of the original content. The currency of how narratives actually scale — most shares don't matter; a small number do disproportionate work.

Information Disorder — Claire Wardle and Hossein Derakhshan's umbrella term for the polluted information environment, comprising misinformation, disinformation, and malinformation. A more precise framing than the loaded phrase 'fake news'.

Information Integrity — The condition in which information available to a public is trustworthy, accurate, and free from manipulation. UNESCO and the OECD now treat information integrity as a measurable public good — and the absence of it as a national risk.

Information Operation — A coordinated effort to influence the perceptions or decisions of a target audience through the manipulation of the information environment. Often used interchangeably with influence operation; some practitioners reserve 'information operation' for the military and state framing.

Integration of Media Narrative Data — Combining media monitoring outputs with other intelligence streams — sociological data, audience research, internal business signals — to produce decisions rather than dashboards. The step that turns monitoring into management.

Internal Propaganda — Propaganda directed at the originating state's own population — designed to shape domestic loyalty, perception, and behaviour. Often more subtle and more effective than external propaganda because it operates inside trusted information channels.

IOA (Indicator of Attack) — A behavioural signal that an attack is underway or being prepared, focused on intent and method rather than specific artefacts. Distinct from an Indicator of Compromise (IOC), which is forensic evidence after the fact. In narrative intelligence, an IOA might be a sudden cross-platform staging pattern.

IoT (Internet of Things) — The network of physical devices — cameras, sensors, vehicles, appliances — connected to the internet and capable of generating data. In threat-intelligence contexts, IoT devices increasingly appear as attack surfaces, amplification infrastructure, and behavioural data sources.

K

Key Message — A specific statement a brand or campaign wants to communicate and have echoed in coverage. Message penetration — the share of coverage that actually carries the key message — is one of the most useful PR measurement metrics.

KPI (Key Performance Indicator) — A measurable value tied to a specific objective. In media monitoring, common KPIs include share of voice, sentiment delta, message penetration, and coverage volume — but the right KPI depends entirely on the goal.

L

Legacy Monitoring — The older, volume-based generation of media monitoring tools — keyword alerts, mention counts, sentiment scores — designed for a slower information environment. Still useful, but blind to coordinated activity, video content, and narrative architecture.

Lippmann Methodology — Repsense's narrative-centric reporting framework, named after Walter Lippmann (1889–1974), a pioneer of public opinion research. Built on four pillars, including temporal intelligence, framing awareness, and actionability.

M

Malinformation — Information that is genuine but shared deliberately to cause harm — for example, leaking real but private correspondence to damage someone's reputation. The third leg of the misinformation / disinformation / malinformation triad.

Master Narrative — A long-running, deeply embedded story that shapes how new events are interpreted — 'Russia's return to greatness', 'the deep state', 'the climate hoax'. Historian Matthew Levinger argues successful disinformation campaigns plug into pre-existing master narratives rather than inventing new ones.

Media Intelligence — The practice of collecting, analysing, and interpreting media data to extract insights that support strategic decisions. The umbrella above media monitoring — monitoring captures the data; intelligence makes it mean something.

Media Literacy (MIL) — Shorthand for Media and Information Literacy: the skills needed to find, evaluate, and use information critically. UNESCO treats MIL as the strategic counter-measure to information disorder at the population level.

Media Monitoring — The systematic process of reading, watching, and listening to media sources — print, online, broadcast, social — to track coverage of an organisation, brand, or topic. Where the work of measurement begins, but never where it ends.

Metadata Tagging vs Content-Level Comprehension — The distinction between systems that label content by external attributes (source, date, language, sentiment polarity) and systems that actually parse what is being said. The gap between knowing a video exists and knowing what claim it makes.

Misinformation — False information shared without intent to deceive. The classic example: someone earnestly forwarding a debunked health claim because they believe it's true.

MITRE ATT&CK — A globally accessible knowledge base of adversary tactics and techniques observed in real-world cyberattacks. Its structure inspired the DISARM framework for information operations — the same logic of TTP cataloguing applied to narrative warfare.

N

Narrative — Any assertion that shapes perception about a person, place, organisation, or event in the information ecosystem. The unit of analysis for narrative intelligence — bigger than a mention, smaller than a worldview.

Narrative Analysis — The discipline of understanding the storylines, information networks, communities, and influential actors that shape public perception around a topic. Goes beyond mention-counting to ask whose story is being told, by whom, and to what effect.

Narrative Attack — A coordinated effort to weaponise misinformation or disinformation tactics so that a harmful narrative scales and damages a target's reputation. The operational layer above individual false posts — where individual incidents become a campaign.

Narrative Classification — Sorting incoming content by which storyline it advances, opposes, or neutralises. The labelling step that makes narrative monitoring possible at scale — and the step at which most automated systems still struggle with nuance.

Narrative Cluster — A group of related mentions, posts, or articles that converge on the same storyline, sharing language, framing, and timing. The basic unit of narrative-intelligence analysis — bigger than a mention, smaller than a movement.

Narrative Coverage — A quality metric measuring the percentage of total mentions that actually matched a defined narrative. The cleanest noise-versus-signal ratio in modern reporting: 1,000 mentions with 6% coverage is mostly noise; 300 mentions with 40% coverage is real signal.
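In absolute terms that is 1,000 × 6% = 60 on-narrative mentions versus 300 × 40% = 120: the smaller dataset carries twice the usable signal.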

Narrative Detection — Identifying coherent storylines as they emerge from a stream of mentions — before they become large enough to register as trends. The forward edge of narrative intelligence, and the discipline most dependent on behavioural pattern recognition.

Narrative Entrenchment — The point at which a narrative becomes resistant to factual rebuttal — readers absorb new information through the existing frame rather than allowing it to update the frame. The reason early counter-messaging matters so much.

Narrative Inoculation — Pre-emptively exposing an audience to a weakened version of a hostile narrative — and the rebuttal — so that the full version, when it arrives, fails to land. Borrowed from medical immunology and grounded in psychological research on resistance to persuasion.

Narrative Landscape — The overall map of which storylines are active around a topic, brand, or region — their relative weight, sentiment, and direction. Where narrative monitoring becomes strategic awareness.

Narrative Monitoring — Tracking narratives — not mentions — as they form, mutate, and spread across the information environment. The successor to legacy monitoring: organised around storylines, not keyword hits.

Narrative Prediction — A discipline that anticipates how narratives will spread, mutate and combine — rather than only describing them after the fact. Combines behavioural signatures, cross-platform tracking and temporal analysis to flag operations during their seeding and early amplification phases, before volume-based monitoring would notice.

Narrative Tracking — Following a single narrative across platforms, languages, and mutations over time. Distinct from narrative monitoring (which scans broadly) — tracking is targeted, longitudinal, and focused on a known storyline.

Narrative Trajectory — The path a narrative is on — its current velocity, expected reach, likely direction, probable mutations. The forward-looking output that distinguishes intelligence from monitoring.

Newsjacking — The PR tactic of inserting a brand into a breaking news story to capture attention while it's hot. Coined by David Meerman Scott — powerful when done well, embarrassing when done badly.

NLP (Natural Language Processing) — The branch of computer science concerned with letting machines read, parse, and generate human language. The engine room of modern media monitoring: sentiment, entity extraction, topic modelling, and narrative clustering all depend on it.

O

Online Reputation Tracking — The ongoing measurement of how a brand, person, or institution is talked about across digital channels. The narrower, surveillance-style cousin of reputation management — focused on detection rather than action.

Operation Identification — Recognising that a set of seemingly independent posts, articles, and accounts is actually a single coordinated campaign. Built from coordination evidence, structural signatures, and temporal sequencing — not from content alone.

Operational Context — The strategic and situational environment in which a narrative is being deployed — geopolitical timing, target audience, adversary capability, competing storylines. Without operational context, raw intelligence is just data.

Operational Relevance — The degree to which a piece of intelligence actually changes what defenders or communicators do. Output volume is easy; operational relevance is hard — and is the only metric that matters at the decision-maker level.

Organised Amplification — Spread driven by deliberate coordination rather than authentic audience interest. Identified by structural signals, not content; small organised amplification can outperform large organic spread for short windows.

OSINT (Open-Source Intelligence) — Intelligence collected from publicly available sources — news, social media, official records, leaked databases, satellite imagery. The dominant collection discipline of modern media intelligence: most of what matters happens in the open, if you know how to look.

Outputs, Outtakes, Outcomes, Impact — The four levels of PR measurement codified by AMEC: outputs (what PR produced), outtakes (how audiences received it), outcomes (changes in awareness, attitude, behaviour), impact (effect on the business). Mature programmes measure all four — most still measure only the first.

Owned Media — Channels a brand controls directly: its website, blog, newsletter, app, branded social accounts. The 'O' in the PESO model.

P

Paid Media — Coverage a brand pays for: advertising, sponsored content, paid influencer placements. The 'P' in the PESO model — and increasingly indistinguishable from earned, which is exactly the problem.

PESO Model — Gini Dietrich's framework dividing all media into four interlocking categories: Paid, Earned, Shared, and Owned. The default mental map for modern integrated communications.

Pre-positioning — Planting narrative frames before any organic discussion of an issue exists — so that when discussion does begin, it inherits the planted frames as the natural starting point. A signature tactic of coordinated influence operations.

Predicted Trajectory — A model-generated forecast of where a narrative is heading: how fast, how far, and toward which audiences. The actionable output of predictive intelligence — turns 'this is happening' into 'here's where it's going'.

Predictive Alerts — Notifications generated when behavioural and structural signals suggest a narrative is about to escalate, not after it already has. The output of predictive intelligence — designed to compress the response window from days to hours.

Press Release — A formal written announcement distributed to journalists to seed coverage. The genre is often pronounced dead; it has been dying productively for several decades.

Prominence Score — A ranking of how centrally a brand or topic is featured within a piece of content — based on headline placement, mentions per paragraph, visual emphasis, and proximity to other key terms. A passing mention and a feature story shouldn't count the same.
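
A minimal Python sketch of one way such a score could be weighted (the signals and weights are illustrative assumptions, not any vendor's published formula):

    def prominence_score(in_headline, in_first_paragraph, mention_count, total_paragraphs):
        score = 40.0 if in_headline else 0.0
        score += 25.0 if in_first_paragraph else 0.0
        score += min(35.0, 35.0 * mention_count / max(total_paragraphs, 1))
        return round(score, 1)   # 0 = passing mention, 100 = the story is about you

    # e.g. prominence_score(True, True, 6, 10) -> 86.0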

Propaganda Apparatus — The full institutional and infrastructural machinery that produces, distributes, and amplifies propaganda — outlets, accounts, funding flows, talking-point pipelines. The target of structural analysis.

Propaganda Operations — Coordinated production and distribution of persuasive content designed to advance a political, ideological, or commercial agenda. The umbrella term that includes both information operations and influence operations.

Propagation Maps — Visualisations showing how a piece of content moved through the information environment — which sources picked it up first, in what order, with what reach. The forensic anatomy of a narrative cascade.

R

Rapid-Response Intelligence — Intelligence produced fast enough to support decisions inside an active crisis or campaign window. Measured in hours, not days — and often the difference between containment and escalation.

Reach — The estimated number of unique people who could have been exposed to a piece of coverage. A potential audience, not an actual one — and far less informative than reach × engagement or reach × impact.

Real-Time Narrative Tracking — Following storylines as they emerge and mutate, with minimal delay between event and analysis. The defining capability of modern intelligence platforms.

Reputation Management — The practice of monitoring and shaping how an organisation, brand, or person is perceived publicly. Combines media monitoring, crisis comms, content strategy, and SEO into a single discipline.

Response Window — The time between detecting a narrative threat and the moment it becomes too entrenched to counter effectively. Predictive systems exist to widen this window; reactive systems are defined by how quickly it has already closed.

S

Sentiment Analysis — The computational classification of text as positive, negative, or neutral — typically toward a brand, person, or topic. Modern systems return numeric scores; mature analysis still requires human interpretation, because sarcasm, irony, and cultural context defeat machines routinely.
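
A minimal classification sketch using the open-source Hugging Face transformers pipeline (it downloads a default English model on first run; production monitoring typically uses models tuned to the brand's domain and languages, and the example texts are placeholders):

    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    results = classifier([
        "Acme's recall response has been impressively transparent.",
        "Another week, another Acme battery fire.",
    ])
    for r in results:
        print(r["label"], round(r["score"], 3))   # e.g. POSITIVE 0.999 / NEGATIVE 0.998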

Share of Voice (SoV) — A brand's percentage of total industry conversation in a defined period. The classic competitive metric — useful as a baseline, dangerous as a north star, because it counts every mention as equal.
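A worked example with illustrative figures: 450 brand mentions out of 3,000 total category mentions in the period is a share of voice of 450 ÷ 3,000 = 15%.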

SIGINT (Signals Intelligence) — Intelligence gathered from intercepted electronic signals — communications, radar, telemetry. Traditionally a military and government discipline; rarely available to commercial narrative analysts but central to state-level attribution work.

Social Listening — Monitoring social media for mentions, themes, and sentiment around a brand, topic, or competitor. Distinguished from social media monitoring by its strategic intent: listening informs decisions, monitoring just counts.

Sockpuppet — A fake online identity used by someone to deceive others — typically to praise themselves, attack opponents, or fake independent support. Different from a pseudonym in that the operator is hiding their actual involvement.

Source Clusters — Groups of sources that consistently move together — picking up the same stories, in the same order, with similar framing. Source clusters reveal the underlying topology of an information ecosystem and its coordination structures.

Source Entropy — A measure of how diversely a piece of content is being shared across distinct sources. Low entropy means amplification by a small set of accounts (a coordination signal); high entropy means broad, distributed pickup (an organic-spread signal).
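
A minimal Python sketch computing Shannon entropy over the distribution of shares across sources (the source identifiers are illustrative placeholders):

    from collections import Counter
    from math import log2

    def source_entropy(source_ids):
        counts = Counter(source_ids)
        total = sum(counts.values())
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # e.g. source_entropy(["a", "a", "a", "a", "b"]) ~ 0.72 bits (concentrated: a coordination signal)
    #      source_entropy(["a", "b", "c", "d", "e"]) ~ 2.32 bits (distributed: an organic-spread signal)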

Source-Level Attribution — Identifying which specific sources are driving a narrative — who originated it, who first amplified it, who turned it from a fringe item into a mainstream one. The granular layer below platform-level attribution.

Spike Detection — The algorithmic identification of sharp, anomalous rises in mention volume within a short window. Triggers alerts, populates trend charts, and is often the first signal that a crisis or viral moment is unfolding.

Static Narratives — Storylines that don't mutate as they spread — typically because they aren't being actively managed. Easier to monitor with conventional tools, and increasingly rare in contested information environments.

Strategic Narrative — A narrative deliberately constructed to advance political, geopolitical, or commercial objectives. Defined in international relations as 'a means by which political actors construct shared meaning to shape behaviour' — used both by states and by brands.

Stream — A coherent thematic line within a larger campaign — for example, a 'procedural legitimacy' stream and an 'external attribution' stream running in parallel during a single influence operation. Streams are how coordinated campaigns hedge across audiences with different sympathies.

Structural Analysis — Examining the architecture of how content moves and connects — who posts when, in what sequence, with what overlap — rather than what the content says. Reveals coordination that content-level analysis misses entirely.

Structural Behaviour — Patterns in how content is produced, posted, and circulated — independent of what the content says. Often more diagnostic than the content itself, because it is harder to fake at scale.

Structural Signatures — The distinctive structural patterns — timing, account behaviour, asset reuse, cross-platform staging — that allow analysts to identify a coordinated operation regardless of content. The primary detection layer of behavioural pattern recognition.

Synchronised Posting Patterns — Multiple accounts publishing the same or near-identical content within unusually narrow time windows. One of the cleanest signals of coordination — and one of the easiest to obscure once operators know analysts look for it.
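
A minimal detection sketch in Python: flag identical texts posted by several distinct accounts within a narrow window (the field names, 60-second window, and three-account threshold are illustrative assumptions):

    from collections import defaultdict

    def synchronised_groups(posts, window_seconds=60, min_accounts=3):
        """posts: dicts with 'account', 'text', and 'timestamp' (epoch seconds)."""
        by_text = defaultdict(list)
        for p in posts:
            by_text[p["text"]].append(p)
        flagged = []
        for text, group in by_text.items():
            times = sorted(p["timestamp"] for p in group)
            accounts = {p["account"] for p in group}
            if len(accounts) >= min_accounts and times[-1] - times[0] <= window_seconds:
                flagged.append((text, sorted(accounts)))
        return flagged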

T

Talking Points — Short, repeatable units of messaging crafted to be deployed verbatim by spokespeople, supporters, or paid amplifiers. The degree to which the same talking points appear across nominally independent sources is one of the cleanest tests for coordination.

Temporal Sequencing — Analysing the timing and order of when content appears across sources to distinguish coordinated from organic activity. A multi-day gap between identical content appearing in one language ecosystem and then surfacing in another is the kind of pattern temporal sequencing reveals.

Threat Intelligence — Evidence-based knowledge about existing or emerging threats — actors, intentions, capabilities, indicators — produced to inform defensive decisions. Narrative threat intelligence applies the same discipline to information attacks rather than network attacks.

Tier 1 Media — The most authoritative outlets in a given market — major national newspapers, flagship broadcasters, top trade press. A Tier 1 mention carries more reputational weight than the same content from a smaller outlet.

Tonality — Closely related to sentiment, tonality describes the overall posture of media coverage toward a subject — positive, negative, neutral, with degrees in between. Often used interchangeably with sentiment, though some practitioners reserve tonality for the human-judged version.

Topic Modeling — A family of unsupervised machine-learning techniques that discover the abstract topics inside a corpus of documents. The mathematical engine behind automatic narrative-cluster detection.
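
A minimal clustering sketch using scikit-learn's LDA implementation (the toy corpus, brand name, and topic count are illustrative assumptions; real runs need thousands of mentions):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "Regulators open an investigation into Acme battery fires",
        "Acme says the battery recall will be completed by June",
        "Analysts praise Acme's new solid-state battery roadmap",
    ]
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]
        print(f"Topic {i}: {', '.join(top_terms)}")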

Toxic Narrative Detection — Spotting storylines designed to incite hatred, fear, or violence — typically against a person, group, or institution — early enough to counter them. A specialised branch of narrative detection focused on harm rather than reputation.

Trend Analysis — The study of how a metric or theme moves over time — direction, velocity, seasonality, anomalies. Without trends, a snapshot of monitoring data is just a number; with trends, it becomes a story.

Troll — A user who deliberately posts inflammatory, off-topic, or insulting content to provoke reactions. Distinct from a bot in being a real person — and from a sockpuppet in not necessarily hiding their identity.

Troll Farm — An organised group of paid trolls operating in coordination to influence public discourse. Russia's Internet Research Agency is the canonical example.

Trope — A recurring stock motif, character, or storyline — antisemitic tropes, racial tropes, 'the corrupt elite' trope. Tropes are the building blocks of harmful narratives because they let new content slot into pre-existing emotional grooves with very little effort.

TTPs (Tactics, Techniques, Procedures) — The 'how' of an adversary's operations: their preferred methods, tools, and behavioural patterns. TTPs are harder to change than infrastructure, which makes them the most reliable basis for attribution — both in cybersecurity and in influence operations.

U

Urgent Attack Briefing — A short, high-priority intelligence product distributed to decision-makers when a narrative attack requires immediate response. Compressed in length, expanded in actionability — the opposite of a routine monitoring report.

V

Velocity — The rate at which content propagates through a network, typically measured in nodes-per-hour or shares-per-minute. Sharp velocity spikes — independent of total volume — are an early-warning signal that a piece of content is going viral or being amplified.

Video-Native Campaigns — Influence operations built primarily for video platforms — TikTok, Reels, Shorts, YouTube — where the persuasive payload lives in audio, image, and editing rather than in searchable text. Invisible to text-based monitoring almost by design.

Video-Native Intelligence — Analysis capability designed specifically for short-form and long-form video — extracting claims, entities, sentiment, and visual symbols from content that text-based pipelines cannot read. Increasingly the only way to monitor where younger audiences actually live.

Virality — The property of content that spreads rapidly through audience-driven sharing. Notoriously hard to predict — and increasingly hard to distinguish from amplified or coordinated spread.

Visual Context Extraction — The computational technique of pulling meaning from on-screen text, logos, locations, gestures, and visual symbols inside video and image content. The bridge between raw pixels and analysable narrative data.

Volume Spikes — Sharp increases in mention count over a short window. The classic trigger for legacy monitoring alerts — and a notoriously lagging indicator. By the time volume spikes, the narrative is usually already entrenched.