The spread of behaviors, attitudes, beliefs and affect through social aggregates from one member to another
Synthetic media used to present a falsehood intended to invite reflexive, unthinking acceptance by the target audience. A hoax is often used as a vector into broader social contagion or deception campaigns.
Artwork created using generative adversarial networks (GANs), other neural networks, or other artificial intelligence techniques.
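For illustration only, the following is a minimal, hypothetical sketch (in Python with PyTorch) of the mechanism by which a GAN produces an image from random latent noise. It uses a small, untrained DCGAN-style generator defined here for the example; it is not the tool referenced in the Notes below (Midjourney), and a real actor would use trained weights or a hosted service.

    # Minimal sketch: sampling an image from an untrained DCGAN-style generator.
    # Illustrative only; the generator architecture and names are assumptions,
    # not the system used in any specific incident.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, latent_dim=100, channels=3):
            super().__init__()
            self.net = nn.Sequential(
                # Upsample a 1x1 latent vector to a 32x32 RGB image.
                nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
                nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
                nn.ConvTranspose2d(64, channels, 4, 2, 1), nn.Tanh(),  # values in [-1, 1]
            )

        def forward(self, z):
            return self.net(z)

    g = Generator()
    z = torch.randn(1, 100, 1, 1)   # random latent vector
    image = g(z)                    # tensor of shape (1, 3, 32, 32)
    print(image.shape)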
Tendency to comply with authority figures (usually legal or expert authorities). Exploitable by assuming the persona of, or impersonating, an authority figure.
Both humans and automation may be targeted by synthetic media attacks. This criterion indicates whether the target of the attack was a human or an automated system. The highlighted icon represents the intended target of this submitted media.
A measure of whether the attack was constructed by a human or by artificial intelligence. The highlighted icon represents the method of control of this submitted media.
The medium is the format of the content submitted. Highlighted items represent all of the various formats contained in the submitted content.
Text
Image
Video
Audio
Technical complexity of the attack.
How damaging the attack was intended to be.
Creating synthetic media for fun or for the enjoyment of oneself or others, often through embarrassment or playful, non-destructive controversy.
Motivation is the underlying activator, purpose, or sustained reason why the deepfake threat actor wants to create nefarious synthetic media.
No case-specific insights generated.
Targeting is the threat actor’s intentional selection of a target audience: the group or individual they intend to impact with their deepfake campaign.
No case-specific insights generated.
Research & Reconnaissance occurs when the threat actor deliberately gathers information about the target audience, the optimal channels on which to conduct the campaign, the relevant narratives for the attack, and the type of content that will have the desired impact on the target audience.
No case-specific insights generated.
Preparation & Planning covers the steps and processes the threat actor takes to acquire the tools and content needed to create the deepfake media for their campaign, as well as their deliberation over how the campaign will be executed.
No case-specific insights generated.
Production is the threat actor’s use of tools and content for the creation and development of deepfake media for their attack campaign.
No case-specific insights generated.
Narrative Testing. A narrative is a story, or an account of related events or experiences. A good narrative will have story coherence, such that both the story being told and its relationship to the real world are cohesive and clear. In deepfake campaigns, threat actors consider and evaluate the possible narratives—particularly in relation to events and context—to support the campaign in an effort to maximize the believability and efficacy of the attack.
A fun, paradoxical image that confuses viewers.
Deployment is the threat actor’s intentional transmission of deepfake content to the target audience through selected online channels.
No case-specific insights generated.
Amplification is the threat actor’s intentional effort to maximize the visibility and virality of their deepfake content and the target audience’s exposure to it.
No case-specific insights generated.
Post-Campaign is the period after the target audience has received and been exposed to the deepfake content.
No case-specific insights generated.
Notes:
Created with the AI program Midjourney.