The spread of behaviors, attitudes, beliefs and affect through social aggregates from one member to another.
The intentional use of deepfake technology to appropriate the likeness of famous and/or credible authorities in an effort to shape the behaviors, attitudes, beliefs and/or emotions of the target audience.
Tendency to comply with authority figures (usually legal or expert authorities). Exploitable by assuming the persona of, or impersonating, an authority figure.
Affective responses--emotions, moods and feelings--affect cognition and perception. Media that intentionally induces a high degree of emotional load can significantly shape how a target audience member perceives and thinks about the subject of the media.
Both humans and automation may be targeted by synthetic media attacks. This criterion indicates whether the target of the attack was human or automated. The highlighted icon represents the intended target of this submitted media.
A measure of whether the attack was constructed by a human or by artificial intelligence. The highlighted icon represents the method of control of this submitted media.
The medium is the format of the content submitted. Highlighted items represent all of the various formats contained in the submitted content.
Text
Image
Video
Audio
Technical complexity of the attack.
How damaging the attack was intended to be.
Intentional strategy and tactics meant to mislead, misdirect and manipulate the perceptions of a target audience through simulation (showing the false) and/or dissimulation (hiding the real).
Use of deepfake and synthetic media to promote a particular political, scientific, social or other cause.
Motivation is the underlying activator, purpose or sustained reason why the deepfake threat actor wants to create nefarious synthetic media.
No case specific insights generated.
Targeting is the threat actor’s intentional selection of a target audience: the group or individual they intend to impact with their deepfake campaign.
Research & Reconnaissance occurs when the threat actor deliberately gathers information about the target audience, the optimal channels on which to conduct the campaign, the relevant narratives for the attack, and the type of content that will have the desired impact on the target audience.
Preparation & Planning covers the steps and processes the threat actor takes to acquire the tools and content needed to create the deepfake media for their campaign, and their deliberation over how to execute the campaign.
Production is the threat actor’s use of tools and content for the creation and development of deepfake media for their attack campaign.
Narrative Testing. A narrative is a story, or an account of related events or experiences. A good narrative will have story coherence, such that both the story being told and its relationship to the real world are cohesive and clear. In deepfake campaigns, threat actors consider and evaluate the possible narratives—particularly in relation to events and context—to support the campaign in an effort to maximize the believability and efficacy of the attack.
Deployment is the threat actor’s intentional transmission of deepfake content to the target audience through selected online channels.
Amplification is the threat actor’s intentional efforts to maximize the visibility, virality and target audience exposure to their deepfake content.
Post-Campaign is the period after the target audience has received and been exposed to the deepfake content.