No threat is posed, as this is a public service announcement from a self-identified parody channel.
The spread of behaviors, attitudes, beliefs and affect through social aggregates from one member to another
Meme
Artwork created using generative adversarial networks (GANs), other neural networks, or artificial intelligence.
Affective responses (emotions, moods, and feelings) affect cognition and perception. Media that intentionally causes a high degree of emotional load can significantly shape how a target audience member perceives and thinks about the subject of the media.
Both humans and automation may be targeted by synthetic media attacks. This criterion indicates whether the target of the attack was human or automation. The highlighted icon represents the intended target of this submitted media.
A measure of whether the attack was constructed by a human or by artificial intelligence. The highlighted icon represents the method of control of this submitted media.
The medium is the format of the content submitted. Highlighted items represent all of the various formats contained in the submitted content.
Text
Image
Video
Audio
Technical complexity of the attack.
How damaging the attack was intended to be.
Creating synthetic media for fun or to provide enjoyment for themselves or others, often through embarrassment or playful, non-destructive controversy.
Images, videos, and audio media can be used to create and hold focal attention on issues, events, and topics of concern. Raising awareness can be a particularly strong motivation in scenarios where the distributor of the media believes that the issue is worthy of attention and is not receiving adequate attention or action.
No kill chain commentary due to lack of threat.
Motivation is the underlying activator, purpose, or sustained reason why the deepfake threat actor wants to create nefarious synthetic media.
No case specific insights generated.
Targeting is the threat actor’s intentional selection of a target audience, the group or individual they are interested in impacting with their deepfake campaign.
No case specific insights generated.
Research & Reconnaissance occurs when the threat actor is effortfully gathering information about the target audience, the optimal channels to conduct their campaign on, the relevant narratives for the attack, and type of content that will have the desired impact on the target audience.
No case specific insights generated.
Preparation & Planning are the steps and processes that the threat actor takes to acquire the tools and content needed to create the deepfake media for their campaign and their deliberation for the execution of the campaign.
No case specific insights generated.
Production is the threat actor’s use of tools and content for the creation and development of deepfake media for their attack campaign.
No case specific insights generated.
Narrative Testing. A narrative is a story, or an account of related events or experiences. A good narrative will have story coherence, such that both the story being told and its relationship to the real world are cohesive and clear. In deepfake campaigns, threat actors consider and evaluate the possible narratives—particularly in relation to events and context—to support the campaign in an effort to maximize the believability and efficacy of the attack.
No case specific insights generated.
Deployment is the threat actor’s intentional transmission of deepfake content to the target audience through selected online channels.
No case specific insights generated.
Amplification is the threat actor’s intentional efforts to maximize the visibility, virality and target audience exposure to their deepfake content.
No case specific insights generated.
Post-Campaign is the period after the target audience has received and been exposed to the deepfake content.
No case specific insights generated.
No security recommendations due to lack of threat.
Notes:
ctrl shift face is a YouTube parody channel depicting deepfake content featuring a variety of subjects.