
Kim Jong-un Deepfake Video

Submitted File

On Sep 29, 2020, RepresentUs released a 49-second video, "Dictators - Kim Jong-un," in which the North Korean Supreme Leader, seated at his desk, delivers an ominous lecture on the fragility of democracy. At the 46-second mark the video reveals that "this footage is not real, but the threat is. Join us." To date the video has received over 549,350 views. From the RepresentUs website: "RepresentUs is America's leading nonpartisan anti-corruption organization fighting to fix our broken and ineffective government. We unite people across the political spectrum to pass laws that hold corrupt politicians accountable, defeat special interests, and force the government to meet the needs of the American people."

Threat Level

Low

Moderate

Elevated

High


Authenticity Spectrum

Real

Suspicious

Likely Fake

Fake



Deepfake Attack Profile

Credibility

Moderate

The more legitimate and authoritative synthetic media is perceived to be, the more likely the content is to be trusted, persuasive, and acted upon.

Interactivity

Low

Synthetic media can range from non-interactive, not ongoing, and not consistent (low) to interactive, ongoing, and consistent (high).

Familiarity

High

Synthetic media can range from very recognizable and familiar (high) to hardly (or not) recognizable and familiar (low).

Evocation

Moderate

Synthetic media can range from evoking a significant affective response (high) to barely or not at all eliciting an affective reaction (low).

Distribution

Mediumcast

Synthetic media can range from being broadcast to a wide human audience or set of technical security measures (high) to targeting a narrow, specific human audience or a tailored technical security measure (low).


Deepfake & Synthetic Media Analysis Framework (DSMAF) Assessment™. The media submitted for this Deepfake Threat Intelligence Report (DTIR) was assessed with the Psyber Labs Deepfake & Synthetic Media Analysis Framework (DSMAF)™, a set of psychological, sociological, and affective influence factors and sub-facets that, when holistically applied, inform the motivations, intentions, and targeting process in synthetic media and deepfake propagation. The findings for each DSMAF factor are described in their respective sections and graphically plotted on the Deepfake Risk Factor Radar. The combined DSMAF findings are given a Synthetic Media Threat Level (Low, Moderate, Elevated, or High) for actionable awareness and risk mitigation.

Threat Type

Threat Type is the category of intended purpose and the risk posed by the synthetic media or deepfake. Cyber deception efforts through deepfake content are often multi-purpose and, as a result, are categorized with multiple threat types.

Social Contagion

The spread of behaviors, attitudes, beliefs and affect through social aggregates from one member to another

Non-Threatening: Raise Awareness

The media submission is not malicious, but rather, meant to raise awareness to the topic captured in the content.

Non-Threatening: Educational

The media submitted is not intended for malicious purposes, but rather, to provide insight and educational information to viewers.

Common Cognitive Vulnerabilities & Exposures™ (CCVE)

Common Cognitive Vulnerabilities & Exposures (CCVEs) are perceptual distortions, cognitive biases, misapplied heuristics, or any mental process that exposes a person to potential manipulation by an adversary.

North Korean Supreme Leader Kim Jong-un's international reputation is largely negative, shaped by his regime's nuclear development program, human rights record, and a general lack of transparency and openness. As a result, perception and sensemaking relating to this video can be affected by biases, emotional influences, and cognitive shortcuts.

Confirmation Bias

Category: Cognitive Processing

The tendency to seek information that confirms or supports a predetermined position or conclusion.

Devil Effect

Category: Interpersonal Biases

In assessing other people, it is the tendency for a person’s undesirable trait to be generalized to possess other poor traits.

Fear

Category: Other Psychological Vulnerabilities

An attacker leverages fear to gain target compliance.

Emotional Load

Category: Other Psychological Vulnerabilities

Affective responses--emotions, moods, and feelings--affect cognition and perception. Media that intentionally causes a high degree of emotional load can significantly shape how a target audience member perceives and thinks about the subject of the media.


Deepfake Attack Surface & Vectors

As part of the DSMAF criteria, Deepfake Attack Surface & Vectors assesses the intended target; the manner of control, or how the synthetic media is being presented to the target; and medium, or the type of synthetic media being presented to the intended target.


Intended Target

Both humans and automation may be targeted by synthetic media attacks. This criterion indicates whether the target of the attack was human or automated. The highlighted icon represents the intended target of this submitted media.


Human

Technical

Hybrid

Unknown



Control

A measure of whether the attack was constructed by a human or by artificial intelligence. The highlighted icon represents the method of control of this submitted media.


Human

Automation

Hybrid

Unknown



Medium

The medium is the format of the content submitted. Highlighted items represent all of the various formats contained in the submitted content.


Text

Image

Video

Audio

Synthetic Media Exploitation Matrix

The Synthetic Media Exploitation Matrix is a visual representation of the combined levels of attacker sophistication and maliciousness.

  • Sophistication is a judgment of the level of demonstrated technological prowess and capability involved in the attack.
  • Maliciousness is a conclusion regarding the degree to which the attack was deliberately intended to cause harm.

Sophistication

High

Technical complexity of the attack.


Maliciousness

Moderate

How damaging the attack was intended to be.



Motivations

Motivations are the underlying activators, purposes or sustained reasons for why the deepfake threat actor sought to create and take the necessary steps to produce and disseminate synthetic media or deepfake content.

Raise Awareness

Images, videos, and audio media can be used to create and hold focal attention on issues, events, and topics of concern. Raising awareness can be a particularly strong motivation in scenarios where the distributor of the media believes that an issue is worthy of attention and is not receiving adequate attention or action.

Education

The creation and/or dissemination of the media is not intended for malicious purposes, but rather, to provide insight and educational information to viewers.

The Deepfake Kill Chain™

The Deepfake Kill Chain™ describes the various, distinct, sequential stages of deepfake media creation and dissemination. Understanding these stages, and the adversary’s efficacy in the respective stages not only reveals the adversary’s modus operandi and decision-making process, but when contrasted with the Deepfake & Synthetic Media Analysis Framework™, identifies and elucidates methods of preventing and defending against the adversary’s deepfake attacks.

Motivation

Motivation is the underlying activator, purpose, or sustained reason why the deepfake threat actor wants to create nefarious synthetic media.

No case specific insights generated.

Targeting

Targeting is the threat actor's intentional selection of a target audience: the group or individual they are interested in impacting with their deepfake campaign.

No case specific insights generated.

Research and Reconnaissance

Research & Reconnaissance occurs when the threat actor is effortfully gathering information about the target audience, the optimal channels to conduct their campaign on, the relevant narratives for the attack, and type of content that will have the desired impact on the target audience.

No case specific insights generated.

Preparation and Planning

Preparation & Planning are the steps and processes that the threat actor takes to acquire the tools and content needed to create the deepfake media for their campaign and their deliberation for the execution of the campaign.

No case specific insights generated.

Production

Production is the threat actor’s use of tools and content for the creation and development of deepfake media for their attack campaign.

The production quality of this video is very high, and the result is believable.

Narrative Testing

Narrative Testing. A narrative is a story, or an account of related events or experiences. A good narrative will have story coherence, such that both the story being told and its relationship to the real world are cohesive and clear. In deepfake campaigns, threat actors consider and evaluate the possible narratives—particularly in relation to events and context—to support the campaign in an effort to maximize the believability and efficacy of the attack.

No case specific insights generated.

Deployment

Deployment is the threat actor’s intentional transmission of deepfake content to the target audience through selected online channels.

No case specific insights generated.

Amplification

Amplification is the threat actor’s intentional efforts to maximize the visibility, virality and target audience exposure to their deepfake content.

No case specific insights generated.

Post-Campaign

Post-Campaign is the period after the target audience has received and been exposed to the deepfake content.

No case specific insights generated.



Cognitive Security Recommendations

This section identifies the steps and measures to prevent and defend against the synthetic media/deepfake content assessed in this DTIR. For a more detailed recommendation, training or consultation, connect with Psyber Labs.


Videos featuring widely disliked figures can trigger biases, emotional load, and cognitive shortcuts. Viewers should therefore assess such content without emotional investment or pre-existing beliefs, instead viewing and listening to it within the context, timing, and environment in which it was prepared and disseminated.

Appendix

DTIR™ Version: 1.0

Submission Date (UTC): December 18, 2023 16:36

Assessment Date (UTC): December 27, 2023 02:42

SHA256 Hash: 29c1883c01ba13b39ae28ce9a8a69077b28e28e2a9819a2cf0180237323cc05b

Source: https://www.youtube.com/watch?v=ERQlaJ_czHU
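The SHA256 hash above can be used to confirm that a local copy of the submitted file matches the media assessed in this report. A minimal Python sketch, using only the standard library (the filename in the usage comment is illustrative; the report does not name the submitted file):

```python
import hashlib

# SHA256 hash recorded in this report's Appendix.
EXPECTED = "29c1883c01ba13b39ae28ce9a8a69077b28e28e2a9819a2cf0180237323cc05b"

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks
    so that large video files are not loaded into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example usage (hypothetical filename):
# assert sha256_of_file("submitted_video.mp4") == EXPECTED
```

A mismatch between the computed digest and the recorded hash would indicate that the local copy differs from the media assessed in this DTIR.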