
AI-generated Reverend Dr. Martin Luther King, Jr. advocating for former President Donald Trump. "Trump Flag" Scam

Submitted File

A social media advertisement displays three different screen sections: one featuring Rev. Dr. Martin Luther King, Jr. giving a speech about former United States President Donald Trump; a second displaying a "Trump 2024" campaign sign with a montage of different "free gifts"; and a third, a text section stating "Voting Trump in 2024? Take our Short Poll to Receive A Free Gift. Just Pay Shipping." A "Get Gifts" button is prominently available to click. The Rev. Dr. Martin Luther King, Jr. video and audio (synthetically generated/modified) are a rousing plea to support Trump: “We’ve been told again and again that we cannot vote for the man that did more for the Black community than any other president. If a Black man dares speak out in support of Donald Trump, a Democrat is always there to call that man an Uncle Tom, a house negro, or even worse.” The advertisement has reportedly been viewed millions of times on both Facebook and YouTube.

Threat Level

Low

Moderate

Elevated

High


Authenticity Spectrum

Real

Suspicious

Likely Fake

Fake



Deepfake Attack Profile

Credibility

Moderate

The more legitimate and authoritative synthetic media is perceived to be, the more likely the content is to be trusted, persuasive, and acted upon.

Interactivity

Moderate

Synthetic media can range from non-interactive, not ongoing, or not consistent (low) to interactive, ongoing, and consistent (high).

Familiarity

High

Synthetic media can range from very recognizable and familiar (high) to hardly (or not) recognizable and familiar (low).

Evocation

High

Synthetic media can range from evoking a significant affective response (high) to barely or not at all eliciting an affective reaction (low).

Distribution

Broadcast

Synthetic media can range from broadcast to a wide human audience or broad technical security measures (high) to a narrow, specific human audience or tailored technical security measures (low).


Deepfake & Synthetic Media Analysis Framework (DSMAF) Assessment™. The media submitted for this Deepfake Threat Intelligence Report (DTIR) was assessed with the Psyber Labs Deepfake & Synthetic Media Analysis Framework (DSMAF)™, a set of psychological, sociological, and affective influence factors and sub-facets that, when holistically applied, inform the motivations, intentions, and targeting process in synthetic media and deepfake propagation. The findings for each DSMAF factor are described in their respective sections and graphically plotted on the Deepfake Risk Factor Radar. The combined DSMAF findings are assigned a Synthetic Media Threat Level (Low, Moderate, Elevated, or High) for actionable awareness and risk mitigation.

Threat Type

Threat Type is the category of intended purpose and the risk posed by the synthetic media or deepfake. Often, cyber deception efforts through deepfake content are multi-purpose and, as a result, are categorized with multiple threat types.

Social Contagion

The spread of behaviors, attitudes, beliefs and affect through social aggregates from one member to another

Digital Impersonation for Disinformation

Deepfake technology intentionally using the likeness of famous and/or credible authorities in an effort to shape the behaviors, attitudes, beliefs and/or emotions of the target audience

Digital Impersonation to Defraud

Deepfake technology intentionally using the likeness of famous and/or credible authorities in an effort to legitimize a scheme to defraud the target audience

Propaganda

Information, especially of a biased, misleading or non-rational nature, used to promote a political cause or point of view

Political Instigator

The media is intended to serve as a catalyst for political argument, discord and divisiveness.

Fabricated Content

Content created to serve an information or psychological operation purpose.

Disinformation

False information purposely spread to influence public opinion or obscure the truth

Deception

Intentional strategy and tactics meant to mislead, misdirect and manipulate the perceptions of a target audience through simulation (showing the false) and/or dissimulation (hiding the real)

Common Cognitive Vulnerabilities & Exposures™ (CCVE)

Common Cognitive Vulnerabilities & Exposures (CCVEs) are perceptual distortions, cognitive biases, misapplied heuristics, or any mental process that exposes a person to potential manipulation by an adversary.

This video leverages a form of gaining credibility known as "Derivative Legitimacy," whereby association with a positively valenced person is meant to bolster the legitimacy of someone else. Here, Rev. Dr. Martin Luther King, Jr. is being used to give President Trump "Derivative Legitimacy" with those who admire and respect Dr. King.

Halo Effect

Category: Interpersonal Biases

In assessing other people, it is the tendency to generalize from one of a person’s positive traits and assume that the person possesses other positive traits.

Liking

Category: Social Norm Vulnerabilities

Tendency to do favors for people whom we like. Can be exploited by establishing rapport with the target before asking for action.

Authority

Category: Social Norm Vulnerabilities

Tendency to comply with authority figures (usually legal or expert authorities). Exploitable by assuming the persona of, or impersonating, an authority figure.

Unity

Category: Social Norm Vulnerabilities

Perceived shared identity based on similarity in a trait, affiliation, or belief. This can be a powerful influence tactic as people tend to be more open to persuasion by someone they identify with.

Emotional Load

Category: Other Psychological Vulnerabilities

Affective responses--emotions, moods, and feelings--affect cognition and perception. Media that intentionally causes a high degree of emotional load can significantly shape how a target audience member perceives and thinks about the subject of the media.

Mere Exposure Effect

Category: Cognitive Processing

The Mere Exposure Effect is a cognitive bias where individuals show a preference for things they’re more familiar with. Repeated exposure to a stimulus increases liking and familiarity, even without conscious recognition.


Deepfake Attack Surface & Vectors

As part of the DSMAF criteria, Deepfake Attack Surface & Vectors assesses the intended target; the manner of control, or how the synthetic media is being presented to the target; and medium, or the type of synthetic media being presented to the intended target.


Intended Target

Both humans and automation may be targeted by synthetic media attacks. This criterion references whether the target of the attack was human or automation. The highlighted icon represents the intended target of this submitted media.


Human

Technical

Hybrid

Unknown



Control

A measure of whether the attack was constructed by a human or by artificial intelligence. The highlighted icon represents the method of control of this submitted media.


Human

Automation

Hybrid

Unknown



Medium

The medium is the format of the content submitted. Highlighted items represent all of the various formats contained in the submitted content.


Text

Image

Video

Audio

Synthetic Media Exploitation Matrix

The Synthetic Media Exploitation Matrix is a visual representation of the combined levels of attacker sophistication and maliciousness.

  • Sophistication is a judgment of the level of demonstrated technological prowess and capability involved in the attack.
  • Maliciousness is a conclusion regarding the degree to which the attack was deliberately intended to cause harm.

Sophistication

High

Technical complexity of the attack.


Maliciousness

High

How damaging the attack was intended to be.



Motivations

Motivations are the underlying activators, purposes or sustained reasons for why the deepfake threat actor sought to create and take the necessary steps to produce and disseminate synthetic media or deepfake content.

Suggestion

Offering information that affects the target audience legally, morally, ideologically or in other areas

Deception

Intentional strategy and tactics meant to mislead, misdirect and manipulate the perceptions of a target audience through simulation (showing the false) and/or dissimulation (hiding the real)

Financial Gain (Money)

Drive and intention to accumulate large sums of money or other financial resources

Ideology (Cause)

Use of deepfake and synthetic media to promote a particular political, scientific, social or other cause

Political Tumult

The creator and/or disseminator of the media seeks to create political discord, argumentation and divisiveness.

Influence

Intentional effort to shape the perceptions, decisions, and behaviors of target audiences to achieve specific objectives.

The Deepfake Kill Chain™

The Deepfake Kill Chain™ describes the various, distinct, sequential stages of deepfake media creation and dissemination. Understanding these stages, and the adversary’s efficacy in the respective stages not only reveals the adversary’s modus operandi and decision-making process, but when contrasted with the Deepfake & Synthetic Media Analysis Framework™, identifies and elucidates methods of preventing and defending against the adversary’s deepfake attacks.

Motivation

Motivation is the underlying activator, purpose, or sustained reason why the deepfake threat actor wants to create nefarious synthetic media.

While there is certainly a political aspect to this advertisement, the main purpose is to convert views into credit card entries to pay for the shipping of the "free gifts." This scam, known as the "Trump Flag Scam," is a form of "credit card laundering."

Targeting

Targeting is the threat actor’s intentional selection of a target audience, or the group or individual whom he is interested in impacting with his deepfake campaign.

The producers of this content have used a "volume targeting" technique, strategically placing the ads in high volume on social media platforms so that the widest possible audience will view the advertisement.

Research and Reconnaissance

Research & Reconnaissance occurs when the threat actor is effortfully gathering information about the target audience, the optimal channels to conduct their campaign on, the relevant narratives for the attack, and type of content that will have the desired impact on the target audience.

No case specific insights generated.

Preparation and Planning

Preparation & Planning are the steps and processes that the threat actor takes to acquire the tools and content needed to create the deepfake media for their campaign and their deliberation for the execution of the campaign.

Historical video of Rev. Dr. Martin Luther King, Jr. was acquired to train both video and voice AI models to ensure an accurate depiction of Dr. King making the speech.

Production

Production is the threat actor’s use of tools and content for the creation and development of deepfake media for their attack campaign.

The production of the video is not of pristine quality, and the presentation of three separate sections or screens in the video makes the presentation cramped and quite "busy."

Narrative Testing

Narrative Testing. A narrative is a story, or an account of related events or experiences. A good narrative will have story coherence, such that both the story being told and its relationship to the real world are cohesive and clear. In deepfake campaigns, threat actors consider and evaluate the possible narratives—particularly in relation to events and context—to support the campaign in an effort to maximize the believability and efficacy of the attack.

The plea for audience engagement focuses around Rev. Dr. Martin Luther King, Jr., defending and imploring support for former President Donald Trump: “We’ve been told again and again that we cannot vote for the man that did more for the Black community than any other president. If a Black man dares speak out in support of Donald Trump, a Democrat is always there to call that man an Uncle Tom, a house negro, or even worse.”

Deployment

Deployment is the threat actor’s intentional transmission of deepfake content to the target audience through selected online channels.

This content was posted and viewed millions of times on both Facebook and YouTube.

Amplification

Amplification is the threat actor’s intentional efforts to maximize the visibility, virality and target audience exposure to their deepfake content.

Content on social media and online video platforms is a natural incubator for virality, particularly highly evocative, unusual, and unexpected content.

Post-Campaign

Post-Campaign is the period after the target audience has received and been exposed to the deepfake content.

This video has been made in multiple formats, sometimes featuring Rev. Dr. King and sometimes other famous people, for the same purpose, as part of a larger campaign.



Cognitive Security Recommendations

This section identifies the steps and measures to prevent and defend against the synthetic media/deepfake content assessed in this DTIR. For a more detailed recommendation, training or consultation, connect with Psyber Labs.


The use of Rev. Dr. Martin Luther King, Jr., an important, beloved, and respected historical figure, making an impassioned speech in defense of former President Donald Trump is meant to capture and hold target audience members' attention, particularly because of the oddity of the video: Dr. King was assassinated on April 4, 1968, and never knew Trump. Emotional appeals made by authoritative figures can be very persuasive. Psyber Labs recommends contextualizing videos such as this one before forming any conclusions or intentions, or making any decisions. Ask the following questions: 1) Is the premise and content of this video possible? 2) While the video is interesting and compelling, what is being asked of you? 3) Why would the makers of this video choose Rev. Dr. Martin Luther King, Jr. as the spokesperson? Asking these questions moves you into "System 2" thinking, a more evidence-based, deliberative mode of thought that is less prone to emotion or heuristic cues.

Appendix

DTIR™ Version: 1.0

Submission Date (UTC): April 08, 2024 02:49

Assessment Date (UTC): April 08, 2024 04:04

SHA256 Hash: e91aa7fb58135d8930a8c6fd1ff2dae31e88fa62e9f17d40d584365c87ffc507
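The SHA256 hash above allows an analyst to confirm that a locally held copy of the media is byte-for-byte identical to the file assessed in this report. As a minimal sketch (the filename `submitted_media.mp4` is hypothetical), the digest can be recomputed with Python's standard `hashlib` module and compared against the reported value:

```python
import hashlib

# SHA-256 hash reported in the DTIR appendix
REPORTED_SHA256 = "e91aa7fb58135d8930a8c6fd1ff2dae31e88fa62e9f17d40d584365c87ffc507"

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks
    so that large video files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# 'submitted_media.mp4' is a hypothetical local filename for the submitted media
# if sha256_of_file("submitted_media.mp4") == REPORTED_SHA256:
#     print("Digest matches: local file is the artifact assessed in this report")
```

A matching digest confirms only file identity, not authenticity of the content itself; a mismatch means the local copy differs from (or was modified after) the version submitted for this assessment.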

Source: https://www.forbes.com/sites/emilybaker-white/2024/03/12/deepfaked-celebrities-hawked-a-massive-trump-scam-on-facebook-and-youtube/?sh=5ccdd9732a4d