
Iran-backed hackers interrupt UAE TV streaming services with deepfake news

Submitted File

On December 10, 2023, Dubai residents watching a UAE-based news service through the Android-based HK1RBOXX set-top box were interrupted by a message stating: “We have no choice but to hack to deliver this message to you,” followed by an AI-generated anchor introducing “graphic” footage and a ticker showing the number of people killed and wounded in Gaza so far.

Threat Level

Low

Moderate

Elevated

High


Authenticity Spectrum

Real

Suspicious

Likely Fake

Fake



Deepfake Attack Profile

Credibility

Moderate

The more synthetic media is perceived to be legitimate and authoritative, the more likely the content is to be trusted, persuasive, and acted upon.

Interactivity

Moderate

Synthetic media can range from non-interactive, not ongoing, or not consistent (low) to interactive, ongoing, and consistent (high).

Familiarity

Moderate

Synthetic media can range from very recognizable and familiar (high) to hardly (or not) recognizable and familiar (low).

Evocation

High

Synthetic media can range from evoking a significant affective response (high) to barely or not at all eliciting an affective reaction (low).

Distribution

Mediumcast

Synthetic media can range from broadcast to a wide human audience or technical security measures (high) to a narrow, specific human audience or tailored technical security measure (low).


Deepfake & Synthetic Media Analysis Framework (DSMAF) Assessment™. The media submitted for this Deepfake Threat Intelligence Report (DTIR) was assessed with the Psyber Labs Deepfake & Synthetic Media Analysis Framework (DSMAF)™, a set of psychological, sociological, and affective influence factors and sub-facets that, when holistically applied, inform the motivations, intentions, and targeting process in synthetic media and deepfake propagation. The findings for each DSMAF factor are described in their respective sections and graphically plotted on the Deepfake Risk Factor Radar. The combined DSMAF findings are given a Synthetic Media Threat Level (Low, Moderate, Elevated, or High) for actionable awareness and risk mitigation.
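
For illustration only, the following is a minimal sketch of how qualitative factor ratings such as those above could be mapped to a numeric scale and rolled up into an overall threat level. The numeric mapping and band thresholds are assumptions for demonstration, not the published DSMAF scoring methodology.

```python
# Illustrative only: the DSMAF scoring rubric is not published in this report.
# The numeric mapping and thresholds below are assumptions for demonstration.
FACTOR_SCALE = {"Low": 1, "Moderate": 2, "Mediumcast": 2, "High": 3}

def synthetic_media_threat_level(ratings: dict) -> str:
    """Roll qualitative DSMAF factor ratings up into a single threat level."""
    scores = [FACTOR_SCALE[rating] for rating in ratings.values()]
    mean = sum(scores) / len(scores)
    if mean < 1.5:          # assumed band boundaries
        return "Low"
    if mean < 2.0:
        return "Moderate"
    if mean < 2.5:
        return "Elevated"
    return "High"

# Factor ratings as assessed in this DTIR
ratings = {
    "Credibility": "Moderate",
    "Interactivity": "Moderate",
    "Familiarity": "Moderate",
    "Evocation": "High",
    "Distribution": "Mediumcast",
}
print(synthetic_media_threat_level(ratings))  # -> "Elevated" under these assumed bands
```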

Threat Type

Threat Type is the category of intended purpose and the risk posed by the synthetic media or deepfake. Often, cyber deception efforts through deepfake content are multi-purpose and, as a result, are categorized with multiple threat types.

Computer intrusion used to launch a "No-flag" information operation campaign leveraging GenAI Deepfake content

Social Contagion

The spread of behaviors, attitudes, beliefs and affect through social aggregates from one member to another

Collective Outrage Trigger

Adversaries seeking to outrage target audiences, resulting in a collective mentality that mobilizes into volatile engagements or damaging, violent protests

Digital Impersonation to Defraud

Deepfake technology intentionally using the likeness of famous and/or credible authorities in an effort to legitimize a scheme to defraud the target audience

Cyber Attack

Synthetic media as the initial access vector for cyber adversary use of malicious code or other tools to gain unauthorized access into a victim computer system or network

Iran IRGC

Iran IRGC information operation efforts

اطلاعات نادرست (Iranian Disinformation)

اطلاعات نادرست (ettela'at-e nadorost), Iranian disinformation

مدیریت ادراک (Perception Management)

Iranian information operation strategies and tactics meant to shape the perceptions of target audiences

Common Cognitive Vulnerabilities & Exposures™ (CCVE)

Common Cognitive Vulnerabilities & Exposures™ (CCVEs) are perceptual distortions, cognitive biases, misapplied heuristics, or any mental processes that expose a person to potential manipulation by an adversary.

Suggestibility

Category: Other Psychological Vulnerabilities

Technique that attempts to implant a false memory in the target through suggestion. 

Disgruntlement

Category: Other Psychological Vulnerabilities

A feeling of dissatisfaction with one’s situation or circumstances. May be leveraged by an attacker by offering a path toward resolving the source of the disgruntlement as a means of manipulating the target.

Fear

Category: Other Psychological Vulnerabilities

An attacker leverages fear to gain target compliance.

Emotional Load

Category: Other Psychological Vulnerabilities

Affective responses (emotions, moods, and feelings) affect cognition and perception. Media that intentionally causes a high degree of emotional load can significantly shape how target audience members perceive and think about the subject of the media.

False Memory Implantation

Category: Other Psychological Vulnerabilities

False memory implantation produces a recollection that seems real but is actually a fabricated or distorted recollection of an event, created by feeding the target untrue information about an event or experience. These memories may be entirely false and imaginary, or in some cases may contain elements of fact that have been distorted by interfering information or other memory distortions.

Mere Exposure Effect

Category: Cognitive Processing

The Mere Exposure Effect is a cognitive bias where individuals show a preference for things they’re more familiar with. Repeated exposure to a stimulus increases liking and familiarity, even without conscious recognition.


Deepfake Attack Surface & Vectors

As part of the DSMAF criteria, Deepfake Attack Surface & Vectors assesses the intended target; the manner of control, or how the synthetic media is being presented to the target; and medium, or the type of synthetic media being presented to the intended target.


Intended Target

Both humans and automation may be targeted by synthetic media attacks. This criterion indicates whether the target of the attack was human or automation. The highlighted icon represents the intended target of this submitted media.


Human

Technical

Hybrid

Unknown



Control

A measure of whether the attack was constructed by a human or by artificial intelligence. The highlighted icon represents the method of control of this submitted media.


Human

Automation

Hybrid

Unknown



Medium

The medium is the format of the content submitted. Highlighted items represent all of the various formats contained in the submitted content.


Text

Image

Video

Audio

Synthetic Media Exploitation Matrix

The Synthetic Media Exploitation Matrix is a visual representation of the combined levels of attacker sophistication and maliciousness.

  • Sophistication is a judgment of the level of demonstrated technological prowess and capability involved in the attack.
  • Maliciousness is a conclusion regarding the degree to which the attack was deliberately intended to cause harm.

Sophistication

High

Technical complexity of the attack.


Maliciousness

High

How damaging the attack was intended to be.
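
As an illustration of how the two assessed dimensions combine (the plotted matrix itself is not reproduced in the printed version), the sketch below maps the two ratings onto a quadrant. The quadrant labels are assumptions for illustration, not Psyber Labs terminology.

```python
# Illustrative quadrant mapping for the Synthetic Media Exploitation Matrix.
# The quadrant labels are assumptions for illustration, not report terminology.
def exploitation_quadrant(sophistication: str, maliciousness: str) -> str:
    """Place an attack in one of four quadrants based on the two assessed ratings."""
    high_soph = sophistication in ("Elevated", "High")
    high_mal = maliciousness in ("Elevated", "High")
    if high_soph and high_mal:
        return "Advanced and highly damaging"
    if high_soph:
        return "Advanced but low-harm"
    if high_mal:
        return "Crude but damaging"
    return "Crude and low-harm"

# Ratings assessed in this report: Sophistication = High, Maliciousness = High
print(exploitation_quadrant("High", "High"))  # -> "Advanced and highly damaging"
```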



Motivations

Motivations are the underlying activators, purposes or sustained reasons for why the deepfake threat actor sought to create and take the necessary steps to produce and disseminate synthetic media or deepfake content.

Psychological Pressure

Psychological Pressure is the stress felt from perceived serious demands imposed on one person by another individual, group, or environment.

Injecting Chaos

Chaos injection is the intentional introduction of evocative material, which is often ambiguous and unresolved, to cause confusion and disorder.

Divisiveness

Create group, organization or societal division

Deception

Intentional strategy and tactics meant to mislead, misdirect and manipulate the perceptions of a target audience through simulation (showing the false) and/or dissimulation (hiding the real)

Provocation

Instigating, eliciting, or forcing the target audience to take an action that is advantageous to the deepfake threat adversary

Ideology (Cause)

Use of deepfake and synthetic media to promote a particular political, scientific, social or other cause

Political Tumult

The creator and/or disseminator of the media seeks to create political discord, argumentation and divisiveness.

Influence

Intentional effort to shape the perceptions, decisions, and behaviors of target audiences to achieve specific objectives.

The Deepfake Kill Chain™

The Deepfake Kill Chain™ describes the various, distinct, sequential stages of deepfake media creation and dissemination. Understanding these stages, and the adversary’s efficacy in the respective stages not only reveals the adversary’s modus operandi and decision-making process, but when contrasted with the Deepfake & Synthetic Media Analysis Framework™, identifies and elucidates methods of preventing and defending against the adversary’s deepfake attacks.

Significant time, effort, and energy were put into creating both the cyber attack and the deepfake content for this campaign.

Motivation

Motivation is the underlying activator, purpose, or sustained reason for why the deepfake threat actor wants to create nefarious synthetic media.

This attack was created to scare and shape the perceptions of the target audience

Targeting

Targeting is the threat actor’s intentional selection of a target audience, the group or individual they are interested in impacting with their deepfake campaign.

Viewers of the UAE streaming service platform HK1RBOXX

Research and Reconnaissance

Research & Reconnaissance occurs when the threat actor effortfully gathers information about the target audience, the optimal channels on which to conduct the campaign, the relevant narratives for the attack, and the type of content that will have the desired impact on the target audience.

Research efforts were focused on the computer intrusion, implantation and distribution facets of the campaign

Preparation and Planning

Preparation & Planning are the steps and processes that the threat actor takes to acquire the tools and content needed to create the deepfake media for their campaign and their deliberation for the execution of the campaign.

Access to the target platform and knowledge of how to upload content for display to viewers were needed to ensure the success of this campaign. The video and imagery were generated with AI to depict scenes in Gaza.

Production

Production is the threat actor’s use of tools and content for the creation and development of deepfake media for their attack campaign.

The deepfake content and still imagery were created as new synthetic media, not as repurposed or augmented versions of existing media.

Narrative Testing

A narrative is a story, or an account of related events or experiences. A good narrative has story coherence, such that both the story being told and its relationship to the real world are cohesive and clear. In deepfake campaigns, narrative testing is the threat actor’s consideration and evaluation of possible narratives, particularly in relation to events and context, to support the campaign in an effort to maximize the believability and efficacy of the attack.

The narrative conveyed through the deepfake content placed blame on Israel for the conflict in Gaza and framed Gaza through a victim-based lens

Deployment

Deployment is the threat actor’s intentional transmission of deepfake content to the target audience through selected online channels.

Computer intrusion and broadcast through HK1RBOXX streaming platform

Amplification

Amplification is the threat actor’s intentional efforts to maximize the visibility, virality and target audience exposure to their deepfake content.

This attack was very jarring and was immediately amplified by affected users via social media. The campaign was further picked up by mainstream, information security, and traditional media outlets.

Post-Campaign

Post-Campaign is the period after the target audience has received and been exposed to the deepfake content.

Multiple instances and channels through the HK1RBOXX streaming service



Cognitive Security Recommendations

This section identifies the steps and measures to prevent and defend against the synthetic media/deepfake content assessed in this DTIR. For a more detailed recommendation, training or consultation, connect with Psyber Labs.


Computer intrusions that are leveraged to broadcast deceptive or influence operation content are evocative and often disturbing. Emotion-driven decision making is often heuristic-based, "fast" thinking. Psyber Labs recommends that, when faced with illicitly promulgated media campaigns, viewers approach the content with great circumspection and with slow, evidence-based thinking.

Appendix

DTIR™ Version: 1.0

Submission Date (UTC): May 01, 2024 02:06

Assessment Date (UTC): October 15, 2024 21:31

SHA256 Hash: 6baa8b357206203804704cf8866f4c7ab99c9dd87946615d8de0f8f8d17253fb

Source: https://www.darkreading.com/ics-ot-security/hacktivists-interrupt-uae-tv-streams-with-message-about-gaza; https://www.theguardian.com/technology/2024/feb/08/iran-backed-hackers-interrupt-uae-tv-streaming-services-with-deepfake-news; https://www.voanews.com/a/iranian-hackers-interrupt-uae-broadcasts-with-deepfake-news-/7480126.html
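
As a minimal sketch, assuming a local copy of the submitted media file (the filename submission.mp4 below is hypothetical), the SHA-256 hash recorded above can be re-verified with Python's hashlib before further handling or re-analysis:

```python
import hashlib

# Hash recorded in this report's Appendix; the local filename is hypothetical.
EXPECTED_SHA256 = "6baa8b357206203804704cf8866f4c7ab99c9dd87946615d8de0f8f8d17253fb"

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("submission.mp4") == EXPECTED_SHA256)
```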