
Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’

Submitted File

In a sophisticated fraud operation, a finance employee at a global corporation was deceived into transferring $25 million to swindlers who used deepfake technology to impersonate the company's Chief Financial Officer during a video conference, Hong Kong police reported. The employee was lured into a video call under the impression that he was meeting with fellow staff members; the Hong Kong police revealed in a Friday briefing that these supposed colleagues were in fact deepfake imitations. Senior Superintendent Baron Chan Shun-ching told RTHK, the city's public broadcaster, that every participant the employee saw on the video call was fabricated. Chan explained that the employee's suspicions were first raised by a dubious message purporting to come from the firm's UK-based Chief Financial Officer. The message hinted at the need for a clandestine transaction, prompting concerns of a potential phishing attempt. Despite these initial reservations, the employee dismissed his suspicions after the video call convincingly featured individuals who closely resembled his real colleagues in both appearance and voice, Chan noted.

Threat Level

Low

Moderate

Elevated

High


Authenticity Spectrum

Real

Suspicious

Likely Fake

Fake



Deepfake Attack Profile

Credibility

High

The more synthetic media is perceived as legitimate and authoritative, the more likely it is to be trusted, found persuasive, and acted upon.

Interactivity

High

Synthetic media can range from non-interactive, one-off, and inconsistent (low) to interactive, ongoing, and consistent (high).

Familiarity

High

Synthetic media can range from very recognizable and familiar (high) to hardly or not at all recognizable and familiar (low).

Evocation

Moderate

Synthetic media can range from evoking a significant affective response (high) to barely or not at all eliciting an affective reaction (low).

Distribution

Narrowcast

Synthetic media can range from broadcast distribution to a wide human audience or to general technical security measures (high), to narrowcast distribution to a specific human audience or a tailored technical security measure (low).


Deepfake & Synthetic Media Analysis Framework (DSMAF) Assessment™. The media submitted for this Deepfake Threat Intelligence Report (DTIR) was assessed with the Psyber Labs Deepfake & Synthetic Media Analysis Framework (DSMAF)™, a set of psychological, sociological, and affective influence factors and sub-facets that, when holistically applied, inform the motivations, intentions, and targeting process in synthetic media and deepfake propagation. The findings for each DSMAF factor are described in their respective sections and graphically plotted on the Deepfake Risk Factor Radar. The combined DSMAF findings are given a Synthetic Media Threat Level (Low, Moderate, Elevated, or High) for actionable awareness and risk mitigation.
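The DSMAF™ weighting and scoring method is not published in this report, so purely as an illustration of how per-factor findings might collapse into a single Synthetic Media Threat Level, here is a minimal Python sketch. The numeric mapping, the treatment of the Distribution axis, the unweighted average, and the cutoffs are all assumptions for illustration, not the framework's actual method.

    # Illustrative only: the real DSMAF weighting/scoring is not published.
    # This sketch assumes an unweighted average over ordinal factor ratings.
    RATING_SCALE = {"Low": 1, "Moderate": 2, "Elevated": 3, "High": 4}
    # Distribution is reported on a different axis (Narrowcast/Broadcast);
    # it is arbitrarily mapped onto the same 1-4 range here.
    DISTRIBUTION_SCALE = {"Narrowcast": 2, "Broadcast": 4}

    def threat_level(factors):
        """Collapse per-factor ratings into one ordinal threat level."""
        scores = [
            (DISTRIBUTION_SCALE if name == "Distribution" else RATING_SCALE)[rating]
            for name, rating in factors.items()
        ]
        mean = sum(scores) / len(scores)
        # Cutoffs chosen for illustration, not taken from the framework.
        if mean >= 3.5:
            return "High"
        if mean >= 2.5:
            return "Elevated"
        if mean >= 1.5:
            return "Moderate"
        return "Low"

    # The ratings assessed in this report:
    print(threat_level({
        "Credibility": "High",
        "Interactivity": "High",
        "Familiarity": "High",
        "Evocation": "Moderate",
        "Distribution": "Narrowcast",
    }))  # -> "Elevated" under these illustrative cutoffs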

Threat Type

Threat Type is the category of intended purpose and the risk posed by the synthetic media or deepfake. Often, cyber deception efforts through deepfake content are multi-purpose and, as a result, are categorized with multiple threat types.

Sophisticated financial scam facilitated by deepfake video content

Financial Scam

The use of synthetic media to transmit deceptive information intended to obtain money or other things of value from the victim.

Digital Impersonation to Defraud

Deepfake technology intentionally using the likeness of famous and/or credible authorities in an effort to legitimize a scheme to defraud the target audience.

Deception

Intentional strategy and tactics meant to mislead, misdirect, and manipulate the perceptions of a target audience through simulation (showing the false) and/or dissimulation (hiding the real).

Zishing

Zishing, or Zoom-based phishing, is an attack wherein a threat adversary (TA) uses deepfake technology to realistically emulate known, familiar, or expected interactant(s) and deceive the victim on the video call into complying with requests in the TA's interest.

Common Cognitive Vulnerabilities & Exposures™ (CCVE)

Common Cognitive Vulnerabilities & Exposures (CCVEs) are perceptual distortions, cognitive biases, misapplied heuristics, or any other mental processes that expose a person to potential manipulation by an adversary.

Inattentional Blindness

Category: Perceptual Biases

Inattentional blindness, also called perceptual blindness (rarely, inattentive blindness), occurs when an individual fails to perceive an unexpected stimulus in plain sight, purely as a result of a lack of attention rather than any vision defects or deficits.

Ingroup Bias

Category: Interpersonal Biases

Tendency for people to give preferential treatment to others they perceive to be members of their own groups.

Liking

Category: Social Norm Vulnerabilities

Tendency to do favors for people whom we like. Can be exploited by establishing rapport with target before asking for action.

Authority

Category: Social Norm Vulnerabilities

Tendency to comply with authority figures (usually legal or expert authorities). Exploitable by assuming the persona or impersonating an authority figure. 

Unity

Category: Social Norm Vulnerabilities

Perceived shared identity based on similarity in a trait, affiliation, or belief. This can be a powerful influence tactic as people tend to be more open to persuasion by someone they identify with.

Mere Exposure Effect

Category: Cognitive Processing

The Mere Exposure Effect is a cognitive bias where individuals show a preference for things they’re more familiar with. Repeated exposure to a stimulus increases liking and familiarity, even without conscious recognition.


Deepfake Attack Surface & Vectors

As part of the DSMAF criteria, Deepfake Attack Surface & Vectors assesses the intended target; the control, or whether the attack is orchestrated by a human or by automation; and the medium, or the type of synthetic media being presented to the intended target.


Intended Target

Both humans and automated systems may be targeted by synthetic media attacks. This criterion indicates whether the target of the attack was human or technical. The highlighted icon represents the intended target of this submitted media.


Human

Technical

Hybrid

Unknown



Control

A measure of whether the attack was constructed by a human or by artificial intelligence. The highlighted icon represents the method of control of this submitted media.


Human

Automation

Hybrid

Unknown



Medium

The medium is the format of the content submitted. Highlighted items represent all of the various formats contained in the submitted content.


Text

Image

Video

Audio

Synthetic Media Exploitation Matrix

The Synthetic Media Exploitation Matrix is a visual representation of the combined levels of attacker sophistication and maliciousness; a simple quadrant mapping is sketched at the end of this section.

  • Sophistication is a judgment of the level of demonstrated technological prowess and capability involved in the attack.
  • Maliciousness is a conclusion regarding the degree to which the attack was deliberately intended to cause harm.

Sophistication

High

Technical complexity of the attack.


Maliciousness

High

How damaging the attack was intended to be.
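Since the matrix chart itself is unavailable in the printed version, the placement can be described mechanically. Below is a minimal, illustrative quadrant mapping in Python; the rating vocabulary and quadrant labels are assumptions, not the report's actual plotting logic.

    # Illustrative quadrant mapping for the Exploitation Matrix; the labels
    # and rating vocabulary are assumed, not taken from the report tooling.
    def matrix_quadrant(sophistication, maliciousness):
        soph_high = sophistication in {"Elevated", "High"}
        mal_high = maliciousness in {"Elevated", "High"}
        if soph_high and mal_high:
            return "upper right: advanced and intentionally damaging"
        if soph_high:
            return "lower right: advanced, limited intended harm"
        if mal_high:
            return "upper left: crude but intentionally damaging"
        return "lower left: crude, limited intended harm"

    # This submission was assessed High on both axes:
    print(matrix_quadrant("High", "High"))

For this submission (Sophistication: High, Maliciousness: High), the attack falls in the upper-right quadrant of the matrix.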



Motivations

Motivations are the underlying activators, purposes, or sustained reasons why the deepfake threat actor sought to create and take the necessary steps to produce and disseminate synthetic media or deepfake content.

Deception

Intentional strategy and tactics meant to mislead, misdirect, and manipulate the perceptions of a target audience through simulation (showing the false) and/or dissimulation (hiding the real).

Financial Gain (Money)

Drive and intention to accumulate large sums of money or other financial resources.

The Deepfake Kill Chain™

The Deepfake Kill Chain™ describes the distinct, sequential stages of deepfake media creation and dissemination. Understanding these stages, and the adversary's efficacy in each of them, not only reveals the adversary's modus operandi and decision-making process but, when contrasted with the Deepfake & Synthetic Media Analysis Framework™, also identifies and elucidates methods of preventing and defending against the adversary's deepfake attacks.

Sophisticated financial scam facilitated by deepfake video content

Motivation

Motivation is the underlying activator, purpose or sustained reasons for why the deepfake threat actor wants to create nefarious synthetic media.

The goal orientation and basis for the attack were financial.

Targeting

Targeting is the threat actor's intentional selection of a target audience, that is, the group or individual the actor intends to impact with the deepfake campaign.

The attackers targeted individuals who were familiar with the Chief Financial Officer (CFO) and subordinate to the CFO, yet who had the authority to execute financial transactions.

Research and Reconnaissance

Research & Reconnaissance occurs when the threat actor is effortfully gathering information about the target audience, the optimal channels on which to conduct their campaign, the relevant narratives for the attack, and the type of content that will have the desired impact on the target audience.

The attackers researched the company's leadership structure and the individuals in those roles, and located existing video and audio content from which to create realistic, believable deepfakes of multiple individuals at the company.

Preparation and Planning

Preparation & Planning are the steps and processes that the threat actor takes to acquire the tools and content needed to create the deepfake media for their campaign and their deliberation for the execution of the campaign.

The attackers were able to accumulate existing video and audio content to train AI models and create realistic deepfake material for presentation in a video call.

Production

Production is the threat actor’s use of tools and content for the creation and development of deepfake media for their attack campaign.

Generating convincing real-time deepfakes requires significant computational power and advanced algorithms. It involves real-time processing and manipulation of video streams, which is computationally intensive and demands sophisticated hardware and software. In this case, the attackers had access to, and the capability to use, tools to create realistic and believable deepfake content. Further, they were able to play this pre-recorded content during a live video call.
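To make the computational constraint concrete, the following is a minimal structural sketch (in Python, using OpenCV for capture and display) of the per-frame loop such an attack implies. The swap_face function is a hypothetical stand-in for a trained face-swap model; no actual deepfake model is included, and this illustrates the shape of the pipeline, not the attackers' tooling.

    # Structural sketch of a per-frame video deepfake pipeline. swap_face is
    # a hypothetical placeholder for a trained face-swap model; the loop
    # shows why sustained real-time manipulation is computationally costly.
    import time
    import cv2  # OpenCV, a standard library for camera capture and display

    def swap_face(frame):
        """Placeholder for model inference: a real attack would detect the
        face, align it, run a generator network, and blend the synthetic
        face back into the frame -- within ~33 ms to sustain 30 fps."""
        return frame  # passthrough stub

    cap = cv2.VideoCapture(0)  # source video stream (here, a webcam)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        start = time.perf_counter()
        out = swap_face(frame)  # the computationally expensive step
        latency_ms = (time.perf_counter() - start) * 1000
        # Much above ~33 ms per frame, output drops below 30 fps and the
        # resulting lag or stutter becomes a potential giveaway on a call.
        print(f"frame latency: {latency_ms:.1f} ms")
        cv2.imshow("synthetic output", out)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

Playing pre-rendered deepfake footage, as reported in this case, sidesteps the per-frame inference cost at the price of losing interactivity.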

Narrative Testing

A narrative is a story, or an account of related events or experiences. A good narrative has story coherence, such that both the story being told and its relationship to the real world are cohesive and clear. In deepfake campaigns, threat actors consider and evaluate possible narratives—particularly in relation to events and context—to support the campaign, in an effort to maximize the believability and efficacy of the attack.

The storyline provided to the victim(s) was plausible and reinforced the victim's intention to act as the attackers requested.

Deployment

Deployment is the threat actor’s intentional transmission of deepfake content to the target audience through selected online channels.

The attackers were able to deploy the synthetic media in a live video call.

Amplification

Amplification is the threat actor’s intentional efforts to maximize the visibility, virality and target audience exposure to their deepfake content.

This was a narrowcast deployment and was not amplified beyond the target victims.

Post-Campaign

Post-Campaign is the period after the target audience has received and been exposed to the deepfake content.

This was a narrowcast deployment and was not an ongoing campaign beyond the engagement with the target victims.



Cognitive Security Recommendations

This section identifies the steps and measures to prevent and defend against the synthetic media/deepfake content assessed in this DTIR. For more detailed recommendations, training, or consultation, connect with Psyber Labs.


Real-time interactive deepfake attacks pose a number of cognitive security challenges that require holistic security considerations:

Awareness. Many individuals targeted by deepfake attacks, particularly real-time attacks such as the one in this case, are not aware that such threats exist or are even possible. Increased awareness of the existence and capabilities of deepfake technology among individuals and organizations is therefore critical. This includes understanding how deepfakes, including synchronous versions, are created, their potential for misuse, and the latest trends in deepfake technology.

Modes of thinking. Heuristics (mental shortcuts or rules of thumb) that facilitate cognitive biases (thinking errors) are vulnerabilities that are most readily exploited when an individual is not thinking critically. Organizations should therefore encourage critical thinking and skepticism when interacting with potentially manipulated media. Users should question the authenticity of unexpected or suspicious videos, especially those that would have significant implications if taken as true.

Appendix

DTIR™ Version: 1.0

Submission Date (UTC): February 06, 2024 19:11

Assessment Date (UTC): February 07, 2024 18:17

SHA256 Hash: 5b558f421d99a411390d64ee9d610667746b563a47136f4370c9546f45861618

Source: https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html

Source: https://hongkongfp.com/2024/02/05/multinational-loses-hk200-million-to-deepfake-video-conference-scam-hong-kong-police-say/

Source: https://www.scmp.com/news/hong-kong/law-and-crime/article/3250851/everyone-looked-real-multinational-firms-hong-kong-office-loses-hk200-million-after-scammers-stage
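As a usage note for the appendix metadata, the recorded SHA-256 can be checked against a local copy of the submitted file with a few lines of Python; the file path below is a hypothetical placeholder for the actual submission artifact.

    # Verify a local copy of the submitted file against the SHA-256 above.
    # "submitted_file.bin" is a hypothetical placeholder path.
    import hashlib

    EXPECTED = "5b558f421d99a411390d64ee9d610667746b563a47136f4370c9546f45861618"

    def sha256_of(path, chunk_size=1 << 20):
        """Hash the file in streamed chunks so large videos fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    assert sha256_of("submitted_file.bin") == EXPECTED, "hash mismatch"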