Sophisticated financial scam facilitated by deepfake video content
The use of synthetic media to transmit deceptive information intended to obtain money or other things of value from the victim.
The intentional use of deepfake technology to appropriate the likeness of famous and/or credible authorities in an effort to legitimize a scheme to defraud the target audience.
Intentional strategy and tactics meant to mislead, misdirect and manipulate the perceptions of a target audience through simulation (showing the false) and/or dissimulation (hiding the real)
Zishing, or Zoom-based phishing, is an attack wherein a threat adversary (TA) uses deepfake technology to realistically emulate known, familiar, or expected interactants, deceiving the victim on a video call into complying with requests that serve the TA's interests.
Inattentional blindness (also called perceptual blindness) occurs when an individual fails to perceive an unexpected stimulus in plain sight, purely as a result of a lack of attention rather than any vision defects or deficits.
Tendency for people to give preferential treatment to others they perceive to be members of their own groups.
Tendency to do favors for people whom we like. Can be exploited by establishing rapport with the target before asking for action.
Tendency to comply with authority figures (usually legal or expert authorities). Exploitable by assuming the persona of, or impersonating, an authority figure.
Perceived shared identity based on similarity in a trait, affiliation, or belief. This can be a powerful influence tactic as people tend to be more open to persuasion by someone they identify with.
The Mere Exposure Effect is a cognitive bias where individuals show a preference for things they’re more familiar with. Repeated exposure to a stimulus increases liking and familiarity, even without conscious recognition.
Both humans and automation may be targeted by synthetic media attacks. This criterion indicates whether the target of the attack was human or automation. The highlighted icon represents the intended target of this submitted media.
A measure of whether the attack was constructed by a human or by artificial intelligence. The highlighted icon represents the method of control of this submitted media.
The medium is the format of the content submitted. Highlighted items represent all of the various formats contained in the submitted content.
Text
Image
Video
Audio
Technical complexity of the attack.
How damaging the attack was intended to be.
Drive and intention to accumulate large sums of money or other financial resources
Motivation is the underlying activator, purpose, or sustained reason why the deepfake threat actor wants to create nefarious synthetic media.
The goal orientation and basis for the attack were financial.
Targeting is the threat actor's intentional selection of a target audience: the group or individual they intend to impact with their deepfake campaign.
The attackers targeted individuals who were familiar with the Chief Financial Officer (CFO) and subordinate to the CFO, yet who had the authority to make financial transactions.
Research & Reconnaissance occurs when the threat actor effortfully gathers information about the target audience, the optimal channels on which to conduct the campaign, the relevant narratives for the attack, and the type of content that will have the desired impact on the target audience.
The attackers researched the company's leadership structure, identified the individuals in leadership roles, and located existing video and audio content with which to create realistic and believable deepfakes of multiple individuals at the financial company.
Preparation & Planning are the steps and processes that the threat actor takes to acquire the tools and content needed to create the deepfake media for their campaign and their deliberation for the execution of the campaign.
The attackers accumulated existing video and audio content to train AI models and create realistic deepfake material for presentation in a video call.
Production is the threat actor’s use of tools and content for the creation and development of deepfake media for their attack campaign.
Generating convincing real-time deepfakes requires significant computational power and advanced algorithms: video streams must be processed and manipulated in real time, which is computationally intensive and demands sophisticated hardware and software. In this case, the attackers had access to, and the capability to use, tools that produce realistic and believable deepfake content. Further, they were able to play this pre-recorded content during a live video call.
Narrative Testing. A narrative is a story, or an account of related events or experiences. A good narrative will have story coherence, such that both the story being told and its relationship to the real world are cohesive and clear. In deepfake campaigns, threat actors consider and evaluate the possible narratives—particularly in relation to events and context—to support the campaign in an effort to maximize the believability and efficacy of the attack.
The storyline provided to the victim(s) was plausible and reinforced their intention to act as requested by the attackers.
Deployment is the threat actor’s intentional transmission of deepfake content to the target audience through selected online channels.
The attackers deployed the synthetic media in a live video call.
Amplification is the threat actor’s intentional efforts to maximize the visibility, virality and target audience exposure to their deepfake content.
This was a narrow-cast deployment and was not amplified beyond the target victims.
Post-Campaign is the period after the target audience has received and been exposed to the deepfake content.
This was a narrow-cast deployment and was not an ongoing campaign beyond the engagement with the target victims.
Real-time interactive deepfake attacks pose a number of cognitive security challenges that require holistic security considerations:

Awareness. Many individuals targeted by deepfake attacks, particularly real-time attacks such as the one in this case, are not aware that such threats exist or are even possible. Increased awareness of the existence and capabilities of deepfake technology among individuals and organizations is therefore critical. This includes understanding how deepfakes, including synchronous versions, are created, their potential for misuse, and the latest trends in deepfake technology.

Modes of thinking. Heuristics (mental shortcuts or rules of thumb) that facilitate cognitive biases (thinking errors) are vulnerabilities most easily exploited when an individual is not thinking critically. Organizations should therefore encourage critical thinking and skepticism when interacting with potentially manipulated media. Users should question the authenticity of unexpected or suspicious videos, especially those that could have significant implications if true.
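The skepticism recommended above can also be made procedural rather than left to individual judgment. Below is a minimal, purely illustrative sketch of one such policy: any high-risk request arriving over a live call channel must be confirmed through a separate, trusted channel (for example, a callback to a known phone number) before anyone acts on it. All names, action categories, and thresholds here are hypothetical assumptions for illustration, not part of any real framework or product.

```python
# Hypothetical out-of-band verification policy for requests received on
# live calls. Action names, channel names, and the monetary threshold
# are illustrative assumptions only.

HIGH_RISK_ACTIONS = {"wire_transfer", "credential_change", "data_export"}


def requires_out_of_band_verification(action: str, channel: str,
                                      amount: float = 0.0,
                                      threshold: float = 10_000.0) -> bool:
    """Return True if the request must be confirmed via a separate,
    trusted channel before acting on it."""
    # High-risk actions requested over live calls are always flagged,
    # since real-time deepfakes can convincingly impersonate the caller.
    if channel in {"video_call", "voice_call"} and action in HIGH_RISK_ACTIONS:
        return True
    # Otherwise, fall back to a simple monetary threshold.
    return amount >= threshold


# A wire transfer requested on a video call is flagged regardless of amount.
print(requires_out_of_band_verification("wire_transfer", "video_call", 500.0))
# -> True
```

The design point of such a rule is that it removes the decision from the moment of social pressure: the person on the call cannot be talked out of the callback, because the policy, not their judgment of the caller's authenticity, triggers it.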