Deepfake Threats to Organizations
Posted on January 21, 2024 • 3 minutes • 550 words
On September 12, 2023, the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) of the United States released a joint information sheet. They suggest security professionals take proactive measures against evolving synthetic media (deepfake) threats. In this post, I'll highlight some points from the report.
Threats from synthetic media, such as deepfakes, present a growing challenge for all users of modern technology and communications. (p.1)
Deepfakes are a particularly concerning type of synthetic media that utilizes artificial intelligence/machine learning (AI/ML) to create believable and highly realistic media. (p.1)
Deepfakes are AI-generated highly realistic synthetic media that can be abused to: Threaten an organization’s brand, impersonate leaders and financial officers, and enable access to networks, communications, and sensitive information. (p.1)
[Organizations] should consider implementing a number of technologies to detect deepfakes and determine media provenance, including real-time verification capabilities, passive detection techniques, and protection of high priority officers and their communications. (p.1)
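One basic building block behind "media provenance" checks is cryptographic hashing: if you hold a known-good digest of a file, any tampered copy fails the comparison. The sketch below is my own illustration of that idea, not a technique prescribed by the report, and the function names are hypothetical.

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    # SHA-256 digest as a hex string; two inputs match only if bit-identical
    return hashlib.sha256(data).hexdigest()

def matches_known_good(data: bytes, expected_digest: str) -> bool:
    # Compare a received file against a digest recorded at publication time
    return file_fingerprint(data) == expected_digest
```

Note that a plain hash only proves a file is unchanged; it says nothing about whether the original was authentic, which is why the report also discusses real-time verification and passive detection techniques.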
Organizations and their employees may be vulnerable to deepfake tradecraft and techniques which may include fake online accounts used in social engineering attempts, fraudulent text and voice messages used to avoid technical defenses, faked videos used to spread disinformation, and other techniques. (p.3)
Deepfake detection tools have been fielded by several companies, including Microsoft, Intel, and Google. (p.7)
Prior to the 2020 elections, Microsoft introduced the Microsoft Video Authenticator, and in 2023 they rolled out the “About this Image” tool to give users more context about the authenticity of images they may receive. (p.7)
Intel introduced a real-time deepfake detector in late 2022 labeled FakeCatcher which detects fake videos. (p.7)
Google, in collaboration with academic researchers in Europe, contributed a large dataset of visual deepfakes to the FaceForensics Benchmark in 2019. (p.7)
…synthetic media threats that organizations most often encounter include activities that may undermine the brand, financial position, security, or integrity of the organization itself. (p.7)
Malicious actors may use deepfakes, employing manipulated audio and video, to try to impersonate an organization’s executive officers and other high-ranking personnel. (p.8)
This technique can have high impact, especially for international brands where stock prices and overall reputation may be vulnerable to disinformation campaigns. (p.8)
…impersonating key leaders or financial officers and operating over various mediums using manipulated audio, video, or text, to illegitimately authorize the disbursement of funds to accounts belonging to the malicious actor. (p.8)
Malicious actors may use the same types of manipulated media techniques and technologies to gain access to an organization’s personnel, operations, and information. These techniques may include the use of manipulated media during job interviews, especially for remote jobs. (p.8)
By 2030 the generative AI market is expected to exceed $100 billion, growing at an average rate of more than 35 percent per year. (p.10)
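To see how a 35 percent annual growth rate crosses the $100 billion mark, here is the standard compound-growth formula. The starting value and horizon below are illustrative assumptions of mine, not figures from the report:

```python
def compound_growth(present_value: float, annual_rate: float, years: int) -> float:
    # Future value under compound annual growth: FV = PV * (1 + r)**n
    return present_value * (1 + annual_rate) ** years

# Assumed example: a $15B market growing 35%/year for 7 years (2023 -> 2030)
projected = compound_growth(15.0, 0.35, 7)  # roughly $122B, above the $100B mark
```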
…the improved ability to lift a 2D image to 3D to enable the realistic generation of video based on a single image. (p.10)
Mandatory multi-factor authentication (MFA), using a unique or one-time generated password or PIN, known personal details, or biometrics, can ensure those entering sensitive communication channels or activities are able to prove their identity. (p.11)
…an overview of potential uses of deepfakes designed to cause reputational damage, executive targeting and BEC attempts for financial gain, and manipulated media used to undermine hiring or operational meetings for malicious purposes. (p.14)