Why deepfakes are set to be one of 2024’s biggest cybersecurity dangers


Artificial Intelligence (AI) has revolutionized the creation of visual content. Multiple AI image-generation platforms have become available in recent years, and now new platforms such as Sora, OpenAI’s flagship AI video generation model, are making their way to market.

AI image and video platforms have allowed individuals and businesses to create content with limitless creativity and scalability, while also improving efficiency in cost and time. However, the swift evolution of this technology has outpaced regulatory measures, leaving a gap for its misuse by malicious individuals or groups.

In recent years, deepfake images and videos have proliferated: media digitally manipulated to replace a person’s likeness, whether their voice, face, or body. The technology has been cast into the spotlight by the recent targeting of high-profile figures, including deepfake audio of Keir Starmer, deepfake pornographic images of Taylor Swift, and a computer-generated video of Martin Lewis. Advances in AI mean that deepfakes are becoming increasingly sophisticated and difficult to spot, and with the right equipment they can even be broadcast live: you could hold a real-time conversation with somebody who looks and sounds completely different from the person actually on the other end of the call.

Recent figures show that deepfake fraud material reportedly increased by 3,000% in 2023. And with the technology now quick, cheap, and easy for virtually anyone to use, threat actors have rapidly adopted it into their arsenal of cyber-attack techniques.

The cybersecurity risk of deepfakes to businesses

Deepfake technology introduces several cyber risks to businesses. Over the years, deepfakes have been used to spread misinformation, deceive audiences, manipulate public opinion, and defame individuals. So, understanding the potential risk is crucial.

Financial damage

The financial implications of deepfake attacks pose a significant threat to businesses, primarily through fraud and scams that impersonate high-ranking, decision-making executives whom staff trust and respect.

Cybercriminals can create highly convincing audio or video recordings of a CEO, for example, instructing employees to transfer funds or share sensitive information. These deepfakes can bypass traditional security measures, leading to substantial financial losses. In 2019, a UK-based energy firm lost $243,000 after cybercriminals used voice-generating artificial intelligence software to imitate the CEO of the brand’s German parent company and enable an illicit transfer.

Operational risks

Deepfakes can increase the efficacy of social engineering and phishing attacks, which pose significant operational concerns for businesses. Traditional phishing attempts often rely on poorly written or generic emails, but deepfakes add a new layer of believability. Attackers can create personalized emails or calls from trusted individuals within the organization, making it harder for employees to spot malicious activity.

Earlier this year, a finance employee at a multinational corporation in Hong Kong was deceived into transferring $25 million to cybercriminals. The criminals used deepfake technology to impersonate the company's chief financial officer in a video conference call. The elaborate scam involved the employee participating in what appeared to be a meeting with several other staff members, all of whom were actually deepfake recreations. This sophisticated attack successfully gained the employee's trust, leading to huge financial losses for the company.

Reputational harm

Deepfakes also have the potential to destroy a brand or individual’s reputation. For example, a deepfake showing a CEO doing or saying something harmful or controversial could significantly damage trust, business continuity, and market stability, leading to a crash in share prices and an online witch-hunt.

By the time evidence of a deepfake becomes public, it may be too late to stop significant damage being done to your company’s reputation.

Whatever its form, such an attack on your organization could have significant consequences. So, what can you do to address these risks?

Spotting deepfakes and mitigating risk

As deepfake attacks grow, it is critical for organizations to take proactive steps to safeguard their environments. By creating a strong, security-focused culture and updating security procedures to account for the rise in these tactics, organizations can work to mitigate their risk.

Educate employees and partners

Regular training sessions should be held to inform employees about deepfake technology and its possible consequences for the organization. Teach staff how to recognize the indicators of a deepfake, such as unusual facial movements or inconsistencies in audio-visual synchronization.

Strengthen identity verification

This is essential, especially for transactions involving money or sensitive information. Traditional verification methods, such as passwords and PINs, can be easily compromised, which is why implementing multi-factor authentication (MFA) is crucial. This adds an extra layer of security by requiring multiple forms of verification before granting access. You could also create trusted phrases to confirm someone’s identity, acting as a last line of defense when attempting to ward off attacks. This multi-layered approach ensures that even if one security process is compromised, additional measures are in place to prevent unauthorized access and to protect sensitive information.
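The multi-factor approach described above can be sketched in code. Below is a minimal, hypothetical Python illustration of one common second factor: a time-based one-time password (TOTP) check per RFC 6238, using only the standard library. The secret, function names, and drift window are illustrative assumptions for the sketch, not a production implementation or any specific vendor’s API.

```python
# Hypothetical sketch of a TOTP second factor (RFC 6238 / RFC 4226).
# Secret, function names, and the +/- one-step drift window are
# illustrative choices, not a production design.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code from a shared secret."""
    counter = struct.pack(">Q", timestamp // step)          # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def verify(secret: bytes, submitted: str, now=None) -> bool:
    """Accept the code for the current 30s step or one step either side,
    tolerating small clock drift between client and server."""
    now = int(time.time()) if now is None else now
    return any(
        hmac.compare_digest(totp(secret, now + drift * 30), submitted)
        for drift in (-1, 0, 1)
    )
```

Even a simple check like this means a convincing deepfake voice or video alone is not enough: the attacker would also need the victim’s enrolled device or shared secret, which is the point of layering verification methods.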


 
