Why Deepfake Phishing Is a Looming Disaster

All is not always as it seems. As artificial intelligence (AI) technology has advanced, individuals have exploited it to distort reality. They created synthetic images and videos of everyone from Tom Cruise and Mark Zuckerberg to President Obama. While many of these use cases are harmless, other applications, such as deepfake phishing, are much more nefarious.

A wave of threat actors is harnessing AI to generate synthetic audio, image, and video content designed to impersonate trusted individuals, such as CEOs and other executives, and trick employees into handing over sensitive information.

Yet most organizations are simply not prepared to deal with these types of threats. In 2021, Gartner analyst Darin Stewart wrote a blog post warning that “as enterprises scramble to defend against ransomware attacks, they are doing nothing to prepare for an impending synthetic media attack.”

As AI advances rapidly and vendors like OpenAI democratize access to AI and machine learning through new tools like ChatGPT, organizations cannot afford to ignore the social engineering threat posed by deepfakes. If they do, they will leave themselves vulnerable to data breaches.

The State of Deepfake Phishing in 2022 and Beyond

Although deepfake technology is still in its infancy, its popularity continues to grow. Cybercriminals are already beginning to experiment with it to launch attacks against unsuspecting users and organizations.

According to the World Economic Forum (WEF), the number of deepfake videos online is growing at an annual rate of 900%. Meanwhile, VMware finds that two in three defenders say they’ve seen malicious deepfakes used as part of an attack, a 13% increase from last year.

These attacks can be devastatingly effective. For example, in 2021, cybercriminals used AI voice cloning to impersonate the CEO of a large company and tricked the organization’s bank manager into transferring $35 million to another account to fund an “acquisition.”

A similar incident occurred in 2019, when a fraudster used AI to impersonate the chief executive of a British energy company’s German parent firm, then called the British firm’s CEO and requested an urgent transfer of $243,000 to a Hungarian supplier.

Many analysts predict that deepfake phishing will only continue to rise, and that the fake content threat actors produce will become ever more sophisticated and convincing.

“As deepfake technology matures, [attacks using deepfakes] are expected to become more mainstream and develop into new scams,” said KPMG analyst Akhilesh Tuteja.

“They are becoming more and more indistinguishable from reality. It was easy to spot deepfake videos two years ago because they had a clunky [movement] quality and… the faked person never seemed to blink. But it’s getting harder and harder to tell them apart now,” Tuteja said.

Tuteja suggests that security managers need to be prepared for fraudsters using synthetic images and videos to bypass authentication systems, such as biometric logins.

How Deepfakes Impersonate People and Can Bypass Biometric Authentication

To execute a deepfake phishing attack, hackers use AI and machine learning to process a range of content, including images, videos, and audio clips. With this data, they create a digital imitation of an individual.

“Bad actors can easily create autoencoders – a kind of advanced neural network – to watch videos, study images and listen to recordings of individuals to mimic that individual’s physical attributes,” said David Mahdi, CSO and CISO advisor at Sectigo.
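Mahdi’s description maps onto a small, self-contained sketch. The PyTorch code below shows the core autoencoder mechanic: compress a face image into a latent code and reconstruct it, with training driven by reconstruction error. Real face-swap pipelines typically pair one shared encoder with a separate decoder per identity; the layer sizes, image resolution, and random stand-in data here are illustrative assumptions, not an actual deepfake implementation.

```python
# Minimal autoencoder sketch: the building block behind the technique
# Mahdi describes. All sizes and data are illustrative assumptions.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        # Encoder: flatten a 64x64 RGB face crop into a compact latent code
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Decoder: expand the latent code back into a reconstructed face
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x)).view(-1, 3, 64, 64)

model = FaceAutoencoder()
faces = torch.rand(8, 3, 64, 64)                    # stand-in for scraped face crops
loss = nn.functional.mse_loss(model(faces), faces)  # training minimizes reconstruction error
```

In a typical face-swap setup, the attacker trains a shared encoder on footage of both the attacker and the target, then routes the attacker’s latent codes through the target’s decoder, rendering the target’s face performing the attacker’s movements.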

One of the best examples of this approach occurred earlier this year, when hackers generated a deepfake hologram of Patrick Hillmann, Binance’s chief communications officer, by drawing on content from his past interviews and media appearances.

With this approach, threat actors can not only mimic an individual’s physical attributes to trick human users via social engineering; they can also fool biometric authentication solutions.

For this reason, Gartner analyst Avivah Litan recommends that organizations “not rely on biometric certification for user authentication applications unless it uses effective deepfake detection that ensures user liveness and legitimacy.”

Litan also notes that detecting these types of attacks is likely to become more difficult over time as the AI they use advances to create more convincing audio and visual representations.

“Deepfake detection is a losing proposition, because deepfakes created by the generative network are evaluated by a discriminative network,” Litan said, explaining that the generator aims to create content that fools the discriminator, while the discriminator continually improves at detecting artificial content.

The problem is that as the accuracy of the discriminator increases, cybercriminals can apply the information from it to the generator to produce content that is harder to detect.
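Litan’s description corresponds to the standard GAN training loop. The sketch below shows one round of that arms race: the discriminator first learns to separate real samples from generated ones, then the generator updates against the discriminator’s feedback, which is exactly the dynamic that lets fakes keep improving. Shapes, layer sizes, and learning rates are illustrative assumptions.

```python
# One round of the generator/discriminator arms race Litan describes.
# Architectures and data are illustrative assumptions, not a production GAN.
import torch
import torch.nn as nn

latent_dim, data_dim = 32, 784  # e.g., a flattened 28x28 image

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, data_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(64, data_dim) * 2 - 1  # stand-in for real media samples

# Step 1: the discriminator improves at telling real from generated content.
fake = generator(torch.randn(64, latent_dim)).detach()
d_loss = (bce(discriminator(real), torch.ones(64, 1)) +
          bce(discriminator(fake), torch.zeros(64, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Step 2: the generator trains directly against the improved discriminator,
# producing fakes that are harder to detect on the next round.
fake = generator(torch.randn(64, latent_dim))
g_loss = bce(discriminator(fake), torch.ones(64, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```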

The Role of Security Awareness Training

One of the easiest ways for organizations to combat deepfake phishing is through security awareness training. While no amount of training can prevent all employees from falling victim to a highly sophisticated phishing attempt, it can reduce the likelihood of incidents and security breaches.

“The best way to combat deepfake phishing is to incorporate this threat into security awareness training. Just as users learn to avoid clicking on web links, they should receive similar deepfake phishing training,” said John Oltsik, analyst at ESG Global.

Part of this training should include a process for reporting phishing attempts to the security team.

In terms of training content, the FBI suggests that users can learn to identify deepfake spear phishing and social engineering attacks by looking for visual indicators such as distortions, warping, or inconsistencies in images and video.

Teaching users how to identify common red flags, such as multiple images with suspiciously consistent eye spacing and placement (a common artifact of GAN-generated faces), or timing issues between lip movement and audio, can keep them from falling prey to a skilled attacker.
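To make the first of those red flags concrete, the short sketch below flags a batch of face images whose eye landmarks barely move between shots. It assumes eye coordinates have already been extracted from aligned face crops by a separate landmark detector, and the tolerance is a hypothetical value, not a validated threshold.

```python
# Heuristic check for one deepfake red flag: near-identical eye placement
# across multiple images. Landmark extraction is assumed to happen upstream;
# the tolerance below is a hypothetical value, not a tuned threshold.
import numpy as np

def suspiciously_consistent_eyes(eye_coords: np.ndarray, tol: float = 2.0) -> bool:
    """eye_coords: (n_images, 2 eyes, 2) pixel coordinates from aligned crops."""
    spread = eye_coords.std(axis=0).max()  # largest positional jitter across images
    return bool(spread < tol)              # near-zero jitter is a warning sign

# Three "photos" whose eyes sit at almost exactly the same pixels:
coords = np.array([[[112.0, 140.0], [208.0, 141.0]],
                   [[111.5, 139.8], [208.2, 140.9]],
                   [[112.1, 140.2], [207.9, 141.1]]])
print(suspiciously_consistent_eyes(coords))  # True: flag for human review
```

A signal like this should feed a review queue rather than trigger an automated block, since genuine aligned portraits can also cluster tightly.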

Fighting Adversarial AI with Defensive AI

Organizations can also attempt to combat deepfake phishing using AI. Generative adversarial networks (GANs), a type of deep learning model, can produce synthetic datasets and generate fake social engineering attacks.

“A strong CISO can rely on AI tools, for example, to detect fakes. Organizations can also use GANs to generate possible types of cyberattacks that criminals have not yet deployed, and devise ways to counter them before they occur,” said Liz Grennan, expert associate partner at McKinsey.
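One way to act on that suggestion is to fold GAN-generated samples into a detector’s training set, so the model sees plausible fakes before attackers deploy them. The sketch below trains a simple binary classifier on real media features plus generator output; the untrained stand-in generator, feature shapes, and labels are assumptions carried over from the GAN sketch above, not a production pipeline.

```python
# Defensive use of a GAN: label generator output as "fake" and train a
# detector on it alongside real samples. All components are illustrative;
# in practice the generator would be a pretrained model.
import torch
import torch.nn as nn

latent_dim, data_dim = 32, 784
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, data_dim), nn.Tanh())

detector = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                         nn.Linear(128, 1))  # outputs a logit: fake vs. real
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real = torch.rand(64, data_dim) * 2 - 1                      # genuine media samples
synthetic = generator(torch.randn(64, latent_dim)).detach()  # GAN-made "future attacks"

x = torch.cat([real, synthetic])
y = torch.cat([torch.zeros(64, 1), torch.ones(64, 1)])       # 1 = fake

loss = loss_fn(detector(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```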

However, organizations that take these routes must be prepared to spend time on them, as cybercriminals can also use these capabilities to innovate new types of attacks.

“Of course, criminals can use GANs to create new attacks, so it’s up to businesses to stay ahead of the curve,” Grennan said.

Above all, companies must be prepared. Organizations that don’t take the threat of deepfake phishing seriously will leave themselves vulnerable to a threat vector that has the potential to explode in popularity as AI becomes more democratized and accessible to malicious entities.
