GenAI is Empowering a New Class of Cyber Attacks

Cyber Attack Prevention for Enterprise Voice

If you think protecting your organization against the constant barrage of cyberattacks is already a Herculean task, that job got a whole lot harder thanks to the rapid increase in availability of Generative AI (GenAI) applications. 

GenAI has been weaponized and is targeting the Voice Channel.

Background

An outgrowth of traditional AI-powered data analysis and predictive modeling, GenAI technology applies sophisticated algorithms and training models to large swaths of data to produce something completely new and realistic, including text, images, video, music, speech, software code, and even product designs.

GenAI, A Tool for Good / Evil

Advocates tout the “creative” power of GenAI platforms and applications like OpenAI’s ChatGPT and Google Bard to drive efficiencies and speed innovation. But cybersecurity experts, tech leaders, and even GenAI platform developers themselves warn of unforeseen consequences when the unrestrained power of GenAI falls into the wrong hands.

And no one is more excited about that possibility than the growing legion of AI-empowered cyber-thieves. Voice scammers, vishers and phishers now have a powerful new set of tools to advance their criminal campaigns against high-value enterprise targets, complete with interactive scripts, deepfake images, and voice cloning used to further their deceits.

Business Leaders are Worried About GenAI Tech

The weaponization of GenAI technology for criminal intent is clearly a concern for business leaders. According to this IBM report on the emerging threat landscape, “Executives expect hackers faking or impersonating trusted users to have the greatest impact on the business, followed closely by the creation of malicious code.”

Cybercriminals are already applying both mainstream and illicit AI models to a wide variety of activities, including:

  • Writing detection-evading code for malware
  • Identifying vulnerabilities in specific organizations for more precise attack targeting
  • Launching malicious deepfake attacks through AI-generated video impersonations
  • Creating realistic phishing email and vishing (voice phishing) phone campaigns that lure employees into scams

However, nowhere is the development and exploitation of synthetic content (content created through a GenAI program or application) more pernicious, nor the attack surface more exposed, than in the pathway provided by the enterprise voice channel.

A Perfect Environment for AI-Powered Attacks…Your Voice Channel

The fact is, the historically under-protected enterprise voice channel is already a favored medium for social engineering and vishing attacks perpetrated by nefarious callers intent on gaining access to accounts or internal networks. That’s because a voice call from a skilled threat actor creates a direct connection to malleable human targets who can then be manipulated into divulging account- and network-compromising information. History has proven this surprisingly easy: there are numerous examples of successful vishing schemes carried out against large, presumably sophisticated organizations such as MGM, Caesars, Twitter, and Robinhood, leading to devastating data breaches.

Now, with easy access to cheap (often free) Generative AI applications, these same cybercriminals have a whole new arsenal of tools to supercharge their deceits.

Gathering personal information for a social engineering attack can now be accomplished in seconds. And with just a bit of information about the person they are imitating, plus a short audio clip lifted from social media, cybercriminal impostors can use GenAI models not only to write convincing, interactive scripts that resemble a trusted source, but to actually sound like that source through the use of real-time voice cloning and speech synthesis applications.

GenAI, New Fuel for the Criminal Enterprise

Not only are new AI-generated threat tactics expanding exponentially, so, too, are their legions of criminal practitioners. As noted in Infosecurity Magazine, Voice Cloning as a Service (VCaaS) is now widely offered through the criminal market on the dark web for as little as $5 a month, or free to use with a registered account – no training required – lowering the barrier to entry for a whole new generation of would-be hackers.

As Jonathan Nelson, director of product management for Hiya, predicts, “We should expect a pretty drastic shift in the scam ecosystem and the way scam calls are created within the next couple of years. It’s going to be incredibly fast.”

Can Regulators Fix GenAI Cyber Threats?

The Federal Trade Commission (FTC) has already started sounding the alarm about the growing incidence of GenAI-enhanced voice scams.

Cybersecurity experts are now confirming that deepfake technology has advanced to the point where it can be used in real time, enabling fraudsters to closely replicate not only someone’s voice, but also their image and movements, in a call or virtual meeting.

Regulators are examining potential rules for GenAI application development that might, for instance, require a subtle sound “watermark” be applied to altered audio content. But such steps will do little to deter the underground proliferation and weaponization of AI-powered tools available to the criminal enterprise.

Fighting GenAI Attacks with a GenAI Defense?

In a report on GenAI’s impact on businesses, management consulting firm Bain & Company acknowledges its future potential as a key cybersecurity tool, but notes the challenge of creating the quality, extensive dataset required for model training, given the sensitivity and siloed nature of security data. The possibility of a fully automated, GenAI-powered system for detection, containment, eradication, and recovery from AI-generated cyberattacks, the authors note, “is unlikely over the next 5 to 10 years, if at all.”

That does not mean AI cannot still play a significant role in enterprise voice threat detection and mitigation. For instance, predictive AI, a precursor to GenAI, is foundational to contact center caller authentication applications that analyze voice biometrics (specific voice patterns and audio characteristics) using sophisticated algorithms, pattern detection, and machine learning. These applications can detect and flag subtle audio characteristics consistent with fraudulent intent.
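To make the idea concrete, here is a deliberately simplified sketch of voiceprint comparison: an audio clip is reduced to a small feature vector, and a caller is flagged when their features drift too far from an enrolled profile. The feature choices, distance metric, and threshold here are invented for illustration – real caller authentication systems use far richer spectral and prosodic features and trained models – but the compare-against-a-profile pattern is the same.

```python
import math

def extract_features(samples):
    """Reduce an audio clip (a list of amplitude floats) to a tiny
    feature vector: mean absolute amplitude, RMS energy, and
    zero-crossing rate."""
    n = len(samples)
    mean_abs = sum(abs(s) for s in samples) / n
    rms = math.sqrt(sum(s * s for s in samples) / n)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / (n - 1)
    return (mean_abs, rms, zcr)

def matches_profile(enrolled, candidate, threshold=0.1):
    """Accept the caller only if their features sit within a fixed
    Euclidean distance of the enrolled voiceprint."""
    return math.dist(enrolled, candidate) <= threshold

# Enroll one clip, then compare a similar clip against it.
enrolled = extract_features([0.1, -0.2, 0.15, -0.1, 0.12, -0.18])
candidate = extract_features([0.11, -0.19, 0.14, -0.11, 0.13, -0.17])
print(matches_profile(enrolled, candidate))  # similar clip → True
```

In a production system the threshold would be tuned against labeled fraud data, which is exactly the kind of continuous retraining the cat-and-mouse dynamic demands.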

But with GenAI, threat actors can simply adapt their voices, scripts, and behaviors on the fly, deploying more refined detection-evasion tactics when challenged. That means voice-based fraud detection applications must themselves continue to “learn” and adapt to stay ahead of the threats – a continuous game of cat and mouse with no clear end in sight.

A Focus on Data Protection is Overlooking the Obvious

Even in light of new attack tactics, standard measures to protect an organization from a data breach must still be upheld. Common measures traditionally include:

  • data encryption
  • data backups
  • software patches
  • zero-trust access control
  • employee training

Each of these is, itself, an important practice for data security. However, when applied to voice-based attacks, all share a common weakness: they do nothing to close the path of intrusion or to stop intruders from reaching the person at the end of the call.

No amount of technology or training can change the fact that humans are inevitably the weak link standing between skilled threat agents and the data they seek.

Regardless of how their tactics have changed, the goal of the voice-based cybercriminal remains the same: To make contact with, and crack, the defenses of an unwitting human target in order to extract compromising information. GenAI is just making that much easier to achieve.

First Line of Defense: The Network Edge

Because of the unique, personal nature of voice-based social engineering and vishing attacks, an effective defense requires a paradigm shift of priorities from data protection to human protection.

Whether a nefarious call and the message it carries is live or auto-generated, AI-augmented or not, its ability to breach a human contact is neutralized if it can’t reach that target in the first place.

And that is where Voice Traffic Filter comes in.

Voice Traffic Filter is a purpose-built voice security system that detects and removes nuisance and nefarious calls, in their many and evolving forms, at the network edge – before those calls can impact network performance or reach, and potentially breach, a human endpoint.

Voice Traffic Filter’s sophisticated, multi-layered call analysis capabilities leverage the intelligence found in the call data itself. It applies a meta-analysis of worldwide data and threat trends, layered with pattern recognition and behavior analytics, to form a highly sensitive profile of caller status and intent. Legitimate callers pass through, while those flagged as suspicious or clearly malicious are blocked or routed to another resource. Voice Traffic Filter is built on an AI-powered machine learning algorithm that is continually updated in real time to ensure an effective defense against ever-evolving voice-based threats.
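The multi-layered pass/flag/block decision described above can be sketched as a simple scoring pipeline. The layer names, weights, and thresholds below are invented for illustration – they are not Voice Traffic Filter’s actual implementation – but they show how independent signals (reputation data, calling patterns, behavioral indicators) can combine into a single routing decision at the edge.

```python
def score_call(call):
    """Accumulate a risk score from several independent analysis layers.
    The signals and weights are hypothetical examples."""
    score = 0
    # Layer 1: global threat intelligence (number seen in known campaigns)
    if call.get("on_threat_list"):
        score += 60
    # Layer 2: pattern recognition (e.g., burst-dialing behavior)
    if call.get("calls_last_minute", 0) > 10:
        score += 25
    # Layer 3: behavioral analytics (e.g., spoofing indicators in signaling)
    if call.get("caller_id_mismatch"):
        score += 30
    return score

def route_call(call, block_at=50, review_at=25):
    """Map the combined risk score to one of three routing outcomes."""
    score = score_call(call)
    if score >= block_at:
        return "block"
    if score >= review_at:
        return "route_to_review"
    return "pass"

print(route_call({"calls_last_minute": 2}))       # low risk → "pass"
print(route_call({"caller_id_mismatch": True}))   # mid risk → "route_to_review"
print(route_call({"on_threat_list": True}))       # high risk → "block"
```

Keeping each layer independent is what makes real-time updates practical: new threat intelligence can adjust one layer’s inputs without retraining or redeploying the whole pipeline.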

With Voice Traffic Filter, it does not matter what technology a criminal caller may have used to boost the credibility of their deception; the application simply blocks their pathway to delivery and, in so doing, significantly reduces the organization’s risk exposure to cyber breach.

As GenAI continues to change how threat agents operate and how businesses respond, Voice Traffic Filter remains a consistent, powerful first line of defense. It should be a key part of any comprehensive voice channel defense strategy, one that both complements and enhances the overall threat defense efficacy of the enterprise cybersecurity ecosystem.

To learn more, click here.