‘Think like a bad guy’: How AI could turn the tide against robocalls

Automated filtering and sorting increasingly combat automated messages and calls.

The scourge of robocalls flooding our phones has gotten scarier with the advent of voice spoofing and other tactics that can make it harder to tell if you’re getting scammed in real time. But some technologists say AI can be used to defeat the very campaigns it’s spawning.

It’s all a matter of learning to “think like a bad guy,” Alex Quilici, CEO of spam-blocking service YouMail, told Tech Brew in an interview.

Driving the conversation around the intersection of robocalls and artificial intelligence is a Federal Communications Commission (FCC) inquiry, opened Nov. 15, that seeks to understand how AI can aid (or frustrate) efforts to crack down on “illegal and unwanted robocalls and robotexts.”

The FCC’s move is preliminary, Quilici said, but he noted that the telecom industry is already employing AI in some sophisticated ways, like pattern recognition, to help discern and break spam cycles—or “understand when numbers are misbehaving,” he said.

Carriers and platforms commonly feed algorithms information about calls that their customers receive to determine an individual call’s “risk factor,” he said. YouMail uses AI in another way, to flag voice messages that could be linked to known scams.
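Carriers haven't published their scoring models, but the general shape of a per-call "risk factor" can be sketched as a toy function over hypothetical call-pattern features. Every feature name and threshold below is invented for illustration:

```python
# Illustrative only: carriers don't disclose their scoring models. This toy
# function shows how call-pattern signals might combine into a risk score.
def call_risk(calls_per_hour, answer_rate, complaint_reports, number_age_days):
    """Toy risk score in [0, 1] from hypothetical call-pattern features."""
    score = 0.0
    if calls_per_hour > 100:    # spam campaigns dial at high volume
        score += 0.4
    if answer_rate < 0.1:       # most recipients screen or reject the number
        score += 0.2
    if complaint_reports > 5:   # recipients have flagged the number as spam
        score += 0.3
    if number_age_days < 7:     # scammers rotate freshly provisioned numbers
        score += 0.1
    return min(score, 1.0)
```

A production system would learn these weights from labeled call data rather than hand-tuning them, but the inputs (volume, answer rates, complaints) match the kinds of signals the article describes carriers feeding their algorithms.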

YouMail’s program “classifies a new message to find the closest message that’s out there and says, ‘Oh, OK, this is the health insurance scam,’” Quilici said. “All the calls from numbers doing this can be blocked.”
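YouMail hasn't detailed how its classifier works, but the closest-message idea Quilici describes can be sketched with simple bag-of-words cosine similarity. The scam transcripts, labels, and threshold below are all invented for illustration:

```python
import math
from collections import Counter

# Hypothetical corpus of transcripts from known scam campaigns
KNOWN_SCAMS = {
    "health insurance scam": "we have new affordable health insurance plans call now to enroll",
    "car warranty scam": "your car warranty is about to expire press one to extend coverage",
    "irs scam": "this is the irs you owe back taxes call immediately to avoid arrest",
}

def _vector(text: str) -> Counter:
    """Bag-of-words term counts for a lowercased transcript."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def closest_scam(message: str, threshold: float = 0.3):
    """Return the best-matching known scam label, or None if nothing is close."""
    vec = _vector(message)
    label, score = max(
        ((name, _cosine(vec, _vector(sample))) for name, sample in KNOWN_SCAMS.items()),
        key=lambda pair: pair[1],
    )
    return label if score >= threshold else None

closest_scam("hello we are calling about affordable health insurance plans you can enroll today")
# → "health insurance scam"
```

Once a new voicemail matches a known campaign, every number leaving that message can be blocked, which is the blocking step Quilici describes.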

It’s worth noting a distinction: carriers and anti-spam platforms typically employ discriminative AI models, which look for patterns in data sets, to combat bad actors’ use of generative AI, the sort of technology responsible for bogus kidnapping audio or the contents of phishing emails. These generative AI–aided tactics are typically used to get a potential victim’s attention before transferring them to a live agent, he said, noting that “we haven’t seen any fully automated scams yet,” but that using generative AI for good could be the next frontier.

“I think it’s helpful to think like a bad guy, and then you can figure out where the good guys come in, right? So if I’m a bad guy, what I want to do is use generative AI to create lots and lots of different variants of my interactions,” he said. “The good guys will start generating lots and lots of problematic messages, and then see if they actually occur…So that’s clearly one place it’s going to go.”


As both discriminative and generative AI get more powerful, they could lead to new consumer tools and platforms.

Georgia Tech professor Mustaque Ahamad contributed to RoboHalt, a research prototype that uses natural language processing to power a virtual assistant that screens calls from unknown numbers, weeding out robocalls and scams before your cell phone rings.

“In the past, we’ve seen this [with] human screeners, but the idea is that AI can implement this in an automated fashion,” Ahamad told us. “It doesn’t ring the phone, so you don’t even notice it, and it engages with the caller in a natural way.”

Most robocallers—even if a live agent is on the other end of the line—stick to a script and won’t engage in normal human conversation patterns, he said. In those cases, the virtual assistant can determine it’s an unwanted call and end the conversation.

If the program determines the call is not a robocall, it records more details and pings the phone user with a summary of the caller’s request, much like a live secretary answering the phone and taking notes.
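RoboHalt's internals aren't public, but the screening flow Ahamad describes (ask off-script questions, cut off scripted callers, summarize legitimate ones) can be sketched as a toy screening loop. The responsiveness check below is a crude stand-in for the NLP a real assistant would use, and all questions and logic are invented for illustration:

```python
# Illustrative sketch only: models the screening behavior described in the
# article, not RoboHalt's actual implementation.
from dataclasses import dataclass

@dataclass
class ScreenResult:
    is_robocall: bool
    summary: str

def _content_words(text: str) -> set:
    """Lowercased words longer than three characters, punctuation stripped."""
    words = (w.strip("?.,!").lower() for w in text.split())
    return {w for w in words if len(w) > 3}

def responds_to(question: str, reply: str) -> bool:
    # Placeholder heuristic: a reply sharing a content word with the question
    # counts as responsive. A real assistant would use an NLP model here.
    return bool(_content_words(question) & _content_words(reply))

def screen_call(replies) -> ScreenResult:
    """Ask off-script questions; scripted callers ignore them and get cut off."""
    questions = [
        "Hi, I'm screening this call. Who are you trying to reach?",
        "Can you briefly say what this is regarding?",
    ]
    notes = []
    for question, reply in zip(questions, replies):
        if not responds_to(question, reply):
            # Scripted robocalls plow ahead regardless of what is asked
            return ScreenResult(True, "Caller stayed on script; call ended.")
        notes.append(reply)
    # Not a robocall: summarize for the user, like a secretary taking notes
    return ScreenResult(False, "Caller said: " + " / ".join(notes))
```

A caller who barrels through a warranty pitch regardless of the question gets flagged as a robocall, while one who actually answers gets passed along with notes, mirroring the secretary analogy above.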

Ahamad emphasized that RoboHalt is a research project, not a commercially available product. Before technology like it comes to market, technologists and regulators will have to consider questions about call monitoring and consent, he said. (The FCC inquiry asks commenters to expand on “the impact of emerging AI technologies on consumer privacy rights.”)

Ultimately, though, advances in call-screening methods and other AI-informed tactics aim to create more hurdles for bad actors trying to leverage the emerging technology for nefarious purposes, Ahamad said.

YouMail’s Quilici agreed that the rise of AI-driven pattern recognition could help defenders keep pace with an ever-evolving threat landscape.

“The bad guys have to worry about not talking to people, but talking to AI. That AI is capturing what they’re doing and understanding the patterns,” he said. “And then that allows you to look for that behavior when they’re hitting real consumers.”
