
How AI could be an election threat in the hands of well-resourced bad actors

The CTO of Sophos weighs in on how AI is changing online security concerns.

Chester Wisniewski


The US is barreling toward a pivotal presidential election, right as generative AI tech has gotten realistic enough to make prospective voters second-guess images and text they see online. That’s led some to term 2024 “the first AI election.”

But if it’s any consolation, Chester Wisniewski, director and global field CTO at security software firm Sophos, said he doubts that the technology is yet accessible enough for just anybody—say, “your brother-in-law”—to make convincing deepfakes. Rather, the bigger threat is likely from the concerted efforts of well-resourced state-backed groups.

We talked to Wisniewski about AI threats to the election, the difficulty of proving AI’s role in scams, and measures to prevent disinformation.

This conversation has been edited for length and clarity.

Could you start by telling me a little about what you’re seeing in terms of the latest threats from generative AI? How has that evolved over the past year or so?

The landscape is messy, because how much money and how many resources you have very much determines what kind of outcomes you can get. So people that are well-resourced financially and have data science teams, like ourselves, are able to now generate very realistic deepfakes, fake images, fake audio, cloned voices—not quite cloned video, not quite in real time yet.

I’m going to try to stay apolitical in this, but clearly, there’s a lot of disinformation out there. And we’ve seen this growing in past election cycles in the United States in particular, where videos have been edited to perhaps make somebody look ill who’s not really ill, or stumble over their words in a way that maybe they didn’t in real life. We’re not quite at the point where that can be generated on demand. But voice cloning and image cloning are certainly a thing now for a well-resourced organization or nation-state. I don’t think it’s something that amateurs and people monkeying around can do effectively, because the tells that it’s fake are just too obvious without a lot of effort and resources and experts behind it.

The demonstrations of this stuff on 60 Minutes come from people who are incredibly clever and put a lot of time and money into those demos. It’s not something you’d gin up in half an hour as an amateur on the web. And I think that’s true of almost everything AI can do right now.

Similarly, with generative AI, like, is it impressive? Sure, the fact that I can have a conversation with something for free on OpenAI’s website is pretty mind-blowing. But how you can use that effectively is another question. If we’re talking about coming up with new scams or content, it’s terrible—it’s literally just regurgitating things that it already knows. All it means is that we can accelerate the pace at which a human could do something humans have already done…But a well-resourced actor could train their own models to be much better at something than the generic ones that you and I can pay $20 a month to subscribe to and just monkey around with, like ChatGPT or Google’s Bard. We spend lots of time and money building our own models that we train on specific technical things so that they can assist the researchers in our labs with accelerating their workflows and analyzing massive amounts of data. And of course, we’ve got the money to spend on the huge amount of compute you need to build those models, and it allows us to build very specific ones to do what we want. So certainly somebody with ill intentions and a lot of money could probably build models that would be more effective at generating fake content than the ones you and I have access to.

If I look at the political landscape of where AI sits right now and ask what the biggest risks are, I think it’s the mass generation of fake accounts to push an agenda that a human wrote. I don’t think we’re worried about what the AI is going to write, or that it’s going to invent some new thing that is disruptive. A human is going to come up with the thing they want to do, and this allows them to do it at a scale that we haven’t seen before.

And in 2016, with the Internet Research Agency allegedly running a bot farm on Twitter, promoting both pro- and anti-Clinton and Trump memes, trying to, I believe, shake people’s belief in the system—that was at human scale. It was cheap humans in Russia that they could pay small amounts of money to. But it meant they could only manage probably thousands of accounts with the number of humans they had operating them. And humans also are not great at this. If they’re Russian humans, for example—not to be casting aspersions, but we have an awful lot of suspicions about the Russians right now—they’re not great at writing English that convinces [anyone] that they’re a dude in Omaha or a gal in Atlanta. And ChatGPT is great at that. These large language models allow foreign actors to easily disguise themselves at a scale and with a speed that they couldn’t before.

For an Internet Research Agency meme, they had to have some really good English speakers write that stuff and grammar-check it, and then they would make small alterations and manually send out all these tweets or Instagram posts or whatever they were doing. This can all be automated now, and that’s where my fear is greatest. I’m less concerned with some of the reports I’ve seen of Republicans making AI-generated commercials. Those are regulated organizations; if we really want to do something about it, we can pass laws, we can have norms and agreements on what we’re going to do about these things. I have less and less faith in our ability to regulate things these days, but it could be regulated if we put our minds to it.


But the disinformation on TikTok and Instagram and Facebook and X…does concern me, because we already saw [that] human-scale disruption allegedly had some impact, I believe, in the past. And this allows these things to happen at scale.

Have you paid attention to Adobe’s Content Authenticity Initiative or other projects to create digital labels for AI-generated content? What are your thoughts on that approach?

Some of us were asking for an initiative like that 20 years ago, so we wouldn’t be in this situation. Unfortunately, we waited until it was a problem, and now we’re doing something about it. I’m not well-versed in all the details of it, but I’m encouraged that the conversation’s happening, and I think we’re making progress. I haven’t done a full survey—I know Sony has been doing this in their camera firmware for the last few years. And I believe the other major manufacturers are, as well—Canon, Nikon, etc.

But of course, citizen journalism is such a thing now that it needs to be your iPhone, too. It needs to be your Android devices, because we wouldn’t know about so many of the major newsmaking events of the last five years if it wasn’t for citizen journalism being out there recording…We can’t just depend on professional journalists and equipment.
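To make the idea of in-camera content signing concrete, here is a minimal sketch in Python. It assumes the cryptography library, uses a freshly generated Ed25519 key pair as a stand-in for a camera’s firmware key, and a hypothetical photo.jpg; real provenance systems such as the Content Authenticity Initiative’s embed signed manifests that also carry capture and edit history, so treat this as an illustration of why tampering breaks the check, not as the actual format.

```python
# Illustrative sketch only: a stand-in for in-camera content signing.
# Real provenance systems embed signed manifests (capture device, edit
# history); this toy version just signs and verifies the raw image bytes.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# "Camera" side: firmware holds a private key and signs each capture.
camera_key = ed25519.Ed25519PrivateKey.generate()
public_key = camera_key.public_key()  # published so anyone can verify

with open("photo.jpg", "rb") as f:  # hypothetical image file
    image_bytes = f.read()
signature = camera_key.sign(image_bytes)

# Verifier side (newsroom, platform): any alteration to the bytes,
# including an AI-generated substitute, makes verification fail.
try:
    public_key.verify(signature, image_bytes)
    print("Provenance check passed: bytes unchanged since capture.")
except InvalidSignature:
    print("Provenance check failed: image altered or not signed by this key.")
```

Part of the appeal of doing this in camera firmware, as the manufacturers above are doing, is that a signing key held in hardware is harder to extract or swap out than one held in ordinary software.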

It seems like we’ve heard a lot about AI being used nefariously in various online scams. But there are only a few oft-cited instances of actual political deepfakes. Have you actually seen any of these political uses yet, or is this more hypothetical at this point?

I think it’s hypothetical at this point. The problem is we can never prove it…It used to be that a human had to text you the romance scam and then get you to move to WhatsApp and then lure you in…We suspected AI was being used by some of these pig butchering scams, but we couldn’t prove it, because there’s no way to know it’s a bot at the other end without a spycam in the bedroom of the person scamming you…And eventually they screwed up, and we did get the “I’m just a large language model” blurb in the middle of one of these chat transactions when we were doing some research. I’m like, “OK, finally, we have evidence that at least this scammer [was using AI]”…Because other than that, we’re just as helpless as OpenAI: How do you know the text was generated?

Has AI led to any kind of proliferation of scams at all, or is it just existing scammers becoming more sophisticated?

I don’t know that they’re more sophisticated. But again, I think it’s the scale: with these pig butchering scams, if they don’t have to have a human sitting there texting with you, clearly they can scam 10,000 victims at once instead of 1,000 or 500, or however many [they were] able to do before…ChatGPT is fluent in almost every language…And I would be really shocked if phishers and scammers of all sorts are not using this for translation, so they don’t give you that clue that the grammar is just a little bit off, or that it’s British English instead of American English, which might clue you in: “Why would Bank of America use an ‘s’ instead of a ‘z’ in ‘organization’?” Those clues are gone now. We just have to assume the criminals are using this; why would they not? Because, to be fair, its translation capabilities are mind-blowingly good.

I know you can’t predict the future, but do you expect that point you talked about, where anybody can easily use this technology, to arrive before the presidential election? Or is that further out on the horizon?

It’s hard to speculate because everything is moving so fast. I won’t even speak for our team, but I’ll speak for myself and say that I don’t think we need to worry about your brother-in-law making a deepfake of President Trump or President Biden before the next election. But if we saw something that felt a little off, I don’t know that I would say the Russians can’t do it, or the Chinese can’t do it, or possibly even the Iranians or the North Koreans. We think about things in terms of budgets at our personal scale and forget how cheap this is compared to traditional weapons.

One missile that we give to Ukraine costs over a million dollars. Think about the number of GPUs that million dollars can buy to make fake AI content: the bang for your buck of destroying one facility with a missile versus disrupting an entire society with a misinformation campaign is an enormous payback. And it means pretty much every country in the world is capable of financing something to do these things at scale if they want to…But for individuals doing this, I think it’s far enough off that we’ll maybe get a pass for 2024.
