The US is barreling toward a pivotal presidential election, right as generative AI tech has gotten realistic enough to make prospective voters second-guess images and text they see online. That’s led some to term 2024 “the first AI election.”
But if it’s any consolation, Chester Wisniewski, director and global field CTO at security software firm Sophos, said he doubts that the technology is yet accessible enough for just anybody—say, “your brother-in-law”—to make convincing deepfakes. Rather, the bigger threat is likely from the concerted efforts of well-resourced state-backed groups.
We talked to Wisniewski about AI threats to the election, the difficulty of proving AI’s role in scams, and measures to prevent disinformation.
This conversation has been edited for length and clarity.
I was wondering if you could start by telling me a little bit about what you’re seeing in terms of the latest threats right now from generative AI. How has that kind of evolved over the past year or so?
The landscape is messy because a lot of it depends on how much money and resources you have; that very much determines what kind of outcomes you can get. So people who are well-resourced financially and have data science teams, like ourselves, can now generate very realistic deepfakes: fake images, fake audio, cloned voices. Not quite cloned video, and not quite in real time yet.
I’m going to try to stay apolitical in this, but clearly, there’s a lot of disinformation out there. We’ve seen this growing in past election cycles in the United States in particular, where videos have been edited to perhaps make somebody look ill who’s not really ill, or stumble over their words in a way they didn’t really in real life. We’re not quite at the point where that can be generated on demand. But voice cloning and image cloning are certainly now a thing for a well-resourced organization or nation-state. I don’t think it’s something that amateurs monkeying around can do effectively, because without a lot of effort, resources, and experts behind it, the tells that it’s fake are just too obvious.
So the demonstrations of this stuff on 60 Minutes come from people who are incredibly clever and put a lot of time and money into those demos. It’s not something you’d gin up in half an hour as an amateur on the web. And I think that’s true of almost everything AI can do right now. Similarly, with generative AI: is it impressive? Sure, the fact that I can have a conversation with something for free on OpenAI’s website is pretty mind-blowing. But how you can use that effectively is another question.
—PK