Could AI and Deepfakes Sway the US Election?

A few months ago, everyone was worried about how AI would impact the 2024 election. It seems like some of the angst has dissipated, but political deepfakes—including pornographic images and video—are still everywhere. Today on the show, WIRED reporters Vittoria Elliott and Will Knight talk about what has changed with AI and what we should worry about.

Leah Feiger is @LeahFeiger. Vittoria Elliott is @telliotter. Will Knight is @willknight. Or you can write to us at politicslab@WIRED.com. Be sure to subscribe to the WIRED Politics Lab newsletter here.

Mentioned this week:
“OpenAI Is Testing Its Powers of Persuasion,” by Will Knight
“AI-Fakes Detection Is Failing Voters in the Global South,” by Vittoria Elliott
“2024 Is the Year of the Generative AI Election,” by Vittoria Elliott

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:

If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for WIRED Politics Lab. We’re on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Leah Feiger: This is WIRED Politics Lab, a show about how tech is changing politics. I'm Leah Feiger, the senior politics editor at WIRED. A few months ago, a lot of people were concerned about how artificial intelligence might affect the 2024 US election. AI-generated images, audio, and video had just gotten so good, and were so easy to make and spread. The WIRED politics team, in our project tracking the use of AI in elections around the world, actually called 2024 the year of the generative AI election. Lately, it seems like some of the panic around AI has subsided, but deepfakes of Kamala Harris, Joe Biden, Donald Trump, and other politicians and their supporters are everywhere. And as we'll talk about today, legislation on political deepfakes, including AI-generated pornography, is really tricky. So with the election looming, what has changed, if anything? And how much should we really be worrying about AI? Joining me to talk about all of this are two of WIRED's AI experts. We have politics reporter Vittoria Elliott—

Vittoria Elliott: Hi, Leah.

Leah Feiger: Hey, Tori. And from Cambridge, Massachusetts, senior writer Will Knight. Will, thank you so much for coming on. It's your first time here.

Will Knight: Yep. Hello. Thank you for having me.

Leah Feiger: So let's start with porn, if that's OK. Tori, you have a big article out today all about US states tackling the issue of AI-generated porn. Tell us about it. How are people handling this?

Vittoria Elliott: It's actually really piecemeal, and that's because on a fundamental level, we don't have national regulation on this. Congresswoman Alexandria Ocasio-Cortez, who was herself the target of nonconsensual deepfake porn this year, has introduced the Defiance Act, which would allow victims to sue the people who create and share nonconsensual deepfake porn, as long as they can show that the images or videos were made without their consent. And then Senator Ted Cruz has a bill called the Take It Down Act that would let people force platforms to remove those images and videos. But there hasn't really been major movement on these in several months, even though the issue has gotten a lot of attention, particularly because we've seen a spate of young people, middle and high schoolers, using generative AI technology to bully their peers by making explicit images and videos of them. And we have data showing that while generative AI is being used in politics, and we definitely have a ton of examples where it is, mostly it's being used to target, harass, and intimidate women.


Leah Feiger: So break it down a little bit more for me. What states are taking action here? What do you mean that it's getting choked up in the federal government? What does that look like?

Vittoria Elliott: I mean, it means that we've got bills on the table, but they're not really getting a ton of movement right now. There's a ton of other stuff on Congress' plate. We're going into an election year, and a lot of the focus over the next couple of months is really on running campaigns. State legislatures, on the other hand, have a bit more latitude to move quickly. And this is a really easy bipartisan issue: we see this technology being deployed, we want to protect young people, and secondarily, we also want to protect women from being abused on the internet.

Leah Feiger: So what specifically are states doing to protect people against this? Is it kind of the same across the board, or what does that look like?

Vittoria Elliott: It really varies from state to state. So for instance, there's a bill in Michigan right now that focuses on minors. It's particularly focused on explicit nonconsensual deepfake porn made of young people, and it would, say, allow a victim to sue the person who created it. In some states that would come with criminal liability, which means you could possibly be prosecuted and go to jail for it. When we're talking about images of minors specifically, there are already a lot of rules around what you can and cannot have on your computer or on the internet when it comes to explicit images of minors. So there are a lot of building blocks there for legislators to work with.

Leah Feiger: And obviously AI has been used to make porn for years now. Why are legislators getting involved in this now?

Vittoria Elliott: It's actually really interesting. I think this year, as we have talked about a lot, the threat of AI in politics feels very real, and it is. I don't want to take away from that. But the reality is that we already know a lot of AI-generated content is porn, much of it nonconsensual, and it is targeting women. I spoke to a Republican state legislator in Michigan, Matthew Bierlein, who actually came to cosponsor the state's package against nonconsensual deepfakes via his interest in deepfake political ads. That was the first thing he wanted to do when he got into office last year as a first-term state legislator: sponsor a bill making deepfake political ads a campaign finance violation. Through his work on that, he got brought into other policies around AI-generated content, particularly around nonconsensual deepfakes. And then the Taylor Swift incident happened earlier this year, when a nonconsensual deepfake of Taylor Swift circulated widely on social media platforms, particularly on X, and she was unable to get it taken down. Bierlein and his cosponsor saw that as the moment to really push this forward. For someone so visible, so powerful, and so rich to still be the target of this, and to be so powerless to control her own image, really hammered home to them that this was the moment to act.


Leah Feiger: But also, obviously, there are the companies here, and Will, you are an absolute expert on all of these companies and what they're doing to put up guardrails, or not. Why is there so much AI porn? I feel almost wild asking that question, but it is everywhere and it is not stopping.

Will Knight: I think the answer to that is that there are a lot of open source or unrestricted applications out there. They're pretty easy to download and get hold of. The technology used to generate images is essentially open source; people know how to do it, and it's not been very difficult to copy what's available from the big companies, who do put restrictions on their programs. You usually can't generate celebrities' faces, let alone pornographic images, though people can sometimes figure out how to break those guardrails. But as is the case with AI generally, I think one of the points is that it's always been possible, if you had the resources, to create a faked image of somebody doing whatever. It's suddenly just so accessible. You can download these tools on the internet. There are these Discords you can go to, which are just filled with people creating different images. And so the genie is out of the bottle there, I think.

Leah Feiger: They're spreading like wildfire. This isn't deepfake porn, but Elon Musk's X is … I feel like my entire For You page is AI imagery, much of which is actually pushed and posted by Elon himself. I mean, this week, did you guys see the image of Kamala Harris wearing a communist hat and dressed in all red? And he was like, “This is the future you'll have if you don't vote for Trump.” This is just happening with impunity.

Will Knight: Yeah, I mean, it's fascinating to watch, because I think a while back the narrative was that deepfakes would completely fool people and show somebody doing something incriminating, but that's not really what's happening. It's more that they've just become these really simple tools of propaganda. Maybe some people are fooled, but mostly it's just a way to mock people or mass-produce propaganda-style images. So yeah, Comrade Kamala. That one's kind of fascinating, because it also reveals how biased those AI programs are; they're just not very good at actually making it look anything like Kamala Harris.

Leah Feiger: No, no. I mean, they need to have captions for me to realize exactly what's happening, actually. But you're right, the mockery element of it all is clearly there, but also dangerous. And Tori, going back to your article today about all of the legislation targeting deepfake porn: is it having any success? Are there any states that have really figured out how to legislate against this?

Vittoria Elliott: I don't know that there's one particular state that's figured it out. We have 23 states with some kind of law on the books, but the issue is they don't all dovetail. When you have one state that primarily focuses on minors and another that primarily focuses on adult women, those are two radically different sets of laws. If you're investigating across state lines, that can get really dicey, because something might be illegal in one state but totally fine in another. And the internet is famously borderless. So this patchwork of laws can make enforcement really hard beyond the localized level, when we're talking about a high school or a middle school or maybe an abusive relationship. When something spreads really widely, it's much harder to enforce.


Leah Feiger: Will, do you think that any of the larger AI companies, OpenAI, et cetera, are working with state officials to even help figure out how to create these boundaries? I guess I'm wondering what questions we even know to ask, what to put in this legislation, other than blanket bans.

Will Knight: I think they are, to some degree, working with politicians, advising them somewhat. And there are certain technologies they've collectively signed up to use that will sort of watermark images. But as Tori's written about, it's a real moving target: the technology gets better, and people keep getting around the ways of catching deepfakes. I talked a little while ago to Hany Farid, who's a sort of world expert on catching deepfakes, who Tori knows well, and he has a new company. His view is that we're going to end up in a situation similar to anti-malware or spam filtering, where a lot of companies, and even a lot of individuals, will have to have technology to catch these things. He also suggests it will become a sort of personalized thing; it won't just be something that affects politicians. Maybe revenge porn is just the vanguard of that, because you're starting to see financial scams that have been successful, where for just a second you see the face of your CEO, and then he's asking you to wire a bunch of money. So you can see how that could perhaps spread much more widely.

Leah Feiger: Porn is just the slippery slope here, that and Comrade Harris. Love to hear it. Given this patchwork that you're both talking about, though, it obviously makes sense for the federal government to be doing something here, at least in the US context. And people who have been nonconsensually included in AI-generated pornography, including AOC, have tried to get Congress to regulate the issue, like you said, Tori. So what's the holdup?

Vittoria Elliott: I don't think this is something politicians are actively like, “Oh, we don't care” about. It's just on a stack of other things. And it's a really sticky issue. It's a topic where a lot of people can say, “Yeah, that's bad. We should do something about it,” but thinking about what that actually means is harder. So for instance, a lawyer I spoke to mentioned that a lot of times, especially when we're dealing with deepfakes of adults, primarily adult women, because that is mostly who's targeted, you have to show intent. You have to show that it was meant to harm somebody. And that can be really hard, because not everybody is shooting text messages to each other saying, “I hate this person. I'm going to make a deepfake of them.” So proving intent is a whole thing. Then I spoke to Kaylee Williams, a PhD student at Columbia focused on nonconsensual deepfakes, and she mentioned that when we're talking about famous people, Taylor Swift, AOC, public figures, a lot of times, even though it obviously seems abusive to us on the outside, the people who make these nonconsensual deepfakes think of them as fan content. They're thinking, I think this person's great and I find them really attractive or hot, and I made this. They're not thinking, I'm flooding the internet with something abusive; they're thinking, I want to see this version of this person. In those cases, proving intent to harm would be really hard. So I don't think it's just a question of whether people care about this; thinking about how you might actually enforce it is really difficult. And then on a federal level, we have a ton of cybercrime laws that already exist, particularly dealing with child sexual abuse material. We have a lot of stuff around that. But building out protections for adult women, when we're not looking at abuse toward minors, gets a little stickier.


Leah Feiger: Sure. I mean, it sounds like they definitely have a road ahead to handle all of this. We're going to take a quick break, and when we come back, more on how AI is impacting the 2024 election.

[Break]

Leah Feiger: Welcome back to WIRED Politics Lab. Tori, Will, you both cover AI all the time, and it really does seem like panic over AI in our elections has dissipated. The New York Times a couple of weeks ago ran a piece titled “The Year of the AI Election That Wasn't.” Do you agree with that? Does it feel like the fear of deepfakes has gone away, or are you still talking to people who are really concerned about what the next couple of months could look like?

Will Knight: I think it's true that it hasn't been as huge of an issue, but my impression is people are still concerned, because one of the key worries is that you might get quite a convincing deepfake very late in the election that could have a big impact, right?

Leah Feiger: Right.

Will Knight: But I think one of the other things that's sort of fascinating is that we've not seen very convincing deepfakes emerge. You do have the sharing of these images like Comrade Harris, and I think that's actually part of a really broadly concerning campaign to erode the truth that a lot of people have leaned into, right? You saw it very much with Trump talking about AI-generated crowds. I don't think that really landed that much, but maybe it did with a lot of his supporters. And this idea that you can just deny what is real, that the truth is relative or fungible, is something that's been in the works for a while, and it feels like it could turn out to be quite a powerful thing.

Leah Feiger: And it's happening daily. I mean, we talked about Comrade Harris and the crowd size thing, but even on a smaller scale, do you guys remember, a couple of weeks ago, when Trump was sharing posts and photos of AI-generated Swifties for Trump? It was hordes and hordes of young women all wearing Swifties for Trump shirts, and it was very jarring. But you're totally right, Will. X is obviously not a place I'm looking to for the truth or even the news anymore, but it is so supercharged.

Vittoria Elliott: This goes back to this idea that maybe it's not going to fool people, but it is effective propaganda. I know that Lord of the Rings is fake, but I still cry every time Sam hauls Frodo up that mountain. And people—

Leah Feiger: I love that during this pod we still get to learn things like this about each other.

Vittoria Elliott: But people might look at the Comrade Kamala thing and know that it's fake, and it still doesn't change the fact that it resonates with something they feel very deeply about her. And I think one of the things, too, is that when we're talking about AI in elections, people default to, ah, the deepfakes. But deepfakes are only one very specific use of AI. We've already talked to a bunch of different people for the AI Global Elections Project who are using ChatGPT to write speeches, or automating outreach; in India, they were automating phone calls to constituents. Those are all uses of AI that are not necessarily meant to be deceptive, but that doesn't change the fact that it's happening. And I would not be surprised if, as we wind down in December, more and more campaigns make it pretty evident that they actually did use a ton of AI, not in a forward-facing, overtly deceptive way, but in subtler ways: really efficient voter targeting, generating responses, chatbots, whatever. Those are places people are not looking for AI, because it's quieter, in the back end, not scary or deceptive on its surface. But I think it's there. I just think we might be overemphasizing one little slice of it.


Leah Feiger: The New York Times did report a little while ago that AI companies haven't been that successful at selling their products to campaigns, though. Folks were trying to use AI callers to reach out to voters, and obviously you wrote about this in your tracker: it worked for Indian voters, it did not work for American voters. The minute people were told that it was an AI bot calling on behalf of an official or a campaign, they hung up. Will, how are companies managing that right now? Obviously, like Tori said, they tried to create all of these sort of non-manipulative uses, rolled into the workplace suite: this is your other version of Microsoft Word and Excel, et cetera, and also your AI bots. But it hasn't been so successful. How are companies handling that?

Will Knight: I think it's good to remember we are just at the beginning of the widespread use of language models and, more recently, audio- and video-capable models. And while companies maybe haven't been that successful in selling them to campaigns, they are, I know, working quite hard to experiment with and understand how persuasive these tools can be. One of the things to remember is that the reason ChatGPT was such a success was that it was very capable at persuading people. It seemed intelligent and seemed to be telling the truth, when often it wasn't at all, and it's designed to do that: it's trained to be good at giving people the answers they want. OpenAI has started rolling out a voice interface designed to also provide the emotional, social cues, exactly what we're doing on this podcast, that feel compelling. Perhaps people will always just reject it, especially if they know it's AI-generated, but don't forget, we're seeing people use things like AI girlfriends because they find them emotionally compelling. Looking ahead, and it's too early for it now, I think what's likely is that more and more companies will realize they can sort of weaponize these things for persuasion. There's already research showing that when you talk to an LLM, it can move your perception of something, and companies can work on making those models more and more persuasive. That could be a big thing for advertising, but perhaps the biggest stakes are in persuasion in politics. You could see chatbots becoming very good at not just giving people misinformation, but genuinely talking them into a particular perspective. There could be an interesting arms race there.

Leah Feiger: And so dangerous. I remember you reported on this a couple of months ago, Will, and it's a piece I still can't stop thinking about, honestly, about AI's ability to influence people, and Sam Altman of OpenAI touting the tech's ability to sway people's behavior. It's not hard to see how that power could be really abused, especially as AI gets more and more capable and people become more reliant on it. Is that something we could be looking at in the future: AI being used to change people's votes by actually changing their minds?


Will Knight: It seems very likely that that's where things would lead unless there are efforts to really restrict it. I mean, if you have more capable AI assistants that genuinely feel like they have not just intelligence but empathy and so on, like a good salesperson, I think they could talk you into all manner of things.

Leah Feiger: Talk to me about the guardrails that are in place to stop this. This is arguably the scariest part of our conversation so far.

Will Knight: They're just sort of starting to explore this. There are guardrails against obviously political uses of LLMs, and they are trying to monitor how these tools modify people's behavior, but they're doing that in the wild, which is kind of wild.

Vittoria Elliott: I think it's way too early to be like, this technology is not useful or not influential or whatever. I remember when I set up my MySpace account, my parents being like, “The internet is full of predators and bad information basically. Don't believe anything you read on the internet.” And I think if we had judged how people were going to perceive the information ecosystem by the early days of social media, we would've been like, “Yeah, of course. No one's going to be believing political things on this. This is for sharing music and ranking your friends.”

Leah Feiger: I really deeply miss those days. Yeah.

Vittoria Elliott: Same. And less than 10 years later, we were dealing with the fact that it was the site of some of the most important political discourse of our age, and that it could sway elections. So to be at this moment and say the AI election isn't real? We don't know yet. Even though we may be at a point where we're like, “Ah, AI-generated stuff is so obvious and so spammy, and who would believe that? And blah-blah-blah,” we have no idea the ways in which things will change, or how quickly. We may look back at this moment and be like, “Wow, I can't believe we thought this would never have an impact on anything.”

Leah Feiger: Right. I want to talk about deepfake detection. As deepfakes have gotten advanced enough to really trick people, there's obviously been a lot of companies springing up that have claimed they can detect deepfakes. Will, how good are these technologies?

Will Knight: Well, yeah, I mean, there are a bunch of different ways you can try to catch deepfakes, from analyzing the file itself to analyzing the image or the audio signal. And as you can imagine, the answer is more AI. The truth is, detection is not that great. You can demonstrate that with a few examples: a lot of the tools out there don't do a wonderful job of catching everything, and it is kind of an arms race as well.


Leah Feiger: I mean, Tori, you reported just this week about how terrible deepfake detection is outside of the US and Europe. What are the challenges there? Why is it so bad? It already seems to be bad everywhere, but why is it particularly bad outside of the US and Europe?

Vittoria Elliott: Yeah, it's a real challenge, partly because both the tools that create generative AI content and the tools that detect it are trained on data sets that are overwhelmingly white, overwhelmingly English-language, overwhelmingly Western. That's part of the reason some of these tools actually struggle to make deepfakes of Kamala Harris: there just aren't enough people who look like her in the data. But in contexts where people are not speaking English and are not white and are not part of this training data, detection can be really hard. There are a lot of false positives and false negatives. Even with detection around text, non-English speakers often have different syntax in how they write, and a lot of detection tools will say something was made by an AI even when it was written by a person. And the phones available in a lot of places, particularly cheap Chinese phones, produce media that's just lower quality, while a lot of the AI training data is based on really high-quality media. So that shitty, shitty media might get flagged as AI-generated even when it's real. And these instances where the detection models are really sensitive aren't limited to the global south. For instance, Sam Gregory of Witness, whose organization offers a rapid-response detection service to civil society and journalists, told me that if you just introduce some background audio into deepfaked audio of Joe Biden speaking, the AI will say it's real, because it can't handle that extra little layer of background noise. So these detection models are still really hit-and-miss.

Leah Feiger: I mean, we're nine weeks out from election day. I'm seeing a world where companies are springing up saying, “Oh, no, no, we detected that. That's not AI,” or “That is AI.” And already, over the last couple of months, so many things have changed. When everyone came out with their podcasts and articles earlier this year about the AI election and their predictions of what was coming, I don't think any of us could have predicted that Biden would not be in the race, that Trump would be running against Harris. There are a lot of differences here. What do you think is coming up? What should we be looking out for?

Will Knight: It's hard to think of anything that would actually be incriminating for Donald Trump at this time, but if there was a recording of something, he almost certainly would claim it was AI-generated, right?


Leah Feiger: Right.

Will Knight: And you could even point to AI deepfake detection technology that might be uncertain, a bit equivocal, and say, well, it said this could be AI. That could become a factor if there were another recording that was incriminating, I guess.

Leah Feiger: It's so strange, because there's almost that information gap, right? The mere fact that AI technology exists, that we all know what it is and what deepfakes are, can play such an important role in the discourse even when the tech isn't being used. Trump claiming that something is an AI image, like Kamala Harris' crowd sizes at an event, or, like you said, possibly something extra incriminating that pops up, and the tech isn't even involved.

Vittoria Elliott: Well, and that's what experts call the liar's dividend, the idea that if anything's possible, nothing is real. I think back to 2016, and I think back to the Access Hollywood tape, and I feel like if that happened right now, we would just have a tweet or a post on Truth Social saying, “That's AI, that's not really me.” It's such an easy shortcut, and I think we are going to keep seeing just the fact of this technology weaponized as a way to continue to spread doubt and sort of fracture a sense of shared reality.

Leah Feiger: I think that's absolutely right. And I have to say, on a more granular basis, the thing I am particularly concerned about right now is this: we've spent a lot of time talking about AI being used as propaganda. People might recognize that an image is not real, Swifties for Trump is not real, but it's out there and it's possibly influencing people, et cetera. I think we're about to enter a two-month cycle where there's a lot more at stake, right? Is this ballot box being driven out of Nevada by someone with Michigan license plates? There are just so many different things here, especially when we're looking at questions still being asked by election deniers from 2020 and 2022. These communities are primed, election deniers are absolutely primed, to claim all sorts of things, and AI is such a useful tool for that. Are we prepared?

Vittoria Elliott: No.

Will Knight: No.

Vittoria Elliott: No.

Leah Feiger: No, good. A resounding no from everyone in the room.

Will Knight: I think it's never been more important to have some shared truth and some commitment to defining it, right? And it's come under attack like never before. As Tori is saying, it's very interesting: there was a book a couple of years ago called The Death of Truth, which came out during Trump's first administration, looking at this idea of attacking truth itself as a way to control the masses. And it feels like we shouldn't fall into the trap of saying, well, truth is kind of relative, which I think has happened in certain parts of the political spectrum for a while.


Leah Feiger: I'm really looking forward to having you both on even in a couple of weeks or a couple of months to talk about the relativity of truth in all of this. What examples will we be bringing to the forefront? Who knows. Thank you both so much for joining me today. We're going to take a quick break, and when we're back, it's time for Conspiracy of the Week.

[Break]

Leah Feiger: Welcome back to WIRED Politics Lab. It is time for Conspiracy of the Week, where our guests bring us their favorite conspiracies that they've come across recently or in the past that they are particularly in love with, and I pick the winner. I'm so excited. Tori, you have been dying to win for a very long time. What do you have for us this week?

Vittoria Elliott: Technically, I am offering you two options, but they both revolve around my real boyfriend, RFK Jr. I have Google Alerts for him. I have his Telegram channel, which I check. We are very connected.

Leah Feiger: I'm honestly just so glad that this has been a part of your election coverage experience. I'm glad that you feel this parasocial relationship with someone who's no longer even a presidential candidate.

Vittoria Elliott: Well, campaign surrogate.

Leah Feiger: Campaign surrogate, of course. OK, hit me. What do we got?

Vittoria Elliott: OK. Well, so my first thing is: every time there's, quote-unquote, “bad news” about RFK, there's some weird animal shit that happens. Some weird animal story. First it was the dead bear in Central Park when the New Yorker piece came out. Then, right after he announced he was withdrawing his candidacy and supporting Donald Trump's bid for the presidency, we got the story about the whale head that he like—

Leah Feiger: And don't forget the dogs earlier. Will, don't you wish that you were on the politics desk and just immersed with this at all times?

Will Knight: Absolutely.

Leah Feiger: It's the animal desk, actually.

Vittoria Elliott: But people have also criticized him for his TikTok videos of feeding his local ravens, and I think that's actually a really cool thing about him. And I don't know if you knew this, but a group of ravens is called a conspiracy. So: my favorite conspiracy.

Leah Feiger: That is horrible. That's so bad, Tori.

Vittoria Elliott: You're welcome. I actually have a real Conspiracy of the Week, but it's also RFK, really. I just knew you would enjoy a wordplay. I too want a conspiracy of ravens. But another one from earlier this year: at an event in New York in April, RFK said that the CIA was part of a systematic takeover of the American press, and that many people in charge of large media companies are connected to the CIA. In this instance, he mentioned the new head of NPR being a CIA agent. And I love the idea that we are not just wildly underpaid people with incredibly detailed research skill sets; we are in fact double agents. I would just like to say that if we do have some stash of government money, I have some thoughts about where I'd like to be sent, and we can discuss after this.


Leah Feiger: All right, that's a good one. Thank you, Tori. Will, what do you have for us?

Will Knight: Wow, I don't know if I can really compete with RFK, but as a good CIA operative, I'm going to promote something from the weirder corners of AI, AI and philosophy, I guess. So there's this thing called Roko's basilisk. The basilisk is a mythological serpent that could kill you if you looked into its eyes. And there was this thought experiment someone posted on an AI forum saying that a superintelligence in the future would be incentivized to create a simulation, one that maybe we all already exist inside, and to torture anybody who worked against, or even thought about working against, it coming into being. So at one point in one of these …

Leah Feiger: Incredible.

Will Knight: … forums, they banned talk of this thought experiment, Roko's basilisk. The idea was that even thinking about it could be dangerous, which is particularly bananas.

Leah Feiger: That is so funny. What forums is this proliferating on, or not proliferating on?

Will Knight: This was on LessWrong, which is a very famous forum dedicated to AI risks and alignment and—

Leah Feiger: How often do you personally think about Roko's basilisk?

Will Knight: Well, I actually only discovered it recently, and I try not to think about it just in case. It's like Pascal's wager, isn't it? It's just sort of playing the odds that superintelligence will come into being, so you have to try and make it come into being. Yeah, it's completely mad.

Leah Feiger: Oh, that's a very good one. OK. Oh, actually, this is a little bit hard this week, but I got to go with Tori. CIA assets, here we go.

Vittoria Elliott: Finally. Did the ravens put me over the edge? I must know.

Leah Feiger: The ravens did put you over the edge. I liked it, and partly I just saw how much you were working for this. It was an A for effort and an A for execution. Good stuff.

Vittoria Elliott: Thank you.

Leah Feiger: And partially, I can't give the win to something that I'm not allowed to think about ever again. Tori and Will, thank you so much for joining us. You were excellent guests.

Vittoria Elliott: Thanks, Leah.

Will Knight: Thanks for having me.

Leah Feiger: Thanks for listening to WIRED Politics Lab. If you like what you heard today, make sure to follow the show and rate it on your podcast app of choice. We also have a newsletter, which Makena Kelly writes each week. The link to the newsletter and the WIRED reporting we mentioned today are in the show notes. If you'd like to get in touch with us with any questions, comments, or show suggestions, please, please write to politicslab@WIRED.com. That's politicslab@WIRED.com. We're so excited to hear from you. WIRED Politics Lab is produced by Jake Harper. Pran Bandi is our studio engineer. Amar Lal mixed this episode. Stephanie Kariuki is our executive producer. Chris Bannon is global head of audio at Condé Nast, and I'm your host, Leah Feiger. We'll be back in your feeds with a new episode next week.
