Misinformation Is Soaring Online. Don’t Fall for It

This week, we talk about how fake, doctored, and misleading media spread so easily, how the social platforms are dealing with it, and how generative AI is making things worse.

Misinformation lives everywhere. False accounts of events, doctored photos, and purposely misleading news stories are quickly shared and passed around on social media, usually by well-meaning people who don't know they're sharing incorrect information. It's a big problem in the best of times, but the stakes become much higher during a heated crisis like the current Israel-Hamas war. As the violence in and around Gaza has continued to escalate, people are turning to places like X (formerly Twitter) for the latest news on the conflict. But they've been met with a flood of bad info—old videos, fake photos, and inaccurate reports—that researchers say is unprecedented.

This week on Gadget Lab, we talk with WIRED reporter David Gilbert about how misinformation and disinformation spreads across social media, and how recent changes made by X before the Israel-Hamas war have made the problem even worse. We also talk about how the proliferation of generative artificial intelligence tools is making fake photos and videos look more believable.

Show Notes

Read David and Vittoria Elliott’s WIRED story about how disinformation is getting worse on X. Read David on the role misinformation played in coverage of the recent Gaza hospital explosion. Also read David’s story about how posts by X owner Elon Musk are seemingly making the platform’s misinformation problems worse.

Recommendations

David recommends the book A Heart That Works, by Rob Delaney. Mike recommends Bono’s memoir, Surrender. Lauren would like you to send her workout playlists. (She prefers Spotify.)

David Gilbert can be found on social media @daithaigilbert. Lauren Goode is @LaurenGoode. Michael Calore is @snackfight. Ring the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:

If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. If you use Android, you can find us in the Google Podcasts app just by tapping here. We’re on Spotify too. And in case you really need it, here's the RSS feed.

Transcript

Note: This is an automated transcript, which may contain errors.

Lauren Goode: Mike.

Michael Calore: Lauren.

Lauren Goode: Can you think of a time when you were totally led astray by misinformation on the internet?

Michael Calore: Yeah, I remember hearing that everybody who attended CES 2020 was responsible for spreading Covid around the world.

Lauren Goode: Mm-hmm, we were.

Michael Calore: Well, we weren't, it turns out that was misinformation. But I was really worried, because I did attend CES 2020, as did you.

Lauren Goode: Yeah, I remember that.

Michael Calore: Yeah. But yeah, that was not true.

Lauren Goode: Turns out we just had the regular old CES flu.

Michael Calore: That's right.

Lauren Goode: Yeah.

Michael Calore: What about you?

Lauren Goode: Well, I actually thought that Balenciaga Pope was plausible. I was vaguely aware that there was online discourse about it. I looked at the photo and I thought, oh, that's an interesting choice of jacket for the Pope. And then only after that did I realize that the discourse about it was that it was fake.

Michael Calore: Oh, I see.

Lauren Goode: Yeah.

Michael Calore: Well, that's pretty low stakes, all things considered.

Lauren Goode: That time, yeah. But what's going to happen when we start falling for totally fake photos or videos during a critical election period? Or during a brutal and violent conflict?

Michael Calore: Unfortunately, that is already happening.

Lauren Goode: It is, and we're going to talk about it on today's show.

Michael Calore: Let's do it.

[Gadget Lab intro theme music plays]

Lauren Goode: Hi everyone, welcome to Gadget Lab. I'm Lauren Goode, I'm a senior writer at WIRED.

Michael Calore: And I'm Michael Calore, I'm a senior editor at WIRED.

Lauren Goode: And we're joined today by David Gilbert, one of WIRED's newest writers, who covers misinformation and disinformation. David is joining us from Ireland. Thanks David, and welcome to the Lab.

David Gilbert: Good to be here. Thanks, Lauren.

Lauren Goode: I think David might be the first person ever on the Gadget Lab podcast to join us from Cork, Ireland. Is that true, Mike? You've been hosting this a lot longer than I have. Have you ever had a guest on from Cork before?

Michael Calore: I don't think so. I think David's a first.

David Gilbert: I'm glad to be filling that huge oversight in WIRED's podcast history.

Lauren Goode: Truly. We're going to have to have you back on regularly now, just to compensate for that. Today we're actually talking about something much, much more serious, misinformation. Misinformation has been running rampant in the wake of Hamas' deadly attacks on Israel nearly two weeks ago, and the violence has continued to escalate. David, you've been covering this story for WIRED, and you noted in your story that people who were turning to Twitter—and I'm just going to call it Twitter; of course, it's now called X—people who were turning to Twitter for up-to-date news on the conflict have been seeing lots of old repurposed videos, fake photos, and other media that researchers say is just … It's at unprecedented levels. It seems as though we've entered a new era, both geopolitically and technologically. So David, we wanted to pull back from the conflict itself and talk about this level of fake information. I wanted to ask you a pretty simple question to start, which is, what is the difference between misinformation and disinformation? Because we hear those terms a lot and sometimes interchangeably.

David Gilbert: Yeah, and I think a lot of people do use them interchangeably, and it can be kind of confusing. But for me it's pretty simple really: Misinformation is information that's posted that is inaccurate, incorrect, lacks context. Whereas disinformation has the added extra element that it's done on purpose and for a nefarious purpose. So whether it's part of a coordinated campaign like we've seen coming out of the Kremlin for many years, or an individual who's trying to trick people into clicking on a link and donating money to the wrong thing. So disinformation has a purpose behind it, whereas misinformation is just people clicking Share when they don't know any better.

Michael Calore: So disinformation could also be classified as propaganda?

David Gilbert: Yeah, essentially it's a form of propaganda.

Michael Calore: I see.

Lauren Goode: So it sounds like there's an intentionality behind the disinformation, but misinformation is the spreading of news that is not real, it's not accurate.

David Gilbert: Exactly. And oftentimes disinformation, the intentional dissemination of inaccurate material to back up your own narrative or your own side of the story, then turns into misinformation when people share it. Not necessarily because they think they're sharing something wrong, but because they think it's actually true, and they're sharing it because they think it'll help inform other people.

Lauren Goode: Right, OK. What is the most troubling example of disinformation or misinformation you've seen so far during this particular conflict?

David Gilbert: The most troubling aspect for me is just how easy it has been for this to spread. Whether it's disinformation or misinformation, we've seen both, and we've seen it from all sides. This isn't a partisan issue; this isn't one side doing it and the other side not doing it. The false information in the Israel-Palestine conflict is coming from all sides. What is amazing to me, even after years of covering this, is just how basic and unsophisticated some of this mis- and disinformation is, and yet it is gaining tens of thousands, hundreds of thousands, even millions of views and clicks on social media. And we're still, even years down the line, at a point where this stuff just goes viral before anyone can fact-check it, before anyone can take it down off the platforms.

Michael Calore: So the platforms have mechanisms in place, obviously, to spot misinformation or disinformation on their platforms and to keep it from showing up in people's feeds. What role have those mechanisms played over the last two, three weeks as the conflict has escalated?

David Gilbert: So yeah, we've seen all the different platforms take different steps to tackle this problem over the years. I suppose the most prominent example over the last week has been Twitter. Over the course of the last year, we've seen Elon Musk decimate the Trust and Safety team and content moderation teams on the platform, to the point now where there is effectively nothing stopping people from spreading disinformation. And so we've just had an absolute flood of inaccurate posts. As you explained, Lauren, old videos repurposed. We've seen a lot of video game footage posted, claiming that it was coming from the conflict and being shared credibly, which is just amazing to me. Because it takes just a couple of seconds to look at it and go, well, no, that's not real. But people just don't take the time. On Facebook, they have a better level of moderation for this stuff, especially graphic, violent videos, which they take down relatively quickly. But once you move outside the English-language content, the moderation on Arabic content is completely different, and they just haven't invested the resources to deal with it properly. So they over-moderate and take down any mention of, for example, the word Hamas. Even if you're criticizing Hamas, they will take that down. And then in Hebrew-language content, they don't moderate enough, and therefore hate speech gets through. It's just an indication, again, of how platforms like Facebook and Instagram haven't really got to grips with non-English-language content, and they're failing yet again to deal with it properly. So each platform has its own issues, and it depends on what kind of systems they have put in place. Other platforms, like Threads, just don't promote news whatsoever. They've been pretty open about it, they don't want news on their platform. And TikTok is very similar, they suppress news, and that's their way, effectively, of dealing with it. But the problem is that because so many people, young people in particular, use TikTok, they don't get any information, and so they're kind of left in the dark. So none of them are doing a really great job around this, but some are doing better than others.

Lauren Goode: I was going to ask you, wouldn't an obvious answer or a solution be, well, just don't go on Twitter, right? For journalists, for researchers, for average citizens, but it doesn't sound like the other platforms are doing a great job of this either. Particularly if they are deprioritizing news, but the fake news and fake content is able to bubble its way to the top of the feed, then there's almost nothing there to counteract that, right? You're not getting a dispatch from WIRED or Bloomberg or The New York Times or the BBC or something that says, here's what's actually going on.

David Gilbert: Yeah, it is easy to say, just don't go on Twitter. But for years, for all its failings, the brilliant thing about Twitter was that when something like this happened and you went on there, what you would see, not even from the BBC or The New York Times or WIRED, was firsthand accounts from people on the ground posting videos directly, and news agencies in Gaza and Israel posting primary footage to Twitter. And because they would have been verified in the past, you'd be able to trust, to a degree, what they were posting. Whereas now the people you're seeing at the top are verified, but they're not journalists. They are people who are trying to game the system and get as much engagement as they possibly can to grow their subscriber base and therefore earn more money.

Lauren Goode: Right, because people can now just pay for a blue check mark.

David Gilbert: Exactly.

Lauren Goode: And Twitter was an important app for dissenters for a long time as well.

David Gilbert: Absolutely, yeah. It's such a shame what's happened to it, because it has been a very important tool to give a voice to people who otherwise may not have a voice. And there is no other app out there in terms of seeing a real-time newsfeed when incidents like this are happening. And unfortunately, what's happened over the last year has completely destroyed that experience, to the point where it is pretty much unusable as a source of news anymore.

Michael Calore: The company does have a tool for folks to mark up their feeds and help identify things that could be misinformation or disinformation. It's called Community Notes. Could you tell us a little bit about how this tool works and specifically how it's been used over the last month or so?

David Gilbert: Sure, yeah. So effectively what Elon Musk has done is gotten rid of all the people in the Trust and Safety team and the Trust and Safety Council, which was hundreds of civil society groups around the world working directly with Twitter. Replacing them have been Twitter users. It initially started off as a very small pilot program called Birdwatch. Twitter never really rolled it out that much, and when the Ukraine crisis happened, they were criticized for not doing so. When Musk came in, he decided he was going to ramp this up, and so he brought in a load more contributors. The idea is that as people scroll through Twitter as normal, they're able to write notes on tweets they think are inaccurate or lack context, and then other contributors can vote on that. And once enough people have voted, that note is made public. So in theory, it sounds like a good idea, but—

Lauren Goode: It's like Wikipedia for Twitter.

David Gilbert: Exactly. And interestingly, a lot of the contributors are also editors on Wikipedia, I found out. I spoke to about half a dozen of them this week, and across the board, all of them said that it is an absolute mess. It isn't fit for purpose. A number of them said there are Musk fanboys or Musk sycophants on the platform who, every time Elon Musk posts a tweet, even if it's controversial or misleading, will downvote any critical note, saying this is Elon Musk, it's his opinion, he can say what he wants, so no note ever appears. I've had access to the Community Notes system this week through another account, and I've seen it firsthand: on a lot of posts about the crisis in Israel and Gaza, there are just Community Notes contributors fighting amongst themselves, spreading disinformation, boosting conspiracy theories. It's just another layer of disinformation and misinformation on a platform that's already full of misinformation. So it really isn't working as effectively as Twitter claims. This week, when the platform has been criticized for disinformation, it has said, look, Community Notes has reached this many people, we've added 10,000 new contributors. But if you look at what's happening in reality, the system is just not having any real impact. And in a lot of cases, it's having a negative impact on the amount of disinformation out there.

Lauren Goode: So we certainly can't rely on the platforms to make changes that are going to help thwart this disinformation. I guess the next question is, what role do regulators play? We should probably note that earlier this week, US Democratic senator Michael Bennet put out a request for information on how all of these tech giants are handling the spread of false and misleading content about the Israel-Hamas conflict. That's just one US lawmaker, but European Union industry chief Thierry Breton has blasted the tech companies too. I'm wondering what you think might come of this. How much of this is grandstanding? Will this have any effect? Is it too late?

David Gilbert: Yeah, all of those. It's grandstanding. It's too late. Nothing will come of this. Thierry Breton—

Lauren Goode: I was trying not to be too cynical, but yes.

David Gilbert: The EU, I suppose, has been seen for years as leading the pushback against social media platforms. There have been lots of fines handed down, and Germany has enacted a law that is much more stringent in terms of hate speech on the platforms, and it is enforcing it to an extent. And the European Union brought in the Digital Services Act two months ago, which is meant to put more pressure on Twitter and Facebook and other big social media companies. But Thierry Breton addressed his letter to Elon Musk instead of Linda Yaccarino, the CEO of Twitter, who he should have addressed it to, and then he went and had a back-and-forth with Musk on Twitter. Officials in the EU who we spoke to this week were saying, this is just grandstanding. There's an election coming up. He's looking to get his name in the newspapers. And what he was proposing went way beyond the Digital Services Act, and was effectively kind of how the Great Firewall of China started off. So people are just saying that what he was suggesting was completely over the top and not really feasible. And when I spoke to an EU official and asked for a step-by-step of when we would get to the point where Twitter would actually be banned in Europe, it's like 12 steps down the line, after fines and fines and fines. So in Europe, there may be a fine, but I don't think Elon Musk would really care about a fine. In the US, similarly, for years they've been trying to do something, and it has gotten nowhere as far as I can see. So I really don't think there's anything regulators are going to be able to do.

Lauren Goode: All right, we're going to take a quick break, and when we come back, we're going to talk about how artificial intelligence has juiced this entire world of fake media. Not just during violent conflicts, but in elections, in popular culture, and even in porn sites.

[Break]

Lauren Goode: Misinformation has a long, long history, but if you've been listening to this podcast in recent weeks, you are now well aware that there are new tools out there that are letting pretty much anyone with a laptop or a smartphone gin up realistic-looking and -sounding fake news. So David, in the Israel-Hamas conflict, how much of the fake media that researchers have been seeing so far is likely attached in some way to generative AI tools?

David Gilbert: Not very much that I've seen. In fact, a lot of it is just very basic: repurposed videos, Photoshopped images, I guess. There was a photo going around of Ronaldo, the soccer player, holding a Palestinian flag, which appears to have been Photoshopped because it was pretty crude. I haven't seen a huge amount of AI-generated stuff so far, though that could obviously change.

Lauren Goode: That actually surprises me a little bit. I thought that generative AI content would be a little bit more prevalent in this particular conflict. Maybe the tools aren't as easy to use as we're surmising here.

David Gilbert: I think it's a really interesting question though, because I think a lot of people and a lot of experts in the misinformation and disinformation area have been raising the issue of generative AI for months, possibly years now. And we have seen it in a number of coordinated campaigns. China's been using generative AI tools for creating fake avatars for their social media profiles. But I think we're still waiting for the point where it kind of tips the balance and we see it as a significant driver of a disinformation campaign. The tools are out there, and they're becoming much, much more freely available. And therefore the barrier for entry to this sort of thing is getting lower all the time.

Michael Calore: Speaking of which, I know there's been a lot of attention paid to the visual gen AI tools and particularly images. It's very easy to type in a prompt and create an image. It's more difficult to type in a prompt to create a video. But something that we've seen a lot of is audio deepfakes, audio hoaxes, where you can use a computer to clone a person's voice and have it say things that the person never said. Lauren got pitched on a tool that made it sound like she was speaking in Spanish. They trained this tool on one of your other podcasts, right?

Lauren Goode: That's right, yeah.

Michael Calore: Using your—

Lauren Goode: And they were pitching us and saying, “We can translate your other podcasts into Spanish. Here you go.” It was me and Gideon speaking in fluent Spanish.

Michael Calore: So it actually was what you were saying, but it was eerie, because it did sound like you speaking in very fast, very clipped, Catalan-style Spanish. But the thing I want to ask you about, David, is that these audio hoaxes seem to be easier to get past the systems that are set up to spot these things, right? There are very sophisticated tools for detecting an AI-generated image or an AI-generated video, but those tools are not as tuned to detecting AI-generated audio. Is that right?

David Gilbert: Yeah. And the audio seems to be the one that is gaining quite a lot of traction, not necessarily in coordinated disinformation campaigns, but more from individuals who are seeking to cause chaos themselves or just playing a prank. And it seems like the platforms are struggling to stop the spread of those. Now, you say there are tools to detect AI-generated images and video. I would be interested to see, once that kind of stuff starts rolling out more and more, how effective those tools are, and whether it'll be a game of cat-and-mouse where the image and video stuff stays a step ahead. It's hard to know, because I just don't trust that when the platforms say they'll be able to detect something, they're actually telling the truth; they don't really have a history of being accurate and being right about this stuff. So that'll be interesting to see. But yeah, that example you gave, Lauren, about your voice being translated, these tools are so easily available now. I actually tried one recently, a video one, where you upload a video of you saying something in English, you pick a language, and it sends the video back to you. It not only does the voice, but it also changes your mouth to make it look as if you're speaking French or Spanish or German. It is not perfect, and if you looked at it closely, you would know it's not real. But as we've seen with this stuff in Gaza, people don't look closely. People click. If they think it's interesting or it backs up their worldview, they're going to share it to their audience. That's all it takes.

Lauren Goode: Terrifying, it really is. So now there are major concerns that with the upcoming US presidential election, supporters for both political parties are going to be using these extremely accessible AI tools to make fake stuff and sway voters. In fact, WIRED even published an article earlier this year titled “Brace Yourself for the 2024 Deepfake Election.” How are we actually supposed to brace ourselves though? What can we do to sift through all of the noise, in some instances fake noise?

David Gilbert: Yeah, it's going to get incredibly difficult. As you say, there's already stuff out there. There was the video of Hillary Clinton supporting Ron DeSantis, I think, that was one of the videos I saw. And the Republican National Committee posted a video that painted a dystopian scenario where, if President Biden got elected again, we'd see China invading Taiwan and lots of other things that haven't actually happened but looked as if they would. As for how to spot this stuff … I think at the moment, if a video is made by a professional team, it's difficult to spot. If it's made with the freely available AI tools, and you look at it closely, it's still not that hard to spot. But to give you an example of how hard it is, even at this point: the BBC has a weekly quiz where they ask you to look at eight images or videos and decide whether each is real or AI. And today I got six out of eight. So that kind of tells you this is going to be incredibly difficult for people to deal with. And once we get to the point where it's widespread—and we're not there yet, I don't think—once people are using these tools on a regular basis and campaigns are using them, then we're going to have a real problem on our hands.

Michael Calore: I want to ask about pornography, particularly fake pornography. As we've been talking about, a lot of the videos uploaded in the political sphere are so effective because they reinforce people's worldviews. They show them something they may already be predisposed to believing. With nonconsensual porn, say somebody is a big fan of a particular actor, and a porn video is made showing that actor engaging in sexual acts. It's not so much about reinforcing somebody's worldview; it's about giving them something that maybe already exists in their imagination, and here it is in front of them in a way that is believable. So it's like it doesn't really matter to them that it's fake.

Lauren Goode: The consumer, you mean?

Michael Calore: Yeah, to the person.

Lauren Goode: Yeah right. Because for the person, it's a very unique and insidious form of abuse.

Michael Calore: Yes, it absolutely is. But for the people who are making it and the people who are consuming it, and for the websites that are hosting it, it's entertainment. And it's understood from the beginning as entertainment. So while it is abuse, it is a different set of consequences, and it's a different set of priorities for the people who are making it. And I'm curious if you have thoughts about how this is going to play out over the next couple of years.

David Gilbert: Yeah, it's a huge, huge problem. Matt Burgess was writing about it for WIRED this week, about the scale of the problem and how pervasive it is. And this isn't something that has come about in the last six months; we've been seeing this stuff, crude initially, for about four or five years, but now it's extremely sophisticated. Some of the people who are making this stuff are doing so at a really, really high level, and it's actually where some of the most advanced deepfake technology is being used. The porn industry has always been kind of at the cutting edge of technology, going all the way from VHS to DVDs to online and now deepfakes. And I can't see how this is pulled back. The cat is out of the bag. There are Discord servers where you can send someone images of anyone, whether it's an actress you like or a girl or boy who lives down the street. You send them video or images, pay a nominal amount of money, and in the space of hours they'll come back to you with a deepfake porn video that is relatively convincing looking. And I think at this point we're so far beyond trying to prevent the technology being used in this way that the only way it can really be tackled is through legislation, where it is explicitly made illegal to use someone's image nonconsensually in deepfake porn. But again, I'm not sure how long that kind of regulation will take, or if there is even someone who is going to stand up and push that legislation forward.

Lauren Goode: David, while we have you on the Gadget Lab, here's to the start of many, many Gadget Labs with you to make up for our lack of—

David Gilbert: Hopefully cheerier Gadget Labs.

Lauren Goode: Hopefully cheerier. We have to ask you, how do you approach this as a misinformation and disinformation reporter? What's your approach to op sec? How do you determine what's fake and what's not? I mean, you're flooded with information as you're reporting on this issue.

David Gilbert: Yeah, it's really difficult. This week has been probably one of the most difficult weeks in terms of making sure that we're not adding to the problem, that we're accurately reporting what's happening. A lot of the time, what it requires is slowing down, not needing to be the first person to repost something. Or if you find something you think is an amazing scoop, not rushing to get it out and post about it on social media. It's to stop, it's to think about it, it's to go, OK, where does this come from? All this is about is tracing it back to its primary source. And that can be hugely difficult these days, but you have to make an effort. And if you can't back it up, or you can't find out where something's coming from and verify that it's true, you just don't share it. You don't add to the problem. But it's getting extremely, extremely hard. I saw this week how one open source investigator, whose work I know and have followed for years, retweeted a fake Jerusalem Post Twitter account that was saying Benjamin Netanyahu had been hospitalized, which wasn't true. They got tricked. And that account looked, for all the world, like a Jerusalem Post Twitter account. It had a verified badge, but it just wasn't real. So it's getting increasingly difficult. The platforms aren't making it any easier, so you just have to try and slow down. You try and do as much work as possible to figure out where a picture, a video, or a comment is coming from, and whether it's real, before you share it with anyone else.

Lauren Goode: Good advice and good words to live by. Let's take a break and then we're going to come back with our recommendations.

[Break]

Lauren Goode: David, as our guest of honor, we'd love for you to go first. What is your recommendation for our Gadget Lab listeners?

David Gilbert: So this probably isn't very Gadget Laby, but it's a book that I read this week that I just loved. It's called A Heart That Works, and it's by the actor Rob Delaney, who was in the recent Mission Impossible movie actually, but he's a comedic actor. And he wrote a book about his 2-year-old son, Henry, and how he died of a brain tumor. Now I realize that doesn't sound like a very fun thing to be reading about. But the book is … It's a darkly funny book. So I basically spent the whole time either laughing out loud or in floods of tears reading this book. And it's not a long book. You could easily read it in three or four hours if you had the time. And it was just life-affirming, and it was beautifully written. And I would heartily recommend it to anyone, whether they're a parent or not, to give it a read because it's just a beautiful book about a dad and his love for his son and finding the humor in what was a horrific situation. And again, I realize that's not a very cheery topic, but it is a really good book.

Lauren Goode: I'm adding it to Goodreads right now, so thank you for that. And also, we welcome recommendations of all stripes here on the Gadget Lab podcast. If that was life-affirming for you in some way, then we appreciate that.

David Gilbert: And also follow Rob Delaney on Instagram, because he is extremely funny.

Lauren Goode: He is indeed. OK, adding that. You said it was called, what was that again?

David Gilbert: A Heart That Works.

Lauren Goode: A Heart That Works. I had followed a little bit of Rob's story on Twitter, and it is incredibly sad and touching.

David Gilbert: Yeah, it's amazing.

Lauren Goode: I want to read it. Added to the list.

Michael Calore: The sound you hear is Lauren actually adding it to Goodreads.

Lauren Goode: That's right.

David Gilbert: Good.

Lauren Goode: Thank you for that, David.

David Gilbert: No problem.

Lauren Goode: Michael, what's your recommendation?

Michael Calore: I'm also going to recommend a book.

Lauren Goode: OK.

Michael Calore: It's a book that is quite old. It's like maybe a year old. It's called Surrender, and it's by Bono. So yes, I'm recommending Bono's book. I know we have all had a little bit too much of Bono in our lives over the last years and decades, and particularly over the last month or so with U2's stand at the Sphere. But I was not really interested in reading Bono's book when it came out. A close friend of mine who I play music with read it, and he said, “You really need to read this book. There's a lot of stuff in there about creating music and about creativity, and you have to check it out.” So on his recommendation, I checked it out, and it's a delight. It's a very good book. Bono, he has this interesting sort of power in the world because he's the front man of the biggest band in the world, and he has access to people and to places that most of us can't go. He also just wants to do good, sometimes performatively, sometimes privately. But his motivations are pure and his actions for the most part are also very pure. And I respect that about him. The book is unconventional. It is based around 40 different songs that he's written in his life, and he uses these songs as sort of windows into his biographical story and the story of his band. But it is very well written, and I really liked the structure. The structure I think is the thing that actually makes the book work in a way that a lot of rock star autobiographies don't work. Also, just really great stuff about the creative process and about what it was like to pull together some of the songs that we all know and love from U2. I am not making this recommendation because our guest happens to be Irish—

Lauren Goode: I was going to ask you that.

David Gilbert: It's OK, I'm just turning on my video to show you this. I also just read the book.

Michael Calore: You just read Surrender?

David Gilbert: It's—

Lauren Goode: It's amazing.

David Gilbert: Yeah, it's on my bedside locker, as the book I read before A Heart That Works, and I couldn't agree with you more. It's an incredible book. And that's coming from someone who, like a lot of Irish people, doesn't exactly love Bono. But I have a newfound respect for him after reading his book.

Michael Calore: Exactly the same.

Lauren Goode: I have so many questions right now.

Michael Calore: Well, quickly, I will say the audiobook is excellent. I chose the audiobook—

David Gilbert: Yeah, I've heard—

Michael Calore: And he narrates it.

David Gilbert: Yeah, I've heard it's really, really good.

Michael Calore: It is. I mean, you can't imagine a world in which Bono would let anybody else read his book into the microphone, but it really elevates the experience. I read one chapter on my Kindle, and then I immediately switched to the audiobook.

Lauren Goode: David, two questions for you.

David Gilbert: OK.

Lauren Goode: One, now that I see you on video, are you wearing an Irish wool sweater?

David Gilbert: No, it actually—

Lauren Goode: It looks like a proper sweater.

David Gilbert: I actually bought it in London when I was in London recently.

Lauren Goode: Oh dear, OK.

David Gilbert: Sorry, I know.

Lauren Goode: Also, why don't some Irish people like Bono?

David Gilbert: Because—

Lauren Goode: It's the most controversial part of this podcast.

David Gilbert: Because Irish people have an issue with people being too successful. So we think Bono's got too big for his boots.

Lauren Goode: His … OK.

David Gilbert: His 1-inch heel boots. Yeah, there's kind of a resentment, I guess, among a lot of people. But reading this book, a lot of people's perception of him seems to be completely wrong. He's very funny. He's a brilliant writer, and some of the stories in it, especially the stuff around how he didn't want to just remain on one level. He was always trying to push the envelope and do something different. A lot of times it didn't work, but it's just a really interesting book.

Michael Calore: Yeah.

Lauren Goode: Oh well, I'm going to have to borrow one of the two copies that my brothers rejected last Christmas. This is why I'm laughing. Last Christmas I heard great things about the book, so I bought two hardcover copies, brought them home, and gave them to my older brothers, and they both rejected them. They were like, eh.

Michael Calore: Honestly, I had the same reaction, and—

Lauren Goode: They're also musicians. One's a professional musician, the other's an amateur musician. I was like, “You're going to be into this.” And they were not.

Michael Calore: They should read the book.

Lauren Goode: I agree. And now I'm going to read it too.

Michael Calore: So this is my recommendation. Get over it, read Bono's book.

Lauren Goode: Read Bono's book, OK.

Michael Calore: What's your recommendation, Lauren?

Lauren Goode: My recommendation is not a book, although it's funny, the book I'm reading right now is about Ireland. But I'm going to put a pin in that, David, and we're going to come back to it. My recommendation is, I'm going to turn the tables a little bit and ask our listeners for a recommendation for me. I need a new workout playlist. I did this thing for a couple of years where I slowly built up an exercise playlist on Spotify, because while I was exercising, I would hear a song that I liked, I would tag it, and it would get sent to this playlist. And I've experienced this thing recently where, when I'm running or doing anything vigorous, I put it on and I just hate every song. Nope, not that one. Nope, next song, next, next. And I realized it's because if you make a playlist as you are exercising, your endorphins are flowing and you think everything sounds great. You're like, yeah, I'm really into this. And it turns out it's not a good playlist at all. So if any of you, our dear Gadget Lab listeners, or my colleagues here have really good workout playlists, send them my way. I prefer things with lyrics for exercise. I do like instrumental music, but I am looking for lyrics, a little bit of distraction. I tend to gravitate toward pop or something like it when I'm exercising. If you have recommendations, find me on all the platforms. Also, my email's out there in the world, lauren_goode@wired.com. Send me your workout playlist recommendations. I'd love to hear them. I need something new.

David Gilbert: You know, Lauren, you could take the 40 or so songs that are in the Bono book and create a playlist out of that, and then you could use it as a workout playlist and read the book at the same time to get a proper, fully immersive experience.

Michael Calore: 40 songs, one workout.

Lauren Goode: This is going to be the last moment of Bono bashing on this podcast. It turns out, David, I already have a full U2 album on my iPhone. Do you want to know why?

David Gilbert: Because they forced you to put it there. That's another reason why people don't like Bono.

Lauren Goode: Pretty much. Still mad. Still mad about that. Does he address this in the book?

Michael Calore: Yes. Yes, a few times.

Lauren Goode: Actually, after the last Apple event, earlier this year in September, I tweeted, “Another Apple event has gone by. Still mad about U2.” And I got a flurry of responses. I think it was the last time I was on Twitter too, so yeah. But no, I did actually listen last night as I was running, David. I listened to “Who's Gonna Ride Your Wild Horses,” and I did not reject that one. I didn't say next. I was like, OK, this is a pretty good jam.

Michael Calore: Great tune.

David Gilbert: Yeah.

Michael Calore: Best album.

Lauren Goode: Yeah. All right, well that's our show for this week. David, thank you so much for joining us. Thank you for sharing your wisdom. We'd love to have you back on. Like we said earlier, we need a few more guests from Cork. Did I tell you my great-grandfather's from Cork?

David Gilbert: No.

Lauren Goode: Yeah. I'm going to send you his address so you can … It's a whole, we'll do a whole other podcast on this.

Michael Calore: What's his name?

Lauren Goode: Thomas Goode.

Michael Calore: Thomas. You can just walk outside and yell Thomas.

Lauren Goode: Well, he's been long gone, but yes.

David Gilbert: Ireland's small, it's not that small though.

Lauren Goode: Yeah, I wonder if it's near you. I would love to know.

David Gilbert: Everything is near me in Ireland.

Lauren Goode: But thank you so much for joining us.

David Gilbert: It's been an honor.

Lauren Goode: It's been really lovely having you on.

David Gilbert: It's been great.

Lauren Goode: And Mike, thanks as always for being a great cohost.

Michael Calore: Of course, you're welcome.

Lauren Goode: And thanks to all of you for listening. If you have feedback, you can find all of us on Twitter (question mark), Threads, Instagram, Mastodon, Bluesky, I don't know, just check the show notes, we'll be there. Our producer is the excellent Boone Ashworth, who, I have to note, rode an electric jet ski this week, came back in one piece, and came back raving about it. His photos are super cool. It looked incredible. These are real photos, not fake photos. So everyone should go check out that story on WIRED.com later this week. Goodbye for now, and we'll be back next week.

[Gadget Lab outro theme music plays]

Michael Calore [singing to the tune of U2's “Who's Gonna Ride Your Wild Horses”]: Who's gonna ride your electric jet ski?

[Everyone laughs.]

David Gilbert: Oh my God.

Lauren Goode: There's your sting for the show, Boone.