Hunter’s Laptop, Deepfakes, and the Arbitration of Truth

Image depicts Hunter Biden, the son of US President Joe Biden. Image via Teresa Kroeger, World Food Program USA/Getty Images

In April 2019, a man who may or may not have been Hunter Biden walked into a Delaware computer repair shop. The man dropped off a water-damaged MacBook Pro, left, and never returned, setting the stage for a controversy that would roil the US political landscape. 

The laptop hurtled to the surface of public discourse on October 14th, 2020, just weeks before the November presidential election, after a New York Post story used emails recovered from the laptop to rehash claims of corruption that had long been leveled against the younger Biden and his father. Disinformation and Russian election interference dominated headlines in 2020, and this seemed to be an open-and-shut case of both. That the laptop’s journey from the repair store to the Post was midwifed by Trump associates Rudy Giuliani and Steve Bannon didn’t help its case. So Twitter, Facebook, news organizations, and the rest of the mediasphere, following cues from disinformation experts, former intelligence officers, and the Biden campaign, restricted the distribution of the Post story on their platforms and cast doubt on the laptop’s provenance in a slew of cautionary articles. 

However, with recent reporting suggesting that several of the emails may, in fact, be real, the laptop incident looks less like an instance of disinformation successfully quashed and more like a harbinger of dangers to come. As an era of more advanced disinformation—such as deepfakes—begins to bear down on the US, the Biden laptop imbroglio should serve as a lesson and a warning about the arbitration of truth.

***

“Smoking-gun email reveals how Hunter Biden introduced Ukrainian businessman to VP dad,” began the New York Post article. Beneath this screaming headline, the Post asserted that on April 16th, 2015, Joe Biden met with a representative from Burisma, a company whose board Hunter Biden had joined a year before. The Post pointed to an email recovered from the laptop sent by the Burisma adviser, Vadym Pozharskyi, to Hunter Biden: “Dear Hunter, thank you for inviting me to DC and giving an opportunity to meet your father and spent [sic] some time together. It’s realty [sic] an honor and pleasure.” 

The story was certainly big, if true: Had the elder Biden actually met with Pozharskyi at Hunter’s request, it would contradict his claims that he’d never spoken with his son about Hunter’s “overseas business dealings.” The Post followed this first accusation with a smorgasbord of salacious allegations against the Bidens. A week later, Biden said the meeting never occurred; in April 2021, the president doubled down on his denial.

Outside of the corridors of the Trump administration, the election-focused liberal establishment quickly sought to associate the laptop with one term: Russian disinformation. Major news organizations ran articles sourcing unnamed government officials about Russia-related FBI inquiries into Giuliani and Hunter’s laptop. Dozens of former intelligence officials co-signed an open letter suggesting the cache was Russian disinformation. Democratic Senator Chris Murphy went on broadcast news to say “Rudy Giuliani is, at this point, whether he knows it or not, a conduit for Russian disinformation.” And, perhaps most memorably, during the final presidential debate Joe Biden angrily declared the laptop a “Russian plant.” 

Admittedly, in those feverish October days leading up to the election, it wasn’t unreasonable to prejudge the laptop as a Russian psy-op. For one, the story provided by computer repair store owner John Paul Mac Isaac didn’t make much sense. He refused to answer questions, seemed unable to recall a basic timeline of events, and gave several conflicting accounts throughout an hour-long interview with reporters. In one memorable segment, reporters asked Mac Isaac about his relationship with Rudy Giuliani—whom Mac Isaac had originally said he reached out to, not the other way around. “When you’re afraid and you don’t know anything about the depth of the waters that you’re in,” Mac Isaac intoned, “you want to find a lifeguard.” Likely regretting what he’d said, he followed that up with “Ah, shit… no comment.”

Put this in the context of heightened fears of Russian electoral incursions, along with pre-existing warnings to the White House about Giuliani’s Russian connections, and the media establishment’s move seemed justified, even righteous, at the time.

But a year later, this near-certainty has begun to falter. In late September 2021, POLITICO slipped into a morning newsletter that one of its reporters, Ben Schreckinger, had corroborated some of the emails in the cache—including the email about the 2015 Burisma meeting. POLITICO hedged its bombshell report, acknowledging that in addition to the “genuine files, it remains possible that fake material has been slipped in.” But even partial confirmation of the laptop story by a major, reputable news organization began to turn heads. 

What’s more, it’s not as if Hunter Biden ever explicitly denied that the laptop was his. “There could be a laptop out there that was stolen from me,” he told CNN. “It could be that I was hacked. It could be that it was the—that it was Russian intelligence. It could be that it was stolen from me. Or that there was a laptop stolen from me.” And the arranged meeting in question? Biden’s spokesperson denied the meeting ever happened, but in past months evidence for an encounter between the then-vice president and the Ukrainian has grown stronger. The Washington Post noted that Biden was, in fact, at Cafe Milano in DC at the same time as Pozharskyi on April 16, 2015, as the email suggests. Though The Washington Post wrote that there’s no hard evidence the two men actually interacted in the restaurant, the paper admits: “One mystery is why the drop-by was not listed on Joe Biden’s schedule.”

***

Even if it included falsified elements, the Hunter Biden laptop was far from a deepfake video. The term “deepfake” (a portmanteau of “deep learning” and “fake”) refers to falsified images, videos, or audio recordings created using artificial intelligence. Many people trust video more implicitly than images or audio, so there’s been no shortage of concern about the rise of deepfakes, especially advanced video deepfakes, in our hyper-connected world. 

However, as with an advanced deepfake, there’s still no firm answer on the veracity of the laptop’s emails, and it’s unclear whether there ever will be. What matters at least as much, or perhaps even more, is how politicians on both sides seized on the laptop snafu to push their political agendas. This unbreachable gulf between artifice and truth is ripe for exploitation.

In June 2019, a grainy video proliferated throughout Malaysian social media channels that allegedly showed the country’s Economic Affairs Minister, Mohamed Azmin Ali, having sex with a younger staffer named Muhammad Haziq Abdul Aziz. Although Azmin insisted that the video was fake and part of a “nefarious plot” to derail his political career, Abdul Aziz proceeded to post a video on Facebook ‘confessing’ that he was the man in the video and calling for an investigation into Azmin. The ensuing controversy threw the country into uproar. Azmin kept his job, though, after the prime minister declared that the video was likely a deepfake—a claim several experts have since disputed.

Was the low-quality, hidden-camera video genuine? No one knows, except its creators. But just like in the case of the laptop, the truth was inaccessible—and into that ambiguity stepped powerful parties with their own versions of truth. The chaos engendered by the Hunter Biden laptop’s unverifiability provides a window into a future in which a more sophisticated piece of visual disinformation wreaks unimaginable, unstoppable havoc.

High-level deepfakes that evade expert-level detection are still rare. They “have to be made in a very intentionally manipulative way, because you have to pay a world expert to actually create them,” said Brown University Professor Michael Littman, chair of a recently published Stanford report on AI that touches on the progress and perils of deepfakes. “The fear people have is that that level of quality will get easier to make.”

But could there soon be deepfakes that even top experts can’t identify? 

“Yes,” Littman said. “It’s already the case that there are images where top experts debate…The fact of the matter is they are getting better and better and better.”

Strategies for identifying deepfakes are advancing, but not as quickly as the competition.

One promising approach involves tracking a video’s provenance, “a record of everything that happened from the point that the light hit the camera to when it shows up on your display,” explained James Tompkin, a visual computing researcher at Brown. 

But problems persist. “You need to secure all the parts along the chain to maintain provenance, and you also need buy-in,” Tompkin said. “We’re already in a situation where this isn’t the standard, or even required, on any media distribution system.”

And beyond simply ignoring provenance standards, wily adversaries could manipulate the provenance systems, which are themselves vulnerable to cyberattacks. “If you can break the security, you can fake the provenance,” Tompkin said. “And there’s never been a security system in the world that’s never been broken into at some point.”
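Tompkin’s notion of provenance — a tamper-evident record of every step from camera sensor to display — can be sketched as a hash chain, in which each processing step commits to the previous record. This is a minimal illustrative sketch, not any real standard (industry efforts such as the C2PA specification are far more involved); all function names and step labels here are hypothetical.

```python
import hashlib
import json

def record_step(chain, step_description, media_bytes):
    """Append a provenance record committing to the prior record and the current media state."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {
        "step": step_description,
        "media_digest": hashlib.sha256(media_bytes).hexdigest(),
        "prev_hash": prev_hash,
    }
    # Hash the record itself so later tampering with any field is detectable.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

def verify_chain(chain):
    """Check that each record is internally consistent and links to its predecessor."""
    prev_hash = "genesis"
    for record in chain:
        if record["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

# A video's journey from sensor to screen, each edit recorded:
chain = []
record_step(chain, "captured on camera sensor", b"raw footage")
record_step(chain, "color-corrected in editing suite", b"graded footage")

assert verify_chain(chain)       # an intact chain verifies
chain[0]["step"] = "tampered"    # rewriting history breaks verification
assert not verify_chain(chain)
```

The sketch also illustrates Tompkin’s caveat: an attacker who controls the chain can simply recompute every hash and forge a clean history, which is why real provenance systems layer on cryptographic signatures tied to keys the attacker does not hold — and why breaking that key security means faking the provenance.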

Given these issues, a single silver bullet for deepfakes appears unlikely. Instead, each strategy at our disposal must be just one of a “toolbelt of techniques we can apply,” Tompkin said.

To be sure, misleading information has flourished in recent years without deepfakes. Such disinformation, often in the form of a crudely edited photo or a deceptive caption, follows a common trajectory: it spreads rapidly on social media before facing a blockade of countervailing fact-checkers, after which it fizzles out of the mainstream but still lurks in fringe communities. The Hunter Biden laptop initially followed a similar pattern, with concerned social media companies slowing its viral spread or preventing users from sharing it entirely.

But in the Biden case, the follow-up of a fact-check sucker punch never arrived. The saga of Hunter Biden’s laptop, then, can be seen as a test run for a harrowing future where nobody except the creators of a video know for sure whether it is real. Consider: a video of a US general informing other officers of a plan to initiate a missile strike. Low-effort lies have been enough to sway millions of minds before; what is the plan for when they genuinely can’t be disproven?

One thing that’s clear is that the verifier of deepfakes holds immense power in a situation where complete truth is unattainable. Many Americans trusted Joe Biden and his surrogates when they confidently labeled the laptop as Russian disinformation, even though this confidence wasn’t borne out by the evidence. This also applies to Trump’s Director of National Intelligence John Ratcliffe, who asserted without evidence that the laptop wasn’t Russian disinformation. Especially in the heat of the moment—days before an election, for example—declarations from powerful entities about the veracity of a video must be treated with skepticism, and ideally a healthy dose of outside corroboration. 

On August 4, the Senate Committee on Homeland Security and Governmental Affairs advanced the Deepfake Task Force Act, which would create a working group to address the hazards posed by deepfakes. This legislation is undoubtedly an important gesture toward the future, despite some concerns that the government is not paying enough attention to old-fashioned disinformation.

However, as the events of last October highlighted, Americans must be cautious about whom they depend on to distinguish deepfake from reality. Governments have vested interests in defending their dominion, and will not hesitate to engage in manipulation to advance that goal: Our political authorities cannot and should not be the sole arbiters of truth. “I feel like it can’t be government oversight,” Littman said, “but it can be that people sign on to expectations or systems that make it easier to track the provenance of the information.” Deepfake damage prevention must be decentralized, and any government intervention should focus more on empowering citizens to determine truth than on determining it unilaterally. The threat is already here: The military coup government in Myanmar, known as the Tatmadaw, has both purveyed videos that many consider deepfakes and disputed the veracity of genuine online documentation of human rights violations. In the US, state-level attempts to ban malicious deepfakes outright have faced scrutiny from news associations, which argue that such laws could inadvertently chill free speech.

“How do we respond? We have to be a little skeptical,” Littman said. “We need additional proof. I think that’s where we need to get to with imagery as well—a picture’s not sufficient anymore to be convincing. I don’t think that’s new, we just have to start treating something we found very trustworthy as less trustworthy. It joins all the other stuff we have to stop taking at face value.”

Still, for Littman, there’s a silver lining in the coming storm. “I try to console myself… because the fact that people are lying is proof that there’s still an opportunity to convince people of the opposite, and to actually debate, and get people to engage their faculties of reason. Words still matter.”
