Disinformation and Democracy: An Interview with Young Mie Kim

Young Mie Kim is a Professor at the School of Journalism and Mass Communication and an Andrew Carnegie Fellow at the University of Wisconsin-Madison. Professor Kim's research centers on the influence of digital media on political communication in contemporary media and politics across political campaigns, issue advocacy groups, and voters. Professor Kim's most recent project, Project DATA (Digital Ad Tracking & Analysis), employs a real-time, user-based ad tracking tool to study the sponsors, content, targets, and algorithms of online political campaigns across social media platforms. The project illuminates the oft-obscured operations of digital campaigns while highlighting pernicious microtargeting practices. Kim's article "The Stealth Media? Groups and Targets behind Divisive Issue Campaigns on Facebook" won the Kaid-Sanders Award for the Best Political Communication Article of the Year from the International Communication Association, and her work has appeared in over 400 major media outlets both in the United States and abroad (The New York Times; WIRED; BBC). Her research has also appeared in journals including Communication Research, the Journal of Communication, and the Journal of Politics, among others. Kim received her Ph.D. from the University of Illinois at Urbana-Champaign.

Zach Stern: What prompted you to start Project DATA?

Young Mie Kim: For almost my entire career, I have been studying passionate publics—people who care about political issues because of particular values, identities, or interests, like those who care about abortion because of their religious values. I realized that these people tend to put their issues ahead of political party identification and that they tend to distrust mainstream media, opting instead to spend a lot of time on social media platforms like Facebook. They're mobilized by the issues they care about, and in the data-driven, algorithm-based digital age, political leaders and campaigns try to identify and target these passionate people using data and microtargeting tools, because it's much easier for campaigns to convert, persuade, and mobilize people who are very passionate about particular issues given their personal concerns. These groups are scattered around the country, so it has traditionally been hard to identify and mobilize them, but with microtargeting tools, you can now do just that. I realized that studying advertising—the outcome of very deliberate strategy—would be a good way to examine how political campaigns identify these people and to come one step closer to understanding the strategies and tactics that they use.

That’s why I started this project, but I quickly realized that political ad data on social media is not publicly accessible. Even when Congress investigated Russian election interference after the 2016 election, they relied entirely on data that the platforms voluntarily handed over to them, so I hired a computer scientist and developed an app that tracks digital political ads across social media platforms—a user-based and real-time reverse engineering tracking tool that would allow us to track who sent ads and capture ad content. In addition to the app, we also surveyed users on their demographics and their political attitudes so that we could reverse engineer how the ads targeted particular groups and build targeting profiles for political ads. It is very important to have an independent investigator outside the tech platforms, and this project collects data independent of these tech platforms through our user-based app. 

ZS: Whether it is the government throwing some force behind it or expansion among researchers like yourself, could Project DATA be scaled up to cover more users or more platforms? 

YMK: It is not easy, because we have to protect the users and have to think about surveillance. I don't necessarily like the idea of the government doing this. With advanced data analytical techniques, you can get a pretty good idea of who people are if you really drill into individual-level analysis, but we don't do that. I think the best model is for some kind of independent consortium to do this kind of research in real time, with a larger pool of participants, in an ongoing manner. At the moment, we only track data for a certain period of time in election years, but ideally, we want the project to be ongoing. In 2016, we had 17,000 people participate in the study and 11,000 people complete all of the surveys we asked for, but compared to what the tech platforms have, that is still a relatively small sample.

ZS: How has the project developed from its first election cycle in 2016 to the 2018 and 2020 elections?

YMK: I started this project to investigate how passionate publics are identified, targeted, and mobilized on social media platforms by political campaigns, and initially, I ran a candidate campaign analysis—looking, for example, at Hillary Clinton's ads and checking how many matched our dictionary. We realized that many ads had unidentifiable sponsors, so we tracked the landing pages of these ads to identify the sponsors. Yet, half of the sponsors we examined were still not identifiable because they were not registered with the Federal Election Commission. Some nonprofits are exempted from disclosure requirements, but if that were the case, we should have been able to find them in the IRS data that we looked through. Still, half of the sponsors were not identifiable, so we set them aside because there was nothing more we could do. Then, one year later, the House Intelligence Committee released meta information on Russian ads, and after matching this data with the landing pages from our data, many of the suspicious groups that we identified turned out to be Russian groups. Since then, our research has become more forensic, trying to identify the strategies and tactics of Russian actors and Russian election interference patterns. We then realized that domestic actors also used similar techniques, so we have been trying to understand the relationship between domestic and foreign suspicious groups.

ZS: How are domestic and Russian actors similar and different both in terms of their tactics and when they arrived? 

YMK: I'm not sure which one came first. All I can tell is that around 2014, we began to see suspicious and unknown groups posing as grassroots organizations on social media platforms. In terms of their strategy and tactics, they are very similar. Basically, a campaign generates many unidentifiable, untraceable Facebook accounts and then runs each of these accounts as if they were independent from one another. Some of these groups focus on conservative issues to mobilize extremists or extreme Trump supporters, while others pretend to be Latino or African-American groups to try to suppress turnout among these demographics. I call it asymmetric mobilization. In 2016, for example, both Russian and domestic groups (again, these suspicious groups seem to be coordinated within the same network) mobilized Trump supporters with the Trump agenda—issues like nationalism and anti-immigration—and tried to suppress the turnout of likely Clinton voters by targeting Bernie Sanders and Jill Stein supporters or by targeting Clinton supporters with attack ads. I've seen a similar pattern in 2020 as well. We didn't see any foreign actors in 2018, but domestic campaigns were very similar.

ZS: In terms of how we should go about bringing some of these suspicious or unidentified groups out of the darkness and into a place where we can better regulate them, do you see the Honest Ads Act as a solution to this problem? 

YMK: The Honest Ads Act is a good starting point. It has two major components that I like. One is the electioneering communication provision. In broadcast advertising, if ads mention candidate names, there is a disclosure requirement that mandates a disclaimer on the ad. The Honest Ads Act would do that for online advertising. The second element that I like is the creation of a centralized national archive that includes not just the ads but also the targeting information used by political advertisers. That would be a really important transparency measure. Major tech platforms like Facebook and Google have political ad archives, but the problem is that they are constructed according to self-regulatory measures, so their definitions of political ads are not consistent. We want to have clear, conspicuous, and consistent guidelines for transparency—basically a sunshine policy for online advertising, focusing on transparency rather than censorship.

ZS: What else would you like to see done to combat disinformation and online voter suppression?

YMK: First of all, I think that the public discourse has focused too much on censorship—what should be taken down and who has the responsibility for that. It is not productive and it has never been productive. Tech platforms are always behind on taking down misinformation or ads run by extremists or hate groups because this approach only focuses on content and raises questions around freedom of speech. Those who are against such takedowns always talk about freedom of speech, and those who are for the takedowns are never satisfied. Others wonder whether we really want gigantic tech platforms to have control over everything. What we need to do to have some accountability is to increase transparency.

The solutions should be at multiple levels. Some people point to media literacy, but research has shown that those who engage with misinformation are actually a highly educated and highly politically involved group. We didn't find any correlation between education level and misinformation exposure on social media, and the media literacy literature is mixed—some studies show strong effects, others show the opposite. We also need action at the tech platform level. Platforms should invest their resources in civic integrity and be more transparent. Creating an ad library and collaborating with other platforms would be good. But I think the most important thing is actually to have good federal policy, including transparent tech guidelines from the FEC and congressional action.

ZS: What do you think Congress could and should do to combat disinformation and online voter suppression on platforms like Facebook?

YMK: We have laws and policies that can regulate online disinformation, including election interference, but these policies are so outdated that they don't adequately regulate disinformation on these platforms. We need ad archives, for example, that include targeting methods and targeting information, so that voters can decide whether to trust particular information. The Facebook whistleblower—Frances Haugen—also pointed out that we need greater algorithm regulation and transparency. Without adequate regulation, we're going to have very limited solutions. There are a number of different ways you could do that. Some people argue for algorithm auditing by a third party or agency. Australia recently passed a bill that requires Google, Facebook, and other large tech platforms to disclose how they formulate their news recommendation systems and to share that information with major news organizations, since people now find these stories on social media rather than directly from news websites. That kind of market-intervention regulation is one example, but there are a number of different ways you could address the algorithm component.

ZS: What role should public pressure play in combating disinformation and online voter suppression, and where are the limits of its impact? 

YMK: Public pressure is really important. We see this kind of discussion because academics and nonprofits have mobilized and educated the public to criticize platforms, and that's why platforms are engaging in self-regulation. But ultimately we need policy or regulation focused not on censorship but rather on an FTC-type market intervention. If you think about these misinformation problems, they are clearly tied to the whole media ecosystem—the news industry is heavily reliant on tech platforms, and tech platforms don't share algorithmic information with the public, so we are dealing with an unbalanced, unfair market. To sustain a healthy democracy, we need to recognize that tech platforms and elections are part of our infrastructure and that we need to intervene if the market is about to fail. Information is the currency for voters; to make a decision, voters must be fully informed. We need to create a healthy information environment, and information accessibility, availability, and equality are very important.

ZS: Do you see microtargeting as an inevitably pernicious practice, or are there versions of this technology that you find valuable in terms of civic and other issue-based organizing? 

YMK: Microtargeting can be used in both good and bad ways. Younger voters, for example, are relatively alienated from political processes, so microtargeting can be a great tool for identifying them and customizing messages to them. The same goes for hard-to-reach voters like people in rural areas without consistent internet access. So there are very good ways to use microtargeting, but microtargeting is also inherently political. What you need to think about, at least in the political domain, is who is able to use microtargeting in political campaigns for their interests, and the information asymmetry—political campaigns know a lot about people, but people don't know much about these campaigns.

Microtargeting is always used to amplify existing conditions, but the problem arises when campaigns use microtargeting to define who voters should be. For example, if you look at the meta information for the Internet Research Agency, they explicitly excluded Asian Americans because they are relatively less politically active than other groups. My research also shows that Hispanics and Latinos spend a lot of time online, but when we controlled for time spent on social media, Hispanic voters received fewer political ads than white voters and more voter suppression ads than white voters. So here microtargeting limits the political involvement of certain groups while amplifying that of others, creating inequality in political involvement. I can't say microtargeting is fundamentally bad, but at least in the political domain, it benefits political campaigns a lot more than voters and doesn't give everyone equal weight.

*This interview has been edited for length and clarity.