
Lifting the Veil: An Interview with Ivan Oransky

Image via Harvard School of Public Health

Academic fraud took center stage last year amid the controversy over former Harvard President Claudine Gay’s resignation. For Ivan Oransky, these questions are nothing new. In 2010, Oransky cofounded Retraction Watch, a blog and online database dedicated to tracking retractions by scientific journals, including the kinds of retractions that emerge from plagiarism and other types of fraud. An esteemed medical journalist and editor who has served as an editor at Reuters Health, Scientific American, MedPage Today, and others, Dr. Oransky is currently Editor-in-Chief of The Transmitter, a publication dedicated to cultivating dialogue around contemporary neuroscience. He is also an M.D. by training and a Distinguished Journalist in Residence at New York University’s Arthur L. Carter Journalism Institute.

Editor’s Note and Disclosure: The interviewer worked part-time at Retraction Watch for 2 years (ending in June of 2023) and interviewed Dr. Oransky in early 2024.

Ariella Reynolds: You built the Center for Scientific Integrity and Retraction Watch, consisting of a website and database dedicated to providing visibility to retractions and academic fraud across scientific literature. How did all this come about? What’s the origin story, if you will, of your mission? 

Ivan Oransky: I would say there are two origin stories. One is how I came to do medical journalism to begin with, and the other is how Adam Marcus and I co-founded Retraction Watch and then the Center for Scientific Integrity. I think my career origin story probably starts in high school when, growing up as the son and grandson of physicians, I had a natural tendency – maybe even a pressure, or a push – to go to medical school. But I also grew up with a real interest in journalism. I was editor-in-chief of my high school newspaper and used to read Larry Altman, who was really the only person allowed to use an honorific, an M.D., at The New York Times, where he was a correspondent for decades. And that sparked an interest, and I continued to be interested in it. And I was very active with the college newspaper when I went to university.

And then I met someone – and I would say this is where the long origin story of Retraction Watch started – my senior year of college. I met George Lundberg. At the time, he was the editor-in-chief of JAMA, the Journal of the American Medical Association. He was doing a visiting professorship of some kind up at Harvard, which is where I was, and I went to his office hours. He asked me what I was going to be doing. He had heard a little bit about me and he was curious. And I said, “Well, as it happens, I’ve literally just gotten into my first medical school,” though that wasn’t where I ended up going. And he said, “You should be on the editorial group that runs the medical student section of JAMA.” Long story short, or somewhat shorter, I ended up doing that and had a wonderful experience during medical school. I sat in on peer review meetings at one of the world’s leading, if not the world’s leading, medical journals as a 25-year-old or so – I hadn’t even graduated yet – and just became really enthralled with the concept of peer review and the question of quality in scientific publishing, warts and all. George was hardly a cheerleader for everything – in fact, later on, he was dismissed for publishing something controversial.

But I’ve always been interested in how things work. I mean, don’t ask me how my car works; I leave that to my brother. But I think covering how things work is some of the most rewarding journalism you can do. And usually you’re showing flaws in how things work, because that’s how we all get better at what we do, including looking at our own work. You uncover some amazing things and hopefully help create a solution. So some years later now, and it’s 2008, 2009, Adam – again, my co-founder of Retraction Watch – broke a really big story about someone named Scott Reuben, who was a pain management researcher and anesthesiologist. It turned out that he had been using fake data in many clinical trials. He went to federal prison on charges related to that and he paid restitution. He’s had 25 retractions. Adam broke this story wide open because he was the managing editor of Anesthesiology News. And I knew Adam, and we started trading emails and phone calls about retractions. We thought, these are interesting. No one’s covering them, and yet there are all these stories hiding in plain sight. So we launched in August of 2010. We had no idea what we were getting into.

Now, I’d like to think if we had, we still would have done it, but we thought it would be an occasional exercise, an occasional blog post here and there. We’d cover some fun, interesting things, do something useful, but keep our day jobs. And we have kept our day jobs the whole time. But here we are, 13 and a half years later, and it’s very much taken on a life of its own.

AR: And now The New York Times, Boston Globe, and other major media are coming to you for your perspective on academic fraud – and they’re obviously interested in more than academic integrity. We are in the middle of ‘plagiarism wars,’ as they’ve been called in some of these media – wars that arguably originated with the New York Post‘s charges of plagiarism against Claudine Gay and were amplified by Bill Ackman and others; then came Claudine Gay’s resignation, followed by Business Insider‘s plagiarism charges lobbed at Ackman’s wife, with Ackman responding that he’s now gunning for all the big-name institutions [at the time of this interview]. So with the pursuit of academic integrity as kind of this new political weapon of choice, are we poisoning the very well that we’re attempting to clean up? Are we sending a clear message that academic integrity matters, or is it just too wound up with politics and ulterior motives?

IO: What I like about those questions is that I think they reflect a good understanding of the dynamics and the context, which is always so important. I referred – and I sort of still think about it this way – to what’s going on now as a “plagiarism arms race.” I would agree that it’s also a plagiarism war. During the Cold War, of course, it was an arms race. Maybe it’s not historically accurate to say there was no war, but there were no wars between the two major players, the Soviet Union and the US.

So you do have this arms race. I’m very concerned about that, and I think sometimes, maybe, people are a little surprised to hear that I think we do need to establish better standards and think about what’s going to happen. We’re sort of all gunning for each other. Everybody’s gunning for one another, with some exceptions, almost never as a pure academic integrity exercise. That being said, it’s also true that we have to be able to take criticism, and universities have to be able to take criticism. Individual scientists and researchers need to be able to take criticism, no matter where it comes from. And I think that very quickly, with Claudine Gay’s story, you saw both sides react negatively because of where the criticism came from: You had people saying, “Yes, she plagiarized, but it’s actually okay with me.” And I really find it hard to believe that would have been the case if she had been on the other political side of history or of scholarship. Similarly, Bill Ackman is now incensed that people are looking at his wife’s academic work. We contributed one particular story to that narrative, which is a funny one, if nothing else, because it turned out that she had plagiarized a non-scholarly article in Physics World that had already been plagiarized by someone else some years ago. That earlier plagiarist was a major concussion expert – or, I would almost now say, alleged concussion expert – and he was dethroned in the same way that a lot of people are being dethroned.

And Bill Ackman said that it’s unfair to go after that, but, well, you actually have to live by the mindset of, “live by the sword, die by the sword; all is fair in love and war,” which is probably all too appropriate for what has been happening with Bill Ackman and Neri Oxman, his wife. I don’t know where this all leads us and whether or not we are poisoning the well, but I do think that we have to look at whether this is simply a manifestation of hypocrisy or double standards, or weak thinking, inconsistent thinking, at universities and other institutions – some of which is probably unavoidable because of the pressures that universities are under, and some of which was entirely predictable.

And so whenever I do give interviews, I try my best to talk about the context and to expand the boundaries of that conversation beyond just the plagiarism arms race or plagiarism wars. It’s also about other kinds of misconduct in academia because the same arguments have been used when it’s about what I consider to be even more serious academic misconduct and fraud: There have been attempts going back to the 1980s – and earlier than that, but very publicly going back to the 1980s – to make it sound as though none of these things ever happened, that serious misconduct is so rare that we shouldn’t even focus on it. And that is, to me, at the root of it, whether it’s a plagiarism arms race, or a war, or any of this kind of race to the bottom; this is what’s happening in a lot of these cases. Because when you make claims that can be easily disputed, the claim here being that fraud and misconduct basically don’t happen (and people literally got up in Congress and said that in the 1980s, when Al Gore held a hearing on it)– when those claims are so obviously wrong, and will be outed as wrong, you set up the institution that you are defending or supporting for all kinds of charges of hypocrisy. 

And really, you’re destroying trust when you do that. So, I think the seeds of all of this were planted long before social media, long before the New York Post. And there were some other blog posts before the New York Post, of course, long before Bill Ackman became upset with the way Claudine Gay was handling the Hamas-Israel conflict. This was planted decades ago. And we are now reaping what we sowed. And I say ‘we’ because we are all in some way part of the academic infrastructure and enterprise.

AR: Returning to specifically plagiarism for a moment, how big a problem is it really? Roughly what percent of retractions does it account for, and is it trending up or down? I think we’ve all seen the “everyone does it” rebuttal played out on social media, so is that true? Is everyone doing it? To what extent do you think it’s underreported? 

IO: It’s vastly underreported. Off the top of my head, I don’t have a plagiarism retraction figure for you, but I did look at it recently, and I’m quite confident that retraction strictly for plagiarism is rare even within what’s already a rare event. And that could be for any number of reasons. In recent years, it’s probably because it’s actually being screened for in a somewhat effective way, or at least people know that they’re going to be screened for it, so they’re not trying to plagiarize as often. But it’s clearly underreported. And I think there is some truth to the claim that everyone does it.

But depending on your perspective, that’s either really good news, really bad news, or somewhere in between. I would say it’s bad news. Full stop. But I think it’d be very hard to estimate the actual percentage of people who do it. I will say that I believe the overall rate of misconduct, which, again, goes far beyond plagiarism, is about 2% in the scientific literature. Others have said it’s even higher. But only a tenth of that – 0.2% of papers – is actually retracted right now. Based on surveys people have done, and on data about how often papers that we know are problematic actually get retracted, the retraction rate should be about 10x higher, or about 2% of the literature.

AR: And I know that one site that’s looking into this is the International Center for Academic Integrity, which reports that 58% of over 70,000 high school students across the United States admit to plagiarizing during high school, and almost 40% of over 63,000 US undergrads report that they have plagiarized from a written source during college. Why do you think those percentages are so high, especially compared to the very low number of retractions?

IO: Just to make sure we’re not comparing apples to pomegranates or bananas, you would expect – not that you want this, but it’s not surprising – that rates of plagiarism among professional scholars are going to be lower than among students for various reasons: stakes, what’s actually reviewed, who’s looking at it. You would also hope, actually, that the kinds of people who plagiarize are sort of, if you will, discouraged, if not weeded out, from moving forward.

That being said, the difference is still stark – it’s still such a large number. And whenever I say this, I’m very careful to say that this does not give people who commit these acts a pass. That’s certainly not my intent, and I think we need to deal with those cases individually as well – and, given what I’m about to say, hopefully in a compassionate manner. The system that we have created for scholars, professors, and researchers has really pushed “publish-or-perish” to such an extent that people feel they have to do whatever it takes to publish papers.

And so that, to me, explains a good deal, if not all, of the misconduct writ large – it’s not just plagiarism. In terms of students, we’ve created the same system. And I say this as a faculty member at a journalism institute: I think that our high schools and even our post-secondary education systems are failing our students when it comes to writing. I’m really disheartened by what I see in first attempts from some students, for example, and I blame the schools they were at before. Now those schools themselves are under pressure. You have to keep going back to something – maybe to the Big Bang, I don’t know. But all of the incentives are misaligned.

I would also argue that writing and critical thinking go together. If you don’t know how to do one of them, you’re a little more likely to plagiarize. If you don’t know how to do either of them, I don’t see how you do anything but plagiarize. And so I think that root cause analysis is so important, and I think we should not be surprised when we see high rates of plagiarism and other misconduct among students and faculty.

AR: You referred to stakes earlier. So my question is, what should be at stake? Why should we care about researchers lifting passages from other researchers? 

IO: Part of the answer is how things are now. Because it’s demonstrably and ridiculously unfair that the people who did the work, came up with the idea, wrote it down, and got it published – or didn’t get it published but were plagiarized anyway – don’t get the credit. Because credit is everything. Again, you will not be able to publish something that isn’t original, or that you can’t convince someone is original, which is what you’re doing with plagiarism; and if you can’t do that, you can’t get tenure and you can’t get promoted. But I think, bigger picture, the conversation that we’re having – well, the one that you and I are literally having right now, but also the larger conversation that is happening – is about institutions.

Now, I happen to believe in higher education. I’m biased there, having been a product of it and also part-time employed by it, so take that all with a grain of salt, but I happen to believe writ large in effective institutions. And I know there are flaws and I report on them every day, but I think that we really run a great risk if we allow previous lack of candor to let critics chip away at a highly imperfect, but still really critical, institution of higher education. We need to work on ways to make it better.

And the problem with plagiarism is that it really gives, I would argue, effective and probably justified ammunition to those who would really like not just to get rid of plagiarism, but to dethrone higher education as an institution. And again, I say that as someone who thinks that higher education needs to reform in critical ways – certainly in the area that I look at, which is scholarly publishing and what’s related to it. That being said, I’ve never been a fan of destroying institutions, because I don’t think, particularly in this current political and geopolitical environment, that anything will be rebuilt, let alone something better.

AR: How do you believe that ChatGPT will fit into this? How will it, as well as other forms of generative AI, support or challenge our collective efforts to uphold standards of academic integrity? 

IO: I don’t typically make predictions, but the quick answer, and I think the right answer, is: I don’t know. I’ll tell you what I’ve seen, which is that, as is often the case, academic publishing has not been able to keep up with the development of ChatGPT. There’s this technological utopian strain of thought which says, you know, we’ll just keep up with it, right? ChatGPT will get so good at finding itself in references – hallucinated references, for example, which is one of the things ChatGPT is famous for – or in whole passages of text, that it’ll kind of work itself out. I’d love to believe that, but I don’t, and I don’t because I’ve too often seen schemes by people who are probably smarter than the rest of us, in service of fraud and misconduct – paper mills, for example, which use all sorts of techniques to dishonestly submit papers and get them published.

So I’m not all that optimistic, but I’d also, again, get back to root causes and say, if we were to actually do something about incentives, maybe there wouldn’t be the need to, in some way, fraudulently use these techniques, including ChatGPT. So, my other concern with the plagiarism arms race, and ChatGPT, and other things that are sort of bubbling up, is that we all get fixated on new technology because we’re humans and we like shiny objects. That actually makes solving the bigger picture problem much more difficult.

AR: You’re also editor-in-chief of The Transmitter, a publication focused on advancing neuroscience research. How does the work that you do at Retraction Watch inform your work at The Transmitter and vice versa? 

IO: To some extent, it’s just that I’ve always felt that even – and maybe especially – at a trade publication like The Transmitter, which is for professionals and scientific researchers, accountability work should be welcomed and should be really important. And sometimes it’s cheered quietly, but sometimes it’s cheered quite publicly. Scientists don’t want to see misconduct in the ranks any more than anybody else, and in fact, it really annoys them. And that’s being kind. Anger is really the visceral reaction that they have, because the people who commit misconduct are taking away opportunities from honest scientists.

So, to some extent, accountability work informs all of my work. Somewhat more directly, there are retractions in neuroscience, and there is misconduct in neuroscience, though this doesn’t apply to most neuroscientists by any stretch. But we have started covering those issues here. We started doing some of that work even before we relaunched as The Transmitter, when we were Spectrum. And it’s some of our most read and most challenging work, although all the work is challenging. 

I just think that the Venn diagram of good science journalism and accountability journalism should maybe be an oval rather than a figure eight. And I think that that’s starting to happen in some quarters. And we want to be part of that because we think that allows scientists to do their work better.

AR: Major news outlets only now seem to be lifting the veil on academic fraud at our major universities, the most notable example perhaps being the accusations against Harvard’s former president, Claudine Gay. But you’ve been lifting that veil since at least August of 2010, when you posted your first blog post on fraud and retractions. Since then, you’ve built the Retraction Watch database out to 40,000-plus entries and sold it a few months ago. What are maybe the top three things you have figured out about academic fraud and retractions over the last 13-plus years?

IO: Well, one is that fraud is clearly still more common than anyone really likes to admit it is. I think you’re right that we, and I would say others, have been lifting the veil for some time now and certainly in the last year or so. A lot of people are more willing to say that this is a more common problem than they were admitting to before. Still not the major players, though: A lot of them are still finding ways to make it look like an anomaly, which continues to be a problem. 

The second thing that we’ve learned, which I think is consistent with that, is that correcting the scientific record, which includes retraction, but is not limited to retraction, is a long and often painful, arduous process for everyone involved, including the authors who have to retract. It’s not something that anyone seems to engage with or seems to engage in with any sort of enthusiasm, though maybe the sleuths who find the problems are enthusiastic in an inappropriate way. 

Ultimately, retractions take too long, if they happen at all, and allegations continue to be ignored.

I guess if I have to pick three, I’ll pick something positive for the third thing, which is that more people are paying attention. And some journals, publishers, and even universities are becoming much more proactive about correcting the record and taking these issues seriously. It’s still a small number, but I think that’s a positive development. 

AR: Now, one point Bill Ackman tried to make in one of his tweets is that MIT’s academic handbook didn’t mention Wikipedia until four years after his wife wrote her dissertation, raising the question of whether this sort of plagiarism represents academic dishonesty. That made me wonder whether the issue of academic fraud is larger than just fraud, because if we’re debating whether it’s okay to plagiarize a decidedly unscholarly source such as Wikipedia for an MIT PhD-level dissertation, then maybe we ought to have a larger conversation about what passes for research and scholarship. Your thoughts on this?

IO: I think specifically in this case it is somewhere between moving the goalposts and rearranging the deck chairs on the Titanic, to mix metaphors quite a bit there. Maybe they were playing soccer on the deck of the Titanic, but I don’t think so. Anyway, I think that once you are getting into a discussion of when exactly something that’s clearly not okay became officially not okay, you’ve lost the plot. And in this particular case, there was another thing that Bill Ackman’s company tried to say when Ackman was responding on behalf of his wife, Neri Oxman, which I find interesting for lots of reasons: When The New York Times asked about a story we reported – which was that she had plagiarized not just from Wikipedia and other sources, but in fact from a story in a magazine called Physics World, which turned out to have already been plagiarized by someone else – part of the response was, well, if she had just had AI tools, she would’ve caught the fact that she had overlap, or whatever they called it.

To me, that set of responses reflects a profound intellectual dishonesty. And so do I think that we should all sort of get together and decide on better standards and better definitions for plagiarism and other kinds of misconduct? The answer is yes, because it’s obviously yes, but that can’t come at the expense of continuing not to do anything about problems when we find them, or, frankly, of being selective in the problems that we look for and therefore act on. And the weaponization of these sorts of hunts for issues in the scientific literature is a big concern to me and to lots of other people. And that isn’t going to be solved just by creating better definitions of plagiarism.

AR: So you believe that we should be committed to rooting out plagiarism wherever it is, but not necessarily focus on changing the standards themselves, beyond potentially improving them?

IO: I think the standards are actually pretty clear. And again, I think they can always be improved and they should be updated, but that’s such a distraction. That is typically what the establishment does when it tries to put the brakes on any kind of movement, or, dare I say it, revolution. They say, “Oh, we just need to have endless arguments about how many angels can dance on the head of a pin.” No, what we need to look at is why people would feel the need to, or the pressure to, or end up, plagiarizing. That’s the conversation we need to have. And there is only a limited amount of oxygen in this particular room. It’s actually a pretty small room. Not all that many people are interested in this. And if we divert all of their attention to redefining plagiarism, we are never going to solve the real problem, which is “publish-or-perish,” and the fact that people who plagiarize, commit misconduct, mess around with images, make up data, are doing it because the incentives drive them to do it.

AR: Let’s say that you were named president of NYU, where I believe you’re currently a Distinguished Journalist in Residence and faculty advisor. What would you do, both at the level of student and faculty, to elevate rigorous standards for academic integrity and research? 

IO: Well, the first thing I would do is resign because I am not qualified to be president of NYU. And if the president is listening, please understand this is purely a hypothetical. I like my job there, and I think I do quite well, but I would not pretend to be president. And no disrespect to the president of NYU at all, but I actually think this is bigger than a single institution. If I can move your hypothetical a little bit, while I don’t actually want to be named director of the NIH for any number of reasons (and also I’m not qualified for that at all), that is a position where I think you can have some influence: If you change the incentives, if you say, “We are no longer going to allow people, directly or indirectly, to judge grant applications and create priorities based on publishing and how many times things have been cited,” I think then you can start to have a real ripple effect.

It’s not that I don’t think NYU or any other major research institution can have an impact – I’ve seen some really great “thousand-flowers-blooming” kinds of projects happening there. But I think we need to go even further upstream than that. Which makes it easy for me to say, because I’m even less qualified for a role like that.

AR: Switching gears a little bit, in a Freakonomics podcast, you said recently that 2% of papers should be retracted for errors and/or some form of fraud. Yet, you said that only 0.1% of the world’s literature is retracted today, about 1 in 1000 papers. So, we’re missing an enormous number of land mines in our academic literature. Here’s the question – and maybe it’s more of a thought question than anything else: If just one of those papers happens to be cited by thousands of others and used as the basis of a new drug, for example, then doesn’t it only take one to bring down scholarly and clinical work as we know it? And if that’s so, how do we truly assess the potential damage here? 

IO: So let me actually just make one correction/update: I stand by everything that’s in that interview, but it was conducted in September of 2023. It didn’t air until January of 2024, which is fine. That’s what happens. But the data I cited were actually out-of-date by the time it aired. So Nature did a really good job showing the trends toward the end of the year, which included more than 10,000 retractions just in 2023, which is a record. So now, while the rate of retractions isn’t markedly different, it’s actually 0.2%. For the purpose of discussion, it doesn’t matter that it’s 0.1 or 0.2: Either one is clearly a fraction of what it should be. But just for the sake of actually trying to keep the record up-to-date and all that, I did want to make that little bit of a meta point. 

I think there are a couple things to think about when talking about this. It’s not that I don’t think we should focus on, say, a particular paper that has maybe led to a patent, maybe led to clinical trials, and it’s been cited lots of times. Sometimes I think we should focus on that because it’s a really crisp example of how a single paper, a sort of butterfly flapping its wings, can have a dramatic effect. So that’s important. And telling stories like that is important because it captures people’s attention. On the other hand, I think that we should really be much more concerned about the pernicious nature of a lot of these papers that might not even ever get cited and what role that plays in terms of the trust we have in a given clinical trial that may not even have had any papers that were problematic leading up to it. 

In fact, more than a decade ago now, I’m pretty sure, someone did an analysis specifically on clinical trials. Grant Steen looked at how many clinical trials either were themselves fraudulent or were based on papers that turned out to be fraudulent or retracted for some reason. What he did then was look at how many people were enrolled in those clinical trials, and he came up with a number that was just shy of 400,000. Now, those were some pretty conservative estimates in my opinion, but reasonably conservative. And even then, that’s 400,000 people – people you, and I, and everyone else may know – who were in clinical trials based on, at best, false information and, at worst, dangerous, risky information. And again, that’s clearly an underestimate. The number would be much larger today.

AR: Wow. That is a very alarming statistic.

IO: And again, it’s old. So I’m quite confident that it would be much larger now.

That’s why I think we need to start looking, in a different way, not at the high-profile papers that turn out to be wrong – the Marc Tessier-Lavigne papers, or Claudine Gay’s – but at the steady drumbeat and the pernicious nature of the ground-cover papers that turn out to be wrong and undergird everything, directly or indirectly.

AR: Finally, how do you anticipate Retraction Watch and The Transmitter will continue to evolve over time to address that sort of concern and any others? 

IO: Well, The Transmitter and Retraction Watch are very distinct, different projects and enterprises. They do have me in common but that’s sort of neither here nor there, I think. 

Talking about The Transmitter first, I think that we’ve just begun our journey and we’re literally, as you and I are speaking, three months old. I don’t know what that is in dog years; I don’t know what that would translate to. But I think we’re still an infant. We have a lot of work to do, exciting work. There’s a lot of functionality still to build – ways to connect a community and help them find one another, learn about one another, and provoke thoughts, research, and conversations among one another. So we’re excited about that, and I think it’s almost a blank slate. There are a couple of lines on that easel, but it’s still a very exciting open platform.

I think when it comes to Retraction Watch, it’s sort of an interesting time to think about how we might evolve. One of the things that Adam and I have noticed is that we’re certainly not the only game in town when it comes to covering this anymore. I would like to think we still have a deeper knowledge of the issues than someone who is first coming to it or has been doing it only for a little while. And I don’t think that’s controversial; I mean, it’s just kind of obvious. We’re not experts in other things, in which they certainly are. But what that means is that given our current resources and likely resources for the foreseeable future, we can’t be everything, and we can’t be comprehensive. The database is comprehensive, and it has now been acquired by Crossref, which gives us a lot of terrific things, including it being open and us having a real runway in terms of sustainability and financial resources. But that’s on the database side of things, and we’re very excited about what Crossref’s going to be able to do with the data that we weren’t able to, and I think that’s going to start to pay off in important ways.

On the journalism side, though, we’ve started thinking about ourselves more as guides – not exclusively by any stretch, but more so as expert reporters who can talk to other people and really help them build capacity while building capacity ourselves. We give webinars and seminars to other reporters, we speak on panels, we obviously are interviewed by a lot of reporters, and that’s pretty exciting. It’s always been the case, but I think now it’s a bigger part of what we do.

It certainly means that he and I are not doing as many stories ourselves, although we have two reporters who do that now, which is great, and they’re really elevating our game in ways that we haven’t been able to, and it’s just terrific to watch and be part of. I don’t think there are bad choices we can make in terms of what our future is. It’s a good time to be thinking about it, and I think we have shown an impact. 

Now, I’m always careful about saying that, but there are actually plenty of other people saying it. Therefore, I feel confident saying it, and comfortable saying it. That doesn’t mean we’re going to say, “Okay, we’re done,” but I think we are going to think about what the best way to marshal those limited resources is. And I think we’re getting there, but we’ll see.

*This interview has been edited for length and clarity.
