Most studies published in scientific journals, it turns out, are either exaggerated or wrong. How come? According to epidemiologist John Ioannidis, editors of science journals are no different from everyone else in media. Sensationalism sells.
BROOKE GLADSTONE: In 2005, Dr. John Ioannidis made waves in the scientific community when he charged that most published scientific research later turns out to be false. Last month in The Economist magazine, he laid the blame at the door of the scientific journals that publish that research, saying that the spread of false information is due to something called "the winner's curse." Dr. Ioannidis, welcome to the show.

DR. JOHN IOANNIDIS: Thank you for inviting me.

BROOKE GLADSTONE: I know that "the winner's curse" is just one of the factors that you cite, but what is the winner's curse? Isn't that term usually applied to the winners of auctions?

DR. JOHN IOANNIDIS: In economics, the winner's curse means that when you have lots of people making bids in an auction, the person who eventually wins has probably overpaid. If one could estimate the average of all the bids, that might be closer to the real value of the commodity.

BROOKE GLADSTONE: And you say that that is exactly what's going on in scientific journals, and the "curse" is simply the curse of publishing wrong information.

DR. JOHN IOANNIDIS: It's the curse of publishing exaggerated, and possibly sometimes completely wrong, information. Currently, in many scientific fields, there are lots of research teams generating results on the same research question. Their average results, if you have 20 or 30 or 50 teams, probably reflect the truth.
However, if only the most striking results eventually end up being published, especially in the most high-visibility journals, these selected results are likely to suffer from a similar winner's curse.

BROOKE GLADSTONE: You've also found a bias towards publishing positive results, right? Do you have an example of that?

DR. JOHN IOANNIDIS: There are several studies in the literature suggesting that when you have studies that have been equally well designed and equally well conducted, the ones with positive results will be published far more frequently, or they will be published far more quickly.
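The statistical mechanism Ioannidis describes can be sketched with a small simulation (a hypothetical illustration, not part of the interview): many teams study the same question, each getting the true effect plus sampling noise. Averaging all the teams lands near the truth, while publishing only the single most striking result systematically overshoots it. The effect size and noise level below are made-up numbers chosen only to demonstrate the point.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # hypothetical true effect size of the treatment
NOISE_SD = 0.5      # per-study sampling noise (assumed for illustration)
N_TEAMS = 30        # independent teams working on the same question

# Each team's estimate = true effect + random sampling error.
estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_TEAMS)]

# Pooling every team's result approximates the truth...
average = statistics.mean(estimates)

# ...but if only the most striking result is published,
# the "winner" overstates the effect.
published = max(estimates)

print(f"true effect:          {TRUE_EFFECT:.2f}")
print(f"average of all teams: {average:.2f}")
print(f"'winning' result:     {published:.2f}")
```

Because the published figure is the maximum over many noisy draws, it exceeds both the pooled average and the true effect, which is exactly the selection bias the interview is describing.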
For example, there's an empirical study that looked at studies on antidepressants. About half of them had found positive results in favor of the drugs, and roughly the other half found negative results: no effectiveness, the drug didn't seem to make a difference compared to placebo. However, the published literature on these drugs consists entirely of positive studies.

BROOKE GLADSTONE: So the obvious question is, why? Is it just that scientific journals are subject to the same commercial pressures as all other journals, and they just have to move paper?

DR. JOHN IOANNIDIS: I think that this is part of the truth. There's a lot of competition, and any journal is interested in publishing research that does seem to matter and does seem to make a difference, and that other journals have not published.

BROOKE GLADSTONE: What's the overall impact of all of these exaggerated scientific stories in the journals?

DR. JOHN IOANNIDIS: The impact could be very substantial. For example, if we're talking about research that deals with whether treatments are effective or not, a distorted literature means that we believe some treatments are worth it and effective when, in fact, they're not, or that they are more worthwhile and more effective than they actually are.
It could lead the whole field of researchers down the wrong path. One could have what we call "herding," where someone publishes a paper in a major journal and then everybody tries to work in that field and find similar results: so as to be trendy, to attract funding, and to seem to be working on research that is attractive because it was published in that major journal. So why not me as well?

BROOKE GLADSTONE: So what's the fix?

DR. JOHN IOANNIDIS: Well, you don't publish just the results of one team that came up with a very spectacular result. You publish the results of that team together with the results of another five or ten or twenty, and sometimes even more, teams working on the same question.
And we could ask journals to be more interested in publishing results that are negative or that refute [LAUGHS] original observations that seem to have been exaggerated.

BROOKE GLADSTONE: You've done some really spectacular research that, as a matter of fact, has been accepted by scientific journals. And The Economist ended its piece about you with an intriguing question for you, which is, now that your latest work has been accepted by a journal, is that reason to doubt it?
DR. JOHN IOANNIDIS: I think that all of my research is open to criticism and refutation. [LAUGHTER] And I think that this is just part of the process of science. We make some hypotheses, we generate some data, and these are open to verification or refutation. So, absolutely.

BROOKE GLADSTONE: Thank you very much.

DR. JOHN IOANNIDIS: Thank you.

BROOKE GLADSTONE: John Ioannidis is an epidemiologist at the Ioannina School of Medicine in Greece.