Links to interviews with people at the forefront of the fight against bad science:
Over at Vox.com, Julia Belluz has an interview with a leader in the fight against bad science: John Ioannidis has dedicated his life to quantifying how science is broken [see also Ioannidis's recent paper: How to Make More Published Research True]. An excerpt from the interview:
Julia Belluz: How do you guard against bad science?
John Ioannidis: We need scientists to very specifically be able to filter [bad] studies. We need better peer review at multiple levels. Currently we have peer review done by a couple of people who get the paper and maybe they spend a couple of hours on it. Usually they cannot analyze the data because the data are not available – well, even if they were, they would not have time to do that. We need to find ways to improve the peer review process and think about new ways of peer review.
Recently there’s increasing emphasis on trying to have post-publication review [see below for an interview with the founders of PubPeer]. Once a paper is published, you can comment on it, raise questions or concerns. But most of these efforts don’t have an incentive structure in place that would help them take off. There’s also no incentive for scientists or other stakeholders to make a very thorough and critical review of a study, to try to reproduce it, or to probe systematically and spend real effort on re-analysis. We need to find ways people would be rewarded for this type of reproducibility or bias checks.
Julia Belluz: Doesn’t this require basically restructuring the whole system of science?
John Ioannidis: These are open questions, I don’t have the answers. Currently we have a couple of time points where studies get reviewed. Some studies get reviewed at a funding level, and the review may not be very scientific. Many focus on the promises of significance here, and scientists have to overpromise. There’s review at the stage of the manuscript, which seems to be pretty suboptimal. So if you think about where should we intervene, maybe it should be in designing and choosing study questions and designs, and the ways that these research questions should be addressed, maybe even guiding research — promoting team science, large collaborative studies rather than single investigators with independent studies — all the way to the post-publication peer review.
Julia Belluz: If you were made science czar, what would you fix first?
John Ioannidis: [...] Maybe what we need to change is the incentive and reward system in a way that would reward the best methods and practices. Currently we reward the wrong things: people who submit grant proposals and publish papers that make extravagant claims. That’s not what science is about. If we align our incentives and rewards in a way that gives credibility to good methods and science, maybe this is the way to make progress.
- Another must-read is Julia Belluz's interview with the PubPeer founders:
Why you can't always believe what you read in scientific journals. PubPeer is a website/platform for promoting post-publication review, discussion, and scrutiny. Here's a sample from the interview, where the founders comment on the craze for publishing in "high impact" journals and take a swipe at the editors of these journals:
JB: There's been a lot of talk in recent years about how broken science is, particularly the peer-review process. What are the bigger systemic changes that need to happen in order to fix it?
PP: The biggest problem is the pressure to chase after "metrics" — indirect measures of scientific success. The most important metric is publication in top journals, which determines jobs, grants, everything. This distorts the scientific process toward mostly illusory "breakthroughs" and "high-impact research" at the expense of careful work. Scientists now find themselves ruled by often-incompetent kingmakers — the editors of the top journals — who effectively decide their futures and make scientific fashion.
PubPeer is helping scientists retake control of their lives, work, and careers by providing a collective judgment that is independent of and ultimately more important than acceptance by the top journals. That judgment is the expert opinion of your peers. We are also big fans of open-access publishing and the use of pre-print servers such as ArXiv or the newer bioRxiv; we believe these will also loosen the stranglehold of the top journals on research.