Monday, April 17, 2017

If lack of reproducibility is a crisis, so is lack of producibility


The accepted practice is instead to adjust the model so that it continues to agree with the lack of empirical support.

This very Zen statement is part of a commentary by theoretical particle physicist Sabine Hossenfelder, in Nature, no less (so it must be fashionable, if not also true), who writes candidly about the crisis in her field (and its neighbours: astrophysics and cosmology): Science needs reason to be trusted [Caution: paywall]. She calls it a crisis of "overproduction" (i.e., abundance) of theories, but I like to think of it as a crisis of "producibility" of experimental data.

In recent years, trust in science has been severely challenged by the reproducibility crisis. This problem has predominantly fallen on the life sciences where, it turns out, many peer-reviewed findings can't be independently reproduced. Attempts to solve this have focused on improving the current measures for statistical reliability and their practical implementation. Changes like this were made to increase scientific objectivity or — more bluntly — to prevent scientists from lying to themselves and each other. They were made to re-establish trust.

The reproducibility crisis is a problem, but at least it's a problem that has been recognized and is being addressed. From where I sit, however, in a research area that can be roughly summarized as the foundations of physics — cosmology, physics beyond the standard model, the foundations of quantum mechanics — I have a front-row seat to a much bigger problem.

I work in theory development. Our task, loosely speaking, is to come up with new — somehow better — explanations for already existing observations, and then make predictions to test these ideas. We have no reproducibility crisis because we have no data to begin with ... [Bold emphasis added]

Here's something that will make your jaw not just drop, but go into a tailspin:

In December 2015, the LHC collaborations CMS and ATLAS presented evidence for a deviation from standard-model physics at approximately 750 GeV resonant mass [2,3]. The excess appeared in the two-photon decay channel and had a low statistical significance. It didn't look like anything anybody had ever predicted. By August 2016, new data had revealed that the excess was merely a statistical fluctuation. But before this happened, high-energy physicists produced more than 600 papers to explain the supposed signal. Many of these papers were published in the field's top journals. None of them describes reality.
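An aside of my own (not part of Hossenfelder's commentary): a quick way to build intuition for how a modest local excess can turn out to be "merely a statistical fluctuation" is to simulate the so-called look-elsewhere effect. The Python sketch below uses entirely made-up numbers (100 mass bins, a flat Poisson background of 50 events per bin) and asks how often a background-only experiment throws up a bump of at least 3 sigma somewhere in the scan.

```python
# Toy simulation of the look-elsewhere effect. All numbers here are
# illustrative assumptions, not taken from the CMS/ATLAS analyses.
import numpy as np

rng = np.random.default_rng(0)

n_bins = 100           # hypothetical number of invariant-mass bins scanned
expected_bkg = 50.0    # assumed flat background count per bin
n_experiments = 1000   # simulated background-only "experiments"

max_local_sigma = np.empty(n_experiments)
for i in range(n_experiments):
    # Draw background-only counts for every bin
    counts = rng.poisson(expected_bkg, n_bins)
    # Local significance of the excess in each bin, in Gaussian sigmas
    z = (counts - expected_bkg) / np.sqrt(expected_bkg)
    # Record the most significant "bump" anywhere in the scan
    max_local_sigma[i] = z.max()

frac = (max_local_sigma >= 3.0).mean()
print(f"Background-only scans with a >= 3-sigma bump somewhere: {frac:.1%}")
```

With these made-up numbers, a sizeable fraction of background-only scans (of the order of ten per cent) contains a 3-sigma bump somewhere, which is why a low-significance excess that "didn't look like anything anybody had ever predicted" can simply evaporate once more data arrive.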

How good are graduate admission interviews, if job interviews are "utterly useless"?


Faculty members at almost all Indian institutions are getting ready to interview tens (if not hundreds) of students for a handful (or a few handfuls) of PhD slots in their departments. A recent NYTimes article urges us to be mindful of the limitations of this format: The Utter Uselessness of Job Interviews by Jason Dana.

I realize there are quite a few differences between the kind of interviews Dana describes in his article and the kind we use. For example, his "experimental" interviews were (probably) unstructured, while we may be using something more structured [such as probing candidates specifically in the areas / subfields they say they are strong in]. Also, given the overwhelmingly large number of candidates compared to the number of available slots, there's usually a pre-screening exercise that relies on previous academic record, research experience, scores / ranks in entrance exams, etc.

And yet, this article reminds us of some of the pitfalls of the interview process.