Start with the composite photo above — left to right, the galaxy Messier 82 (M82) taken by the Hubble Space Telescope; the tip of a pine branch in Upstate NY taken by me; and a putative Higgs event from the Large Hadron Collider modelled by the CMS collaboration at CERN in Geneva, Switzerland. Now ask yourself this: Could you explain to anyone else what is happening in each of these?… Yeah, I couldn’t either, though it wouldn’t stop me from trying! But really explain? No, not at the galactic or human or sub-nuclear scales. And yet as a species we know an astonishing amount about all three. How can that be? The answer is that there are two kinds of people involved, and we have learned to trust them both — scientists who dream up experiments, create measuring devices, and carefully analyze the results; and philosophers who help the first think about what it means to know something with confidence. More about one member of the second group later.
In the early-morning hours of July 4th, I sat in front of my computer screen and watched the webcast from CERN as Joe Incandela of CMS uttered the magical words “five sigma” to the audience of applauding physicists. It was a very stringent statistical threshold indeed for confirmation of the discovery of the Higgs boson, or at least a Higgs boson. The what-next nuances will presumably be with us for years.
Over the next week, it felt like Higgsteria died down quickly, but then I wondered, how quickly? The best real-time pulse to take would presumably have been Twitter, but I didn’t want to know badly enough to learn how to use their API. So I went a level up in granularity and used Google Trends to see the past 30 days of searches on the phrase “Higgs boson”:
The right-hand side looked like a pretty good exponential, so I used the numbers for fixed scaling from the pull-down menu on the bottom of the Google Trends page to see how good the fit was. I didn’t replicate CERN’s 5-sigma, but it is still a very impressive correlation coefficient at R=0.972. And the exponent implies a half-life of 2+ days, a bit long for the widespread sound-bite mentality, but not implausible:
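The fit itself is simple to reproduce: an exponential decay is a straight line in log space, so a log-linear regression gives the decay rate, and the half-life follows as ln(2) divided by that rate. Here is a minimal sketch in Python, using illustrative made-up interest values (the real Google Trends numbers aren’t reproduced here):

```python
import numpy as np

# Hypothetical daily search-interest values for the week after the
# announcement -- illustrative numbers only, not the actual Trends data.
days = np.arange(7)
interest = np.array([100.0, 74.0, 55.0, 41.0, 30.0, 22.0, 16.0])

# An exponential decay interest = A * exp(b * day) is linear in log space:
# log(interest) = log(A) + b * day, so fit a degree-1 polynomial.
b, log_a = np.polyfit(days, np.log(interest), 1)

# Correlation coefficient of the log-linear fit (negative, since
# log(interest) falls with time; its magnitude is the quoted R).
r = np.corrcoef(days, np.log(interest))[0, 1]

# Interest halves when exp(b * t) = 1/2, i.e. t = ln(2) / (-b).
half_life = np.log(2) / -b

print(f"decay rate b = {b:.3f} per day")
print(f"|R| = {abs(r):.3f}")
print(f"half-life = {half_life:.2f} days")
```

With decay of roughly 26% per day, this toy series yields a half-life a little over two days, in the same ballpark as the fit described above.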
Even though this little analysis is a legitimate mathematical and statistical exercise, I wouldn’t call it “science.” It doesn’t say anything about the people who were doing the searches that Google counted, and I certainly wasn’t testing any hypothesis. There is no equivalent to the Standard Model of particle physics, for example, that predicts how each searcher must behave. PageRank works well for the statistical masses, but there is no Page’s 2nd Law. Sorry, Larry.
This distinction is important because it is at the heart of a blog post from ‘The Stone’, an online commentary section of The Opinion Pages of The New York Times: How Reliable Are the Social Sciences?, by Gary Gutting, a philosopher from Notre Dame. As he points out clearly and compellingly, just because something appears to have been done scientifically doesn’t make it science (as I demurred above). Instead, at least two levels of consideration should come into play. Firstly, we need to ask how results evolve in a given field of science:
Where does the result lie on the continuum from preliminary studies, designed to suggest further directions of research, to maximally supported conclusions of the science? In physics, for example, there is the difference between early calculations positing the Higgs boson and what we hope will soon be the final experimental proof that it actually exists. Scientists working in a discipline generally have a good sense of where a given piece of work stands in their discipline. But, as I have pointed out for the case of biomedical research, popular reports often do not make clear the limited value of a journalistically exciting result. Good headlines can make for bad reporting.
Secondly, we need to consider the standards of one science compared to others:
The core natural sciences (e.g., physics, chemistry, biology) are so well established that we readily accept their best-supported conclusions as definitive. (No one, for example, was concerned about the validity of the fundamental physics on which our space program was based.) Even the best-developed social sciences like economics have nothing like this status.
Those paragraphs were published on May 17th, and now just a bit less than two months later we have that “final experimental proof,” even if it takes years to figure out exactly what was discovered. More importantly, the extensive coverage and popular response showed that it is possible to have simultaneously both journalistic excitement and stringent experiment — after all, 5-sigma is as close to proof as any lab science is ever likely to achieve. But such exquisitely controlled experiments come at huge cost — billions of dollars, thousands of people, decades of time!
As Gutting emphasizes, it is not just that it would cost us so much more to attempt some sort of “proof” with issues in the social sciences. Rather, over the past few decades, as we have begun to understand the primacy of complex adaptive systems and emergence in social patterns, it is evident that a traditional (linear) scientific approach is fatally flawed. It is literally not possible because human behavior and social values cannot accommodate controlled experiment and reliable prediction:
While the physical sciences produce many detailed and precise predictions, the social sciences do not. The reason is that such predictions almost always require randomized controlled experiments, which are seldom possible when people are involved. For one thing, we are too complex: our behavior depends on an enormous number of tightly interconnected variables that are extraordinarily difficult to distinguish and study separately. Also, moral considerations forbid manipulating humans the way we do inanimate objects. As a result, most social science research falls far short of the natural sciences’ standard of controlled experiments.
Said differently, thank goodness the Higgs boson isn’t alive, or we would probably never have found it.