Two of the lectures I have attended recently as part of my MSc course in History of Science, Technology and Medicine have featured the works of Harry Collins. Collins was one of the pioneers of the “Sociology of Scientific Knowledge” movement that sprang up in the 1970s. The idea was to examine social factors that influence the pursuit of science. Many of the key themes are contained in his 1985 book “Changing Order”.

Among these themes, the “central argument” – according to Collins himself – is what he calls the Experimenters’ Regress. This could be summarised as follows: if you build an experiment to detect some hitherto unobserved phenomenon and you get a negative result, it is either because the phenomenon does not exist or because your equipment is incapable of detecting it – and, crucially, you have no independent way of telling which. I will quote his description of how this applies to a specific experiment – the search for gravitational radiation:

“What the correct outcome is depends on whether there are gravity waves hitting the Earth in detectable fluxes. To find this out we must build a good gravity wave detector and have a look. But we won’t know if we have built a good detector until we have tried it and obtained the correct outcome. But we don’t know what the correct outcome is until … and so on ad infinitum.” [Changing Order, p. 84]

There are currently many, many experiments in progress to detect hitherto unobserved phenomena: several looking for gravitational waves, several looking for dark matter, and a plethora of particle physics experiments. I would be very surprised if Collins or his successors could produce a single physicist working in these fields who sees this as a regress. To explain that statement, let me summarise how these experiments actually work.

Apparatus for detecting such phenomena is very complicated and difficult to construct, because the phenomena themselves are difficult to detect. One builds one’s apparatus and runs the experiment, then examines the output. Most of this will consist of noise of known origin. It is possible to estimate the noise, and sometimes to actually measure it in the absence of any signal, by using various calibration methods that mimic the actual experiment as closely as possible – for the gravity wave experiment this might involve creating a small mechanical vibration. From this estimate, together with one’s estimate of the experimental uncertainty, one can then announce a result. But at best this will only give an interval in which the quantity being measured is thought to lie with a given level of probability – upper and lower bounds, of a sort. Often, however, the lower bound will be zero, which means that the phenomenon may not exist.

In this latter case, what the scientist is saying is this: “I have looked for the phenomenon in question, and find that it is no greater than X, with Y per cent confidence”. This is the cue for another experiment, with perhaps greater sensitivity, to search again. The phenomenon can only be said to have been detected when the probability of a zero value is below a certain (usually very low) value. One does not speak of a “correct outcome”, or in terms of binary states such as “detected/not detected”; it is all a matter of degree. So, no regress – just a continuing spiral of more and more accurate experiments with lower and lower uncertainties. The result of each experiment is valid on its own terms – it does not depend on the result of another experiment further along the spiral.
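
To make this concrete, here is a minimal sketch – in Python, with invented numbers, not the analysis of any real experiment – of how a measurement whose interval reaches down to zero gets reported as an upper limit:

```python
# A minimal sketch (invented numbers, not any real experiment's analysis)
# of how a measurement with Gaussian noise is quoted as an upper limit.
from statistics import NormalDist

measured_signal = 0.4   # hypothetical measured amplitude, arbitrary units
sigma = 1.0             # uncertainty estimated from noise/calibration runs
confidence = 0.90       # the "Y per cent" confidence level

# One-sided Gaussian upper limit: the largest true signal that would
# still be reasonably consistent with a measurement this small.
upper_limit = measured_signal + NormalDist().inv_cdf(confidence) * sigma

print(f"signal is no greater than {upper_limit:.2f} (90% CL)")
# The interval reaches down to zero, so the phenomenon may not exist --
# which is the cue for a more sensitive experiment, not a regress.
```

The output of the experiment, in other words, is an interval and a confidence level, not a verdict.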

The language Collins uses is indicative, to me, of the tendency of people who have not actually worked in science to speak of qualitative rather than quantitative outcomes. No experiment whose end-product is a measurement (and I cannot think of a modern physics experiment that does not fit into that category) can unequivocally confirm or refute any theory; one can only speak of the probability that the theory is correct. (I shall have more to say about that in my MSc dissertation!)
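
To put a number on that remark – a toy sketch of my own using Bayes’ theorem and invented probabilities, not anything Collins discusses – an experimental result updates one’s degree of belief in a theory, but a single result never drives it to exactly zero or one:

```python
# Toy Bayesian update (invented probabilities, my own illustration):
# a measurement shifts one's degree of belief in a theory, but a single
# result never drives it to exactly 0 or 1.
prior = 0.5            # belief in the theory before the experiment
p_data_if_true = 0.8   # chance of this result if the theory is correct
p_data_if_false = 0.3  # chance of the same result if it is not

evidence = prior * p_data_if_true + (1 - prior) * p_data_if_false
posterior = prior * p_data_if_true / evidence

print(f"belief after the experiment: {posterior:.2f}")  # ~0.73, not 1.0
```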

Critics may object at this point that the experiment Collins describes in the previous chapter of the book – the construction of a Transversely Excited Atmospheric Pressure laser, or TEA laser – does indeed have a “correct outcome”: a yardstick by which one can assess its performance. He describes this yardstick as “the ability of the laser to vaporise concrete, or whatever”. Let us look at that in some detail.

Leaving aside the “or whatever”, presumably if one were really interested in having a yardstick with which to assess the laser’s performance, that yardstick would have to be more precise: the laser could be required to make a hole in a sheet of concrete of a certain thickness, or a sheet of steel of a certain thickness, or perhaps to completely vaporise a small sample of standard size. But then there might be borderline cases where it wasn’t absolutely clear whether the concrete had vaporised; perhaps the hole might not go right through. Wouldn’t it be better just to measure the energy in the pulse or, since this might be a more relevant statistic, its power? Suppose, then, that we set a threshold of Z watts and have a laser that sits quite near the borderline. If we fire it repeatedly, we will get a distribution of power values, some above the threshold and some below. Does this laser pass the test? We may get a gut feeling that it is above or below par, but because of the spread of values we cannot say with absolute certainty that it is one thing or the other. All we can say is that there is a certain probability of it being above the threshold.
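
Here is a toy simulation – in Python, with invented numbers, not a real laser test – of that borderline case:

```python
# A toy simulation (invented numbers, not a real laser test) of a
# borderline laser fired repeatedly against a power threshold.
import random
import statistics

random.seed(1)
threshold_watts = 100.0                     # the hypothetical "Z watts"
# Shot-to-shot scatter: mean power just above threshold, sd of 5 W.
pulses = [random.gauss(101.0, 5.0) for _ in range(200)]

above = sum(p > threshold_watts for p in pulses)
mean = statistics.mean(pulses)
sem = statistics.stdev(pulses) / len(pulses) ** 0.5

print(f"{above}/{len(pulses)} pulses exceeded the threshold")
print(f"mean power = {mean:.1f} +/- {sem:.1f} W")
# Whether this laser "passes" depends on the confidence level we demand;
# no finite number of firings settles it with absolute certainty.
```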

What I have been trying to establish in this example is that even where we appear to have a clear, “qualitative” test of whether a piece of apparatus is working or not, that qualitative test – in this case the quality of being able to vaporise concrete – will always turn out to have a quantitative one hiding behind it. However many lasers pass the test with flying colours, there is always the possibility of a borderline case that no test can settle. (And of course the criterion used in the gravitational experiment – whether the waves are present in detectable fluxes – is quite clearly quantitative, since “detectable” is meaningless unless we quote a threshold.) Ultimately we must abandon our idea of binary states – working/not working, detected/not detected – and learn to talk in terms of probabilities.

This is, of course, only part of a wider concern I have about the way the various “science studies” (sociology of science, history of science, philosophy of science, etc.) are conducted. Collins’ qualitative vocabulary is just one of several indicators one finds in his book that suggest he is an “outsider” where science is concerned. Granted, he seems to have firmly “embedded” himself in the social groups he studied, at least if the TEA laser study is anything to go by; he clearly spent a lot of time with the scientists, and gained their trust by helping out with the experiment. But he is still an outsider looking in. No doubt it is important, when carrying out such studies, to have had training in sociology; but might it not also be reasonable to expect some background in science too?

We see the same in other disciplines – particularly in the history of science, which seems to be dominated by people who have been trained as historians but have not studied much science. This perhaps explains the preponderance of “externalist” historical studies – i.e. ones which concentrate on the contexts in which science is done, but sometimes miss out the science itself – over “internalist” histories which follow the development of the science. Both are, of course, important, and in an ideal world we would have equal amounts of each. But there don’t seem to be that many people trained in science to a basic (first degree) level who go on to become historians of science.

But then we don’t need jacks-of-all-trades when instead we can have collaborations! Most of the studies in History of Science are written by one or two authors. But in the sciences it is normal for large numbers of experimentalists, theorists, engineers, technicians, programmers and so on to collaborate on projects, each contributing expertise from their own field – so why not in HoS too, with scientists advising the historians on “externalist” studies, and scientists driving the “internalist” research with help from historians?