On the Guardian’s history of science blog, The H Word, someone recently posted the following statement:

“the first step in the scientific method is hypothesis”.

Frustratingly, my attempt to respond to this remark was thwarted by the fact that comments on the topic in question had closed shortly after she posted it. So I am going to say what I would have said here instead.

I’ve often heard this claim, but cannot understand why it is so widely accepted. Indeed, it seems to be rolled out almost like a mantra in some circles. Let’s go through it carefully.

There are two main aspects of it that I would like to investigate:

(1) the suggestion that where there is a scientific experiment, there is always at least one hypothesis.

(2) the idea that, if there is a hypothesis, it should be regarded as a part of the method.

I’ll tackle the second one first, because it is easier. Now, to me, the word method suggests “how one does something”. It has clear practical connotations. When first-year students in our teaching lab at UCL write up their experiments, the “Method” section typically starts with something like “the dewar was placed on the scale and filled with liquid nitrogen, and its weight recorded at 30-second intervals for 5 minutes …” They do also record the background theory and any assumptions they have made, of course, but this is done under a separate heading, “Theory”. And indeed, in everyday language, “method” means more or less the same. If someone is trying to do something and I ask them what method they are going to use, I expect to hear a list of actions to be performed, not hypotheses.

If I am asked what the scientific method is, I will usually respond by saying something like “well, it means being methodical”. This sounds tautologous, but it isn’t, because methodical has connotations of doing something in a logical, structured way. If a friend has lost something, is looking for it in a hurry and is clearly in a bit of a panic, pulling out drawers and opening cupboards at random, one might say “Now, let’s be methodical about this”, and then suggest going through all the drawers in a desk one at a time, then going through all the cupboards in the room, then looking under the sofa, etc etc. In such a situation one might hear someone say, instead, “Now, let’s be scientific about this”. It means the same – doing things in a structured, controlled way.

This representation of the scientific method is not, of course, restricted to science. As I’ve already hinted, it can be applied to any activity that needs to be done in a methodical, logical way. When I was a telecoms maintenance engineer I often had to find faults in complicated systems composed of many separate units. The methodical, scientific way of going about this is to swap these units for known-good ones one at a time and (here’s the important bit) to put the original unit back if it turns out not to be the one at fault.
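To make that concrete, here is a minimal sketch of the substitution strategy in Python; the system, the spares and the test function are hypothetical stand-ins, not anything from an actual exchange:

```python
def find_faulty_unit(system, spares, works):
    """Swap each unit for a known-good spare, one at a time, and test.
    If the system still fails, put the original unit back before moving
    on, so that only one thing is ever varied at once."""
    for name in list(system):
        original = system[name]
        system[name] = spares[name]   # swap in the known-good unit
        if works(system):
            return name               # this was the faulty unit
        system[name] = original      # restore it and try the next
    return None                       # no single unit is to blame
```

The restore step is the “important bit” above: without it, by the time the system started working you would no longer know which single change had fixed it.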

What we are doing here, of course, is varying one of many parameters (the identity of the unit in question) while keeping all other parameters constant. That is, to me, the nub of the matter, and it is what makes the method scientific. It is exactly the same process as comparing two fertilisers by applying them to seeds for which all other factors (such as type of soil, amount of water, amount of sunlight, temperature etc) are identical; or testing different drugs on identical rats fed identical diets, etc etc.
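The fertiliser comparison has exactly the same shape. A hedged sketch, with the growth measurement simulated because only the structure of the test matters here:

```python
import random

random.seed(0)  # purely so the simulated numbers repeat run to run

def grow(fertiliser, soil, water_ml, sunlight_h, temp_c):
    """Stand-in for the real experiment: returns a growth figure in cm.
    The value is simulated; the point is that every factor is named
    explicitly, so we can see exactly what is being held constant."""
    return round(random.uniform(5.0, 15.0), 1)

# Every factor except the fertiliser is held constant.
fixed = dict(soil="loam", water_ml=250, sunlight_h=8, temp_c=20)

for fertiliser in ("A", "B"):
    print(fertiliser, grow(fertiliser, **fixed))
```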

If there is any hypothesis operating here at all, it is that changing the unit in question (the independent variable) will influence the performance of the system (the dependent variable). Or one could equally say that the hypothesis is that it won’t. There are, after all, only two possibilities. But I can’t honestly say that this hypothesis is necessary. One might, after all, say that one is changing the unit in question to see what its effect will be – i.e. without assuming either of the two possible hypotheses mentioned.

In other situations the hypothesis might be a bit more complicated. We might hypothesise that a particular action will produce a particular sort of response, where the response can take more forms than simply “working/not working”. We might, for instance, hypothesise that the rate of growth of our plants will be linearly dependent on the temperature, or that it will increase as the square of the temperature, or that it will decrease with temperature. But I cannot see any of these hypotheses as being an integral part of the method. One might argue that changing the temperature in any way presupposes a hypothesis about the temperature-dependence of growth rate; but it need not, because we might be simply curious to find out whether there is such a dependence or not.
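If one did want to treat those functional forms as rival hypotheses, testing them against data is mechanical. A sketch with invented readings (none of these numbers are real measurements):

```python
import numpy as np

# Invented readings: growth rate at several temperatures, with
# everything else held constant.
temp = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
rate = np.array([2.1, 3.0, 4.2, 5.1, 6.3])

# Each candidate hypothesis is just a functional form to be fitted.
fits = {
    "linear": np.polyfit(temp, rate, 1),     # rate ~ a*T + b
    "quadratic": np.polyfit(temp, rate, 2),  # rate ~ a*T^2 + b*T + c
}

for name, coeffs in fits.items():
    resid = rate - np.polyval(coeffs, temp)
    print(name, "sum of squared residuals:", float(np.sum(resid**2)))
```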

This is where (1) comes in. It has been argued that the Baconian ideal of simply gathering data without making any presuppositions about it is unattainable, because one cannot simply “observe” without having some idea of what one is going to look for, and “having some idea of what to look for” somehow constitutes a hypothesis. Now, it’s true that performing almost any experiment one can imagine these days, especially in a field such as physics, entails making all sorts of assumptions. For instance, when I was investigating the energy spectrum of conversion electrons emitted following beta decay of bismuth-207 nuclei, I was assuming (1) that the digital signal coming out of my analogue-to-digital converter was proportional to the amount of charge flowing into its input in a given time period; (2) that that charge was itself proportional to the amount of light incident on the photocathode of my photomultiplier tube; (3) that that light was all coming from the block of plastic scintillator surrounding the tube, and that all the light from the scintillator was reaching the tube; (4) that the light emanating from the scintillator was produced by an electron from a bismuth nucleus hitting the scintillator, and was equal in energy to that electron. In fact I was probably also assuming a whole lot of other things. But I don’t see those assumptions as being necessary to the method.
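Incidentally, taken together those four assumptions make the whole chain linear in the electron’s energy, so a single calibration constant converts ADC counts into an energy. A minimal sketch; both the constant and the reading below are invented for illustration:

```python
# Chain of assumptions: ADC counts ~ charge ~ light ~ electron energy.
# If each stage is linear, the composition is linear too, and one
# calibration constant k does all the work. Its value here is made up.
k_keV_per_count = 0.35

def electron_energy_keV(adc_counts):
    """Convert a raw ADC reading to an electron energy, given the
    assumed end-to-end linearity of the detector chain."""
    return k_keV_per_count * adc_counts

print(electron_energy_keV(2800))  # 980.0 keV, for this invented reading
```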

Suppose I now insert an aluminium plate between the bismuth and the scintillator. I observe that the signal goes down. I might conclude from this that the aluminium can slow down, or stop, the electrons. Or I might just assume that whatever is going on between the bismuth and the scintillator is affected in some way by the plate, without assuming anything about what that process is. Well, it’s still an assumption, I suppose.

But in fact those who argue that “the first step in the scientific method is hypothesis” have got to show that there was a hypothesis before I even did the experiment, and that is surely not so easy. I might claim that I know nothing of what is going on, and merely inserted the plate to see if it would make any difference, out of idle curiosity. And I’m not sure if idle curiosity requires any hypotheses at all. As the Incredible String Band put it in “No Sleep Blues”,

I mixed my feet with water, just to see what could be seen,
And the water it got dirty, and the feet they got quite clean.

Don’t we often do things “just to see what can be seen”? And in fact, aren’t quite a lot of experiments motivated by just such a sentiment? For instance, the scientists who painstakingly documented the various nuclear transitions that could occur in bismuth-207, and indeed in all the other isotopes of bismuth, and in all other isotopes of all elements, together with measuring branching ratios, particle energies and half-lives, were surely doing so not to prove or disprove any hypothesis, but simply “to see what could be seen”, for the benefit of those who would come later and make use of the information, such as me.

Perhaps people who advance the view that there is always a hypothesis are thinking of Karl Popper’s conception of science as an endless cycle of hypothesis-testing and rejection or corroboration of hypotheses. Now, it’s true that scientific experiments are often carried out in order to test a certain hypothesis, or to choose between two or more competing hypotheses; but it is far more common for experiments to be done purely for the sake of data-gathering. And we didn’t need Thomas Kuhn to tell us that this is how scientists spend most of their time.

Finally, a word about who is saying all this. I spend about equal amounts of my time with scientists and “science studies” people (philosophers, historians and sociologists of science), and it does seem to me that it’s mainly the latter group who talk about the scientific method being theory-laden (and also about there being multiple scientific methods). Who is right? Well, there is no answer to that, and there is no adjudicating body, so clearly each group will go on believing what it believes. But I have to say that another feature of the science studies community is that its members have often done very little actual science. So, given a choice between a definition of scientific method from someone who has actually practised it and one from someone who hasn’t, which are you going to accept? I know which option I’ll go for.
