I have read and responded to a lot of blog posts recently on topics in the history and philosophy of science. Many of these concern themselves with the question of the “truth” or “falsity” of a given theory.

My take on this is that “truth” and “falsity” are concepts beloved of philosophers, which are not seen as very useful criteria by scientists themselves. To avoid filling up other people’s blogs with long replies, I’ve decided to do one of my own on this topic.

I’ve read, in various philosophical papers and the like, claims that all sorts of theories – Galileo’s parabolic trajectories, Newtonian mechanics and even thermodynamics – are “false”. Now, it’s true that, strictly speaking, we cannot use the constant-acceleration equations because “g” varies with height, and that Newtonian mechanics has been superseded by special relativity, which in turn must give way to general relativity in the presence of mass. (I don’t know what the complaint about thermodynamics was, although I have my own criticisms of it, probably best kept for another post.) But there again, Newtonian mechanics is perfectly adequate for a wide range of uses, including calculating the paths of most heavenly bodies and spacecraft; and the path of a projectile is as near as dammit a parabola anyway.

“Near as dammit” sounds like a very unscientific term, but it is highly relevant here, since all comparisons between theoretical and observed quantities must be made in terms of experimental uncertainty. If you cannot measure the difference between a parabolic and an elliptical trajectory, you might as well accept the parabola.
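Just to put a number on “near as dammit”: here is a minimal sketch in Python (the 100 m/s shot at 45°, the step size and the neglect of air resistance are all my own illustrative assumptions, not anyone’s data) comparing Galileo’s constant-g parabola with the trajectory you get when gravity falls off as the inverse square of the distance from the Earth’s centre:

```python
import math

G0 = 9.81          # surface gravity, m/s^2
R_EARTH = 6.371e6  # mean Earth radius, m

def shot_range(v0, angle_deg, variable_g, dt=1e-3):
    """Drag-free range by stepwise integration, in metres."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        # inverse-square gravity gives the "elliptical" trajectory;
        # constant g gives Galileo's parabola
        g = G0 * (R_EARTH / (R_EARTH + y)) ** 2 if variable_g else G0
        vy -= g * dt
        y_new = y + vy * dt
        if y_new < 0.0:
            return x + vx * dt * y / (y - y_new)  # interpolate to impact
        x, y = x + vx * dt, y_new

parabola = shot_range(100.0, 45.0, variable_g=False)
ellipse = shot_range(100.0, 45.0, variable_g=True)
print(f"parabola: {parabola:.2f} m   inverse-square: {ellipse:.2f} m")
print(f"difference: {100 * abs(ellipse - parabola):.1f} cm")
```

Over a shot of roughly a kilometre, the two trajectories land within a few centimetres of each other – hopelessly below the uncertainty of any realistic measurement, which would in any case be dominated by air resistance.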

In fact most theory choice – at least in the physical sciences – boils down to comparing numerical predictions with observations. Karl Popper used, as his example of how theories should be tested, Eddington’s 1919 eclipse experiment, which is widely hailed as confirming general relativity and falsifying Newtonian mechanics. But in fact the mean value for the bending of light at the surface of the sun matched the prediction of neither theory: it was nearer to the GR prediction, but still more than one standard deviation away. Popper does not actually tell us how to falsify a theory, clearly thinking it as obvious and clear-cut as observing the colours of swans. Even if you took into account his rather begrudging acceptance of uncertainty as something that “it is the custom of physicists to estimate” [LSD, 1959, p125], which in effect leads him to propose a 1-sigma margin of error, you would still have to conclude that both theories are false – yet Popper clearly thinks the experiment vindicated GR.
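To see just how awkward this is, try the arithmetic (the 1.75 and 0.87 arcsecond predictions are the standard ones; the measured value and standard deviation are the often-quoted figures from the Sobral plates, so treat them as illustrative rather than definitive):

```python
# Predictions are standard: 1.75 arcsec for GR, half that for "Newtonian"
# light bending. The measured value is the often-quoted Sobral figure;
# treat it as illustrative rather than definitive.
measured, sigma = 1.98, 0.12   # arcseconds
predictions = {"general relativity": 1.75, "Newtonian": 0.87}

for name, pred in predictions.items():
    z = abs(measured - pred) / sigma
    print(f"{name}: predicted {pred:.2f} arcsec, off by {z:.1f} sigma")
```

“Nearer to GR” drops straight out of the numbers; “confirms GR” does not, since even the better theory sits nearly two standard deviations from the data.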

In fact – and this is the important bit – it is not realistic to expect an observed quantity to exactly match the prediction of any theory, “true” or not; but at the same time the discrepancy, even when compared with the experimental uncertainty and converted into a probability, cannot justify rejection of the theory either, without the imposition of some additional constraint such as a 3-sigma or 5-sigma threshold. This is particularly a problem for Popper, since he seemed to want the process to be entirely logical, without any arbitrary constraints added in. (More on this when I get back to my MSc dissertation!)
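As a sketch of the problem (assuming Gaussian measurement errors, which is itself a modelling choice): converting a sigma-level discrepancy into a probability is routine, but the probability on its own rejects nothing until a cut-off is imposed from outside:

```python
import math

def two_sided_p(z):
    """Chance of a discrepancy at least z sigma, assuming Gaussian errors."""
    return math.erfc(z / math.sqrt(2))

for z in (1.0, 1.9, 3.0, 5.0):
    print(f"{z:.1f} sigma -> p = {two_sided_p(z):.2e}")

# The probability by itself rejects nothing; a cut-off must be imposed.
threshold = 3.0    # a convention, not a logical consequence of the data
z_observed = 1.9   # roughly the GR discrepancy above
verdict = "reject" if z_observed > threshold else "retain"
print(f"at {threshold} sigma, a {z_observed}-sigma discrepancy: {verdict}")
```

The choice of 3 rather than 5 (or 1, or 7) is a convention of the scientific community, not a deliverance of logic – which is precisely Popper’s difficulty.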

Can we perhaps partition theories into those which make specific numerical predictions and those which do not? Then, if we decide that there are problems with the former, we can concentrate on the latter instead. I am thinking that this latter category would include all the important stuff – the “generic” theories such as “electric charge is quantised” (as opposed to “the charge on the electron is 1.602 × 10⁻¹⁹ coulombs”) or “the speed of light is the same in all inertial reference frames”. But these theories have still got to be falsifiable, and how are we going to do that? To directly falsify the Principle of the Constancy of the Velocity of Light we would have to measure the speed in various reference frames and see if it varies – but we are back in the business of comparing numerical values. Likewise with the quantisation of charge – and having had a go at the Millikan experiment, even with modern equipment, I can assure you that you are never going to get a “black and white” result there.
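For a feel of why it is not black and white, here is a toy simulation (entirely synthetic data: droplet charges that really are exact integer multiples of e, blurred by a hypothetical 3% measurement error of my own choosing):

```python
import random

E = 1.602e-19  # elementary charge, coulombs
random.seed(1)

# Synthetic "measurements": each droplet carries an exact integer multiple
# of e, blurred by a hypothetical 3% relative measurement error.
charges = [random.randint(1, 8) * E * random.gauss(1.0, 0.03)
           for _ in range(50)]

# Distance from the nearest integer multiple of e, in units of e.
residuals = [abs(q / E - round(q / E)) for q in charges]
print(f"mean residual:  {sum(residuals) / len(residuals):.3f} e")
print(f"worst residual: {max(residuals):.3f} e")
```

Even with quantisation deliberately built into the data, the residuals of the more highly charged droplets creep towards half a unit, and the verdict becomes a statistical judgement rather than a clean yes-or-no observation.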

Rather than being concerned with whether theories are “true” or not, I think most scientists would prefer to think about how useful they are, and reserve the right to choose an appropriate one according to the problem at hand, as outlined above. For that reason, I prefer the term “model”, since this makes it clearer that we can have several of these in our repository, and use whichever one seems most appropriate.

Let’s have more concrete examples. Having spent many years calculating magnetic fields, it has always amused me that, whenever there is a discussion between physicists about magnetic fields, they start furiously drawing lines on a whiteboard, or waving their arms about if no drawing medium is available. The lines they draw are “flux lines”. These are equipotentials of the magnetic vector potential, but are better known as the paths that an isolated pole would follow, if isolated poles, or indeed any poles, existed. I have never met anyone who believes poles exist, but nevertheless flux lines are seen as a very useful way of visualising the field. (They are also, of course, the lines with which iron filings will align themselves in a magnetic field).
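For anyone who would rather plot than arm-wave, here is a minimal sketch (arbitrary units, and the pair of antiparallel line currents is just my own toy configuration): in two dimensions the flux lines are precisely the contours of the vector potential component A_z, so a contour plot reproduces the whiteboard picture:

```python
import numpy as np
import matplotlib.pyplot as plt

# Two antiparallel line currents along z (arbitrary units). With
# A = A_z(x, y) z-hat, B = curl A points along the contours of A_z,
# so the contour plot below *is* the flux-line picture.
x, y = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))

def a_z(x0, y0, sign):
    """A_z of a line current at (x0, y0): proportional to -ln r."""
    return -sign * np.log(np.hypot(x - x0, y - y0))

A = a_z(-0.5, 0.0, +1) + a_z(+0.5, 0.0, -1)

plt.contour(x, y, A, levels=21)   # equally spaced levels of A_z
plt.gca().set_aspect("equal")
plt.title("Flux lines as contours of the vector potential $A_z$")
plt.show()
```

Closely packed lines mean a strong field, just as in the hand-drawn sketches.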

But do flux lines really exist? Well, take a look at this quote from a modern astrophysics textbook: “If an external effect enforces the bulk motion of plasma perpendicular to the field lines … then the moving medium avoids crossing the field lines by dragging them along with it”. This suggests that the author believed the lines had some actual physical existence (as Faraday in fact did).

But of course the lines themselves can’t exist, as they are just contours, and will move if we change the units. And they are not conserved in any way when we change the magnetic environment, such as when a superconductor goes below its critical temperature. So maybe we should view flux as some sort of fluid of varying density? But then, what is this fluid made of? It sounds like something out of 17th century Cartesian philosophy. I don’t think you would find a modern scientist who believed that magnetic flux had an actual physical existence – yet it is a useful model.

What I am trying to get at is that scientific theories, or models, attempt to explain the behaviour of the natural world, and they do so by means of various constructions that may or may not have some correspondence with reality. There are certain things that I guess we would all go along with, such as the existence of atoms; but can we honestly say that we can describe what an atom is – especially if we are not allowed to use macroscopic analogues (such as hard spheres orbiting one another), which we know are not strictly applicable?

When thinking about this topic I often find myself wondering what would happen if we were to make contact with aliens who had reached the same level of scientific development as us. Would they have the same concepts as us – atoms, molecules, electrons, quarks, mass, energy, entropy etc? Or would they have a model based on different concepts but which explains reality equally well? I can’t put my hand on my heart and say they would necessarily evolve exactly the same theories as us. They might well have an alternative theory that approaches the “hidden reality” just as well as ours – in other words, equally approximately.

I cannot resist putting in one more example of a theory which is useful but which few would claim to be true. This is the hole theory of solid-state electronics, which was very useful in explaining how the first P-N-P transistors worked and is, I believe, still used. Holes are absences of electrons; they are positively charged, and can move around the lattice in the same way as free electrons; but few would say they exist in the same way that electrons exist.