Nancy Cartwright’s 1983 book How the Laws of Physics Lie is a classic of contemporary philosophy of science. I have dipped into it once or twice in the past, but never managed to read it all. Now, however, as I anticipate studying for an MSc in Philosophy of Science at LSE, for which it is on the suggested pre-reading list, I thought I ought to work my way through it a bit more methodically. As I read it, I plan a series of blog pieces on what I find.

Cartwright’s main thesis is that laws such as the law of gravitation do not apply to the real world because in the real world there are always other forces acting, such as electromagnetic ones. Newton’s law of gravitation, and Coulomb’s law for the electrostatic force, are true only in isolation – they are idealisations. She points out that when these laws are stated, they should be subject to a ceteris paribus condition (“all other things being equal”) – but this is rarely done.

Let’s assume from the outset that the use of the rather provocative word “lie” in the title can be attributed to a publisher bending over backwards to boost sales in a competitive market. I don’t think even Cartwright is suggesting that laws can “lie”, since that implies saying something while knowing it’s not true, and thus suggests that it is the work of a conscious mind and not an inanimate intellectual concept. People can lie; laws can’t. Let’s assume, therefore, that the title merely sums up what Cartwright states frequently in the book: that the laws of physics (or, at any rate, most of them) are false.

One problem I have with her argument is that she does not properly explain her objection to these laws. There are two possibilities: one is that they are just wrong because they assume ideal conditions, without any consideration of their numerical accuracy; the other is that, because of the additional forces which can’t be entirely eliminated, their numerical predictions do not match up with the values obtained by actual measurement.

I think that most scientists and philosophers, if asked, would say that the main function of a physical law is to predict the value of some quantity which can be measured (to test the theory), or to produce a predicted value which can be used in some process (such as the design of a bridge). So I think it is the second of the above possibilities that we are really concerned with here.

Cartwright says that “for bodies which are both massive and charged, the law of universal gravitation and Coulomb’s law … interact to determine the final force. But neither law by itself truly describes how the bodies behave”.

Here I assume she is talking about the numerical predictions made by these laws, and implicitly comparing them with the actual value of the force between the bodies. Now, it is possible that all she is saying is that we cannot ignore either of the component forces, and as long as we remember to allow for both, everything will be all right. But then, two lines further on she says “These two laws are not true; worse, they are not even approximately true”.[1] She is implying here that they are both separately false, and not just that using one of them on its own and ignoring the other one is no good.

I will deal with “approximate truth” later on. For now, let us try to reconstruct what Cartwright means when she says that either of these laws (say the Coulomb law) is “not true”.

She quotes Coulomb’s law as follows: “the bodies … produce a force of size qqʹ/r².” (She doesn’t actually define q and qʹ, but we assume that they have their usual meaning, namely the charges on the two bodies). This is effectively a rather old-fashioned statement of Coulomb’s Law which only applies in a particular system of obsolete units. To give it its modern form in SI units, we say that the force F between two charged bodies, a distance r apart, is given by the formula

F = qqʹ/(4πε₀r²)

where the term 4πε₀ is a constant. To say that this formula is “false” in a numerical sense is to suggest that, when one has inserted the measured values of q, qʹ and r together with the value of the constant, the resulting value of F will not be the same as the measured value of the force.

But there are several things wrong with this statement; it commits category errors. We do not have single, exact “measured values” for the charges, the distance or the force. We have an exact value for the constant if we use the SI system of units. But testing the law by comparing the values of the two sides of the equation is not a case of comparing two numbers with each other. The entities we compare with each other are intervals, consisting of a measured value and an uncertainty; for instance, we may judge that the distance between the bodies is (1.000 ± 0.001) metres, meaning we have measured the distance to the nearest millimetre. (The interpretation of the “±” is by no means straightforward, but to go fully into that would take us too far from our current topic. Suffice it to say, for now, that the uncertainty defines some sort of interval.)

Given that, all we can say when we compare the intervals is that they agree to within a certain margin, or that they agree at a significance level of X, where the value of X is something we choose – it is not an objectively defined quantity.
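
To make this concrete, here is a small Python sketch of what a numerical test of Coulomb’s law actually involves. All the input values and uncertainties are invented for illustration; the point is only that the comparison ends not in a verdict of “true” or “false” but in a discrepancy expressed in units of the combined uncertainty.

    import math

    # Invented measured inputs: (value, standard uncertainty)
    q1 = (2.0e-6, 0.1e-6)          # charge on body 1, coulombs
    q2 = (3.0e-6, 0.1e-6)          # charge on body 2, coulombs
    r = (1.000, 0.001)             # separation, metres
    F_meas = (0.0542, 0.0010)      # independently measured force, newtons

    k = 1 / (4 * math.pi * 8.8541878128e-12)   # Coulomb constant, 1/(4πε₀), SI units

    # Predicted force from Coulomb's law
    F_pred = k * q1[0] * q2[0] / r[0] ** 2

    # First-order propagation of the relative uncertainties (in quadrature)
    rel = math.sqrt((q1[1] / q1[0]) ** 2 + (q2[1] / q2[0]) ** 2 + (2 * r[1] / r[0]) ** 2)
    u_pred = F_pred * rel

    # The comparison is between two intervals, not two numbers: all we get
    # is a discrepancy measured in combined standard uncertainties.
    combined = math.sqrt(u_pred ** 2 + F_meas[1] ** 2)
    z = abs(F_pred - F_meas[0]) / combined
    print(f"predicted: {F_pred:.4f} ± {u_pred:.4f} N")
    print(f"measured:  {F_meas[0]:.4f} ± {F_meas[1]:.4f} N")
    print(f"discrepancy: {z:.2f} combined standard uncertainties")

Whatever number comes out, deciding whether it is “small enough” is a judgement; the arithmetic cannot settle it for us.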

It is, I believe, a common misconception among philosophers of science that numerical laws can be tested by simply comparing numbers to see whether they are the same – whether they “agree”. Recognising, at some level or other, that this is not possible, philosophers have introduced the term “approximate truth” to cover the fact that the agreement is not precise but may be “good enough”. Yet “approximate truth” has never itself been properly defined, let alone given a definition accepted by the whole community.

It might seem a major stumbling-block if we have to conclude that we cannot test whether certain hypotheses are “true”, since the concept of truth is so central to philosophy, and also, hopefully, to science. But all is not lost. We can simply work to a level of precision that is suitable for the task in hand.

Let’s look at an actual example from the history of physics. In 1919 Arthur Eddington photographed certain stars during a solar eclipse, in order to test the theory of general relativity. The theory predicted a deflection of starlight by the gravitational field of the sun, by an amount of 1.75 seconds of arc if the light ray just grazed the surface of the sun. He compared his measured angles of deflection with this predicted value, and with two alternative predictions based on two versions of Newtonian gravitation. The results, when put into a modern format, are quoted by Earman and Glymour, in a paper about the experiment, as (1.61 ± 0.44) and (1.98 ± 0.18) seconds of arc for two separate series of observations [2]. Given that the various theoretical values being tested were 1.75, 0.87 and 0 seconds, it is clear that these experimental results were not conclusive, despite the fact that the experiment has been hailed as a great triumph for general relativity. Karl Popper, who saw the Eddington experiment as a good template for how science should be done – that is, by trying to “falsify” theoretical predictions – commented on the result that “Even if our measuring instruments at the time did not allow us to pronounce on the results of the tests with complete assurance, there was clearly a possibility of refuting the theory”.[3] But this “complete assurance” is a myth.
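
For the record, here is a small sketch, in the same spirit as the Coulomb example above, of how far each predicted deflection lies from Eddington’s two quoted results, measured in units of the quoted uncertainty. The arithmetic is trivial, but it makes plain that “conclusive” is a matter of degree rather than a yes/no verdict.

    # Eddington’s two quoted results (value, uncertainty), in seconds of arc,
    # compared against the three predicted deflections mentioned above.
    results = [("series 1", 1.98, 0.18), ("series 2", 1.61, 0.44)]
    predictions = [("general relativity", 1.75), ("Newtonian deflection", 0.87), ("no deflection", 0.0)]

    for label, value, u in results:
        for theory, predicted in predictions:
            z = abs(value - predicted) / u
            print(f"{label}: {theory} ({predicted} arcsec) lies {z:.1f} uncertainties from {value} ± {u}")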

The formula actually being tested in the Eddington experiment gave the deflection δϕ in radians in terms of the speed of light c, the gravitational constant G, the solar mass M and the distance r of the light ray’s closest approach to the centre of the sun:

δϕ = 4GM/(c²r)

For each star being observed, the deflection could be reduced to the solar surface by multiplying by the factor r/R, where R is the solar radius. Thus the predicted value of the deflection depends on the values of four quantities, two of which are fundamental constants and two of which are solar properties which have to be measured separately. All of these quantities therefore have uncertainties (although the value of c is nowadays fixed exactly in the SI system of units, it was not then), and so, therefore, does the predicted value of δϕ. Popper appears to have thought that by improving the measuring instruments, one could arrive at a situation where both the measured value and the predicted value could be known exactly; but as we have seen, neither can. All the instrumental improvements in the world (including improvements to measurements of G and the mass and radius of the sun) cannot deliver Popper’s “complete assurance”, since there is always a finite uncertainty, and hence always a requirement for scientists to decide for themselves whether the uncertainty is “small enough”. After the advent of radio astronomy (in which measurements could be repeated many times, as it was not necessary to wait for an eclipse) the prediction was verified to a high level of precision; but even now, we cannot say that the deflection exactly matches the theoretical value. It simply can’t be done, and never will be.
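
To illustrate the point that the prediction itself carries an uncertainty, here is a rough sketch that propagates uncertainties in G, the solar mass and the solar radius through the deflection formula. The central values are modern ones, but the uncertainties attached to them are purely illustrative; the only moral is that the predicted number is itself an interval.

    import math

    c = 299_792_458.0            # speed of light, m/s (exact in the modern SI)
    G = (6.674e-11, 1.5e-15)     # gravitational constant: (value, uncertainty)
    M = (1.989e30, 4.0e26)       # solar mass, kg (uncertainty purely illustrative)
    R = (6.957e8, 1.0e5)         # solar radius, m (uncertainty purely illustrative)

    # Grazing-ray deflection δϕ = 4GM/(c²R), in radians
    delta_phi = 4 * G[0] * M[0] / (c ** 2 * R[0])

    # First-order propagation of the relative uncertainties in G, M and R
    rel = math.sqrt((G[1] / G[0]) ** 2 + (M[1] / M[0]) ** 2 + (R[1] / R[0]) ** 2)
    u = delta_phi * rel

    to_arcsec = math.degrees(1.0) * 3600
    print(f"predicted deflection: {delta_phi * to_arcsec:.4f} ± {u * to_arcsec:.4f} arcsec")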

So what I am saying with this example is that we can never say that numerical laws are simply “true” or “false”. We can only say that they have been found to agree with measurements to within some level of uncertainty, and we can then comment on that uncertainty, and compare it with the predictions of rival theories. Going back to what Cartwright said – if, when measuring the gravitational force between two bodies, the electrostatic force can be shown to be significantly smaller than the uncertainty in the measured force (where “significance” is once again a subjective matter) then the electrostatic force can be ignored, and the gravitational formula regarded as “true” at that level of significance.
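
In practice the decision might look something like the following sketch. The numbers are, again, invented, and the factor of ten used as the threshold for “significantly smaller” is itself a choice rather than anything objective – which is exactly the point.

    import math

    G_const = 6.674e-11                           # gravitational constant
    k = 1 / (4 * math.pi * 8.8541878128e-12)      # Coulomb constant, 1/(4πε₀)

    # Invented scenario: two 1 kg spheres 1 m apart, each carrying a stray
    # charge of 1e-12 C, with the gravitational force measured to ± 1e-13 N.
    F_grav = G_const * 1.0 * 1.0 / 1.0 ** 2       # about 6.7e-11 N
    F_coulomb = k * 1e-12 * 1e-12 / 1.0 ** 2      # about 9.0e-15 N
    u_measured = 1e-13                            # uncertainty of the measured force

    # The factor of 10 below is a chosen threshold, not an objective one
    if F_coulomb < 0.1 * u_measured:
        print("electrostatic force negligible at this level of uncertainty")
    else:
        print("electrostatic force cannot be ignored here")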

Finally, I should add that it seems reasonable to me to believe that a particular physical quantity does have a “true value”, even though we can never know what it is. Modern metrology guidelines recognise this fact. For example – it is quite possible that the distance between the bodies in the Coulomb’s Law example (assuming it can be adequately defined) is a precise multiple of the length of the standard metre, and that each body has a precise deficit or excess of electrons, so that it has a precise charge. Furthermore, it seems reasonable to assume that the constant of proportionality also has a “true value” and that the law is in this sense “true”, giving a value of the force which is also precise. We can surely believe that, as long as we acknowledge that we can never know these precise values.

If you would like to read a more in-depth treatment of this topic, see my MSc dissertation: Falsificationism, Science and Uncertainty. This blog is continued here.

[1] Cartwright p 57.

[2] Earman, J., & Glymour, C., Relativity and Eclipses: The British Eclipse Expeditions of 1919 and their Predecessors. Historical Studies in the Physical Sciences 11 (1) 1980, 49-85

[3] Popper, K., Conjectures and Refutations (Routledge 1963) pp 7-8
