I have read and responded to a lot of blog posts recently on topics in the history and philosophy of science. Many of these concern themselves with the question of the “truth” or “falsity” of a given theory.
My take on this is that “truth” and “falsity” are concepts beloved of philosophers, which are not seen as very useful criteria by scientists themselves. To avoid filling up other people’s blogs with long replies, I’ve decided to do one of my own on this topic.
I’ve read, in various philosophical papers etc, claims that all sorts of theories, including Galileo’s parabolic trajectories, Newtonian mechanics and even thermodynamics, are “false”. Now, it’s true that we would say that strictly speaking, we cannot use the constant-acceleration equations because “g” varies with height, and Newtonian mechanics has been replaced by special relativity, which in turn needs to be replaced by general relativity in the presence of mass. (I don’t know what the complaint about thermodynamics was, although I have my own criticisms of it, probably best kept for another post). But there again, Newtonian mechanics is perfectly adequate for a wide range of uses including calculating the paths of most heavenly bodies and spacecraft; and the path of a projectile is as near as dammit a parabola anyway.
“Near as dammit” sounds like a very unscientific term, but it is highly relevant here, since all comparisons between theoretical and observed quantities must be made in terms of experimental uncertainty. If you cannot measure the difference between a parabolic and an elliptical trajectory, you might as well accept the parabola.
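To put a number on “near as dammit”, here is a minimal sketch (the launch speed, angle, and step size are my own illustrative choices) comparing the apex of a projectile integrated with constant g against one where g falls off with height as g₀·(R/(R+h))²:

```python
import math

G0 = 9.81      # surface gravity, m/s^2
R = 6.371e6    # Earth radius, m

def apex(height_dependent_g, v0=300.0, angle_deg=45.0, dt=1e-3):
    """Semi-implicit Euler integration of the vertical motion up to the apex."""
    vy = v0 * math.sin(math.radians(angle_deg))
    y = 0.0
    while vy > 0:
        g = G0 * (R / (R + y)) ** 2 if height_dependent_g else G0
        vy -= g * dt
        y += vy * dt
    return y

flat = apex(False)   # constant-g "parabola"
real = apex(True)    # g weakening with height
print(f"apex (constant g): {flat:.1f} m")
print(f"apex (varying g):  {real:.1f} m")
print(f"difference:        {real - flat:.2f} m")
```

Over an apex of roughly two kilometres, the varying-g trajectory comes out less than a metre higher – far below the precision of any plausible measurement of such a flight, which is the sense in which the parabola is as near as dammit right.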
In fact most theory choice – at least in the physical sciences – boils down to making comparisons between numerical predictions and observations. Karl Popper used, as his example of how theories should be tested, the eclipse experiment by Eddington in 1919, which is widely hailed as confirming general relativity and falsifying Newtonian mechanics. But in fact the mean value for the bending of light at the surface of the sun did not match the predictions of either theory; it was nearer to the GR prediction, but it was still more than one standard deviation away. Popper does not actually tell us how to falsify a theory, clearly thinking that it is as obvious and clear-cut as observing the colours of swans. Even if you took into account his rather begrudging acceptance of uncertainty as something that “it is the custom of physicists to estimate” [LSD, 1959, p125], which eventually leads him, in effect, to propose a 1-sigma margin of error, you would still have to conclude that both theories are false; yet Popper clearly thinks the experiment vindicated GR.
In fact – and this is the important bit – it is not realistic to expect an observed quantity to exactly match the prediction of any theory, “true” or not; but at the same time, the discrepancy, even when compared with the experimental uncertainty and converted into a probability, cannot justify rejection of the theory either, without the imposition of some additional constraint such as a 3-sigma or 5-sigma threshold. This is particularly a problem for Popper, since he seemed to want the process to be entirely logical, without any arbitrary constraints added in. (More on this when I get back to my MSc dissertation!)
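As a sketch of what such a comparison looks like in practice, the discrepancy can be converted into standard deviations and then into a two-sided probability. (The observed value and uncertainty below are illustrative numbers of roughly the magnitude reported for the 1919 plates, not the actual data; 0.87″ and 1.75″ are the familiar Newtonian and GR deflection predictions at the solar limb.)

```python
import math

def z_score(observed, predicted, sigma):
    """Discrepancy in units of the experimental uncertainty."""
    return abs(observed - predicted) / sigma

def two_sided_p(z):
    """Probability of a discrepancy at least this large, assuming Gaussian errors."""
    return math.erfc(z / math.sqrt(2))

# Illustrative numbers only (arcseconds of deflection at the solar limb)
observed, sigma = 1.98, 0.16
predictions = [("Newton", 0.87), ("GR", 1.75)]

for name, pred in predictions:
    z = z_score(observed, pred, sigma)
    print(f"{name}: {z:.1f} sigma, p = {two_sided_p(z):.3f}")
```

On a strict 1-sigma reading both theories come out “false”; with a 3-sigma threshold GR survives and Newtonian mechanics does not – and neither threshold follows from logic alone.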
Can we perhaps partition theories into those which make specific numerical predictions and those which do not? Then, if we decide that there are problems with the former, we can concentrate on the latter instead. I am thinking that this latter category would include all the important stuff – the “generic” theories such as “electric charge is quantised” (as opposed to “the charge on the electron is 1.602 × 10⁻¹⁹ coulombs”) or “the speed of light is the same in all inertial reference frames”. But these theories have still got to be falsifiable, and how are we going to do that? To directly falsify the Principle of the Constancy of the Velocity of Light we would have to measure the speed in various reference frames and see if it varies – but we are back in the business of comparing numerical values. Likewise with the quantisation of charge – and having had a go at the Millikan experiment, even with modern equipment, I can assure you that you are never going to get a “black and white” result there.
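To illustrate why the quantisation test is never black and white, here is a toy simulation (all numbers are invented for illustration): droplet charges that really are integer multiples of e, smeared by 10% measurement noise, and the question of how far each measured charge then sits from the nearest multiple.

```python
import random

random.seed(42)
E = 1.602e-19   # elementary charge, coulombs

# Simulate 200 droplets carrying 1-5 electrons, with 10% Gaussian measurement noise
measured = [random.randint(1, 5) * E * random.gauss(1.0, 0.10) for _ in range(200)]

# Distance from the nearest integer multiple of e, in units of e
# (0 = perfectly quantised, 0.5 = maximally ambiguous)
residuals = [abs(q / E - round(q / E)) for q in measured]

clear = sum(r < 0.1 for r in residuals)
ambiguous = sum(r > 0.3 for r in residuals)
print(f"{clear} of 200 droplets within 0.1e of a multiple; {ambiguous} are 0.3e or more away")
```

Even though the underlying charges are exactly quantised by construction, a sizeable fraction of the measured values land closer to a half-integer than to an integer – so the experiment can only ever support quantisation statistically, never verify it droplet by droplet.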
Rather than being concerned with whether theories are “true” or not, I think most scientists would prefer to think about how useful they are, and reserve the right to choose an appropriate one according to the problem at hand, as outlined above. For that reason, I prefer the term “model”, since this makes it clearer that we can have several of these in our repository, and use whichever one seems most appropriate.
Let’s have more concrete examples. Having spent many years calculating magnetic fields, it has always amused me that, whenever there is a discussion between physicists about magnetic fields, they start furiously drawing lines on a whiteboard, or waving their arms about if no drawing medium is available. The lines they draw are “flux lines”. These are equipotentials of the magnetic vector potential, but are better known as the paths that an isolated pole would follow, if isolated poles, or indeed any poles, existed. I have never met anyone who believes poles exist, but nevertheless flux lines are seen as a very useful way of visualising the field. (They are also, of course, the lines with which iron filings will align themselves in a magnetic field).
But do flux lines really exist? Well, take a look at this quote from a modern astrophysics textbook: “If an external effect enforces the bulk motion of plasma perpendicular to the field lines … then the moving medium avoids crossing the field lines by dragging them along with it”. This suggests that the author believed the lines had some actual physical existence (as Faraday did in fact).
But of course the lines themselves can’t exist, as they are just contours, and will move if we change the units. And they are not conserved in any way when we change the magnetic environment, such as when a superconductor goes below its critical temperature. So maybe we should view flux as some sort of fluid of varying density? But then, what is this fluid made of? It sounds like something out of 17th century Cartesian philosophy. I don’t think you would find a modern scientist who believed that magnetic flux had an actual physical existence – yet it is a useful model.
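The sense in which flux lines are equipotentials of the vector potential can be checked numerically in the simplest 2-D case, a long straight wire along z (a minimal sketch; the current and sample points are arbitrary choices of mine): the in-plane field B = (∂A_z/∂y, −∂A_z/∂x) should everywhere be perpendicular to ∇A_z, i.e. tangent to the contours of A_z.

```python
import math

K = 2e-7   # mu_0 * I / (2*pi) for a 1 A current, SI units

def A_z(x, y):
    """Vector potential of an infinite straight wire along the z axis."""
    return -K * math.log(math.hypot(x, y))

def B(x, y):
    """In-plane field from B = curl A: (dA_z/dy, -dA_z/dx)."""
    r2 = x * x + y * y
    return (-K * y / r2, K * x / r2)

def grad_A(x, y, h=1e-6):
    """Numerical gradient of A_z by central differences."""
    return ((A_z(x + h, y) - A_z(x - h, y)) / (2 * h),
            (A_z(x, y + h) - A_z(x, y - h)) / (2 * h))

for x, y in [(1.0, 0.5), (0.3, -0.7), (-2.0, 3.0)]:
    bx, by = B(x, y)
    gx, gy = grad_A(x, y)
    cos = (bx * gx + by * gy) / (math.hypot(bx, by) * math.hypot(gx, gy))
    print(f"({x}, {y}): cos(angle between B and grad A) = {cos:.2e}")
```

The near-zero cosines confirm that the field is everywhere tangent to the contours of A_z – which is all the “flux line” picture ever asserts; nothing in the check requires the lines to be physical objects.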
What I am trying to get at is that scientific theories, or models, attempt to explain the behaviour of the natural world, and they do so by means of various constructions that may or may not have some correspondence with reality. There are certain things that I guess we would all go along with, such as the existence of atoms; but can we honestly say that we can describe what an atom is – especially if we are not allowed to use macroscopic analogues (such as hard spheres orbiting one another), which we know are not strictly applicable?
When thinking about this topic I often find myself wondering what would happen if we were to make contact with aliens who had reached the same level of scientific development as us. Would they have the same concepts as us – atoms, molecules, electrons, quarks, mass, energy, entropy etc? Or would they have a model based on different concepts but which explains reality equally well? I can’t put my hand on my heart and say they would necessarily evolve exactly the same theories as us. They might well have an alternative theory that approaches the “hidden reality” just as well as ours – in other words, equally approximately.
I cannot resist putting in one more example of a theory which is useful but which few would claim to be true. This is the hole theory of solid-state electronics – which was very useful in explaining how the first P-N-P transistors worked, and is I believe still used. Holes are absences of electrons; they are positively charged, and can move around the lattice in the same way as free electrons; but few would say they exist in the same way that electrons exist.
Michael Weiss said:
I’m glad you posted this, Jim. Since you didn’t plug your own paper, let me do it for you: “Can Theories be Falsified by Experiment?” (http://www.hep.ucl.ac.uk/~jgrozier/JG_philsci_essay2.pdf) deals a devastating blow to Popper’s falsificationism.
Let me start off with three groups, each with its own perspective:
(S) Actual Scientists
(HoS) Historians of Science
(PoS) Philosophers of Science
(Full disclosure, I am none of the above.)
I think you are suggesting that all that (S) need is a notion of “true enough”, where the meaning of “enough” depends on what you’re doing. (PoS) of course should be prepared to dilate on Truth all through the night. And (HoS) don’t really care about what *is* true, or even what Truth means (if anything); the key issue is what historical (S)’s believed, and why.
More later.
Michael Weiss said:
You rightly emphasize the notion of approximate truth, fundamental for modern physics and related fields, but mostly neglected in philosophy. Does modern physics boast any theories that a physicist would unhesitatingly brand Absolutely True? Probably not. GR has passed every experimental test without breaking a sweat, but until the longed-for unification with quantum theory takes place, it too has a provisional character. The same caveat applies to the Standard Model.
Go back to the 18th and 19th centuries, though, and you find a different state of affairs. Newtonian physics sat enthroned. Yes, I know, from time to time someone would propose modifying the exponent in the inverse-square law slightly, to account for astronomical discrepancies. Faraday and some others disliked action-at-a-distance. The continental ‘energy’ tradition, from Huygens through Leibniz and beyond, remained strong.
But I don’t think this is anything like the modern attitude. The history of 18th and 19th century physics is a tale of expansion, resolution, reconciliation, and reformulation. So, physics expanded into new areas (heat, E&M, hydrodynamics) – these didn’t conflict with the existing Newtonian paradigm. [Yes, I used the dreaded word!] In almost all cases, apparent astronomical discrepancies were resolved in favor of Newtonian physics, either by refining the mathematics (Clairaut), or by the discovery of Neptune. Deeper study reconciled the ‘energy’ tradition with Newton’s work: they were found ultimately to be equivalent approaches. And of course we have Lagrange, Laplace, Hamilton, and Jacobi, all of them reformulating Newtonian mechanics without correcting it.
Alas, we can’t poll all the major physicists of the period: “How much confidence do you have that Newton’s laws are 100% correct? (A) certain, (B) nearly certain, (C) somewhat confident, (D) not confident at all, (E) very doubtful, (F) sure that they are only approximate.” If we could, I wouldn’t be surprised to find lots of B’s and C’s, and not that many A’s. But I can’t think of any modern theory that would do as well.
To be continued…
Michael Weiss said:
We’re very comfortable nowadays with scientific theories as successively closer approximations to “the truth”. Some are happy even to dispense with the limit, contemplating a series of ever more accurate and comprehensive models, with no assumption that they approach anything ultimate or final.
How different, I claim, was Newton’s own view. True enough, he never made peace with action at a distance, but this discomfort over foundations didn’t bleed over into any wobbliness on the inverse-square law. (Or the three laws, or the rest of the mathematical edifice.)
It’s easy to see historical reasons. Aristotelian physics was in no sense an approximation to Newtonian physics. But the two relativities and quantum mechanics overthrew seemingly impregnable theories, while leaving them as useful as ever.
(You do bring up the example of Galileo’s parabolas. I don’t think this is a parallel example historically, though scientifically it qualifies.)
So nowadays we’re more modest. Is this a good thing? Maybe not. Two examples.
In Weinberg’s The First Three Minutes, a pop-sci account of modern cosmology, he digresses for one chapter to address a historical conundrum. “Why was there no systematic search for [the cosmic background radiation], years before 1965?”
He offers a trio of causes, concluding with this:
Third, and I think most importantly, the “big bang” theory did not lead to a search for the 3 degree K microwave background because it was extraordinarily difficult for physicists to take seriously any theory of the early universe. (I speak here in part from recollections of my own attitude before 1965.) Every one of the difficulties mentioned above could have been overcome with a little effort. However, the first three minutes are so remote from us in time, the conditions of temperature and density are so unfamiliar, that we feel uncomfortable in applying our ordinary theories of statistical mechanics and nuclear physics.
This is often the way it is in physics — our mistake is not that we take our theories too seriously, but that we do not take them seriously enough.
In a different context (post-war quantum field theory), I.I. Rabi said this about Oppenheimer (quoted in Abraham Pais’s bio):
One often wonders why men of Oppenheimer’s gifts do not discover everything worth discovering…He saw physics clearly, looking toward what had already been done, but at the border he tended to feel that there was much more of the mysterious and novel than there actually was. He was insufficiently confident of the power of the intellectual tools he already possessed and did not drive his thought to the very end because he felt instinctively that new ideas and new methods were necessary to go further than he and his students had already gone.
Jim Grozier said:
Thanks for all this, Michael – especially this: ” the key issue is what historical (S)’s believed, and why”. A very timely reminder as I embark on my latest essay (on early 19th century electromagnetism). (I could have said “current essay” there but might have been accused of a pun). Sorry to be so slow in responding – you can understand why I never get anywhere with the Guardian blogs! Will be able to enter into the spirit of this a bit more after the deadline (June 7th).
Michael Weiss said:
Cool! I hope you post the essay, I would like to read it. (Just now I’m reading L. Pearce Williams’ bio of Faraday.)
Jim Grozier said:
And just to prove my point …..
After writing the above I thought I’d better check out “The H Word”, and just for entertainment value have a look at how Rebekah and others had responded to the rather bitchy remark about Steve Fuller on the ICHSTM thread. Now I know nothing about Steve Fuller, and so am prepared to accept that the person apparently known as “QSilverGhost” is probably right about him. But one phrase from his or her post stuck in the gullet:
“pick up any history of science work published by the University of Chicago Press in the last few years and compare its level of scholarly craftsmanship with something in the field from 40 years ago. These days the footnotes are tauter, the range of sources richer and deeper, the argument more subtle and self-aware and open to a diverse array of perspectives.”
I can’t agree with the statement that “the range of sources [is] richer and deeper”. I am going to ICHSTM, have surveyed the programme to the best of my ability, and have found very little of interest. It’s mostly what I understand is called externalist history. Where has all the actual History of Science stuff gone? I am fortunate enough to be reading the excellent “Electrodynamics from Ampere to Einstein” by Olivier Darrigol (pub. 2000) for my essay – now that’s what I call HoS. But there is precious little of that calibre available at Manchester.
So I could feel a comment coming on, and scrolled down … only to find “comments are closed”!
Hence venting my spleen here instead!
Jim Grozier said:
Thinking again about that comment on Rebekah’s blog, I think the oft-repeated mantra about how much more scholarly and professional HoS has become in the last 40 years is actually an insult to historians of science who were active before that time. I think particularly of the volume “The Annus Mirabilis of Sir Isaac Newton” (pub. 1969), slated by Margaret Osler because it was “full of mathematical formulae”, which I found very useful for my Newton essay; the paper I made most use of was definitely a very scholarly and detailed work. I think what happened 40 years ago was probably more like a lurch from one extreme of the discipline to the other, and in my opinion, to be truly “professional” you need to embrace both internalism and externalism. It seems ludicrous to me that some historians apparently think it is a case of choosing one or the other (and, as they would no doubt add, more “professional” to choose the externalist approach).
Michael Weiss said:
Hmm, “slated”, meaning what exactly? (Fumbles around on the internet.) Ah yes, here’s a website with a definition: “when you dis another person”.
Caught between British slang and hip-hop jargon.
I can’t argue with one basic point of externalism: scientists do science, and they live in society, busily being human. I am bothered though by the glibness with which some externalists slide from fact to speculation, often without even being aware of the transition. “Transgressing the boundaries”, you might say.
A couple of quotes, to elaborate this point.
From Jed Buchwald’s review of Energy and Empire. A Biographical Study of Lord Kelvin by Crosbie Smith and M. Norton Wise:
Smith and Wise’s Kelvin is an extraordinary and stimulating piece of work. It is also occasionally tendentious and frequently self referential…My principal objection to the course of their complex arguments concerns their evidence, for it seems to me that their powerful and exciting claims concerning intimate cognitive relationships between what would otherwise appear to be technical and nontechnical structures (e.g. between latitudinarian belief and hypothesis-free physics) demand rather more evidentiary support than they have usually provided…
From Richard Westfall’s review of Shapin and Schaffer’s Leviathan and the Air-Pump:
To bring Boyle into this discussion, the authors fall back on assertions, which are wholly gratuitous and without textual support in anything they adduce, that he considered disagreements among natural philosophers as a “scandalous” state of affairs that had to be eliminated.
(Incidentally, another review, by Harold Jones, picks apart their translation of Hobbes’ “Dialogus de Natura Aeris”. I can’t say how accurate or significant the criticisms are, but they don’t do anything to enhance the authors’ credibility.)
I might like to ask QSilverGhost, “Hmm, how come whiggish tales of progress in science must be dismissed as naive and self-congratulatory, but whiggish tales of progress in the history of science should be uncritically accepted? Have you considered that your perceptions may have been colored by your social milieu?”
Jim Grozier said:
Yes, sorry, “dis” is also used over here, but I tend to think of it as a “young people’s term” and also am a bit wary of it because it is short for “disrespect”, whilst “slate” means to “criticise harshly”, and you can criticise without disrespecting … which I guess is what Osler was doing. I don’t know why I didn’t just say “criticise” …
Thanks btw for your kind comments about my essay. My lecturer was impressed with it and actually suggested publication, which has not happened yet though, probably because it has become the subject for my dissertation – which will use it as a basis for investigating the history of experimental uncertainty.
The Shapin/Schaffer book gets quoted a lot. We had one or two readings from it in our core course. I am now promising myself a proper read of it when all this madness is over.
Jim Grozier said:
Re whiggism in the history of science: well, the infamous Margaret Osler did say, to her credit, that
“avoiding presentism or Whiggish historiography, uncontroversial though such a strategy may appear, raises a further conundrum. If we assert that historiographical sophistication is increasing as we learn to take actors’ categories into account, are we unwittingly giving a Whiggish history of our own historiographical practice? Such an infinite regress of Whiggism can be avoided as long as we do not claim progress for historical method itself”.
[From “Rethinking the Scientific Revolution” (2000)]
Michael Weiss said:
Good for her!