It could be said that “intelligent design” is an attempt by creation-oriented apologists to create a “dog-whistle term” – by which I mean a term understood by insiders to mean something other than what it sounds like, a meaning that outsiders generally don’t pick up on. The idea, which didn’t work, was to have a term that (nudge nudge) means “creation science” but doesn’t sound like “creation science”, while still being understood to mean “god did it” by anyone with a theist agenda.
The people behind “intelligent design” as a term made the error of writing their thinking down in what is known as the Wedge Document. But even if they hadn’t, it’s pretty damn obvious … so it wasn’t a very good dog-whistle term.
I’m pretty sure that there are no serious biologists out there who are sincerely non-theist and also fully paid-up ID supporters. The closest we have come to that so far is Bradley Monton, a philosopher who is an avowed atheist but who argues that ID should be taken seriously. By taking it seriously, though, Monton means something along these lines:
I conclude that ID should not be dismissed on the grounds that it is unscientific; ID should be dismissed on the grounds that the empirical evidence for its claims just isn’t there.
In other words, so far as I can tell, he argues that we can (and should) investigate the knowledge claims of ID using science. I think I can agree with that. So long as those involved in the endeavour were intellectually honest, it’d not be a problem.
The problem, though, is that those who are tightly wedded to ID aren’t intellectually honest: they aren’t interested in any scientific refutation of their claims, and those who speak for them simply ignore the fact that their claims have been refuted (here’s a quite recent example of the eye being raised as an example of irreducible complexity). As a consequence, ID has no place in the science classroom – it might have a place in the philosophy classroom, or the critical thinking classroom. Since they are already teaching nonsense in the (shudder) theology classroom, I guess they could teach it there as well, so long as it doesn’t get confused with proper science.
To the best of my knowledge there are no biologists who take ID seriously who are not already predisposed to a theistic world view – people such as Michael Behe who are trying to use biology to “prove” the god that they believe in for other reasons. Serious biologists wouldn’t touch ID with a barge pole.
So, I was thinking that there may be another, similar dog-whistle term – perhaps a more successful one – being used in another area of science, one not quite so obviously saying “god did it” to outsiders as “intelligent design” does.
What about “fine tuning”? It could be argued that fine tuning of a sort was raised by Thomas Aquinas back in the 13th century, so it’s not a hugely new thing. However, it does seem to have taken off in the past few decades. (Intelligent design’s rise in popularity has actually been more recent.)
Could “fine tuning” have become a dog-whistle term? I guess it depends.
How do scientists come across examples of fine tuning? And what do they do when they find one? I would suggest that the answers to these two questions will indicate whether the scientist in question is using the term descriptively or as a dog-whistle.
I’d suggest that proper scientists come across fine tuning as a by-product of other research. An example would be dark energy. Observations of the universe lead to a modification of existing theory, introducing dark energy, which acts against gravity and slightly speeds up the expansion of the universe. Physicists ponder just how much dark energy would be required and conclude that the answer is “not very much”. In fact, we need very, very little – and if we had slightly more, the universe would have expanded too fast for stars to form, the consequence being that life as we know it could not have developed.
Other scientists, however, look for “fine tuning” in much the same way as Behe and his fellow intelligent designers (IDers) look for irreducible complexity. They aren’t just doing their jobs and stumbling on an example of fine tuning; they are going out of their way to find potential candidates for it. A recent article was about just such an example (and this article is actually trying to explain why I have what is closely bordering on a fixation with Luke Barnes). Luke Barnes is searching for fine tuning, which is why I put him in the category of “other scientists”. Perhaps he has found some, so the next question is: what do scientists do when they find examples of fine tuning?
An intellectually honest scientist will, when discovering something that is unexplained, say something along the lines of “hm, this is interesting, I can’t explain this”. An intellectually dishonest scientist, particularly one who is a closet theist, will hand over the mystery to people like William Lane Craig and say (nudge nudge): “Here’s another example of fine tuning that cannot be explained”. For example (in the words of Craig):
This was a summer seminar organized by Dean Zimmerman, a very prominent Christian philosopher at Rutgers University, and Michael Rota who is a professor at St. Thomas University in St. Paul, and sponsored by the John Templeton Foundation. The Templeton Foundation paid for graduate students in philosophy and junior faculty who already have their doctorates to come and take the summer seminar at St. Thomas University, then they brought in some faculty to teach it. I was merely one of about four professors that was teaching this seminar on the subject of the fine-tuning argument for God’s existence – the fine-tuning of the universe. Joining me in teaching this were Luke Barnes (who is Professor of Astronomy at the University of Sydney in Australia). We had met Barnes when we were on our Australian speaking tour two years ago. He had introduced himself to me when I was at Sydney University and shared with me one of his papers on fine-tuning. I actually quote him in the debate with Sean Carroll on the fine-tuning issue. So it was great to see Luke again and have his positive scientific input. Then with me was the philosopher Neil Manson who is more skeptical of the argument from fine-tuning. Then David Manley who is a prominent metaphysician who also shared some reservations about the argument. So there were people on opposite sides of this issue, and so we had a very good exchange.
(The tradition at those summer seminars seems to be to have two people arguing for fine tuning [nudge nudge] and two who are mildly sceptical. Barnes wasn’t on the sceptical side; he was side by side with Craig.)
To the extent that fine tuning is a real thing, and not just a dog-whistle term meaning “god did it”, it presents an interesting mystery – a puzzle to be solved. Sadly, a web-search for “fine tuning” produces results which are tipped heavily towards the theistic version, not the puzzle to be solved version.
I don’t think we should give up hope of rationality though. The more reasonable among us can take the same basic approach as the biologists have with “intelligent design”. The IDers claim irreducible complexity, and the biologists show how the complexity is not irreducible. The FTers claim (inexplicable) fine tuning, and untainted physicists show how the fine tuning is not inexplicable. That was my ham-fisted intent in Is Luke Barnes Even Trying Anymore – Barnes claimed that αG is unnaturally small, making α/αG unnaturally large, and I explained how its value is not unnatural at all but is instead expected.
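The point about αG not being unnatural can be checked with a few lines of Python. This is just a sketch using standard CODATA-ish values (the variable names are mine): since αG = Gm²/(ħc) = (m/m_Planck)², αG is tiny simply because the Planck mass is enormous compared with particle masses.

```python
# Sketch (assuming standard CODATA values): alpha_G is small simply
# because the Planck mass sqrt(hbar*c/G) is huge relative to particle masses.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34    # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
m_p = 1.6726e-27     # proton mass, kg
m_e = 9.1094e-31     # electron mass, kg

m_planck = (hbar * c / G) ** 0.5   # Planck mass, ~2.18e-8 kg

# Gravitational coupling constant: alpha_G = G m^2 / (hbar c) = (m / m_Planck)^2
alpha_G_proton = G * m_p**2 / (hbar * c)
alpha_G_electron = G * m_e**2 / (hbar * c)

print(f"Planck mass        ~ {m_planck:.3e} kg")
print(f"alpha_G (proton)   ~ {alpha_G_proton:.2e}")    # ~5.9e-39
print(f"alpha_G (electron) ~ {alpha_G_electron:.2e}")  # ~1.8e-45
```

The values that come out (about 5.9 × 10⁻³⁹ for the proton variant and 1.8 × 10⁻⁴⁵ for the electron variant) are the ones discussed below.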
Note that I did write a comment on Barnes’ blog to this effect, but since the last comment never made it through his filter (and had to be reproduced here), I’d not be surprised to see the same thing happen with this one. Just in case:
I've made comment on your paper here - http://neophilosophical.blogspot.com/2015/12/is-luke-barnes-even-trying-anymore.html - but in brief:
Your argument, perhaps taken from Martin Rees, is that αG is unnaturally small, making α/αG unnaturally large. However, this argument resolves down to a question of (in your definition of αG) the relative values of the proton mass and the Planck mass. Thus it's fundamentally a comment on the fact that the Planck mass is rather large, much, much larger than the proton mass (and also the electron mass, which is more commonly used to produce αG).
Therefore, what you have overlooked is what the Planck mass is, because what the Planck mass is explains why it is so (relatively) huge. Unlike the Planck length and the Planck time, which both appear to be close to, if not beyond, the limits of observational measurement, the Planck mass is the mass of a black hole with a Schwarzschild radius of the order of a Planck length (for which the Compton wavelength and the Schwarzschild radius are equal).
If the Planck mass were supposed to have some relation to a quantum mass (i.e. being close to, if not beyond, the limits of observational measurement), then you'd have an argument for fine-tuning, but it's not and you don't.
And in any event, 9 orders of magnitude (between 10^-30 and 10^-39) is not a fine-tuned margin. And that's only if you use the proton mass variant of αG. If you use the more common definition of the gravitational coupling constant, with the electron mass, there are 15 orders of magnitude (between 10^-30 and 10^-45).
I note that you didn't post my last comment (against a different blog post). I'm giving you the benefit of the doubt and assuming that this was an error or oversight on your part. I posted that comment on my blog, here - http://neophilosophical.blogspot.com/2015/09/another-open-letter-to-luke-barnes.html.
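The claim in the comment above – that the Planck mass is the mass of a black hole whose Schwarzschild radius is of the order of the Planck length, and whose Compton wavelength matches it – can also be verified numerically. Again, this is only a sketch with standard CODATA-ish values and my own variable names; the two radii agree up to a factor of order one (the factor of 2 in the Schwarzschild radius).

```python
# Sketch (assuming standard CODATA values): the Planck mass is the mass
# whose Schwarzschild radius is of the same order as its Compton
# wavelength, i.e. the scale of the smallest meaningful black hole.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34    # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s

l_planck = (hbar * G / c**3) ** 0.5   # Planck length, ~1.6e-35 m
m_planck = (hbar * c / G) ** 0.5      # Planck mass,   ~2.2e-8 kg

r_s = 2 * G * m_planck / c**2         # Schwarzschild radius of a Planck-mass black hole
lam_c = hbar / (m_planck * c)         # reduced Compton wavelength of the Planck mass

print(f"Planck length              ~ {l_planck:.2e} m")
print(f"Schwarzschild radius       ~ {r_s:.2e} m (= 2 x Planck length)")
print(f"reduced Compton wavelength ~ {lam_c:.2e} m (= Planck length)")
```

So the Planck mass is huge relative to particle masses for a structural reason – it is a black-hole-scale mass, not a quantum-particle-scale mass – which is the point the comment makes.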