Wednesday, 30 December 2015

A Finely Tuned Critique of Fine Tuners

It could be said that “intelligent design” is an attempt by creation-oriented apologists to create a “dog-whistle term”.  By this I mean a term which is understood by insiders as meaning something other than what it sounds like, and generally not understood to have that meaning by outsiders.  The idea, which didn’t work, was to have a term that (nudge nudge) means “creation science” but doesn’t sound like “creation science”, while certainly being understood to mean “god did it” by anyone with a theist agenda.

The people behind “intelligent design” as a term made the error of writing their thinking down in what is known as the Wedge Document.  But even if they hadn’t, it’s pretty damn obvious … so it wasn’t a very good dog-whistle term.

I’m pretty sure that there are no serious biologists out there who are sincerely non-theist and also fully paid-up ID supporters.  The closest we have come to that so far is Bradley Monton, a philosopher who is an avowed atheist but who argues that ID should be taken seriously.  By taking it seriously, though, Monton means something along these lines:

I conclude that ID should not be dismissed on the grounds that it is unscientific; ID should be dismissed on the grounds that the empirical evidence for its claims just isn’t there.

In other words, so far as I can tell, he argues that we can (and should) investigate the knowledge claims of ID using science.  I think I can agree with that.  So long as those involved in the endeavour were intellectually honest, it’d not be a problem.

The problem, though, is that those who are tightly wedded to ID aren’t intellectually honest: they aren’t interested in any scientific refutation of their claims, and those who speak for them simply ignore the fact that their claims have been refuted (here’s a quite recent example of the eye being raised as an example of irreducible complexity).  As a consequence, ID has no place in the science classroom – it might have a place in the philosophy classroom, or the critical thinking classroom.  Since they are already teaching nonsense in the (shudder) theology classroom, I guess they could teach it there as well, so long as it doesn’t get confused with proper science.

To the best of my knowledge there are no biologists who take ID seriously who are not already predisposed to a theistic world view – people such as Michael Behe who are trying to use biology to “prove” the god that they believe in for other reasons.  Serious biologists wouldn’t touch ID with a barge pole.

So, I was thinking that there may be another similar dog-whistle term, perhaps a more successful term, being used in another area of science but not quite so obviously trying to say “god did it” to outsiders as “intelligent design” does.

What about “fine tuning”?  It could be argued that fine tuning of a sort was raised by Thomas Aquinas back in the 13th century, so it’s not a hugely new thing.  However, it does seem to have taken off in the past few decades.  (Intelligent design’s rise in popularity has actually been more recent.)

Could “fine tuning” have become a dog-whistle term?  I guess it depends.

How do scientists come across examples of fine tuning?  And what do they do when they find one?  I would suggest that the answers to these two questions will indicate whether the scientist in question is using the term descriptively or as a dog-whistle.

I’d suggest that proper scientists come across fine tuning as a by-product of other research.  An example would be dark energy.  Observations of the universe lead to a modification of existing theory, introducing dark energy, which acts against gravity, slightly speeding up the expansion of the universe.  Physicists ponder just how much dark energy would be required and conclude that the answer is “not very much”.  In fact, we need very, very little, and if we had slightly more, then the universe would have expanded too fast for stars to form, the consequence of which is that life as we know it could not have developed.

Other scientists, however, would be looking for “fine tuning” in much the same way as Behe and his fellow intelligent designers (IDers) look for irreducible complexity.  They aren’t just doing their jobs and stumbling on an example of fine tuning; they are going out of their way to find potential candidates for fine tuning.  A recent article was about such an example (and this article is actually an attempt to explain why I have what is closely bordering on a fixation with Luke Barnes).  Luke Barnes is searching for fine tuning, and this is why I put him in the category of “other scientists”.  Perhaps he has found some, so the next question is: what do scientists do when they find examples of fine tuning?

An intellectually honest scientist will, when discovering something that is unexplained, say something along the lines of “hm, this is interesting, I can’t explain this”.  An intellectually dishonest scientist, particularly one who is a closet theist, will hand over the mystery to people like William Lane Craig and say (nudge nudge): “Here’s another example of fine tuning that cannot be explained”.  For example (in the words of Craig):

This was a summer seminar organized by Dean Zimmerman, a very prominent Christian philosopher at Rutgers University, and Michael Rota who is a professor at St. Thomas University in St. Paul, and sponsored by the John Templeton Foundation. The Templeton Foundation paid for graduate students in philosophy and junior faculty who already have their doctorates to come and take the summer seminar at St. Thomas University, then they brought in some faculty to teach it. I was merely one of about four professors that was teaching this seminar on the subject of the fine-tuning argument for God’s existence – the fine-tuning of the universe. Joining me in teaching this were Luke Barnes (who is Professor of Astronomy at the University of Sydney in Australia). We had met Barnes when we were on our Australian speaking tour two years ago. He had introduced himself to me when I was at Sydney University and shared with me one of his papers on fine-tuning. I actually quote him in the debate with Sean Carroll on the fine-tuning issue. So it was great to see Luke again and have his positive scientific input. Then with me was the philosopher Neil Manson who is more skeptical of the argument from fine-tuning. Then David Manley who is a prominent metaphysician who also shared some reservations about the argument. So there were people on opposite sides of this issue, and so we had a very good exchange.

(The tradition at those summer seminars seems to be to have two people arguing for fine tuning [nudge nudge] and two who are mildly sceptical.  Barnes wasn’t on the sceptical side; he was side by side with Craig.)

To the extent that fine tuning is a real thing, and not just a dog-whistle term meaning “god did it”, it presents an interesting mystery – a puzzle to be solved.  Sadly, a web-search for “fine tuning” produces results which are tipped heavily towards the theistic version, not the puzzle to be solved version.

I don’t think we should give up hope of rationality though.  The more reasonable among us can take the same basic approach as the biologists have with “intelligent design”.  The IDers claim irreducible complexity, and the biologists show how the complexity is not irreducible.  The FTers claim (inexplicable) fine tuning, and untainted physicists show how the fine tuning is not inexplicable.  That was my ham-fisted intent in Is Luke Barnes Even Trying Anymore – Barnes claimed that α_G is unnaturally small, making α/α_G unnaturally large, and I explained how its value is not unnatural at all but is instead expected.

---

Note that I did write a comment on Barnes’ blog to this effect, but since the last comment never made it through his filter (and had to be reproduced here), I’d not be surprised to see the same thing happen with this one.  Just in case:

Hi Luke,

I've made comment on your paper here - http://neophilosophical.blogspot.com/2015/12/is-luke-barnes-even-trying-anymore.html - but in brief:

Your argument, perhaps taken from Martin Rees, is that α_G is unnaturally small, making α/α_G unnaturally large.  However, this argument resolves down to a question of (in your definition of α_G) the relative values of the proton mass and the Planck mass.  Thus it's fundamentally a comment on the fact that the Planck mass is rather large, much much larger than the proton mass (and also the electron mass, which is more commonly used to produce α_G).

Therefore, what you have overlooked is what the Planck mass is, because what the Planck mass is explains why the Planck mass is so (relatively) huge.  Unlike the Planck length and the Planck time, which both appear to be close to if not beyond the limits of observational measurement, the Planck mass is the mass of a black hole with a Schwarzschild radius in the order of a Planck length (one for which the Compton wavelength and the Schwarzschild radius are of the same order).

If the Planck mass were supposed to have some relation to a quantum mass (ie being close to if not beyond the limits of observational measurement), then you'd have an argument for fine-tuning, but it's not and you don't.

And in any event, 9 orders of magnitude (between 10^-30 and 10^-39) is not a fine-tuned margin.  And that's only if you use the proton mass variant of α_G.  If you use the more common definition of the gravitational coupling constant, with the electron mass, there are 15 orders of magnitude (between 10^-30 and 10^-45).

I note that you didn't post my last comment (against a different blog post).  I'm giving you the benefit of the doubt and assuming that this was an error or oversight on your part.  I posted that comment on my blog, here - http://neophilosophical.blogspot.com/2015/09/another-open-letter-to-luke-barnes.html.


-neopolitan-

Sunday, 27 December 2015

Is Luke Barnes Even Trying Anymore?

I have, more than once, accused the cosmologist Luke Barnes of being a closet theist or, at the very least, someone who knowingly provides support to apologists such as William Lane Craig.

He doesn’t go so far as to deny it, but instead protests that his beliefs aren't relevant; more recently he has admitted to being “more than a deist” which, in the context of other comments, makes him a theist.

So, my question here is: is Barnes’ almost certainly christian theism relevant?  It wouldn’t be if Barnes were a total nobody, and I would suggest that Barnes would be a total nobody if it weren’t for the fact that he is doing so much wonderful work for various liars for christ, oops, I mean apologists.  And Barnes’ work appears, to a certain extent, to be informed by his theistic leanings.  A quite recent example is his latest paper, Binding the Diproton in Stars: Anthropic Limits on the Strength of Gravity, which he announced on his blog, just in time for christmas.

That doesn’t sound very crazy religious though, does it? Well, it does when you break it apart and when you look at just what Barnes is trying to do.

What are “diprotons” and the “anthropic limits” on them?  Diprotons are, as Barnes explains, two protons bound together.  He postulates that if an evil genius were to find a way to make the binding of protons into diprotons possible and turned this power on our sun, this would be catastrophic.  The sun would burn through all its fuel and go out in less than a second.  So, if we want life in our universe, it’s a good thing that protons don’t bind to each other.  What a relief that that didn’t happen, right?

In effect, what Barnes is arguing is that, if gravity were stronger (or, rather, less weak) then diprotons could form, and we’d have what he terms a “diproton disaster”.  As Barnes puts it:
Regardless of the strength of electromagnetism, all stars burn out in mere millions of years unless the gravitational coupling constant is extremely small, α_G ≲ 10^-30
OK, I can accept that.  I do notice, however, that gravity isn’t too strong.

The mediating constant (the one that is the crux of Barnes' paper) is this α_G, the gravitational coupling constant, which Barnes gives as the square of the ratio of the mass of a proton to the Planck mass (ie α_G = (m_p/m_Planck)^2) but which is given elsewhere as the square of the ratio of the mass of an electron to the Planck mass (ie α_G = (m_e/m_Planck)^2).

Note that the other constant that Barnes could have talked about with reference to gravity is known as the gravitational constant.  It might seem curious that this constant didn’t come up for review, but the value of this constant is precisely 1 when expressed in natural units (ie Planck units, such that G = (ħ·l_Planck)/(m_Planck^2·t_Planck), where ħ (known as h-bar) is the reduced Planck constant which has, in Planck units, a value of precisely 1).  There's no apologetic wriggle room with the gravitational constant.

It should be noted that the gravitational coupling constant as defined by Barnes is, effectively, a measure of the mass of a proton.  And, thus, it should be further noted that what Barnes, and his fellow seekers after fine tuning, cannot legitimately do is claim both fine tuning of the gravitational coupling constant (as defined by Barnes) and fine tuning of the mass of the proton, because they are the same thing.  In other words, it would be quite questionable to claim the masses of component elements of a proton (the up and down quarks, so m_up and m_down) as a separate example of fine tuning.  But guess what!  This is precisely what Barnes does in the discussion section of his paper.

Naughty, naughty boy.

But leaving that aside, one might be interested in knowing just how finely tuned the gravitational coupling constant is.  Barnes does go into this: he provides some formulae together with some funky graphs and arrives at the conclusion that, to be life-permitting, the gravitational coupling constant must be less than or of the order of 10^-30, a value that Barnes characterises as “unnaturally small” (note that this might have originally been the opinion of Martin Rees).  The point Barnes is making with this description is that the value 10^-30 is much less than 1 – because the mass of the proton (and even more so the electron) is much less than the Planck mass.  This is true.  The Planck mass isn’t that small at all, weighing in at about 2.176×10^-8 kg.  This is equivalent to the mass of five human ova, 20,000 normal human cells, 1/4 of a human eyelash, 1/10 of the dry weight of a fruit fly or approximately one flea egg.  Compared to a proton, that's ... um ... massive.

So, Barnes’ argument sort of resolves down to a question as to why the Planck mass is so huge, especially given that the Planck length, Planck time and Planck charge are all so tiny (at about 1.616×10^-35 m, 5.391×10^-44 s and 1.875×10^-18 C, respectively).
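Those magnitudes are easy to check, since the Planck units all fall straight out of ħ, G and c.  Here's a quick sketch of my own, in Python, using the standard SI values of the constants:

```python
import math

# Standard SI values of the fundamental constants
hbar = 1.054571817e-34  # reduced Planck constant (J·s)
G = 6.67430e-11         # gravitational constant (m^3 kg^-1 s^-2)
c = 2.99792458e8        # speed of light (m/s)

l_planck = math.sqrt(hbar * G / c**3)  # Planck length (m)
t_planck = l_planck / c                # Planck time (s)
m_planck = math.sqrt(hbar * c / G)     # Planck mass (kg)

print(l_planck)  # ~1.616e-35 m
print(t_planck)  # ~5.391e-44 s
print(m_planck)  # ~2.176e-8 kg
```

Three tiny inputs, one not-so-tiny output: the mass comes out huge (relative to particle masses) precisely because G sits in the denominator under the square root.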

Well the answer to that can be found by considering what the Planck mass actually is.  The Planck length and the Planck time can both be considered as the smallest meaningful divisions of their dimensions (which might not actually be the case, but we can’t make sense of smaller divisions – Wikipedia merely states “According to the generalized uncertainty principle (a concept from speculative models of quantum gravity), the Planck length is, in principle, within a factor of 10, the shortest measurable length – and no theoretically known improvement in measurement instruments could change that” but there is no citation given to support this statement).

Note also that there is, in both the Planck length and the Planck time entries, a statement to the effect that “there is no reason to believe that exactly one Planck unit has any special physical significance”.  Again there is no citation to support this statement, and I do have my doubts with regard to it.  Particularly when there is no such statement in the Planck mass entry.

The thing is that the Planck length and the Planck time are linked by the speed of light, c = l_Planck/t_Planck (and they are basically the same thing anyway, given that time and space are, to an extent, interchangeable according to general relativity - making them alternative measures of spacetime).  And the Planck length and the Planck mass are linked via black holes.  The Planck mass is the mass of a black hole for which the Schwarzschild radius is two Planck lengths.

r_S = 2GM/c^2

r_S(m_Planck) = 2G·m_Planck/c^2

r_S(m_Planck) = 2G·√(ħc/G)/c^2

r_S(m_Planck) = 2·√(ħG/c^3) = 2·l_Planck

The upshot of this is that if the Planck mass were lower, then black holes would be forming all over the place and the universe as we know it would not exist.  It makes sense for the Planck mass to be (relatively) large in comparison to the mass of a proton; it’s not “unnatural” at all.
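For what it's worth, the algebra above also checks out numerically.  A quick sketch of my own, again using the standard SI values:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant (J·s)
G = 6.67430e-11         # gravitational constant (m^3 kg^-1 s^-2)
c = 2.99792458e8        # speed of light (m/s)

l_planck = math.sqrt(hbar * G / c**3)  # Planck length
m_planck = math.sqrt(hbar * c / G)     # Planck mass

# Schwarzschild radius of a Planck-mass black hole
r_s = 2 * G * m_planck / c**2

print(r_s / l_planck)  # ~2.0, ie r_S(m_Planck) = 2·l_Planck
```

The ratio comes out at 2 to within floating-point error, as the derivation says it must.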

But we still don’t know how “fine-tuned” the gravitational coupling constant is (unless you, dear reader, have scooted away and read Barnes’ article).  Remember, he arrived at the conclusion that α_G ≲ 10^-30.  And what is the actual value?  According to Barnes … α_G ≈ 5.9×10^-39.  This is a value that seems to have been taken from Martin Rees’ work, possibly via William Lane Craig since it’s quoted to only one significant figure, which is a bit odd for such a figure in a scientific paper.  Wikipedia has Rees in 2000 giving the figure as 5.906×10^-39, during some discussion on the value of N.  N is one of Rees’ six numbers and is given by “the strength of the electrical forces that hold atoms together, divided by the force of gravity between them” – or in other words, the fine structure constant [hereafter to be known as the James Bond Number for reasons which might shortly be made plain] divided by the gravitational coupling constant, N = α/α_G = 0.007/5.906×10^-39 ≈ 1×10^36, which hopefully we can all agree is a Very Big Number given that there are estimated to be only about 10^24 stars in the universe.  We will never know where Barnes got his α_G number from; he just throws it out there in the discussion without discussing how it was arrived at.  Again, that’s a bit naughty.
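The numbers do at least hang together: recomputing α_G directly from the SI constants reproduces both Rees' 5.906×10^-39 (proton variant) and the ~1.75×10^-45 electron variant, with N landing at around 10^36.  A quick sketch of my own:

```python
import math

# Standard SI values
G = 6.67430e-11         # gravitational constant (m^3 kg^-1 s^-2)
hbar = 1.054571817e-34  # reduced Planck constant (J·s)
c = 2.99792458e8        # speed of light (m/s)
m_p = 1.67262192e-27    # proton mass (kg)
m_e = 9.1093837e-31     # electron mass (kg)

m_planck = math.sqrt(hbar * c / G)

alpha_g_proton = (m_p / m_planck)**2    # Barnes'/Rees' variant: ~5.906e-39
alpha_g_electron = (m_e / m_planck)**2  # the more common variant: ~1.75e-45

alpha = 0.0072973525693     # fine structure constant
N = alpha / alpha_g_proton  # Rees' N: ~1.2e36, ie of order 10^36

print(alpha_g_proton, alpha_g_electron, N)
```

(Using the actual α rather than the rounded 0.007 nudges N up to about 1.2×10^36, but the order of magnitude is the same.)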

Anyways … it’s a value that could be (about) a billion times larger without leading to a “diproton disaster” and could (at least notionally) be as small as you like.  That could hardly be considered “fine-tuned” if you ask me.

And that’s only when you use Barnes’ definition of α_G and Barnes’ figure for α_G.  Otherwise, you have values of 1.7518×10^-45 or 3.217×10^-42.  Clearly it doesn’t matter much what value this constant has …

Oh, and who funded Barnes’ paper?  Yep.  Templeton.

---

When doing a little research for this article, I looked at some of Barnes’ older fine tuning posts, given that he himself linked to some of them in the post announcing his new paper.  One was particularly revealing.  It was an attack on PZ Myers, a person I have never warmed to although I can’t quite put my finger on precisely why.  The details aren’t terribly important, because what I found revealing was this comment right at the end:

Not content with merely demonstrating his ignorance, Myers proceeds to parade it as if it were a counterargument, allowing him to dismiss some of the finest physicists, astronomers, cosmologists and biologists of our time as “self-delusional”.

This isn’t controversial, I suppose, but under “physicists, astronomers, cosmologists and biologists” were links to specific examples: Paul Davies (Templeton winner), Martin Rees (Templeton winner), John Barrow (Templeton winner) and Simon Conway Morris.  The last name is not yet that of a Templeton winner, but he is “a Christian, … most popularly known for his theistic views of biological evolution”, so he is quite likely to be a future nominee if not winner.  It’s also interesting to note this from his Wikipedia entry:

He is now involved on a major project to investigate both the scientific ramifications of convergence and also to establish a website (www.mapoflife.org) that aims to provide an easily accessible introduction to the thousands of known examples of convergence. This work is funded by the John Templeton Foundation.


What were the chances?

Monday, 21 December 2015

Luke Barnes Decloaks a Bit More

In Luke Barnes (Partially) Decloaks, I discussed how Barnes has provided hints that he is a theist.   What he hadn’t previously done is provide anything conclusive on whether he is an apologist, or intentionally giving succour to apologists.  But in a recent, very short blog post, he has now done so.

When I say "very short", I mean really, really short.  Here it is (my emphasis):

very interesting essay from Alex Vilenkin on whether the universe has a beginning and what this implies. If you want my opinion, "nothing" does not equal “physical system with zero energy”.

This was followed by a list of related articles written by Barnes (again, my emphasis)

When you search Vilenkin’s essay, you will find that he mentioned the word "nothing" eleven times, once in the section "Eternal Inflation" and ten times in the section "God's proof" (two are in footnote 18 which relates to this section).

So, of all that Vilenkin had to write, Barnes only objected to the section that contains an explicit defeater to William Lane Craig's cosmological argument from first cause:

Modern physics can describe the emergence of the universe as a physical process that does not require a cause.

Noting that this section leads inexorably to Vilenkin's conclusion in "An Unaddressable Mystery":

When physicists or theologians ask me about the BGV theorem, I am happy to oblige.  But my own view is that the theorem does not tell us anything about the existence of God.

And the objection that he raises?  Almost precisely the objection raised by William Lane Craig when discussing Lawrence Krauss' A Universe from Nothing: Why There Is Something Rather Than Nothing:

Now that is absolutely fundamental to this claim by Lawrence Krauss. He ignores the philosophical distinctions between something and nothing, and says science is going to define these terms; it's going to tell us what nothing is. And what he winds up doing is not using the word nothing as a term of universal negation to mean not anything, he just uses the word nothing as a label for different physical states of affairs, like the quantum vacuum, which is empty space filled with vacuum energy, which is clearly not nothing as any philosopher would tell you. It is something. It has properties. It is a physical reality.

So what we have is a couple of physicists on one side, explaining how something can actually come from "nothing" and, on the other side, William Lane Craig and Luke Barnes quibbling about the definition of "nothing".  I'd actually add a few more physicists to the list, for instance Barnes' late nemesis Victor Stenger:

Suppose we remove all the particles and any possible non-particulate energy from some unbounded region of space. Then we have no mass, no energy, or any other physical property. This includes space and time, if you accept that these are relational properties that depend on the presence of matter to be meaningful.

While we can never produce this physical nothing in practice, we have the theoretical tools to describe a system with no particles.


… many simple systems are unstable, that is, have limited lifetimes as they undergo spontaneous phase transitions to more complex structures of lower energy. Since “nothing” is as simple as it gets, we would not expect it to be completely stable. In some models of the origin of the universe, the vacuum undergoes a spontaneous phase transition to something more complicated, like a universe containing matter. The transition nothing-to-something is a natural one, not requiring any external agent.

Note that the physicists have (at the very least) theoretical physics on their side, equations with data taken from observation and experiment to support their case.  William Lane Craig and his ilk, despite the support afforded to them by Barnes, have nothing more than pseudo-sophisticated wordplay and equivocation over the term "nothing" (together with hidden equivocation over the term "everything" - which in their argument means "everything with the exception of god").

---

I'm pretty sure that I've made this point before, but it's worth making a few times.  In the article that Barnes addresses so briefly, Vilenkin provides a simple summary of the Borde-Guth-Vilenkin (BGV) theorem, the same theorem that William Lane Craig calls on all the time (my emphasis):

Loosely speaking, our theorem states that if the universe is, on average, expanding, then its history cannot be indefinitely continued into the past. More precisely, if the average expansion rate is positive along a given world line, or geodesic, then this geodesic must terminate after a finite amount of time.


Sure, the universe is expanding now, and there are indications that that expansion is accelerating now.  But what about on average across the whole history of the universe?  It's possible that this expansion rate has been positive throughout, but it's also possible that it hasn't, or that another assumption of the BGV doesn't hold (note the comment "their model avoids singularities because of a key difference between classical geodesics and Bohmian trajectories" - the BGV relies on classical geodesics).  William Lane Craig never addresses these issues; he takes it for granted that the universe has always been expanding and, to extend him some credit, the Ali-Das model was only published relatively recently (but, retracting the credit, I don't actually expect Craig to ever acknowledge the difficulties that this model presents to his argument).

Sunday, 20 December 2015

There's More than One Way to Slice a Pizza

In Bertrand the Shape-Shifter's Natural But Not So Obvious Pizza, I wrote about how we could use a set of (ALL) chords in a meaningful and natural way: by slicing up a circular pizza into arbitrarily narrow slivers and determining their average length, which would then give us the width of a reshaped pizza of length 2R with the same area as the circular pizza.

Mathematician responded by saying that we could envisage a physician called Bertrand who lives on a circular atoll and uses his boat to attend to emergencies which occur at random locations on the atoll.  Each trip is notionally a chord (eliminating wind effects, any strange current effects and curvature of the earth) and this Bertrand can use data from trips over a sufficiently long period of time to arrive at an average trip length (we’d also have to assume that he would stubbornly use his boat even when walking would be more appropriate).

This is an intuitively appealing scenario.  The problem, to my mind, is that while we most certainly do get the average trip length I am not convinced that we get the average length of a chord within the circle defined by the atoll.  For example, as I pointed out to Mathematician, we could conceptually slice up the disc of water surface surrounded by the atoll into slivers/chords and arrive at the area of that disc using a similar process as with the pizza reshaping scenario (longest sliver/chord length x average sliver/chord length).  And the result would not be the same as the physician’s average trip length.

We could do something similar with the pizza slicing.  Imagine that Bertrand the pizzeria owner had a few padawans and a ridiculously large number of circular pizzas that he is willing to devote to the resizing research effort.

Each padawan chooses a different method to slice the pizzas to obtain representative samples of chords.  Because they are not as skilled as Bertrand himself, they must split each pizza with one cut and then use the length of the cut as a chord length.  Then they add up the lengths, divide by the number of pizzas and, voila, average length.

The first one thinks “a chord is the intersection of a line and a disc, the pizza represents the disc and I will therefore find a way to randomly intersect my pizzas with lines”.  He decides that what he will do is create a surface with a large blade which can be randomly set to one of 3,600 million orientations (each with a likelihood of 1/3,600,000,000) - think 3,600 different angles and 1 million parallel lines for each angle.  He then spins each pizza into position, randomises the position of the blade, engages the blade and measures the slice.  Eventually he will arrive at an average length of πR/2.

The second one thinks “all chords of length greater than zero cross the rim of the disc in two locations, so I can just pick two random locations and slice between them”.  She’s not particularly well trained, so she doesn’t see any problem with using vermin in her method and so decides to use her pet mice.  First she spins each pizza into position and then releases a mouse which then wanders over to the pizza, then onto it in a cartoonish search for cheese and eventually off the pizza again (at a random location).  Then she brings out a samurai sword, a la Kill Bill, and slices the pizza between the points at which the mouse mounted and dismounted the pizza.  Eventually, she will obtain an average distance between mount points and dismount points – each of which is a chord.  However, this average distance will be 4R/π.

The third one thinks “all chords pass through points within a disc and each point on the disc has a shortest distance between intersections with the circumference, making that point the midpoint of the resultant chord, so I only need to pick points and produce the shortest slice through each point”.  Being even less aware of hygiene considerations, this padawan brings a pet fly which is dipped in paint and allowed to fly around until it lands randomly on a pizza, leaving a dab of paint.  The pizza is then sliced to make the shortest chord through that point and the slice is measured.  Eventually, this poor excuse for a human being will arrive at an average length for a mid-point generated chord of something close to 4R/3.  Note that this is pure observation on my part; I ran a simulation and the figure I got seems to hover around 1.33R after 4,000 iterations.  I don’t have an actual equation to explain the value - it could just as well be 21R/5π or 17πR/40 - in any event, the value lies between 4R/π and πR/2, but is closer to the former.
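All three padawans are easy to simulate.  Here's a quick Monte Carlo sketch of my own, in Python (the 4R/3 comparison value for the third method is simply what I observed hovering around 1.33 above; I make no claim about an exact formula here):

```python
import math
import random

random.seed(1)
R = 1.0
N = 200_000

# Method 1: random orientation plus a uniform perpendicular offset from the centre
# (by symmetry only the offset d matters; the chord length is 2*sqrt(R^2 - d^2))
m1 = sum(2 * math.sqrt(R**2 - random.uniform(0, R)**2) for _ in range(N)) / N

# Method 2: chord between two points chosen uniformly on the rim (the mouse)
def endpoint_chord():
    a = random.uniform(0, 2 * math.pi)
    b = random.uniform(0, 2 * math.pi)
    return 2 * R * abs(math.sin((a - b) / 2))

m2 = sum(endpoint_chord() for _ in range(N)) / N

# Method 3: shortest chord through a point chosen uniformly over the disc (the fly)
def midpoint_chord():
    r = R * math.sqrt(random.random())  # sqrt makes the point uniform by area
    return 2 * math.sqrt(R**2 - r**2)

m3 = sum(midpoint_chord() for _ in range(N)) / N

print(m1)  # ~1.571, ie pi*R/2
print(m2)  # ~1.273, ie 4R/pi
print(m3)  # ~1.333
```

With 200,000 pizzas per padawan, the three averages settle within a few thousandths of πR/2, 4R/π and 1.33R respectively - consistent with the figures above.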

My point here is that we can use all three of the standard methods to arrive at chords, and thus average chord lengths, but only one method produces the same result as slicing the entirety of one single circular pizza into arbitrarily thin parallel slivers.  Hopefully the reader will grant that a single circular pizza sliced into arbitrarily thin parallel slivers is representative of all possible orientations of the arbitrarily thin parallel slivers – if not, consider an arbitrarily large number of circular pizzas which are sliced the same way, but with arbitrarily small increments of rotation.  The average length thus determined will not be different from that arrived at via the slivering on one circular pizza.

Similarly, it is hoped that this method can be understood as obtaining a representative sample of all intersections of a disc (the circular pizza) with all lines that pass through that disc.

I have absolutely no problem with the average lengths arrived at via the mount and dismount points or via the midpoints (although I’d be loath to eat any of the pizzas), but I cannot see them as representative of the average length of all chords.  They are merely the average length between the points at which the mouse mounted and dismounted the pizzas and the average length of the shortest slices passing through random flyspecks which, to me, seem to be different things.

And, in my opinion, Bertrand should sack two of his apprentices with immediate effect.

---


Hopefully, it can be seen that the mouse scenario is effectively the same as the physician scenario – it just seems a lot less impressive, because it’s a mouse scampering around a pizza rather than a servant of the people heroically heading off to save a life.

Thursday, 17 December 2015

Luke Barnes (Partially) Decloaks

I have, since March 2013, written seven articles that refer to Luke Barnes, to a greater or lesser extent.  Barnes can blame himself for many of these references because he responded to the first article, which was only tangentially about him (so long as Barnes is not actually the person I was responding to, namely, faithwithreason).

I wrote in that article: "So, really, the universe could be 'finely tuned' in the sense that if it were different then life could not have arisen and this would provide no proof of Barnes' god."

In the comments, Barnes denied being a creationist and denied being against evolution.  He said that he preferred to keep his theological leanings to himself.  This is all well and good, and I can fully go along with people keeping their beliefs (and non-beliefs) to themselves so long as their work and their opinions are not informed by underlying theological concepts instead of - or, perhaps more charitably, as well as - more rational concepts … and this is particularly the case if they are scientists.

And this is the problem.  Barnes is selective in his attacks on people engaging with the issue of "fine-tuning".  He took a physicist, Victor Stenger, to task for his thoughtful and detailed discussion, while giving a big thumbs-up to William Lane Craig for his nonsense ("well worth a read").

So, I've been trying ever since (from time to time) to work out if Barnes is a cloaked theist.

I think we have our answer, in part due to the efforts of Arkenaten.

For some bizarre reason, Barnes took it upon himself to attack science royalty in the form of Neil deGrasse Tyson - twice!  And the precise topic on which Barnes attacked Tyson was god, specifically Tyson's thesis that a belief in god can put the brakes on scientific enquiry and that "intelligent" design people should "get out of the science room".  Note that Barnes didn't really attack the thesis; instead he devoted two blog articles to attacking the example that Tyson used, namely the fact that Newton didn't develop perturbation theory despite it being right up his alley, and that it took another 200 years before Laplace arrived on the scene and did.

In the first article, Barnes arrives at the conclusion that "scientists suck at history".  This is probably true and probably even true for the reasons that Barnes gives - that scientists read history like everyone else, reading the bits that appeal to them, interpreting historical events in modern terms and picking heroes based on their similarity to the reader.  Barnes goes so far as to say that Tyson raises Laplace to the role of hero because, like Tyson, Laplace was apparently an agnostic.

Interesting, huh?  When challenged by Arkenaten, "Are you upset by Tyson’s take on this because you are Christian, or simply a deist", Barnes responded with "A Newtonian".

Following Barnes' own logic, Barnes is thus a theist, an heretical christian and occultist, like the scientist Barnes venerates - Newton.  (I don't actually think that Barnes is an occultist and as to being heretical, it seems that all the various subdivisions of christianity disagree with each other on various points, so they are perhaps all heretical to one extent or another, depending on who you ask.)

But even more telling, Barnes later responded to Arkenaten that he (Barnes) is "at least a deist".  Ok, now we are getting somewhere.  I don't have any real problems with deists, so long as they are real deists.  Functionally they are indistinguishable from atheists and non-theists with the exception of their answer to "how did the universe come about?"  We'll say "we don't actually know, but there are some tentative hypotheses that don't yet amount to fully fledged theories" (or in short "we don't know") while the deist will say "a god set it all in motion, then buggered off never to be seen or heard from again, leaving the universe to operate purely on the basis of the physics that it (the god) had established, but as to the nature of that god, we know nothing other than it created the universe" (or in short "we don't know").

However, a classical deist does not believe in a personal creator.  (Not even the modern deists do, despite having co-opted the notion of transpersonal relationships.)  This is of interest because, back in the comments to An Open Letter to Luke Barnes, Barnes responded to something I wrote:

Luke also fails, in his paper, to clearly indicate that the claims of apologists with respect to fine-tuning are overblown, thus allowing those apologists to interpret Luke’s work as supporting "strong fine-tuning".  In fact, he follows the standard apologist approach by presenting two "tidy explanations" (which he graciously admits are neither mutually exclusive nor exhaustive), one which is basically the anthropic principle and another which relies on a "transcendent, personal creator".  Note, he doesn’t say a "divine being" or "a god", Luke uses a term which could have been lifted directly from William Lane Craig or the Institute for Creation Research or Lee Strobel’s "The Case for a Creator".

writing:

""transcendent, personal creator". Note, he doesn’t say a "divine being" or "a god""   They mean the same thing. You really should learn to read the lines before you try to read between them.

If Barnes truly believes that "a god" = "a transcendent, personal creator" then he's not a deist.  To believe in a personal creator, which he's effectively done here, he must be a theist.  He might not define himself as a theist, he might consider himself a deist of some form, but all the signs are pointing to him falling into the generally understood category of theists.  As I have pointed out before, it certainly explains all the quacking.

---



I say "(partially) decloaks" because I still harbour suspicions that Barnes has apologetic leanings and an intention to support people like WLC with his pronouncements on fine-tuning, or more specifically "fine-tuning for intelligent life".

Friday, 11 December 2015

A Brief Pause for Levity

A young mathematician develops an interest in Buddhism and signs up to spend the summer as one of the novices at a mountain temple retreat. Sadly, he quickly begins to annoy the monks - partly with his endless and enthusiastic search for answers to koans, disrupting the education of other novices, but mostly with his attitude when anything vaguely mathematical is discussed.  They decide that an example should be made.

During an early morning lesson, the new novice is summoned to kneel before the master who announces that he is to leave the class for the day to carry out a special task.

"You are to descend the mountain," the master tells the young mathematician, "at the foot of the mount, you will find a stream.  Be at one with the stream.  Once you are at one with the stream, collect N pebbles.  Then bring those pebbles to me."

"Aha!" exclaims the mathematician, "But that task is poorly defined.  What is N?"

The master nods sagely, claps one-handed and responds: "Indeed!  When you have the answer to that question, you will be the master."

Thursday, 10 December 2015

Bertrand the Shape-Shifter's Natural But Not So Obvious Pizza

Imagine that we have the owner of a pizzeria, let's call him Bertrand, who wants to break with tradition.  For centuries the pizzeria that he now owns has made round pizzas (what Americans call "pizza pies" - thus allowing some sense to be made of Dean Martin's "That's Amore": When the moon hits your eye, like a big pizza pie - which always sounded to me like When the moon hits your eye, like a big piece o' pie … who throws around bits of pie, let alone big bits?)

But Bertrand is now heartily sick of circles and he wants to make the transition to rectangular pizzas.  However, he has a minor problem.  His customer base is accustomed to a pizza-base-based pricing scheme - and they don't want their pizzas to shrink (or grow) as a consequence of this shape change.  Bertrand already has a range of boxes which fit his circular pizzas perfectly, so he knows how long his rectangular pizzas will be … all he needs to do is work out how wide they have to be to keep his loyal customers happy.

Here's a graphic to illustrate his conundrum:

 

This is reasonably easy to work out.  The area of the rectangle is 2R times the width (w) while the area of the circle is πR², so we make those areas equal:

w·2R = πR²
w = πR/2

Thus we could say that the "average width" of a circle of length 2R (which is true of all circles of radius R) is πR/2.  All Bertrand needs to do is plug in his values of R (10cm (bambino), 15cm (piccolo), 20cm (medio), 40cm (grandi), 60cm (ridicolo)) and Roberto's his uncle.
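If anyone wants to check the arithmetic for the various sizes, a few lines of Python will do it (the size names and radii are the ones given above; the script itself is merely illustrative):

```python
import math

# Radii in cm, using the size names from the post.
sizes = {"bambino": 10, "piccolo": 15, "medio": 20, "grandi": 40, "ridicolo": 60}

for name, radius in sizes.items():
    length = 2 * radius           # the existing boxes fix the length at 2R
    width = math.pi * radius / 2  # w = πR/2 keeps the area unchanged
    # Sanity check: the rectangle's area equals the original circle's area.
    assert abs(length * width - math.pi * radius ** 2) < 1e-9
    print(f"{name}: {length} cm x {width:.1f} cm")  # e.g. bambino: 20 cm x 15.7 cm
```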

But let's say that Bertrand did not have a mathematician handy and he was casting around for another way to work out the area of his pizza in rectangular form.  How could he do it?

One way would be to use a form of integration.  Being a very precise person, and skilful in the ways of pizza, Bertrand could slice his pizza up into 1mm wide slivers, use those slivers to reassemble the pizza in rectangular form and then measure the resultant rectangle.  This is equivalent to how we find the area under a curve (using Riemann sums).  By arranging the slivers in a 2R·w rectangle, Bertrand is effectively "adding them up".

Essentially, if not practically, Bertrand could do this with infinitesimally narrow slivers and doing so would only make his result more accurate (see Wikipedia's article on Riemann sums, which has animations that show something similar to my arbitrarily large value of N approaching infinity).  The infinitesimally narrow slivers would be equivalent to chords and, as a consequence, the "average" length (or even width) of these chords would be πR/2 - where, by "average" I mean "mean", and this "average" would be the same as the "average" (mean) width of a circle of radius R … but not the "mean width", which means something else.
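The slivering procedure is easy to mimic numerically.  Here is a sketch (in Python; the sliver count is an arbitrary choice of mine) which slices a unit-radius disc into parallel slivers and averages their lengths:

```python
import math

R = 1.0       # unit radius; every length below scales linearly with R
n = 100_000   # number of parallel slivers

# Slice the disc into n parallel slivers at evenly spaced offsets from the
# centre and average their lengths (in the limit, each sliver is a chord).
total = 0.0
for i in range(n):
    x = -R + (i + 0.5) * (2 * R / n)       # offset of sliver i from the centre
    total += 2 * math.sqrt(R * R - x * x)  # chord length at that offset

mean_sliver = total / n
print(mean_sliver)  # approaches πR/2 ≈ 1.5708 as n grows
```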

---

I have to go into an aside here.  Or rather two asides.  Or maybe three (and three asides do not a triangle make … oops, that'd be four asides now).

The existence of the mathematical term "mean width" provides us with an example of how English could, at least occasionally, benefit from more concatenation.  Such concatenation would allow us to clearly distinguish between mean width in a general sense and what would become meanwidth … in much the same way as we can distinguish between Donald Trump's wetback (I think it's the second from the left) and Donald Trump's wet back (fortunately, no image was available).

Even so, I've effectively used the concept of "mean width" without referring to it.  The length of Bertrand's new rectangular pizzas is the mean width of a circle of radius R.  Because I am not a professional mathematician, just a person who uses the more useful aspects of mathematics on a daily basis, I tend to think of three dimensional objects having three features: length, width (or breadth) and height.  Height relates to the object's orientation with respect to gravity, width can be used for both the other dimensions under many circumstances (think of a tower with a rectangular base, we know how high it is, but it seems wrong to think of its longer base as defining its "length").  However, for things which are not particularly high (like rectangular pizzas), it certainly feels like there is a convention such that the longer side gives the length and the shorter side gives the width.

I'm not saying that people who think differently are necessarily wrong, but I would surely be forgiven for thinking of them as being as thick as two short planks.

Another little aside: I tend to overcomplicate this allusion by thinking that someone who is talking about short planks doesn't know how to use planks properly.  Say we have a 10 foot 1x6 plank.  To me that is a plank that is 10 foot long, 6 inches wide and 1 inch thick.  We could make these "short" planks by considering the 1 inch to be its length, but such a plank is at least 6 inches thick.  That's thick for a plank, right, and so is the guy who thinks you can legitimately think of a 10 foot 1x6 plank as being 1 inch long …

I would be tempted to challenge such a person with a tale about a chicken who crossed a road, on one side of which was London while Dover was on the other side - so the obvious answer to the ancient riddle would thus be "to go on holidays".  It just happens to be a fact that, in length, the A2 is ridiculously short (perhaps even less than 10m in places), but it makes up for this by virtue of its incredible width (in the order of 115,000 metres).

Then I'd snort contemptuously and go back to arranging my pencils.

Anyway … I did use the concept of mean width, but I used it as if it were "mean length".  Oops.

---

Bertrand the shape-shifting pizzeria owner has now been able to find out the necessary width of his new rectangular pizzas, so let's leave him behind now and look again at the other Bertrand and his problem with chords.

It has been argued that the problem arises from the fact that there is no single obviously natural method to select (identify or get) chords and (either consequently or on the basis that) there is no single obviously natural probability measure.

I suggest that this mean chord length might be another useful, if not exactly obvious (except perhaps in retrospect), way to arrive at a natural probability measure.  That is, if you arrive at a mean chord length which is not equal to πR/2, then you have a problem.

More specifically, I am suggesting that if a method arrives at a set of chords the mean length of which is not πR/2, then we have discovered that there may be something unnatural or skewed about that set, even if, prior to the discovery, the method appeared to be natural and unbiased.

---

I did do some modelling and proved (to my own satisfaction) that chords selected "at random" using the 1/2 method have a length, on average, of πR/2.

Chords selected using the 1/3 method have a length, on average, of 4R/π while chords selected using the 1/4 method have a length, on average, of what appears to be 4R/3.


Note that these averages were calculated on the same basis - I generated a large number of random chords using each of the methods and then obtained the arithmetic mean of the resultant chords.
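For anyone who wants to replicate the modelling, here is a sketch along the lines of what I did (in Python; the function names, seed and sample sizes are my choices for illustration, not the original code):

```python
import math
import random

def chord_radius_method(R=1.0):
    """The "1/2" method: a random point on a radius, chord perpendicular to it."""
    d = random.uniform(0, R)  # distance of the chord from the centre
    return 2 * math.sqrt(R * R - d * d)

def chord_endpoints_method(R=1.0):
    """The "1/3" method: a chord between two uniformly random points on the circle."""
    a = random.uniform(0, 2 * math.pi)
    b = random.uniform(0, 2 * math.pi)
    return 2 * R * math.sin(abs(a - b) / 2)

def chord_midpoint_method(R=1.0):
    """The "1/4" method: the chord whose midpoint is uniform on the disc."""
    while True:  # rejection sampling for a uniform point on the disc
        x = random.uniform(-R, R)
        y = random.uniform(-R, R)
        if x * x + y * y <= R * R:
            return 2 * math.sqrt(R * R - x * x - y * y)

random.seed(2)
n = 200_000
for label, method in [("1/2", chord_radius_method),
                      ("1/3", chord_endpoints_method),
                      ("1/4", chord_midpoint_method)]:
    print(label, sum(method() for _ in range(n)) / n)
```

On my runs, with R = 1, the three means settle near 1.571 (πR/2), 1.273 (4R/π) and 1.333 respectively.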

Monday, 7 December 2015

My Problems with Mathematician's Circular Argument

In the comments to Rectangular Circles - Yet Another Response to Mathematician, I wrote what I thought (with some hubris) was a great argument, a killer argument:

I get the point that you are making here (or at least I think I do). It's why I've talked about the "set of ALL chords".

When you say (paraphrased) "there could be far more chords with c in [R/2,R] than in [0,R/2]" you are, I presume, assuming a "proper" mathematical circle/disc - we are talking about Euclidean space and not talking about curved space, or anything tricky like that. If so, I'd have to ask, on what basis, other than your selection process for problems like this, can you suggest that we might have more possible chords (and thus more possible lines defined by extending those chords out to infinity) passing through the interval [R/2,R] than any other interval of the same length? The circle/disc under consideration is essentially undefined as far as location, size and rotation go, so we should (reasonably) be able to change the locus and not have our answer change on us - but what you are suggesting is that if we shift the locus up by R, and rotate the circle/disc by π, then we'll change the number of lines passing through the intervals [0,R/2] and [R/2,R]. Ditto if we expand our circle/disc by a factor of 2 while retaining the locus at the notional (0,0).

We could even have two overlapping circles/discs, both of radius R, one with a locus at (0,0), the other with a locus at (0,R). This would mean that you'd simultaneously have more lines passing through [R/2,R] (as defined by chords in the first circle/disc) and more lines passing through [0,R/2] (as defined by chords in the second).

This seems odd to me. Does it not seem odd to you?

Mathematician replied (with my current responses interspersed):

> "set of ALL chords"

Wow, maybe I understand what you mean, but it would be odd.  We have a given circle, right?  When you say the "set of ALL chords", are you including the chords that are NOT inside the given circle (but inside another circle somewhere else ...)?

Was that your point all along for repeating "ALL chords" all the time? It would make sense with the rest of the argument:

I first talked about "ALL chords" in a response to a comment on Triangular Circles.  In Mea Culpa, I put some effort in to explain what I meant by "ALL chords" – there is hopefully no indication whatsoever that I had any thought about considering all chords in all circles, and thus including chords that are not in the circle being considered.  No, I meant "ALL chords" in the circle being considered.

My apologies for not making that sufficiently clear.

Sadly, the banks rarely have this sort of confusion, so when I go and tell them that I want to take out "all the money", they don’t pop out the back and give me every single dollar from everyone’s account … they just give me the money that was in my account.

> on what basis can you suggest that we might have more possible chords (and thus more possible lines defined by extending those chords out to infinity) passing through the interval [R/2,R] than any other interval of the same length?

Since the beginning, you are thinking of chords as the intersection of a straight line with the disc. That's a great characterization and a good way to get chords. (but not the only one, as we both know)

So if I'm not mistaken, for your point of view, there is an existing set of all straight lines on the entire plane (like an infinite net), and you are just taking the intersection of this existing set of straight lines with a given disc. And you say that if you move the disc around, it will not cross the same straight lines, but the answer to Bertrand question should remain the same. Am I correct to assume that this is more or less your reasoning?

With this interpretation you are absolutely correct and agree with Jaynes argument. This is a mathematically correct argument.

At least I have that right!

But that's not the only natural point of view on this problem.

See, I'm taking another characterization for chords. For me a chord is a segment between two points on the circle. So there is nothing "outside" my circle. I have no reason to extend a chord out to infinity. The chords are not intersection of lines with the disc, they are segments inside the disc! There is no reason to consider objects (lines) that are not chords on the given circle, don't you think?

So, If I change the locus of the circle in the plane, the chords are moving with it. If I double the size of my circle, then the chords inside it will double their size. If I rotate the circle, the chords will rotate with it. So the final answer to Bertrand question will not change at all.
And with that point of view, it's perfectly natural to have more chords close to the rim than close to the locus. You only think it's odd because you are thinking of an existing "net" of straight lines on the plane, and you place your circle on that existing net of lines. But from my point of view, there is no "net" of existing straight lines.

I note that you write here "The chords are not (the) intersection of lines with the disc".  In one of your comments at Triangular Circles, you wrote (my emphasis):

A circle is a 1-dimensional curve in the plane. A disc is the 2-dimensional surface that is enclosed by the circle. So it's important to make the distinction, whether you talk about the endpoints (which are on the circle) or the midpoint (which is in the disc) of a chord (which is the intersection of a line and a disc).

Perhaps you can see why someone might get confused.

So my problem with this approach, and I’ve hinted pretty strongly at it (that is by writing it), is that as you have said yourself a chord is the intersection of a line and a disc – irrespective of how you select that chord – and a chord is thus also a segment of a line – again irrespective of how you select the chord (it just happens to be the segment of the line that intersects with the disc).

If you could have a circle and nothing else, then I guess I would have to agree with you, at least on the basis of my ignorance with respect to the implications and also in recognition of your authority as a Doctor of Mathematics.  However, immediately after invoking this free-floating circle you refer to something outside the circle – specifically when talking about changing the locus, altering the size and rotating it.  You have thus invoked an external reference plane on which the circle/disc rests.  Would you not agree that it is meaningless to talk about translational, rotational and scalar invariance if the circle is all there is?

It seems, therefore, to me, that you are attempting to have your cake and eat it too.

Perhaps there was something in Bertrand's original phrasing that leads you to think that we can talk about a free-floating circle, rather than one embedded in a plane.  I just don't know.  Unfortunately, my French is that of a rather forgetful schoolboy: I remember bits and pieces, and I get the general gist of a menu or a wine list, but I never got to the stage at which I might have deciphered Bertrand's original text.  If only there were some French-speaking mathematician I could call on to interpret …

This paper, interestingly, explicitly refers to a plane, or rather the plane: "Consider a disk on the plane with an inscribed equilateral triangle."  In the afterword, the author writes (rather pleasingly from my point of view):

In his pointedly titled paper The Well-Posed Problem, (Jaynes) applies this principle to the Paradox of the Chord with success, uniquely identifying the uniform distribution over the distance between the midpoint of the chord and the center of the disk as the correct choice of measure, which he then proceeds to verify experimentally.

The use of the term "correct" is particularly satisfying, although I'm sure that there's some reason why my understanding of the term "correct" is limited and that that lack of understanding will be swiftly addressed.

> We could even have two overlapping circles/discs, both of radius R, one with a locus at (0,0), the other with a locus at (0,R). This would mean that you'd simultaneously have more lines passing through [R/2,R] (as defined by chords in the first circle/disc) and more lines passing through [0,R/2] (as defined by chords in the second).

If I understand correctly, with your point of view, if you have two circles in the position you gave, there is as much chords in the interval [0,R] than in the interval [R,2R], right? With your point of view, the fact that we have one, two or seven circles, do not change the "density" of chords at all, is that correct? This seems odd to me.

With my point of view, each circle has its own set of chords, so the "density" of chords will be higher in the intersection of both disc. So there will be "twice as much" chords in the interval [0,R] than in the interval [R, 2R], because there are two set of distinct chords.

If we permit a single mathematical line to "carry" multiple, overlapping, distinct chords, then I don’t see a major problem with the density of chords changing as you overlay circles (ie you could conceivably have ten identical circles on top of each other, with an infinite number of chords, each of which is replicated ten times).  But in the context that you quoted, I was not talking of chords per se, nor was I really thinking about the chords in multiple circles (except obliquely).  Perhaps I should not have even mentioned overlapping circles at all, because this has only served to confuse.

You seemed to understand what I was saying earlier, when you talked of a "net of straight lines on the plane" (my recollection is that all lines are straight, but perhaps you were just clarifying this for my benefit).

There will be no line that passes through the circle (across the disc) which is not also coincident with a chord, right?  (By this I mean that there will be a segment of the line which directly corresponds with a chord, the ends of which lie on the circumference of the circle/disc - the segment shares the same length, gradient and endpoints as the chord such that one could almost consider them to have the same identity.  When I said "carry" above, I mean to imply that such a chord is, in a sense, lying on the line with which it shares a segment.  It is possible, with multiple circles, for there to be multiple segments of the line, perhaps overlapping, lying on top of each other … perhaps entirely identical and overlapping. This might not be standard phrasing, but I hope that you can understand my intention.)

There is an infinite array of lines that pass through the circle, and a(n infinite) subset of those will intersect with a notional y-axis.  With your preferred method of selecting (or identifying) chords, via their endpoints, it seems to me (and even to you, apparently) that there will be fewer lines that intersect the y-axis at the locus of the circle than at the circumference – this is what does not appear to be justified.  Spiriting the circle out of this universe does not appear to be justified either, but perhaps the problem as originally phrased does demand it and I am simply unaware of that aspect of it.

---

Would it help to move away from circles for a moment?  I understand that there are certain things about circles that might lead you to favour endpoint generation of chords.

I was thinking of a similar problem, but involving a square (squared circles!).  What is the probability that, on selecting at random an s-chord (my term for the equivalent of a chord within a square) the endpoints of which do not share the same side of a square, the selected s-chord is greater than the length of the longer sides of an isosceles triangle which has one of the sides of the square as its base (√5L/2, where L is the length of the sides of the square)?

This does, to me, seem a more complex question to answer.  I'm tempted to say that the endpoint approach will give, once again, a result of 1/3.  We don't seem to have an equivalent of the other two approaches, because s-chords are not as constrained as chords - but perhaps there is a way of thinking about them using one side of the square as a reference.
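Out of curiosity, the s-chord probability can at least be estimated numerically.  Here is a sketch (in Python; the sampling scheme and all the names are mine, and I make no claim that this settles the question analytically):

```python
import math
import random

def perimeter_point(L=1.0):
    """A uniformly random point on the perimeter of an L x L square.
    Returns (side_index, (x, y)) so that same-side pairs can be rejected."""
    t = random.uniform(0.0, 4 * L)
    side = min(int(t // L), 3)  # guard against t landing exactly on 4L
    u = t - side * L
    return side, [(u, 0.0), (L, u), (L - u, L), (0.0, L - u)][side]

def estimate_s_chord_probability(n=200_000, L=1.0):
    """Estimate P(s-chord length > sqrt(5)L/2), endpoints on different sides."""
    threshold = math.sqrt(5) * L / 2
    hits = kept = 0
    while kept < n:
        s1, p = perimeter_point(L)
        s2, q = perimeter_point(L)
        if s1 == s2:
            continue  # endpoints share a side, so this is not an s-chord
        kept += 1
        if math.dist(p, q) > threshold:
            hits += 1
    return hits / kept

random.seed(3)
print(estimate_s_chord_probability())
```

For what it's worth, my runs suggest a value noticeably below 1/3, which hints that the endpoint intuition from the circular case may not carry over directly - though I haven't verified this analytically.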

But that's not what I was thinking about specifically.  What I was thinking about is the density of s-chords.  Would you expect to see them clustered around the edges of the square (or sides), or at the corners (or vertices), and more sparsely represented in the middle of the square?

---

Why did I call your argument "circular"?  I was going to weave that into my response above, but there was no obvious or natural place for it.  So my explanation will have to stand on its own.

My argument is that a circle/disc doesn't exist outside of the plane on which it rests.  This in turn means that the "net" of mathematical lines of which the plane (conceptually) consists cannot be naturally separated from the disc, and hence from the chords (each of which is the intersection of a line and a disc).  If you ignore this, and consider chords as merely what you get if you connect two points on the circumference of a circle, a circle which (in my view) is strangely divorced from mathematical reality, then sure, you can float this circle around, squeeze or stretch it and spin it around.  With such an independent circle there is no need for invariance of any kind (translational, scalar or rotational) - because the circle is the circle is the circle.  If this is not circular …


(Please do note that this is, at least in part, a joke - a poor excuse for a pun.  My more serious efforts lie above this section.)