Sunday 25 November 2018

Half-Integer Spin and the Free Space Constants


Most of the time that you see the Planck constant used, it is used in terms of angular frequency (ω), so it’s not so much the Planck constant as the reduced Planck constant (ħ).  This is because it’s referring to an entire cycle of 360 degrees, or 2π radians.

Most of the time you see the permittivity of free space or the permeability of free space used (also known as the electric constant and magnetic constant respectively), you see that 4π is involved.  It’s as though we could talk about a reduced magnetic constant (µ0-bar) to remove that 4π term.  For example:

µ0 = 2α.h/(e².c) = 4π.α.ħ/(e².c) <=> µ0-bar = α.ħ/(e².c)

Similarly, we could have a naturalised version of the electric constant (ε0), which also almost always has a 4π involved:

ε0 = e²/(2α.h.c) = e²/(4π.α.ħ.c) <=> ε0-bar = e²/(α.ħ.c)
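As a quick sanity check, both of these identities (for µ0 above and ε0 here) hold numerically in SI units.  This is a minimal sketch using the CODATA values that SciPy ships with; the variable names are mine:

```python
# Check mu_0 = 2*alpha*h/(e^2*c) and eps_0 = e^2/(2*alpha*h*c)
# against SciPy's CODATA values.
from scipy.constants import h, c, e, mu_0, epsilon_0, fine_structure as alpha

print(mu_0, 2 * alpha * h / (e**2 * c))        # ~1.2566e-06 either way
print(epsilon_0, e**2 / (2 * alpha * h * c))   # ~8.8542e-12 either way
```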

As discussed in What is the Planck Constant? these both resolve to unity in Planck units.  Note also, as discussed in Fine-Structured but not Fine-Tuned, α = e²/qpl², meaning that µ0 and ε0 can be expressed in terms of ħ (unity in Planck units), c (unity in Planck units) and qpl (unity in Planck units), ie:

µ0-bar = ħ/(qpl².c) = (ħ/qpl²)/c = 1
and
ε0-bar = qpl²/(ħ.c) = (qpl²/ħ)/c = 1
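These relationships can also be checked in SI units, where the barred constants aren’t 1 but the identities behind them still hold (again a sketch; qpl is the Planck charge, about 1.88×10⁻¹⁸ C):

```python
# mu_0/4pi == hbar/(q_pl^2 * c) and 4pi*eps_0 == q_pl^2/(hbar*c),
# with q_pl = sqrt(4*pi*eps_0*hbar*c), the Planck charge.
from math import pi, sqrt
from scipy.constants import hbar, c, epsilon_0, mu_0

q_pl = sqrt(4 * pi * epsilon_0 * hbar * c)       # ~1.8755e-18 C
print(mu_0 / (4 * pi), hbar / (q_pl**2 * c))     # equal
print(4 * pi * epsilon_0, q_pl**2 / (hbar * c))  # equal
```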

This is quite useful: basically everything (at the Planck scale) resolves to unity, and the only reason we have odd numbers is because of our units of length, time, mass, charge and temperature – or because of our arbitrary choice of reference mass (for αG) and slightly less arbitrary choice of reference charge (for α).

The question arises though: why 4π?  The 2π for the reduced Planck constant, ħ, makes sense because angular frequency refers to a full rotation through 360 degrees, or 2π radians, but how could 4π make sense?  Of course, I’ve given the game away in the title of this post.

A photon has what you could call a “normal” spin.  After a spin of 360 degrees it is identical to how it started.  Sub-atomic particles like electrons, however, have a quantum level half-integer spin, or spin-1/2 – they need a spin of 720 degrees, or 4π radians, to arrive back at the identical state from which they started out.

This suggests that the electric and magnetic constants might be linked to a characteristic peculiar to quarks and leptons, ie half-integer spin.

Wednesday 21 November 2018

What is the Planck Constant?

Recently, the Planck constant was mentioned in the news due to an agreement to change the definition of the kilogram.  Rather than relying on the reference kilogram mass, the kilogram is now to be determined based on the value of the Planck constant, which is now a defined value in the same way as the speed of light is.

The question that is a little difficult to find the answer to in all the news reports is “what precisely is the Planck constant?”  Of course you could go and ask Google, but when you’re reading an article, you sort of want all your answers in one place.

I did see one article which tried to address the question, with the following:

The Planck constant is the amount of energy released in light when electrons in atoms jump around from one energy level to another, explained physicist Tim Bedding of Sydney University.

Well, yes, sort of.  I am pretty sure that this is a journalist error rather than a physicist error.

The Planck constant (h) can be used to work out the energy of a photon, when we know its frequency or its wavelength: E=h.f=h.c/λ.  But that’s not all.  The Planck constant is often used as part of a sort of exchange rate, allowing you to convert from everyday units like seconds, metres and kilograms into quantum level units: Planck units like Planck time, Planck length and Planck mass.
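For example (a minimal sketch – the 500 nm wavelength is just an illustrative choice of a green-ish photon):

```python
# Energy of a ~500 nm photon, via frequency and via wavelength.
from scipy.constants import h, c

wavelength = 500e-9           # metres (illustrative)
f = c / wavelength            # frequency in Hz
print(h * f)                  # ~3.97e-19 J
print(h * c / wavelength)     # same thing, straight from the wavelength
```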

A particularly useful thing about using natural units, like Planck units, is that fundamental constants resolve down to unity, that is, they equal 1.  For example, using Planck units:
  • Speed of light, c = 1
  • Gravitational constant, G = 1
  • Coulomb constant, ke = 1
  • Boltzmann constant, kB = 1
As it is, the Planck constant itself is not a fundamental constant that resolves to unity, but the reduced Planck constant is and does.  Since ħ = h/2π, the value of the Planck constant in Planck units is 2π.  There are two other key constants, the permittivity and permeability of free space (or the electric constant and the magnetic constant, ε0 and μ0), which don’t quite resolve down to 1 either – they resolve down to ε0 = 1/4π and μ0 = 4π – so you could have a raised permittivity and a reduced permeability of ε0-bar = 1 and μ0-bar = 1.
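To illustrate the exchange rate idea (a sketch; the formulas are the standard definitions of the Planck units built from ħ, G and c):

```python
# Planck units from hbar, G and c.  Dividing an everyday quantity by the
# matching Planck unit gives its value in Planck units.
from math import sqrt
from scipy.constants import hbar, G, c

l_pl = sqrt(hbar * G / c**3)   # Planck length, ~1.616e-35 m
t_pl = sqrt(hbar * G / c**5)   # Planck time,   ~5.391e-44 s
m_pl = sqrt(hbar * c / G)      # Planck mass,   ~2.176e-8 kg

# One Planck length per Planck time is exactly the speed of light,
# which is why c = 1 in these units.
print(c / (l_pl / t_pl))       # 1.0
```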

In an earlier article, I wrote about how it is possible to resolve even the fine-structure constant and the gravitational coupling constant (which are both dimensionless) to 1.

None of this would be possible without the Planck constant, so the added role of defining the kilogram should not be too much for it to bear.

---

I thought I might add here that for many purposes in physics what matters more is angular frequency rather than just frequency per se.  That is to say, it’s not just how many whatevers per unit of time, but how many rotations per unit of time.  A full rotation is 2π radians, so the relationship between vanilla frequency (f) and angular frequency (ω) is ω=2π.f, which means that there is another equation for the energy of a photon: E=(h/2π).ω=ħ.ω.  There is an argument that the unreduced Planck constant is only used for historical reasons and that the reduced Planck constant is the one that we should be using primarily.
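In code form (same sketch conventions as the snippets above):

```python
# E = h*f and E = hbar*omega are the same statement, since omega = 2*pi*f.
from math import pi
from scipy.constants import h, hbar, c

f = c / 500e-9                # frequency of a ~500 nm photon (illustrative)
omega = 2 * pi * f            # angular frequency in rad/s
print(h * f, hbar * omega)    # identical energies, ~3.97e-19 J
```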

I'll go into this a bit more in a later article.

Thursday 8 November 2018

Fine-Structured but not Fine-Tuned

There has been a lot of fuss about the fine-structure constant (α), perhaps because it’s a specifically odd value, very very close to 1/137.  And 137 is an odd number, both in that it’s not even and in that it’s a prime.  And it’s a special prime, being a Pythagorean prime (4²+11²=137, so the square root of 137 is the hypotenuse of a right triangle with integer legs), and it is itself the hypotenuse of a Pythagorean triple, because 88²+105²=137².  1/137 also has a palindromic period number.

The value of the fine-structure constant is not, however, precisely 1/137.  It’s closer to 1/137.036, which is not as sexy.

This doesn’t stop some people from getting excited about it, including our fine-tuning friends – for example Luke Barnes.  The reason for this (they argue) is that if the fine-structure constant were even slightly different then stars would either fail to produce oxygen (which I think we can all agree is important) or fusion could not occur at all – with a margin of about 4% either way.

The thing that’s a bit odd is that the discussion is all about this fine-structure constant, and yet the value of the elementary charge seems never to be mentioned.

What, you may ask, does the elementary charge have to do with the fine-structure constant?  If so, that means you didn’t follow the Wikipedia link regarding what the fine-structure constant is, because the very first two sentences are:

In physics, the fine-structure constant, also known as Sommerfeld's constant, commonly denoted α (the Greek letter alpha), is a dimensionless physical constant characterizing the strength of the electromagnetic interaction between elementary charged particles. It is related to the elementary charge e, which characterizes the strength of the coupling of an elementary charged particle with the electromagnetic field, by the formula (4πε0).ħcα = e².

So the fine-structure constant is proportional to the square of the elementary charge, because ε0, ħ and c are all constants (and 4 and π are also constant – note that I added the brackets above, they aren’t there on the Wikipedia site).  What I find interesting is that 4πε0, ħ and c are not only constant but, in Planck units, they all resolve to 1.  Note also that ħ is the reduced Planck constant, the Planck constant divided by 2π.  We could call 4πε0 “raised permittivity of free space” or the “raised electric constant”.

This might seem to be a little bit of a cheat, but it should be noted that µ0 has a similar but inverse relationship to Planck units, in that µ0/4π (“reduced permeability of free space” or the “reduced magnetic constant”) resolves to 1 in Planck units, so that not only does c² = 1/(µ0ε0) but that relationship remains the same when µ0 and ε0 are replaced with their reduced and raised versions respectively.  Note also that the fine-structure constant can be expressed in terms of permeability, by the formula (4π/µ0).ħα/c = e².  And these two constants frequently appear with a 4π in the appropriate place, almost as though they are begging someone to normalise them in a similar way to how the Planck constant is normalised.

Normalisation removes the mystery of why, when all the other fundamental constants seem to resolve to 1 at the Planck scale, these two (µ0 and ε0) don’t.  They do when normalised.  What remains outstanding, however, is the fine-structure constant.  It’s a dimensionless value, so how could we possibly resolve it down to 1?

The answer is hiding in those equations: (4πε0).ħcα = (4π/µ0).ħα/c = e².  Or, once reorganised: α = e²/((4πε0).ħc) = (e/√((4πε0).ħc))².  So does √((4πε0).ħc) have any meaning that we should be aware of?  You bet it does – it’s the Planck charge, or the charge on the surface of a sphere that is one Planck length in diameter and has a potential energy of one Planck energy.

So, put another way: α = e²/qpl², the fine-structure constant is effectively an expression of the ratio of the elementary charge (e) to the Planck charge (qpl), in much the same way as the gravitational coupling constant is effectively an expression of the ratio of the rest mass of an electron (me) to the Planck mass (mpl), or αG = me²/mpl².  (If you look up “electromagnetic coupling constant”, you’ll be redirected to the fine-structure constant.)
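Both ratios are easy to check numerically (a sketch using SciPy’s CODATA values; qpl and mpl are computed from the standard definitions):

```python
# alpha = (e/q_pl)^2 and alpha_G = (m_e/m_pl)^2 - both dimensionless ratios.
from math import pi, sqrt
from scipy.constants import hbar, c, G, e, epsilon_0, m_e, fine_structure

q_pl = sqrt(4 * pi * epsilon_0 * hbar * c)   # Planck charge, ~1.8755e-18 C
m_pl = sqrt(hbar * c / G)                    # Planck mass, ~2.176e-8 kg

print((e / q_pl)**2, fine_structure)   # both ~7.2974e-3, ie ~1/137.036
print((m_e / m_pl)**2)                 # ~1.75e-45, the gravitational coupling
```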

If you read about the gravitational coupling constant, you will note that there “is some arbitrariness in the choice of which particle’s mass to use”.  It appears less arbitrary to select the elementary charge when considering the electromagnetic coupling constant (ie the fine-structure constant), but it is still a little arbitrary.  There is a smaller charge that could be selected, that associated with quarks, which could be as low as e/3 (positive or negative depending on the type of quark).

Before I take the next step, I have to point out that while the gravitational and electromagnetic coupling constants (as commonly understood) are effectively an expression of the ratio between the relevant characteristic of an electron to the relevant Planck unit, this isn’t the meaning of these coupling constants.  They are both defined as “a constant characterizing the attraction between a given pair of elementary particles”, electromagnetic attraction in the case of the fine-structure/electromagnetic coupling constant and gravitational attraction in the case of the gravitational coupling constant.  There is also a definition based on the interaction of these elementary particles with the related field.

We could naturalise both of these constants by considering instead “the attraction between a pair of Planck particles” or the interaction of Planck particles with the relevant field, considering them to have both Planck charge and Planck mass.  When we do, the values both resolve to 1.

Another way of saying this: the fact that the coupling constants don’t have a value of 1 is merely because the electron mass and charge are both smaller than the Planck equivalents (the mass is much smaller, which is why the gravitational coupling constant is also much smaller than the fine-structure constant).  When people are talking about the range in which the fine-structure constant could be varied without affecting life in this universe (by preventing stars from doing what they need to do to create the basic building blocks of life as we know it), they are really talking about how much higher or lower the charge on electrons and protons can be.  It’s actually a bit odd that fine-tuners don’t do this, because when they say that the fine-structure constant can only vary by as much as 4% before we run into trouble, this is equivalent to saying that the charge on an electron or proton can only vary by as much as 2%.  If there is fine-tuning here, then there’s actually twice as much fine-tuning (on this single measure) as the fine-tuners are claiming.
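The arithmetic behind that last step is trivial but worth seeing: because α goes as the square of the charge, a fractional change in e shows up roughly doubled in α.

```python
# alpha scales as e^2, so a ~2% change in charge is a ~4% change in alpha.
for delta in (0.02, -0.02):
    print(delta, (1 + delta)**2 - 1)   # +0.02 -> +4.04%, -0.02 -> -3.96%
```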

Either way, it’s a bit unreasonable to point at the fine-structure constant as an example of fine-tuning in and of itself.  If the fine-tuners want to claim any fine-tuning here, they need to point to the elementary charge (and, if they can establish a correspondingly apocalyptic argument for gravity, the electron rest mass).  However, if they can explain why the elementary charge is odd in some way or could be something other than it is, they are welcome to try.  There doesn’t seem to be anyone else looking into that, and when people ask awkward questions there’s a lot of “we just don’t know”.  And, so far, the fine-tuners appear to have steered clear of the elementary charge.

Thursday 4 October 2018

The Messiness of Layered Spheres

Below is what I wrote earlier in Layered Spheres (repeated because it was such a short thing):

---

Say you have a standard solid, rigid sphere like a ball bearing.  Surround that sphere with as many identical spheres as you possibly can.  Hint: the maximum number of equal sized spheres that you can put around a single sphere is 12, according to sphere packing geometry.


You can see how that works here, if you imagine removing the top orange and adding three oranges below, so you have three above, three below and six surrounding the central orange in the middle, for a total of twelve.

Call this the first layer, or Layer 1, and then keep adding more layers.

How many spheres in total will you have when you reach Layer 100?  For bonus points, how many spheres will there be in Layer 100?

---

When I originally wrote this, I had an inkling what the answer was, but I thought that someone might have the staff answer which, despite looking for it, I could not find.  That may well still be the case, but I’ve not had anyone provide me with the answer (or a link to the answer).  I did have someone tell me about packing geometry which, while it wasn’t really my question, did lead me to conclude that my solution wasn’t right.

The answer I had was that for layer x surrounding a seed sphere with other spheres, there are ((2x)²-1).4 spheres.  The problem is that I wasn’t considering all the implications of packing, and how you can’t really fit as many spheres in the second layer, because it’s not smooth.  I suspect that what the equation might be telling me is that if I put putty in the gaps of each layer, making it a new smooth, larger sphere with a radius of (2x+1)r, then that new sphere could be surrounded by ((2(x+1))²-1).4 spheres in the next layer.  It seems to work for the second layer at least (when you get 60 spheres fitting nicely around a new sphere of radius 3r, where the standard sphere has a radius r).

If we work on that basis (which I accept isn’t what I asked) then there will be, in the 100th layer, 159996 unit spheres.
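For what it’s worth, here is that calculation (a sketch that simply trusts the smoothed-sphere formula, which, as noted, real packing won’t quite live up to):

```python
# Layer counts from the smoothed-sphere formula in this post: ((2x)^2 - 1)*4.
def layer(x):
    return ((2 * x)**2 - 1) * 4

print(layer(1), layer(2), layer(100))   # 12, 60, 159996

# Running total including the seed sphere, if every layer obeyed the formula.
print(1 + sum(layer(x) for x in range(1, 101)))   # 5413201
```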

I was thinking, as you get more and more layers, that the created sphere becomes more and more smooth, even if I don’t fiddle around with hypothetical putty – because the deviation from perfectly smooth becomes relatively smaller – but in thinking that way, I was perhaps not properly taking into account the fact that the spheres that I am using in the new layer have the same sort of dimensions as the deviations from smooth that exist throughout the process.

That said, I do think that the deviation from ((2x)²-1).4 spheres in the outer layer will become increasingly small as the created sphere increases in size.  There will always be a deviation, but I suspect that when you get to thousands or millions of layers, that deviation will be negligible – even if it’s not zero.  In other words, as x approaches infinity, the number of spheres in the xth layer divided by ((2x)²-1).4 approaches unity.

So now my answer to my original question is “approaching 1.6×10⁵ but probably short of that by about 10-20%”.  I’ve hedged this quite a bit because it’s messy.

One of the problems is that what I asked and what I was thinking were slightly different.  I was actually thinking of an expanding notional sphere, and then thinking about how many unit spheres of radius r fit into that expanding sphere.  That made me think about layers, because I was basically thinking in steps of 2r.  However, if you do that, you will find that with some increments of 2r, you can fit in more spheres than you would if you were merely adding a new layer – basically because a dodecahedron isn’t quite a sphere, and the shape you end up with when you slot more spheres into the gaps isn’t quite a sphere, and so on.  The shape you create this way becomes increasingly spherical – but never quite makes it.

Thinking about it another way, if you used very small increments on an expanding sphere (notionally infinitesimal), then you would just be putting in new spheres into dimples on the surface of the created shape whenever possible, which would create new dimples, which would be filled in turn.  You wouldn’t just add a whole new layer at a time.  And that’s messy.

Saturday 8 September 2018

An image to help with Spherical Layers

This relates to the previous post, Spherical Layers.


Each new colour is a new layer.

Of course this is just about layering circles, but the concept of circular layers applies also to spherical layers.  Imagine an incrementally larger circle and how many circles can fit into that circle, or an incrementally larger sphere and how many spheres can fit into that sphere.  As your circle or sphere gets larger, the resultant approximation of the layers gets closer and closer to a circle or a sphere.  In between approximations of circles or spheres, you do get approximations of hexagons or dodecahedrons (which in themselves could be thought of as rough approximations of circles and spheres).
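For circles the counts are exact and well known (a small sketch of the hexagonal-packing arithmetic: ring n holds 6n circles, giving the centred hexagonal numbers as running totals):

```python
# Hexagonal packing of unit circles: ring n holds 6n circles, and the
# total through ring n is the centred hexagonal number 1 + 3n(n+1).
def ring(n):
    return 6 * n

def total(n):
    return 1 + 3 * n * (n + 1)

print([ring(n) for n in (1, 2, 3)])    # [6, 12, 18]
print([total(n) for n in (1, 2, 3)])   # [7, 19, 37]
```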

Note that the red and light green layers are also approximations of a dodecagon.  Note also that, when considering polygons, the best approximation of a circle is a regular ∞-gon (or apeirogon), but do note that a circle doesn't have sides per se, it has one curved side (singular, not plural) and is not a polygon.

Thursday 6 September 2018

Spherical Layers


Say you have a standard solid, rigid sphere like a ball bearing.  Surround that sphere with as many identical spheres as you possibly can.  Hint: the maximum number of equal sized spheres that you can put around a single sphere is 12, according to sphere packing geometry.


You can see how that works here, if you imagine removing the top orange and adding three oranges below, so you have three above, three below and six surrounding the central orange in the middle, for a total of twelve.

Call this the first layer, or Layer 1, and then keep adding more layers.

How many spheres in total will you have when you reach Layer 100?  For bonus points, how many spheres will there be in Layer 100?

(And for extra extra points, is there a formula to calculate the number of spheres with N layers that is more than just the summation of all spheres in all the layers plus one [for the first sphere]?)

Tuesday 4 September 2018

A Question of Cosmology

There is such a thing as Hubble time, which is simply the inverse of the Hubble parameter (H).  The inverse of the Hubble constant (H0) is the current Hubble time (because the Hubble constant is the current value of the Hubble parameter).

As I noted back in 2014, in Is the Universe Expanding at the Speed of Light?, the current value of Hubble time is interesting because it’s basically the same as the age of the universe.  Now, because the (current) value of Hubble time is the inverse of the Hubble constant, it varies with measurements of the Hubble constant.

In Is the Universe Expanding at the Speed of Light?, I referred to the values of Hubble constant that were currently available.  These were:
  • 2011 (Hubble) ~71.5 to ~76 km/s/Mpc
  • 2012 (Spitzer) ~72 to ~76.5 km/s/Mpc
  • 2012 (WMAP – after 9 years) 68.52-70.12 km/s/Mpc
  • 2013 (Planck – after four years) 67.03 to 68.57 km/s/Mpc
By happy coincidence, Skydive Phil released a video on “The Hubble Tension” at about the same time as my retrospective podcast listening of the Infinite Monkey Cage got me thinking about the Hubble constant and the age of the universe all over again.  What was bothering me was that there were constant references to the acceleration of the expansion of the universe, together with the assertion (claim, reminder, statement) of the fact that the universe is 13 point something (7 or 8) billion years old (it might have been raised in this specific episode or this one).

If the rate at which the universe is expanding is accelerating, then surely the age of the universe comes into question.  The value given for the age of the universe has not changed significantly since 2014, when I reported it as 13.8 billion years – the current values range from 13.772 (WMAP) to 13.813 (Planck 2015 data) billion years.  The value of the Hubble constant on the other hand …
  • 2018 (Planck) 67.66 (67.24 to 68.08) km/s/Mpc
  • 2018 (Hubble and Gaia) 73.52 (71.90 to 75.14) km/s/Mpc
  • 2017 (LIGO and Virgo) 70.0 (62.0 to 82.0) km/s/Mpc
  • 2016 (eBOSS – after two years) 67.6 (67.0 to 68.3) km/s/Mpc
Skydive Phil’s video focusses on the difference between the eBOSS (baryonic acoustic oscillation) and Planck measurements and the measurement from the Hubble and Gaia collaboration (also known as SH0ES, sometimes miswritten as SHoES).  The problem here is that the error bars no longer overlap, which indicates some sort of problem – either they are measuring different things or at least one of them is measuring incorrectly.

You occasionally read that the Hubble time is a useful estimate of the age of the universe.  In that case, ignoring the ridiculously large error bars on the LIGO/Virgo result, the age of the universe is between 13.01 and 14.60 billion years.  In some cases, it is suggested that the Hubble time indicates how long the universe has been expanding – but to all intents and purposes this is what is meant by the age of the universe (much in the same way as a baby is not strictly 0 days old at birth, usually having gestated in a womb for about 9 months, we just pick a nice convenient reference point and count from there).

However, we are now being told that, about 5 billion years ago, the expansion of the universe started accelerating.  If the Hubble time is a useful estimate of the age of the universe and the age of the universe is what we are being told (13.8 billion years, or near enough), then don’t we have a problem?  We can work out the value of the Hubble parameter at a Hubble time of approximately 8.8 billion years (let’s call it H-5, meaning H at now minus 5 billion years), and it works out to be about 1.5 times that of today – ie about 111.1 km/s/Mpc.  (Perhaps the acceleration started only 4 billion years ago, at the end of the matter-dominated era and the beginning of the dark-energy-dominated era.  The value of the Hubble parameter corresponding with a Hubble time of 9.8 billion years is a bit lower at H-4=99.8 km/s/Mpc, but still about 40% higher than today.)
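The conversions behind those numbers are simple enough to script (a sketch; the constants are just kilometres per megaparsec and seconds per gigayear):

```python
# Convert a Hubble time in Gyr to a Hubble parameter in km/s/Mpc.
MPC_KM = 3.0857e19   # kilometres in a megaparsec
GYR_S = 3.1557e16    # seconds in a gigayear

def H_from_gyr(t_gyr):
    return MPC_KM / (t_gyr * GYR_S)

print(H_from_gyr(13.8))   # ~70.9, close to today's measured values
print(H_from_gyr(8.8))    # ~111.1, the post's H-5
print(H_from_gyr(9.8))    # ~99.8, the post's H-4
```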

What is going on here?

Is the Hubble time only coincidentally a good estimate of the age of the universe at the current time (but won’t be in the future and wasn’t in the past)?  This sounds like it would be a fair addition to the list of fine tunings, and that surely can’t be good.

If the Hubble time isn’t generally a good approximation of the age of the universe, then there’s no reason to suggest that it ought to be a good approximation today, and maybe the age of the universe is not 13.8 billion years after all.  Not really – there are a multitude of ways in which the age of the universe is measured, so cosmologists don’t simply rely on inverting the Hubble constant (for example, they look at cosmic background radiation fluctuations).

Another possibility is that the Hubble parameter has been tracking the age of the universe faithfully and 5 billion years ago it actually was H-5=111.1 km/s/Mpc.  Would that mean that, since that time, Hubble expansion of the universe has decreased and some other expansion of the universe (due to dark energy, apparently) has got involved?  If so, it still doesn’t really add up.  We have the Hubble constant because that’s what we measure as the current expansion rate of the universe.  If the Hubble component of that is lower than is required to account for that expansion rate, then H0 < ~70, which increases the estimate for the age of the universe.  Thinking about the notion that the universe has been accelerating in its rate of expansion for the past 5 billion years, let’s say, best case, that H-5 was just a bit lower than today – say just under the error bar for LIGO/Virgo at 61 km/s/Mpc.  That would mean that at that time the Hubble time was 16 billion years, and today the universe would be 21 billion years old.  That’s a big error in measurement.  It just doesn’t seem right.

So, a question for the people in the know … if we were around 5 billion years ago and were measuring the Hubble parameter using the rate at which distant galaxies were receding, approximately what result would we have come up with?

Do I have a solution?  Yes, I think I do.  I just don’t quite understand (yet) why most cosmologists would likely tell me I am wrong.

Thursday 16 August 2018

The in this Sentence is Not Misplaced


But the “not” may be.

I have a long-term beef with a particular phrasing that is specifically American but which seems to be bleeding into the English of other nations.  The most recent variant I have seen was from Australia’s ABC in an article on the misuse of the term “fascist”.  The author, Matthew Sharpe, is an Associate Professor at Deakin University in Burwood, Melbourne, so he should know better, but I do note that he is affiliated with Continental Philosophy.  I also note that his About page has him as “membver” of an Australian Research Council Discovery grant (here are the specific grant details).

Anyway, the offending phrase was at the end of the first section, repeated twice, first as “… all movements that aim to do this are not fascist”.  I say was because clearly someone was even more incensed than I was (or had more time on their hands) and complained about the obvious miswording, leading to a correction.  This is excellent news, but the rot does seem to be starting and something should be done about it.

What I intend to do, as far as this noble cause goes, is analyse the language a little, apply a smidgen of logic and common sense and demonstrate why Matthew Sharpe’s original wording was wrong (and thus why the ABC was correct in correcting it).

Let’s break his phrase down a little:

(all) (movements that aim to do this) (are) (not) (fascist)

which can be thought of as conforming to the standard pattern:

quantifier subject existential-verb adverb adjective

Yes, “not” is an adverb, meaning that it modifies a verb.  As an adverb, “not” can take on a bit of an existential role; for example, this is one of the characteristics that one might list against a cat: “not dog”.  Note that the term “not dog”, when applied to a cat, is effectively the same as saying “there is something X, such that … X is not a dog” – so there is an implied existential verb (ie is).  We can apply the same logic to other categories, like adjectives and adverbs: “beautiful”, “high”, “slow”, “(made of) gold”, “the same”, “fascist”, and “all”.

I don’t really want to address Sharpe’s argument about what is and what is not fascist here.  I only want to address his poor grammar (before it was corrected), so let’s use another version, a phrase that I have used before and will undoubtedly use again (although I may be forced to change the subject if the rot continues):

(all) (Americans) (are) (not) (intelligent)

Compare this to another possible statement that we could make about Americans:

(all) (Americans) (are) (very) (friendly)

In the first instance, the poor speaker, when asked “Who is intelligent?”, could reply with “Not all Americans” or maybe “Some Americans” or even “A lot of Americans”.  And this is basically my point.  When you formulate a sentence, you generally indicate who (or what) you are talking about, then you indicate what sort of thing they are doing and then indicate in what way they are doing it (or to whom they are doing it).

This is what is happening in the second sentence, as clarification will draw out: when asked “Who is friendly?”, the answer is “Americans, all of them are very friendly”.

Let’s use brackets differently to make this even more obvious:

(all Americans) (are) (not intelligent)

(all Americans) (are) (very friendly)

In the latter sentence you could easily imagine that we could drop the “all” and still maintain the meaning.  If you drop the “all” from the first sentence, then you keep the real meaning (Americans are unintelligent), but the meaning that the poor speaker is trying to get at is lost (“Americans are not intelligent” cannot be reasonably understood as meaning the same as “Not all Americans are intelligent”).

As I noted above, when you seek clarification, even the poor speaker may instinctively group the “not” with the correct word, ie “all”.  Note further that it’s not just about putting “not” where it should be, it’s also about picking the right word.  The poor speaker could fix the sentence by merely substituting “all” with “some”:

(some Americans) (are) (not intelligent)

This is clearly a true statement: there’s a spread of intelligence in all societies and there are going to be unintelligent people in each of them (although not all of them will get elected to high office).  What is totally bizarre is that some people might believe that you can have two sentences, one that starts with “some” and another that starts with “all” but which are otherwise precisely the same, and that they nevertheless mean precisely the same thing.  Consider:

Some people think that that is totally bizarre

All people think that that is totally bizarre

See, it simply doesn’t work.

---

Finally, some (but not all) might point out that no less than Shakespeare wrote “All that glisters is not gold”.  This is true.  Rather it is true that Shakespeare wrote that but, in fact, some things that glister (or, in the modern vernacular, glitter) are gold.  So he’s wrong, or he’s just being poetic.  He also wrote “Not all the water in the rough rude sea/Can wash the balm from an anointed King”, which is also wrong since he really meant “Not even all the water in the rough rude sea”, otherwise he’d be implying that some of the water in the rough rude sea can wash the balm from an anointed King, just not all of it, for example there’s a bit over near France that’s hopeless at the job.  In this instance, he could even have said “All the water in the rough rude sea/Cannot wash the balm from an anointed King”.

Shakespeare was basically hopeless, except for the fact that he was writing 400 years ago, when the language was a bit different (if thou doth recall, thou flibbertigibbet), often in iambic pentameter which demands a different sort of grammar than an opinion piece about what does and what does not constitute fascism.

---

In the original phrasing Matthew Sharpe was saying that none of the movements that aimed to do what he was talking about (taking over the state in part by destroying liberal institutions like an independent media and individual rights) were fascist.  This is a dangerous sort of thing to be saying, even accidentally.  Sure, not all of them are fascist, but some of them most certainly are.

Grammar nazi out.

Thursday 9 August 2018

The Fat Man Retaliates


There’s a variation to the Trolley problem thought experiment in which, rather than throwing a switch to divert a runaway train from a track on which five people would be killed onto one where only one person would be killed, there is a lever which you can use to open a trap door to drop a fat man onto the tracks and thus avert a greater disaster.

Now, the lever and trap door arrangement is there to avoid you having to physically throw or push the fat man onto the tracks – so there is no visceral reaction to getting your hands metaphorically dirty with killing him, you pretty much have the same separation as you had with the switch and are again sacrificing one life for five.

A no-nonsense utilitarian response to the idea of opening the trap door and putting the fat man in front of the train should be that it is equivalent to switching tracks to kill the one person instead of the five that would otherwise die.  There is however a meaningful difference between the two scenarios – in the standard scenario (switching of the tracks) the single person dies as a consequence, while in the fat man variation the single person dies as a means – in other words, the killing of the fat man is instrumental.

The general consensus is that it is wrong to kill a person instrumentally even if, by doing so, you can prevent the death of more people.  A related scenario is that in which a healthy person is sitting in a doctor’s waiting room and the doctor realises that this healthy person is the perfect donor for five other patients, all of whom will soon die without donated organs.  It’s horrific to think that a healthy person might be harvested for organs, even if a large number of people might thus be saved (which is an expression of our intuitive take on the morality of the suggestion).

Then there is the notion of justified self-defence.  The general consensus here is that a person is justified in acting, even violently or lethally, in order to defend themselves, so long as their action is proportionate.  It should be noted that this is about defending oneself, not necessarily about saving oneself.  If you are set on killing me, and I can’t realistically disable you or run away, then I am justified in killing you.  On the other hand, if some natural disaster is imminent, and I could survive only if you died, then I am probably not justified in killing you – at least not instrumentally.  I could be justified in doing something that only consequentially leads to your death but was necessary to save my life or many lives (like dumping CO2 that may well asphyxiate you but is intended to put out a devastating oil rig fire).

It gets a bit vague as to how justified one is to act violently or lethally to prevent harm.  The legal system varies somewhat on how the victim of long-term domestic abuse is treated when they crack and kill their abuser.  Sometimes there’s some leniency, sometimes there’s simply a formal continuation of the abuse.  Sometimes there’s a sentence of probation (ie a non-custodial sentence), sometimes it’s jail until you die (or effectively so).

Let’s just consider the right to defend your life.

Consider the fat man, who is on the trap door but who, for some reason, cannot move.  You are near the lever and the runaway train is heading for five people who, if you do not act, will die.  You’ve recently been listening to a podcast that was extremely positive about utilitarianism and you are therefore primed to do whatever you can to minimise suffering.  You are not without feeling however, and you tell the fat man that what you are going to do is for the greater good before you start moving towards the lever.

The fat man, while unable to move, is not entirely helpless, because he has a large shotgun.  To defend himself, he can’t risk merely wounding you, because you could still operate the lever; he has to kill you to assure his survival.

Is the fat man justified in killing you?  Alternatively, if you were the fat man, would you be justified in killing me given that I am intending to open the trap door and drop you on the tracks in front of the train?

Note that if the fat man kills our hero, there will not only be that direct casualty but also the casualties due to the runaway train that will not be stopped and will instead kill the five people on the tracks.  So is it okay to kill six people to save your own life, if at least five of those deaths are only consequential?

My initial intuition is that it is justified for the fat man to shoot the person who is going to activate the lever.  The fat man is not responsible for the runaway train or the five people on the tracks and it is not justified to kill him instrumentally to save the five people.  The five who die only die consequentially and there is no other way to save the one life (that of the fat man) other than killing the wannabe lever operator.

Nevertheless, it seems odd to say that it’s okay to let six die to save one.  In an analogous situation, if a car with sabotaged brakes was careening towards a deep canyon and the driver could only save herself by sideswiping a car which was being driven parallel to the canyon, and which would fall into that canyon, thus killing the six occupants, then it’s difficult to think that this defensive course of action would be justified – even if the saboteur was known to be in the second car.  We’d possibly further expect the driver of the sabotaged car to avoid unintentionally sideswiping the second car, even if that would (unintentionally) save her life.

Then there is another possible scenario, this time not related to the fat man but rather to the single innocent person on the section of track that the train could be switched to in order to save the five.  Let’s say that, for whatever reason, this person cannot simply avoid the train.  If you switch the tracks, he’s going to die.  Fortunately for him, he has a sniper rifle and the skills to use it.  He sees the runaway train and knows that the average person will decide to switch tracks to save the five.  He raises the rifle, puts his eye to the scope and sees you in his crosshairs, ready to change the direction of the runaway train.

Is he justified in shooting you?  Would you be justified in shooting someone else, if it were you on the tracks and they were about to consequentially kill you, even if you knew that by your actions six people would die?

---

There is a reason that the fat man is a fat man.  He’s not just fat, he’s a lot fatter than you are – no matter how fat you happen to be.  You can’t simply jump onto the tracks yourself and stop the runaway train.  This caveat is in place to prevent you from taking that option because too many people would choose to sacrifice themselves before sacrificing another person – perhaps because dying as a hero is more attractive than living as someone who killed a guy by pushing him off a bridge into the path of a train.

Ignoring the fact that the potential victims in the scenarios above have guns (or accepting that, despite them having guns handy, they are average decent people), it seems to me that the only difference between allowing someone else to create a situation in which I die and creating that situation myself is the question of agency.  I don’t have a choice if someone else does something to me and I am unlikely to like that, even if I would have chosen freely to put myself in precisely the same situation that I end up in due to another’s actions.

Put that aside for a moment.  Consider instead the moral judgment we would make on the person who, to save five, throws themselves in front of a runaway train.  Or the moral judgment of someone who is able to divert the train onto the track that they themselves are on, and thereby save five lives at the cost of their own.  This is equivalent to the brave soldier who throws himself onto a grenade and saves his comrades.  Or the mother who dies saving her children, or perhaps even the children of another.  Such people are heroes, which implies that they did the right thing.

But if this is so, then why is it not necessarily the right thing for the fat man, or the man on the track, to abstain from defending themselves?  They have agency.  They can, if they so choose, prevent what is going to happen to them.  Choosing not to prevent their deaths, when such prevention is an option, is equivalent to choosing to sacrifice themselves actively – which is apparently the right thing to do.  How could they thus be justified in doing the thing that is not right, which is by definition the wrong thing?

---

Going a bit further, it would seem that if you are at the lever, or the switch, and your potential instrumental, or consequential, victim has the ability to prevent you from acting, then you should act to save the five, and the two variants become morally equivalent.  If you are not shot in the process, it is only because your victim chose their fate or maybe they missed when they tried to shoot you – but either way, they saved the five, not you.

Thoughts?