Monday, 5 July 2021

It's Hot Up Here, Eh?

Just in case you haven’t noticed, it’s been a bit hot in Canada.  Also in the northwestern USA.

Apparently it’s all due to a “heat dome”, and these are not new (the link there explaining what one is dates from 2020, for example).  Fundamentally it’s just a high pressure system that doesn’t move away: it gets trapped in place by the surrounding weather systems and bakes a location with no relief from the cold front that would normally follow.  High pressure systems do two things that increase the temperature in an affected area:

·  they take air from higher altitudes (which is initially at a lower pressure) and compress it, which warms it up – you might notice that when you release a gas from compression it gets cold, which is how your air-conditioner and fridge work (see the rough sketch after this list), and

·  they dampen wind that could otherwise bring in cooler air from other areas.
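To put a rough number on that first effect, here is a minimal Python sketch of dry adiabatic compression, using the standard Poisson relation T₂ = T₁(P₂/P₁)^(R/cp).  The particular pressures and starting temperature are illustrative assumptions of mine, not measurements from this event.

```python
# Rough illustration of why descending (compressed) air warms: dry adiabatic compression.
# The pressures and starting temperature below are illustrative assumptions only.

def compressed_temperature(t_start_k: float, p_start_hpa: float, p_end_hpa: float) -> float:
    """Temperature after adiabatic compression, via T2 = T1 * (P2/P1)**(R/cp) for dry air."""
    r_over_cp = 0.286  # R/cp for dry air
    return t_start_k * (p_end_hpa / p_start_hpa) ** r_over_cp

# Air starting at ~700 hPa (roughly 3 km up) at 0 °C, pushed down to ~1000 hPa at the surface:
t_surface = compressed_temperature(273.15, 700.0, 1000.0)
print(f"Compressed air temperature: {t_surface - 273.15:.1f} °C")  # roughly 30 °C
```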

This is what North America looked like at 2200 UTC on 29 June of this year (with my addition of high and low pressure system markers and an x to indicate where Lytton is):


The white streaks indicate wind (with its direction shown by the increasing width of the line).  There was basically no movement of air along the west coast and inland, which meant that the air that was there was just baking under the clear, cloudless skies (another feature of a high pressure system).

Could this situation have occurred without climate change?  Maybe.  Could it have got quite so hot?  It’s unclear.  It’s basically impossible to say that any single event is a consequence of climate change; all we can do is look at the trends.  The question we need to ask is whether these heat domes are moving northwards, as a trend.  Unfortunately, that is hard to tell too, since the term only entered into public consciousness in 2011, as far as I can tell.  At that time, the heat dome was a lot further south, but the sample isn’t large enough to tell.

All I can suggest is that, if you are interested in what is going on, and why, it’s probably better to visit a site that is interested in weather and climatology rather than one that is interested in denying climate change.  Even if the events in Canada can’t be definitively pinned on climate change.

Tuesday, 13 April 2021

Mathematics to Address an Apparent Problem with Imagining a Universe

Imagine a Universe contains only narrative with no equations.  Before I posted that narrative, I posted a piece that explained that I understood that there are at least three apparent problems with the narrative, which I archived for posterity before overlaying it.  Just as with the narrative itself, I tried to minimise the use of equations – which was a little tricky with regard to the glome.  What follows is a very brief explanation as to how mass/energy enters the inner universe at a rate of one unit of mass/energy per unit of time.

---

Below is what I find most problematic to explain without recourse to equations (and even with them, a little):

·        To be nice and neat, it would be great if the inner universe receives one “unit” of energy for each “unit” of time during which the radius increases by one “unit” of length.  That does not initially seem to be the case though: it’s one Planck mass worth of energy for each two units of Planck time during which the universe expands by two Planck lengths, per Hubble volume (which is the sphere defined by the radius of the universe at that time, recalling that the universe is a glome).  This gets a little confusing in four dimensions, and I am not entirely convinced by people who say they can imagine what a four-dimensional object looks like, so let’s consider a sphere as an analogy.  We can get circles from a sphere by sectioning it.  The greatest circle we can create has the same radius as the sphere itself.  The sectioning effectively creates two hemispheres.  Note that I remain aware that the surface area of the curved section of the hemisphere is not equal to the surface area of the circle created by the section.  By analogy, the universe could be notionally sectioned by a spherical section, creating two halves, meaning two (three-dimensional) Hubble volumes, meaning that the nice one “unit” of energy for each “unit” of time during which the radius increases by one “unit” of length is obtained for the universe as a whole.

I described how, if the universe is spatially flat, the mass (or “mass-energy”) of the universe increases by one unit per two units of time in Mathematics for Imagining a Universe, under the rubric “Critical Density and Expansion”.  In that section I wrote (emphasis added):

Which means that, within a Hubble volume, mass increases at a rate of half a Planck mass per Planck time to maintain critical density.

The challenge is to understand how the universe might have a volume of two Hubble volumes, thus making the mass increase at a rate of one Planck mass per Planck time (to maintain critical density).

What I am describing above with the sphere is the perspective of a 2D character living on the surface of that sphere, let’s call him Fred.  Say that Fred occupies the x and y dimensions, while the sphere occupies the x, y and z dimensions.  The sphere can be described as x² + y² + z² = r², where r is the radius, but Fred cannot perceive the z dimension, so as far as he is concerned the relevant equation is x² + y² = r², which is a circle with himself at the centre – or at coordinates (0,0).  Due to his position and dimensional limitations, however, Fred can only see one half of the sphere on which he lives.

Say that Fred is actually at (0,0,r) and that a fellow sphere dweller, Freda, is at (0,0,-r), also perceiving herself to be at (0,0).  Freda too will only perceive a circle, but that circle has no overlap with Fred’s despite also being described, by Freda, as x² + y² = r².  In three dimensions, the two circles are clearly different, being (x² + y² = r², z = r) and (x² + y² = r², z = -r), and are descriptions of two separate halves of the sphere, the z-positive hemisphere and the z-negative hemisphere.

Note that any other (non-collocated) 2D observer in that universe will also perceive themselves to be in the centre of a circle that describes half of the sphere, but that hemisphere will overlap with both Fred’s and Freda’s.

Precisely the same logic applies with 3D observers, like ourselves, living in the surface volume of a glome described by w² + x² + y² + z² = r².  We cannot perceive the additional dimension (w), so we see ourselves as being in an apparent sphere (a Hubble volume), but we cannot access the other half (the -w hemiglome, if you like).  The division between positive and negative halves, however, seems irrefutable, making the 3D perceivable volume of the universe twice that of the Hubble volume.
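A small numerical sketch of the partitioning point (my own illustration, not anything from the original post): sample random points on the surface of a glome and confirm that each falls into exactly one of the two halves picked out by the sign of w, with the points split roughly evenly between them.

```python
import random
import math

# Sketch: points on a glome (3-sphere) w^2 + x^2 + y^2 + z^2 = r^2 split cleanly into
# a +w half and a -w half - the two halves that the post identifies with the two
# Hubble volumes.  Purely illustrative.

def random_glome_point(r: float = 1.0):
    """A uniformly random point on the surface of a glome of radius r."""
    v = [random.gauss(0.0, 1.0) for _ in range(4)]
    norm = math.sqrt(sum(c * c for c in v))
    return [r * c / norm for c in v]  # [w, x, y, z]

n = 100_000
positive_w = sum(1 for _ in range(n) if random_glome_point()[0] >= 0.0)
print(f"Fraction of points in the +w half: {positive_w / n:.3f}")  # ~0.5, the rest in the -w half
```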

Given that the rate of increase in mass is half a Planck mass per Planck time per Hubble volume, then the rate of increase of mass for the universe as a whole is one Planck mass per Planck time – if the universe is spatially flat.

And is the universe spatially flat?  It very much looks like it.

Wednesday, 7 April 2021

Apparent Problems with Imagining a Universe

The text below was initially posted where Imagine a Universe is now to be found (see Internet Archive version).  It has been edited slightly to bring it up to date, with some retrospective rewording for clarity.

--

This text (the content of the post "Imagine a Universe") was replaced shortly after it was posted, as I wanted to record it for posterity at archive.org.  The replacement content is the intended content, which is a narrative about a universe that undergoes expansion, until that expansion stops (for reasons that are unexplained at the time), then gravity takes over and the entirety of the universe eventually ends up in one ginormous black hole, which (effectively) shunts all the universe’s mass/energy into an orthogonal universe.  The incoming mass/energy (effectively) powers that inner universe’s expansion, until the mass/energy of the outer universe is entirely transferred into the inner universe, at which time the inner universe’s expansion stops.

The purpose of this “underlay” to that narrative, if you like, is to note that I am aware of at least five apparent problems with the story:

First, there is an implication of a meta-time.  As described elsewhere, I have posited that the expansion of the universe is basically time as experienced in that universe.  However, if the expansion reverses, this is indicative of a sequence beyond the expansion, which in turn implies another sort of time, or a meta-time.  It is possible that this implication is a function more of how our brains work, immersed as they are in actual time.  An alternative, at least as I see it, is block time, which is linked to a form of hard determinism (or nomological determinism).  Maybe there’s a compromise position in between (that is, a universe with no meta-time but without everything being effectively predetermined).

Second, which has two parts (neither of which is really a problem, more of an explanation):

  •    The inner universe is a glome, which has a “surface volume” that is greater than the volume of a sphere of the same radius.  The idea of critical density that is mentioned is related to the radius of a sphere, specifically the sphere defined by the distance that light could travel in the time that the universe has been in existence (and, relatedly, since it started expanding).  On the “surface volume” of a glome, however, the radius is a little vexed, much the same as it is when considering drawing a (relatively large) circle on the surface of a sphere.  Is the radius that of the flat circle created by sectioning the sphere, or the arc length between the point on the sphere directly above the centre of that section and its outer rim?  That doesn’t really create a circle anyway, and its area is both greater than that of the circle created by the section and less than that of a circle with a radius equal to the arc length.  Would two dimensional beings on the surface of such a sphere notice?  I don’t think so, since they would only be sensitive to two dimensions, which would notionally be aligned with a plane passing through the locus of the sphere.

  •    To be nice and neat, it would be great if the inner universe receives one “unit” of energy for each “unit” of time during which the radius increases by one “unit” of length.  That does not initially seem to be the case though: it’s one half Planck mass worth of energy for each unit of Planck time during which the universe expands by one Planck length, per Hubble volume (which is the sphere defined by the radius of the universe at that time, recalling that the universe is a glome).  This gets a little confusing in four dimensions, and I am not entirely convinced by people who say they can imagine what a four-dimensional object looks like, so let’s consider a sphere as an analogy.  We can get circles from a sphere by sectioning it.  The greatest circle we can create has the same radius as the sphere itself.  The sectioning effectively creates two hemispheres.  Note that I remain aware that the surface area of the curved section of the hemisphere is not equal to the surface area of the circle created by the section.  By analogy, the universe could be notionally sectioned by a spherical section, creating two halves, meaning two (three-dimensional) Hubble volumes, meaning that the nice whole single “unit” of energy for each “unit” of time during which the radius increases by one “unit” of length is obtained for the universe as a whole.

Third is the orthogonality.  What I am really talking about here is an orthogonal rotation of spacetime, such that while events in the outer universe could be described by the quaternion P = xi + yj + zk + t, events in the inner universe would be described by a related quaternion P′ = x′l + y′m + z′n + t′(o), where the dimensions represented by i through to (possibly) o are all orthogonal to each of the others.  I say “possibly” about the dimension represented by o, which is linked to time, because of the issue described below.

Fourth, and this is probably most obvious, at least to some: in my conception of this whole thing, time is equivalent to expansion.  Once expansion stops then, we have a problem.  Also, I do have a notion of gravity being linked to the expansion, since gravity is what happens where the expansion is resisted by a concentration of mass/energy, rather than being an attractive force per se.  Which means that once expansion stops … either something really bad happens, or nothing else happens.  Unless, that is, the process begins again but in a sort of reverse.  Just how that would play out is not something on which I intend to speculate other than to suggest that, if this is the mechanism, then conceptually the “reservoir” in which mass-energy resided during the inner universe’s expansion, or a similar if not the same reservoir, would begin filling with mass-energy that leaves the inner universe and, in the process, powers a contraction as the critical density is maintained.

The “really bad” is similar to something that I’ve not really considered to be a realistic option, until now – vacuum decay.  The effect, as I have envisioned it, would be to negate the structure of the universe, including not only time (via expansion stopping), but also space, possibly twisting the whole universe in an orthogonal direction and thus seeding a new, inner-inner universe that goes through the same process (with the expansion again being representative of time and thus not requiring the dimension o mentioned above).  I don’t, however, see this event as something that would initially happen in one part of the universe before spreading out at the speed of light, like some sort of cosmic cancer.  It would happen instantaneously across the entire universe as the mass/energy feed ends.  (For fans of the Marvel Universe, imagine a much more substantial snap of the fingers of an uncaring god.)

Fifth, it’s not obvious in the narrative because I’ve deliberately avoided – as much as possible – reference to equations, but the expansion of the universe is related to the critical density of a universe and this is mathematically linked to the mass and radius of a Schwarzschild black hole.  A black hole that contains the final mass of the inner universe will have the final radius of the inner universe.  We would not see, therefore, the entire universe being scrunched down into a tiny black hole.  Instead, as the black hole absorbed all the mass-energy of the universe, it would expand to fill the universe.  At that time though, relative to the outer universe, the entirety of that universe would be smeared over the surface of the universal black hole.

An image might make this slightly more comprehensible.  Say this is a standard universe:


There are some models which expect the universe to stop expanding, then reverse and trigger another universe in the opposite (notionally temporal) direction, setting up an oscillation (or bounce):


Then there is a continuously cyclic universe (conformal cyclic cosmology), which is a bit like this:


The universe which is implied in “Imagine a Universe” would look a bit like this, similar to the oscillating (or bounce) universe, but with the next universe rotating off into an orthogonal direction:

However, what I (currently) have in mind is more like this:

The universe expands out to its maximum, then a “really bad” thing happens (maybe like vacuum decay, maybe the entirety of the universe being absorbed into one final black hole) and the universe becomes, to the next universe in the sequence, a big bang-like event.  The universe that follows in the sequence but is not shown is also orthogonal, so if it helps you can imagine it emerging from the screen, but keep in mind that this is merely a representation of the notion involved.

---

In any case, the description of both universes in “Imagine a Universe” is wrong – but it was intentionally, knowingly wrong, while potentially pointing to an underlying truth.  A sort of lie-to-children.

Sunday, 7 March 2021

Mathematics for Imagining a Universe

Imagine a Universe contains mere narrative with only oblique reference to equations.  That does not mean that the equations don’t exist.

---

Time (and Space)

The idea I was playing with here was that time slows down as you approach the Schwarzschild radius of a black hole (it actually slows down as you approach any mass, just not as much):

 

t_o/t_f = √(1 − 2GM/rc²)

 

Where t_o is proper time between two events for an observer at a distance of r from the centre of a mass M, and t_f is co-ordinate time for an observer not under the influence of that mass (strictly speaking, not under the influence of any mass).  G is the gravitational constant and c is the speed of light.  Buried in this equation is the Schwarzschild radius r_S = 2GM/c², the event horizon of a non-rotating black hole.

Objects can’t get past this radius, but the narrative suggests that mass/energy (in the form of energy) might – however, when r < r_S, we have a ratio of t_o/t_f that is the square root of a negative number, and as r approaches 0, we have a ratio of t_o/t_f that approaches infinity times the square root of a negative number.
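As a rough illustration of that flip (a sketch of my own using the Schwarzschild relation above; the one-solar-mass black hole is just an illustrative choice), the ratio can be evaluated with complex arithmetic so that inside r_S it comes out as an imaginary number:

```python
import cmath

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # one solar mass, kg

def dilation_ratio(r_m: float, mass_kg: float) -> complex:
    """t_o/t_f = sqrt(1 - 2GM/(r c^2)); imaginary once r is inside the Schwarzschild radius."""
    return cmath.sqrt(1.0 - 2.0 * G * mass_kg / (r_m * C * C))

r_s = 2.0 * G * M_SUN / C**2  # Schwarzschild radius of a one-solar-mass black hole (~2.95 km)
for r in (10.0 * r_s, 1.1 * r_s, 0.5 * r_s, 0.01 * r_s):
    # Outside r_s the ratio is a real number below 1; inside it is purely imaginary.
    print(f"r = {r / r_s:5.2f} r_S  ->  t_o/t_f = {dilation_ratio(r, M_SUN)}")
```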

While the square root of a negative number is formally called an imaginary number, this type of number is used in a number of fields to denote orthogonality (for example in the use of quaternions) – the notion that two things (electric and magnetic fields, for example) are perpendicular to each other.  Time, as another example, is orthogonal to space.  With the four linked dimensions of spacetime, each is orthogonal to the others.  When, in the narrative, I talk of an orthogonal universe, that universe (notionally) has time that is orthogonal to all the dimensions of this universe and also to the three dimensions of space in that universe, each of which in turn is orthogonal to all the dimensions of space in this universe.

There could be no interaction between orthogonal universes, and to each the eternity of the other would be a mere moment – with one at the end, and the other at the beginning.  This permits us to hypothetically link together a chain of effectively eternal universes.

The same logic applies to space due to length contraction as one approaches a mass, although it’s not quite as simple, since the length contraction is only in one dimension (parallel to the line of separation from the centre of the mass).  However, masses that approach the Schwarzschild radius are ripped apart and the resultant energy is smeared across the surface of the black hole – acting as the “reservoir” for the inner universe.

 

Critical Density and Expansion

In cosmology, “critical density” is the density of the universe at which it neither expands forever nor collapses.  Such a universe is described as “spatially flat”.  Critical density is given by:

 

ρ_c = 3H²/8πG

 

where H is the Hubble parameter (note that H₀ is also referred to as the Hubble constant, but that is just the value of the Hubble parameter today) and G is the gravitational constant again.  The Hubble parameter is the rate at which distant objects are receding, given as a ratio between that rate and their distance – usually given in km/s/Mpc (kilometres per second per megaparsec).

There is also Hubble time, currently calculated from the measured value of the Hubble parameter – either 67.4 or 74.0 km/s/Mpc, with errors of about 1–2 km/s/Mpc (or about 2%) – to be either 14.5 or 13.2 billion years respectively, with error margins of about 0.3 to 0.4 billion years.  The mid-range of those values is 13.9 ± 0.3 billion years, which entirely covers the range in which the age of the universe lies (13.77 ± 0.059 billion years).
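For anyone who wants to check those numbers, a minimal sketch of the unit conversion from a Hubble parameter in km/s/Mpc to a Hubble time in billions of years (the constants are rounded):

```python
SECONDS_PER_GYR = 3.156e16    # seconds in a billion years
KM_PER_MPC = 3.086e19         # kilometres in a megaparsec

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """Hubble time 1/H0, converted to billions of years."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC  # convert to 1/s
    return 1.0 / h0_per_second / SECONDS_PER_GYR

for h0 in (67.4, 74.0):
    print(f"H0 = {h0} km/s/Mpc  ->  Hubble time = {hubble_time_gyr(h0):.1f} billion years")
# Prints roughly 14.5 and 13.2 billion years respectively.
```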

The volume of a “Hubble sphere”, where r_H is the Hubble length (c/H, where c is the speed of light), is

 

V_H = 4πr_H³/3 = 4πc³/3H³

 

The universe appears, very much so, to be spatially flat.  I make the simple assumption that that appearance is reflective of reality.

If the universe is spatially flat, then it always has been and always will be (according to Sean Carroll, who wrote “a spatially flat universe remains spatially flat forever, so this isn’t telling us anything about the universe now; it always has been true, and will remain always true”).  Consider then a “Hubble mass”, M_H, which is the mass inside a Hubble sphere given that the density in that sphere is critical, so

 

ρ_c = M_H/V_H = 3H³M_H/4πc³ = 3H²/8πG

 

So

 

M_H = c³/2HG

 

Recalling that r_H = c/H, and rearranging,

 

r_H = 2GM_H/c²

 

Which is the equation for the Schwarzschild radius of a Schwarzschild black hole of mass M_H.  Note, however, that there is a direct relationship between the radius of a Hubble volume in a spatially flat universe and the mass contained within that radius.  This means that as the radius increases, so too does the mass, or

 

Δr_H = 2GΔM_H/c²

ΔM_H/Δr_H = c²/2G
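A minimal numerical check of that relationship (my own sketch, using an illustrative Hubble constant of 70 km/s/Mpc): compute the critical density, the Hubble mass inside the Hubble sphere, and then the Schwarzschild radius of that mass, and compare it with the Hubble length.

```python
import math

G = 6.674e-11         # m^3 kg^-1 s^-2
C = 2.998e8           # m/s
H0 = 70.0 / 3.086e19  # illustrative Hubble parameter of 70 km/s/Mpc, converted to 1/s

r_h = C / H0                                   # Hubble length
rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)      # critical density
m_h = rho_c * (4.0 / 3.0) * math.pi * r_h**3   # Hubble mass (critical density times Hubble volume)
r_schwarzschild = 2.0 * G * m_h / C**2         # Schwarzschild radius of that mass

print(f"Hubble length:        {r_h:.3e} m")
print(f"Hubble mass:          {m_h:.3e} kg")
print(f"Schwarzschild radius: {r_schwarzschild:.3e} m")  # matches the Hubble length
```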

 

Another assumption made in Imagine a Universe is that there is a limitation on the rate of expansion to a “quantum of length per quantum of time”.  Using Planck units as our notional stand-ins, this would mean one Planck length (L_P) of additional radius per Planck time (t_P), noting that L_P/t_P = c, therefore Δr_H/Δt = c, and thus

 

ΔM_H/Δt = c³/2G = ½c³/G

ΔM_H/Δt = ½m_P/t_P (since m_P/t_P = c³/G)

 

Which means that, within a Hubble volume, mass increases at a rate of half a Planck mass per Planck time to maintain critical density.  (More will be said about this in a later post.)  Note that this does not necessarily mean that the Planck time is the smallest increment of time, merely that even if there are smaller increments, the ratio of mass added per increment will be equivalent to half a Planck mass per Planck time (per Hubble volume).
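The step from c³/2G to half a Planck mass per Planck time can be checked directly from the definitions of the Planck units (m_P = √(ħc/G) and t_P = √(ħG/c⁵) together give m_P/t_P = c³/G).  A minimal sketch:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
HBAR = 1.0546e-34  # reduced Planck constant, J s

m_planck = math.sqrt(HBAR * C / G)     # Planck mass, ~2.18e-8 kg
t_planck = math.sqrt(HBAR * G / C**5)  # Planck time, ~5.39e-44 s

rate_from_relativity = C**3 / (2.0 * G)           # dM_H/dt = c^3/(2G), in kg/s
rate_in_planck_units = 0.5 * m_planck / t_planck  # half a Planck mass per Planck time, in kg/s

print(f"c^3/(2G):        {rate_from_relativity:.3e} kg/s")
print(f"(1/2) m_P / t_P: {rate_in_planck_units:.3e} kg/s")  # the same number
```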

 

Vacuum Energy and Critical Density

If, as discussed above, mass (or as I prefer, mass-energy) is being added to the universe at the rate of half a Planck mass (equivalent to half a Planck energy) per Planck time (per Hubble volume), then we could work out how much mass-energy would exist in the universe at this time.  At 13.77 billion years old, the universe has experienced 8.06×10⁶⁰ Planck times and so the Hubble volume that we live in would contain 4.03×10⁶⁰ Planck masses, which is 8.77×10⁵² kg.  The currently estimated mass of the observable universe is “at least 1×10⁵³ kg”, which is in the ballpark (more on this in a later post).

Note that the energy equivalent of 8.77×10⁵² kg is 7.88×10⁶⁹ J, in a Hubble volume of 4πr_H³/3, where r_H is 8.06×10⁶⁰ Planck lengths, so 9.26×10⁷⁸ m³.  That makes the energy density 8.51×10⁻¹⁰ J/m³, or ~10⁻⁹ J/m³.  Which is the value of vacuum energy.  Converting back into terms of mass, we get 9.47×10⁻²⁷ kg/m³, or ~10⁻²⁶ kg/m³.  Which is our universe’s critical density (at this time).  Note the comment here with regard to average density (“including contribution from energy”).
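For anyone who wants to reproduce those figures, a minimal sketch of the whole chain, from the age of the universe to the implied energy density and mass density, using standard (rounded) values of the constants:

```python
import math

C = 2.998e8                # speed of light, m/s
PLANCK_TIME = 5.39e-44     # s
PLANCK_LENGTH = 1.616e-35  # m
PLANCK_MASS = 2.176e-8     # kg
AGE_YEARS = 13.77e9
SECONDS_PER_YEAR = 3.156e7

age_s = AGE_YEARS * SECONDS_PER_YEAR
planck_times = age_s / PLANCK_TIME          # ~8.06e60
mass_kg = 0.5 * planck_times * PLANCK_MASS  # half a Planck mass per Planck time -> ~8.77e52 kg
energy_j = mass_kg * C**2                   # ~7.88e69 J

r_h = planck_times * PLANCK_LENGTH          # radius grows one Planck length per Planck time
volume_m3 = (4.0 / 3.0) * math.pi * r_h**3  # ~9.26e78 m^3

print(f"Planck times elapsed: {planck_times:.2e}")
print(f"Accumulated mass:     {mass_kg:.2e} kg")
print(f"Energy density:       {energy_j / volume_m3:.2e} J/m^3")  # ~8.5e-10
print(f"Mass density:         {mass_kg / volume_m3:.2e} kg/m^3")  # ~9.5e-27
```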

Most of the mass-energy in the universe is sitting there in the vacuum, with only a small proportion manifesting as baryonic matter.

---

The upshot of all this is that if the universe is spatially flat, then the introduction of mass-energy itself would drive expansion of the universe.  It makes sense that this expansion would not be instantaneous – and instantaneous expansion is not what we observe.  What we observe instead is an expansion at a fraction of the speed of light which is proportional to the fraction of a Hubble length that the distant object (usually a galaxy) is from us.  Which is also to say that we live in the centre of a Hubble volume that is expanding at the speed of light (noting that this is not a special place for us; every point in the universe is the centre of its own Hubble volume, some of which overlap with ours).  This in turn means that the universe is expanding at a rate of one Planck length per Planck time (and, as calculated above, increasing in mass-energy at a rate of half a Planck mass per Planck time (per Hubble volume)).

Conversely, if the universe is spatially flat, and the universe is expanding (which we observe), then mass-energy must be being added to the universe.  It’s possible to imagine that mass-energy would be pulled into existence (in our universe) if 1) something else were driving the expansion of the universe while 2) some mechanism was constraining the universe to be spatially flat, noting that this raises the question of where the mass-energy comes from.  But this seems to be a less parsimonious hypothesis.

The far more parsimonious hypothesis is that mass-energy is being added to a spatially flat universe (which is observed) which explains the expansion (that we observe) at the rate at which we observe it, resulting in a density that we observe and a vacuum energy that we observe.  We are then left with only one question – where does that mass-energy come from?

This was the question (to one level) that Imagine a Universe was hinting at.

Saturday, 27 February 2021

Imagine a Universe

Imagine a universe in which the inhabitants, those sufficiently intelligent and motivated, and technologically advanced, can look out into space and observe that distant galaxies appear to be moving away at a speed directly proportional to their distance.  This expansion of the universe, as the inhabitants can determine, is sufficient to overcome the effect of gravity that, they calculate, would otherwise eventually draw all the galaxies together.

Then imagine that, after a certain amount of time has passed, at least a dozen or so billion years, that expansion stops.  And gravity takes over.

If perfectly balanced, the galaxies would remain where they are, because they would experience attraction from all directions, which cancel out, but the balance is not perfect.  So, slowly but inevitably, the galaxies coalesce, collapsing into black holes the like of which the universe, increasingly lifeless, has never seen.

Eventually, approaching effective eternity, all mass-energy in the universe is contained within a single black hole which has captured everything that universe contained.

---

Imagine a notional particle as it approaches the outer reaches of this final black hole.  Due to relativity, time dilation effects mean that the time experienced by that particle (relative to a notional observer outside of the influence of the black hole) approaches the square root of zero, before flipping into the square root of a very small negative number, or a time that is orthogonal to the time in the universe.  Between the outer reaches of the black hole and its centre is an eternity of orthogonal time.  Length too extends inside the black hole, orthogonally to infinity.

Imagine that, inside this black hole then … is another universe.

From the standpoint of the inner universe, the outer universe is entirely in the past, the outer universe’s entire eternity occurring during the inner universe’s very first instant.  All the mass-energy of the outer universe was placed into “a channel” and began appearing in this universe – from that very first instant.

However, there are physical limits on the rate at which mass-energy can enter a universe, given by the critical density (hence the notion of “a channel”, of restricted dimensions).  This means that there is a tension which leads the inner universe to expand as mass-energy makes its way in.  In the first instant, the first quantum of time, the absolute minimum space fills with the absolute maximum mass-energy for that volume.  From then on, half that amount of energy appears each quantum of time, as space expands such that the radius increases one quantum of length per quantum of time – thus maintaining the critical density.

---

In the same way as, for the inner universe, the outer universe’s eternity has already occurred, from the notional perspective of mass-energy entering the inner universe from the outer universe, the entire expanse is, as it were, a single quantum of volume.  Mass-energy (in the form of energy) is spread between all “grains” of inner universe space.

Initially, this means that the inner universe is incredibly dense and hot, but that density decreases as the universe expands despite the constant feed of mass-energy, because the volume increases with the cube of the radius (where the radius is proportional to the linear rate of increase in mass-energy).

Eventually the density and heat decrease to the point at which photons manifest.  Then particles.  And eventually stars.  All the time mass-energy continues to feed into the universe, driving its expansion.

By the time that intelligent beings are considering the expansion of the inner universe, say a dozen billion years or so after the beginning, the rate at which energy is entering that universe (half the mass of the smallest possible theoretical black hole per quantum of time) verges on infinitesimal.  However, the accumulation of this energy over a dozen billion or so years does manifest when the energy of empty space (vacuum energy) is considered.

Because space is largely empty, with galaxies and even stars and planets being massive exceptions, the energy of space at any one time is roughly equivalent to the total amount of energy that has entered the inner universe divided by the total volume.

And so, the inner universe continues to expand, at a rate of one quantum of space per quantum of time, which is also the maximum speed at which anything can move in such a universe.  Intelligent observers with sufficient technological advancement notice this upper limit on speed (via the fact that photons, in vacuo, are limited to it) and come to understand relativity, gravity and the nature of black holes.

They develop sophisticated equations that explain how their universe expands (although perhaps not why), what the critical density of a universe is and how gravity affects time and space, including what happens to time and space as objects approach a black hole.  They develop notions of the granularity of space and time, which they come to understand as intrinsically interconnected.  Some notice that the equations for a black hole of a certain radius and the critical density of a universe with the same radius are linked.

But one day, the mass-energy that was effectively queued up by the final black hole in the outer universe runs out.  The total mass-energy in the inner universe is now equal to the total mass-energy that was in the outer universe.

And the inner universe stops expanding.

Sunday, 7 February 2021

Is QED Wrong, or Just Wikipedia?

There seems to be an error either with quantum electrodynamics (QED) and stochastic electrodynamics (SED), or with Wikipedia.

 This is the section in question (archived):

 

Note that the critical density of the universe is in the order of 10⁻²⁶ kg/m³, which means that the critical energy density of the universe (given that E = mc²) is in the order of 10⁻⁹ J/m³.

Note also that for a universe with a radius of one Planck length, at an age of one Planck time and a corresponding Hubble parameter value of one inverse Planck time, the critical density would be in the order of 10⁹⁶ kg/m³, which corresponds with a critical energy density in the order of 10¹¹³ J/m³.
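Those orders of magnitude are easy to check.  A minimal sketch, with the Hubble parameter set to one inverse Planck time:

```python
import math

G = 6.674e-11           # m^3 kg^-1 s^-2
C = 2.998e8             # m/s
PLANCK_TIME = 5.39e-44  # s

h_planck_era = 1.0 / PLANCK_TIME                             # one inverse Planck time
rho_c = 3.0 * h_planck_era**2 / (8.0 * math.pi * G)          # critical (mass) density
print(f"Critical density:        {rho_c:.1e} kg/m^3")        # ~6e95, i.e. order 10^96
print(f"Critical energy density: {rho_c * C**2:.1e} J/m^3")  # ~6e112, i.e. order 10^113
```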

A vacuum energy density of 10¹¹³ J/m³ beyond the spacetime origin of our universe is ridiculous.  As an example, the Earth has an average density of 5515 kg/m³, which is equivalent to an energy density of 5×10²⁰ J/m³ – meaning that if the vacuum had an energy density of 10¹¹³ J/m³, it would swamp us.  We’d not even be a rounding error.

Whether QED/SED is wrong, or Wikipedia contains a misinterpretation of the writings of Peter Milonni and/or de la Pena and Cetto, I don’t actually know.  But anyone suggesting such a huge magnitude of vacuum energy density should really go back and check their figures.

I am certainly not going to stay awake at night worrying about the cosmological constant problem (or whether I need to worry about it being a slam dunk for Fine Tuners).

Saturday, 23 January 2021

Why the UK Variant (or Kent Variant) being More Lethal might be a Good Thing

Firstly, I must make it clear that I don’t want more people to die.  And I do realise that some people already have caught the UK variant (or Kent variant) of Covid-19, and it being more lethal is difficult to see as a good thing – for them.  However, there is a sense in which the new variant being more lethal, as well as being more transmissible, is a good thing.

 

Part of the argument is that the UK variant being more transmissible is a really bad thing.  There are reports that the UK variant is in the order of 50% more transmissible, which means that after 10 rounds (where a round is where those now infected have infected those that they are going to infect) about 40 times as many people will die (as would have with the original variant).  If it is only 30% more lethal, and not more transmissible, then it will kill 30% more people, so 1.3 times as many people will die.  Combine the effects together and it’s about 50 times as many.  Which, at first blush, is not good.

 

However, the increased lethality of the disease only comes into effect once you have managed to catch the disease.  It’s entirely possible that people are going to be far more scared by the notion that they are 30% more likely to die with the new variant, and 50% more likely to get it, than they are by the notion that they will transmit it to 50% more people.

 

This might open up a channel into the mind of an otherwise unreachable and selfish individualist – especially if they can wrap their mind around the fact that it is not just 50% more likely that they will get Covid-19 with the new variant.  That’s merely the increased likelihood of getting it from each person with Covid-19 that they interact with; if more of the people they interact with are infected (which will be the effect if infection rates are not otherwise stemmed), then there will be a compounding effect.

 

In other words, the increased lethality of the UK variant might just be the motivation that is needed for selfish people to act selfishly and, through acting selfishly, protect us all.


---


On the mathematics side, if the 30% increase in lethality were enough to scare the non-cooperative types into better behaviour, such that it reduced the reproduction number by 10%, then this could halve the number of deaths – or more – over the long term (assuming just the increase in transmissibility).
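Here is the back-of-the-envelope version of that claim as a sketch (the growth factors are illustrative assumptions only, not fitted to any real data): compare cumulative infections over ten rounds for a variant that is 50% more transmissible, with and without behaviour that trims the reproduction number by 10%.

```python
# Illustrative only: simple geometric growth over "rounds" of infection,
# ignoring immunity, interventions and everything else that matters in reality.

def cumulative_infections(growth_factor: float, rounds: int = 10, seed: int = 100) -> float:
    """Total infections after a number of rounds of geometric growth from a seed group."""
    total, current = 0.0, float(seed)
    for _ in range(rounds):
        current *= growth_factor
        total += current
    return total

baseline = 1.1                             # assumed per-round growth factor of the old variant
new_variant = baseline * 1.5               # 50% more transmissible
new_variant_cautious = new_variant * 0.9   # behaviour change trims it by 10%

unchecked = cumulative_infections(new_variant)
cautious = cumulative_infections(new_variant_cautious)
print(f"Cautious behaviour leaves {cautious / unchecked:.0%} of the unchecked infections")
# With these assumptions, well under half - consistent with halving the deaths, or more.
```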

Thursday, 21 January 2021

Atomic Tea

There is at least one complaint made by my climate-denialism-curious friend, JP, that is valid.  Some elements of the media are guilty of catastrophising.

 

I can understand why they do it, to a certain extent – they are selling papers, or magazines, or advertising airtime, or clicks, or eyeballs, or bums on seats etc etc.  The complete and accurate truth is, in at least one sense, not primary.  If the complete and accurate truth got them the outcome they are after, then they would be all for it, but still only as a means to an end, not as the end itself.  Even non-commercial media has to report viewer numbers, or levels of engagement, making them more and more like the commercial outlets with their click-baity headlines and sensational content.

 

The most recent example that I have read, at time of writing, is a piece by the Australian ABC, talking about how bad 2020 was.  I think we can all agree that 2020 was pretty bad, but this article was focussed on something that has not been foremost in our concerns for a while – the climate.  The climate is still there, even if it’s largely outside and many of us have been locked inside more than usual.  And it’s still on a bit of a slide.

 

The article in question did highlight something of considerable concern, namely that the “world's oceans absorbed 20 sextillion joules of heat due to climate change in 2020 and warmed to record levels” (referencing a paper that has “temperature” in the title, but not so much in the text).  For context, it’s worth considering what NOAA have to report.  They talk about this in terms of a “heat content anomaly” measured against the average for the period 1955-2006 in the top 700m of (ocean) water:

Note that a sextillion (or zetta-) is 10²¹, so don’t let the fact that this is sitting just under 20×10²² confuse you.  Things are a little clearer in another chart (lower on the same page).


It does seem like 20 zettajoules in a year is a bit of a spike, but there are ups and downs, so we really should be thinking about the trend, which is a bit over 160/27 = ~6 zettajoules per year.  Still not great.

 

The most egregious statement, in my view, is that 20 zettajoules is equivalent to the release of 10 Hiroshima grade atomic bombs each second.  That’s more than 300 million bombs, or slightly over two bombs for every square kilometre of land surface on the planet which, even if it happened only once in a year, would be enough to make that a worse year than 2020.

 

(For those keeping score, the Hiroshima bomb, “Little Boy”, is calculated to have released between 50 and 75 terajoules.  If we say it was 63 terajoules [15 kilotons], then you would indeed need very close to 10 bombs per second for a year to reach 20 zettajoules.)
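The arithmetic is simple enough to check (a quick sketch, taking the 63 terajoule figure at face value):

```python
OCEAN_HEAT_2020_J = 20e21  # 20 zettajoules
LITTLE_BOY_J = 63e12       # ~15 kilotons expressed in joules
SECONDS_PER_YEAR = 3.156e7

bombs = OCEAN_HEAT_2020_J / LITTLE_BOY_J
print(f"Bomb equivalents in a year: {bombs:.2e}")                      # a bit over 3e8
print(f"Bomb equivalents per second: {bombs / SECONDS_PER_YEAR:.1f}")  # ~10
```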

 

It’s very scary to think of so many atomic bombs going off, and one could well imagine that this was the intention of the writer.  It’s also probably pretty scary to imagine that amount of energy being added to the oceans, since it’s the heat in the oceans that causes the sort of storms that can do so much damage to coastal regions, and it’s simple to imagine that more energy equals either more storms or stronger storms (or perhaps both).  Is there such a simple link?  Perhaps, perhaps not.  Oil extraction companies have upgraded their drilling platforms, so perhaps they think there might be.

 

I wondered, however, how much energy we expend each year on hot drinks.  To make it easier, I am going to limit it to tea and assume that for tea, one boils fresh water from room temperature (293 K) to boiling (373 K) without markedly affecting the density of said water.  I am also going to assume one drinks from a standard cup of 0.24 litres.  So, a litre of fresh water weighs one kilogram and the heat capacity of water is 4.2 kJ per kg per K.  Conveniently, this is very close to 1 kJ per cup of water per K, and since the temperature difference is 80 K, we expend 80 kJ on boiling water for each cup of tea (assuming no wastage).

 

The question then is how many cups of tea do we, as humans, drink?  World Tea News (yes, there is such a thing) has the answer, conveniently in terms of cups per second – 25,000 cups.  That means that we expend about 2 million kilojoules per second on tea.  This is somewhat short of a sextillion, but we are thinking in terms of Hiroshima grade atomic bombs, each at 63 terajoules.   A terajoule is 10¹² joules, so one bomb is equivalent to 31,437 seconds of tea preparation, which is one bomb every 8.7 hours, or very close to 1,000 bombs a year.
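The tea arithmetic, as a sketch using the same assumptions (0.24 litre cups, an 80 K temperature rise, 25,000 cups per second and a 63 terajoule bomb):

```python
CUP_LITRES = 0.24
HEAT_CAPACITY_J_PER_L_PER_K = 4200.0  # fresh water, ~4.2 kJ per litre per kelvin
TEMP_RISE_K = 80.0                    # 293 K to 373 K
CUPS_PER_SECOND = 25_000
LITTLE_BOY_J = 63e12
SECONDS_PER_YEAR = 3.156e7

energy_per_cup_j = CUP_LITRES * HEAT_CAPACITY_J_PER_L_PER_K * TEMP_RISE_K  # ~80 kJ
tea_power_w = energy_per_cup_j * CUPS_PER_SECOND                           # ~2e9 J/s
seconds_per_bomb = LITTLE_BOY_J / tea_power_w

print(f"Energy per cup:        {energy_per_cup_j / 1000:.0f} kJ")
print(f"One bomb of tea every: {seconds_per_bomb / 3600:.1f} hours")
print(f"Bombs of tea per year: {SECONDS_PER_YEAR / seconds_per_bomb:.0f}")  # ~1,000
```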

 

I am reasonably certain that the average English person would be willing to accept 1,000 bombs a year in order to have a good cup of tea.  What, on the other hand, would be patently ridiculous is to express the energy used to prepare tea in terms of atomic bombs.  I have a nagging feeling that the same applies to the amount of energy absorbed by the oceans.

 

Perhaps the author of the article (and the original paper) could have assisted by expressing the temperature change that would be caused by the 20 zettajoules being absorbed by the oceans.  If we make some assumptions, we can get a rough idea.  The ocean surface is about 360 million square kilometres.  The paper was referring to a depth of water of 2,000 metres, but not all the ocean is that deep, and beyond a certain depth (the epipelagic zone) there really isn’t much variation in the temperature, so we could make a rough calculation using an average of 600 metres depth, which is in the middle of the mesopelagic zone.  That gives an ocean volume of 2.17×10¹⁷ m³, and given that there are 1,000 litres in a cubic metre and it takes about 3,850 J to raise a litre of seawater by 1 K, that’s about 0.024 K a year, or 0.24 degrees per decade.  Note that this is a very rough calculation, but it’s in the same order as what NOAA are reporting (at climate.gov), where they say that temperatures have been rising at 0.18 degrees per decade since 1981.  If we use 800 m, which is towards the bottom of the mesopelagic zone, the result is very close to 0.18 K per decade.  I doubt that that figure is right though, since that’s the average since 1981, and today’s figure is likely to be higher (although maybe not as much as a full third higher).
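And the rough ocean-warming calculation as a sketch (the 600 m and 800 m effective depths are the same assumptions as above):

```python
OCEAN_AREA_M2 = 3.6e14  # ~360 million square kilometres
SEAWATER_HEAT_J_PER_L_PER_K = 3850.0
LITRES_PER_M3 = 1000.0
HEAT_PER_YEAR_J = 20e21  # 20 zettajoules

def warming_per_decade_k(effective_depth_m: float) -> float:
    """Rough temperature rise per decade if the annual heat gain is mixed through this depth."""
    volume_litres = OCEAN_AREA_M2 * effective_depth_m * LITRES_PER_M3
    per_year = HEAT_PER_YEAR_J / (volume_litres * SEAWATER_HEAT_J_PER_L_PER_K)
    return per_year * 10.0

for depth in (600.0, 800.0):
    print(f"Effective depth {depth:.0f} m: ~{warming_per_decade_k(depth):.2f} K per decade")
# Roughly 0.24 and 0.18 K per decade respectively.
```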

 

If anyone knows what the actual average temperature increase of the ocean was, please let me know.  Looking at the heat content chart (1955 through to 2020) overlaid on the temperature chart (1880 through to 2016), it looks like it’s probably gone up:

Monday, 18 January 2021

NEWS FLASH: Tigers Not Part of Early Human Development

I’ve been going through the back-catalogue of Tales from the Rabbit Hole, an interesting approach to conspiracy theories and conspiracy theorists.  In the most recent episode that I’ve listened to, Episode 32, Mick West spoke with the Skeptic of (should be “in”) the North.

 

During their discussion, at about 24:45, there was mention of Michael Shermer talking about the evolutionary benefit of making a Type 1 error (or a false positive) when the data is ambiguous and the example given was of one of our ancestors hearing a rustle in the grass.  The cost of making a Type 1 error and assuming a tiger (when there isn’t one) is minimal, but the cost of making a Type 2 error and assuming there is no tiger (when there is one) could easily be massive.

 

Michael Shermer did in fact raise this as an example, in a blog post called Patternicity: “Better to flee a thousand imagined tigers than be taken unawares by one real one”.  What Shermer isn’t suggesting, however, is that there was an evolutionary pressure placed, by tigers, on humans to make Type 1 errors in response to ambiguous data.

 

Peter Watts did though:

 

Fifty thousand years ago there were these three guys spread out across the plain and they each heard something rustling in the grass. The first one thought it was a tiger, and he ran like hell, and it was a tiger but the guy got away. The second one thought the rustling was a tiger and he ran like hell, but it was only the wind and his friends all laughed at him for being such a chickenshit. But the third guy thought it was only the wind, so he shrugged it off and the tiger had him for dinner. And the same thing happened a million times across ten thousand generations - and after a while everyone was seeing tigers in the grass even when there weren’t any tigers, because even chickenshits have more kids than corpses do.

 

It’s bad enough when apologists use tigers attacking our ancestors as an example when trying to prove that evolution doesn’t exist, but when atheists and skeptics start talking about our ancestors on the African plain being attacked by tigers, it’s a problem.

 

There are two problems really.  First and foremost, tigers have never lived in the wild in Africa (although there might now be feral tigers, that is tigers who have escaped from zoos after being transported to Africa by humans).  If we want to pretend that humans have been shaped into regular Type 1 error makers, by tigers, then we have two options.  First, this applies solely to people with ancestors who lived in the natural range of tigers – with the rest of us not being so inclined. Or, second, our understanding of where we evolved is wrong and we somehow evolved within the natural range of tigers despite the evidence that we evolved into hominid form on the African savannah.

 

The other problem is that if we, as evolving hominids, had not already developed the handy skill of overestimating the level of risk to our lives, we would not have been in the position to evolve into Homo sapiens.  Not that tigers would have eaten us, but lions or wild dogs would have, or any number of dangerous African creatures would have killed us without even bothering to eat us.

 

Making Type 1 errors and treating something benign as a possible threat is not unique to humans: 



I am quite confident that this cat is not going through a calm assessment of the potential for risk associated with the cucumber.  It’s just an instinct to run away from something that looks vaguely like it could be a snake.  Horses are similar, with them freaking out if they see a hose (where they aren’t expecting to see one).  Admittedly horses freak out about a lot of things: plastic bags, people wearing hats, trees, the wind, sand and so on.  Even fish seem to be in the Type 1 error club.  And coral.

 

The point is that this characteristic of being overly cautious about ambiguous data is not a human thing, it’s a characteristic of all higher order animals.  Maybe, it could be argued, even lower order animals are willing (on a metaphorical level) to make the occasional error in order to increase their chance of survival.  We as humans have not developed it, we have just retained it.

 

The idiocy that is involved in thinking that humans cleverly developed this overweening caution due to their interaction with tigers on the African plain and the arrogance to think that it was only humans who did so … both are, sadly, entirely human.