Friday 26 April 2024

Concepts in Taking Another Look at the Universe

In Mathematics for Taking Another Look at the Universe, I discussed the equation that I introduced but did not fully explain in Taking Another Look at the Universe.  In that second post, however, I still sort of skated over a couple of elements and I didn’t specifically explain how I came to the equation that I was notionally explaining.  That was not entirely unintentional.

I usually like doing things from first principles but, in this instance, it was more a case of inspiration.  And I was trying to solve a different problem.  Back in that post, right at the end, I mentioned an internal struggle that I have (which also triggered A 4D Black Hole?) related to considering the universe as akin to both an expanding sphere and an expanding glome.  In it, I threatened to give this some more thought and that was what I was doing.

So, because it is what I have been doing for years, I eliminated one dimension from consideration and thought about a (spatial) circle moving towards me (in time), but one which was expanding (from nothing) as it went.  Then I was inspired to think about it shrinking back to nothing as it reached me and realised that this was equivalent to me being stationary and having a sphere move past me (so long as I could only see slices that were perpendicular to its motion).  Add a dimension and I had a way to consider a(n expanding) sphere that traced out a glome.

This is very similar to what I showed in the chart first shared in Taking Another Look at the Universe:


The major difference is that I have eliminated yet another dimension.  I am only considering one spatial dimension there.  And time.

Which brings me to an element that I skated over.  The blue curve represents potential events that we could see if we look in precisely one direction – at one specific moment in time.  Naturally, we can’t see multiple events in that way.  We would just see photons, and any that arrived simultaneously would be blended together.

While in Mathematics for Taking Another Look at the Universe, I suggested that events on the blue curve are maximally distant observed events (MDOEs) and all less distant events are below the curve, this should not be taken as meaning that we could see those less distant events simultaneously with MDOEs – photons from less distant events will have already passed us by.  To work out when, you can draw a line from 13,787 million years ago, through the distance of the event from our location (at that time), and take the intercept with the y-axis.  Divide that by the speed of light and you have how long ago photons passed our location.  For observers at that time, the event being considered would have been an MDOE.

I actually got to the equation x'=(ct0-x).x/ct0 by making a mistake.  I was specifically after a circle, which got me to the x² element but, because the image in Mathematics for Taking Another Look at the Universe came later, I kept getting stuck on the notion that there was a temporal component (inflation) and a spatial component (distance away from us at the time), which meant that I added them together.  This is part of the reason that I called the left-hand term x' which, I accept, is confusing.  I knew, at some level, that the x² value had to be negative but, in the intermediate stage, I just had to make it negative, chart it and see how it turned out – and it immediately turned out perfectly, which gave me some not inconsiderable concern.

Why did the x² value have to be negative?  In retrospect, it is bleeding obvious.  To turn a normal parabola (y=x²) upside-down, you need to make the right-hand term negative.  To raise it up, you need to add a positive value to the right (y=x²+c).  To shift it to the right, you need to subtract a positive value from x inside the square (y=(x-b)²).  Once I realised that I was looking for a(n inverted and offset) parabola rather than a circle, things fell into place quite quickly.
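
As a quick check that the equation really is just an inverted, offset parabola, it can be rewritten in vertex form; a short sympy sketch (the symbol names are mine) confirms that x'=(ct0-x).x/ct0 is identical to x'=ct0/4-(x-ct0/2)²/ct0, peaking at x=ct0/2, i.e. at half the age of the universe:

```python
import sympy as sp

x, ct0 = sp.symbols("x ct0", positive=True)
xp = (ct0 - x) * x / ct0                    # x' = (ct0 - x).x/ct0

# Vertex form of the same curve: an upside-down parabola, shifted right by ct0/2
# and raised by ct0/4
vertex_form = ct0 / 4 - (x - ct0 / 2)**2 / ct0
print(sp.expand(xp - vertex_form))          # prints 0: the two forms are identical
```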

It would all have been simpler to have considered a photon reaching us from the approximate era of the instanton, when the universe was at the Planck scale.  The vast majority of the transit of such a photon would have been due to expansion.  Photons generated by significantly later events would have a significant element of their transit due to the distance by which our location and the location of the event were separated (at that time).  Photons that reach us from very recent events are near enough to simultaneous and therefore their transit time is almost entirely due to the spatial distance the photons have traversed.

The key element that unlocked the equation is the realisation that the events whose photons reach us can all be laid out, moment by moment, into a sequence that is 13.787 billion years long.  Therefore, photons from an event that reaches us after, say, 12 billion years, cannot have travelled 12 billion light years, because the universe was only about 2 billion light years in radius at the time of the event.  And this led to the chart below:

This chart locked in the equation and extinguished my doubts.

---

The charts are actually reduced by more than two dimensions.  I have only considered expansion of the radius – which in reality expands in all directions, so the charts are only half of what they should be.  Imagine that the observer implied in the chart is not limited to seeing photons from the region of space that expanded upwards, but also sees them from the region that expanded downwards.  That would imply a surface in the shape of an eye.  Go a step further and imagine that the observer can look around, swivelling around the y-axis (out of the screen to the left, and eventually back again).  This would imply a surface in a shape somewhat like a torus.  Finally, imagine that the observer can look up and down.  The resultant shape implied would be four dimensional and it is not possible for me to describe it, other than as a torus rotated around an additional dimension.  It is easy, but I suspect wrong, to think of it as tracing out a sphere.  However, as observers, we would naturally interpret what we see of the universe as being the inside of an enormous sphere the centre of which we occupy.

Monday 22 April 2024

Mathematics for Taking Another Look at the Universe

In Taking Another Look at the Universe, I blithely introduced the equation x'=(ct0-x).x/ct0 without giving any derivation or real explanation as to what the terms mean.  I made the equation very slightly neater than it had been but, in the process, I might have made it less comprehensible for people like myself who like to work from first principles.

The central term is, of course, x – the distance to an event.  The other term, t, is the time of that event – but, in reality, it is more like the difference between the timing of the event and now, so it could be thought of as Δt.  Similarly, x could be more accurately described by Δx – but for reasons that may become clear shortly I dropped the “Δ” from both.

The term t0 is used such that the subscript aligns with H0, the current value of the Hubble parameter, and so t0 is the age of the universe while x0=ct0 is the current Hubble length (or the current radius of a FUGE universe).  By analogy, for an observed event at (x,t), x=ct.

Note that this follows from the notion that we can only observe an event if there has been sufficient time for the photons from that event to reach us.  However, while photons are travelling to us from the event, the space in between is also expanding.  Therefore, for any observed event, there are two components, a temporal one (due to how long ago it happened) and a spatial one (the distance from our location to where the event happened at the time).  The latter is what x' on the y-axis represents in this chart:



It may get a little complicated here.  I suspect this because I have already explained it incorrectly (possibly more than once) and at one point even got close to persuading myself that the equation was wrong.

Consider it this way: all maximally distant observed events (MDOEs) in the very distant past were, when they happened, not as far from where we are in space (as later events were) because the universe had not expanded very much.  Note that the equation x'=(ct0-x).x/ct0 specifically considers MDOEs (with less distant observed events lying under the curve).

The most distant MDOE (in any given direction) would have occurred when the universe was half its current age.  For ease we can call this the ½t0 event, or “½t0e” (half-toe, which can be thought of as both a distance and a time). 

Since ½t0e, all MDOEs have by necessity been less distant because there has been less elapsed time for photons from those events to reach us.  Before ½t0e, all MDOEs were less distant because the universe was smaller. 

We can consider the universe as being divided into two eras, a pre-½t0e era and a post-½t0e era.  There are events in both eras that occurred at locations that were equally distant from us (at the time they occurred), meaning that their photons notionally travelled the same distance in unexpanded space, but photons from the event in the pre-½t0e era will have experienced more expansion during transit.

A marked-up version of the image above may help to illustrate this fact:

Let us take the most extreme example: the instanton event, which happened about 13.787 billion years ago.  There is effectively no distance to where that event happened, because the maximum expansion one could consider to have happened at that time is one unit of Planck length.  As a consequence, the entirety of the distance between us and where the location of that event is now is due to expansion.

The next most extreme example illustrated above is an MDOE almost 2 billion years later, by which time the universe had expanded to a radius of about 2 billion light years.  Photons from that MDOE did not set out from the full extent of the universe at the time, however, but rather from 1.565 billion light years away.  Note that the location of that event (following the light green line up to the left) is currently 12 billion light years away, indicating that the amount of expansion incurred between our location and the location of the event at that time is 10.435 billion light years.  Therefore, the time taken for a photon to reach us is the 1.565 billion years due to the original separation plus 10.435 billion years due to expansion, or 12 billion years, precisely what we would expect.
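
The arithmetic in that example can be checked directly; the quoted figures appear to use ct0 rounded to 13.8 billion (light) years, so that is what this little sketch (mine) uses:

```python
ct0 = 13.8                      # billions of (light) years; 13.787 rounded, as the quoted figures appear to be
x = 12.0                        # x = ct: the photons took 12 billion years to reach us
x_prime = (ct0 - x) * x / ct0   # separation between us and the event at the time it happened
expansion = x - x_prime         # portion of the transit attributable to expansion

print(round(x_prime, 3))               # ~1.565 billion light years
print(round(expansion, 3))             # ~10.435 billion light years
print(round(x_prime + expansion, 3))   # 12.0 billion (light) years, as expected
```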

The upright light green section can be calculated using the following:

Note that sinθ = x'/√(x'²+(ct0-ct)²) = x/√(x²+(ct0)²), so, noting that x=ct:

x'²/(x'²+(ct0-x)²) = x²/(x²+(ct0)²)

x'².(x²+(ct0)²) = x².(x'²+(ct0-x)²)

x'².(ct0)² = x².(ct0-x)²

x'.(ct0) = (ct0-x).x

x'=(ct0-x).x/ct0=(ct0-ct).t/t0

This should come as no surprise, since this is the equation that I charted.
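
For anyone who wants to verify the algebra without working through it by hand, a short sympy check (mine) confirms that x'=(ct0-x).x/ct0 does satisfy the relation obtained by equating the two expressions for sinθ:

```python
import sympy as sp

x, ct0 = sp.symbols("x ct0", positive=True)
xp = (ct0 - x) * x / ct0          # the result being checked

# From equating the two sin(theta) expressions (with x = ct):
#   x'^2.(x^2 + (ct0)^2) = x^2.(x'^2 + (ct0 - x)^2)
lhs = xp**2 * (x**2 + ct0**2)
rhs = x**2 * (xp**2 + (ct0 - x)**2)
print(sp.expand(lhs - rhs))       # prints 0, so the relation holds identically
```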

---

Let us consider the (clean) chart again:


The apparent distance to any event is the length of the curve.  If we divide the curve into a relatively large number of finite elements and approximate the curve by summing the length of all those finite elements, we arrive at 15,800 million (light) years.  As mentioned in Taking Another Look at the Universe, this is well within the range used by Lineweaver and Egan (but they got it from integrating a(t) over the age of the universe, if I understand it correctly).
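
That finite-element sum is easy to reproduce; a small numpy sketch (mine, using a million elements) gives the same ~15,800 million (light) years, which is ~1.148 times the 13,787 million year baseline:

```python
import numpy as np

ct0 = 13787.0                          # age of the universe in millions of years
x = np.linspace(0.0, ct0, 1_000_001)   # a million finite elements along the x-axis
x_prime = (ct0 - x) * x / ct0          # the blue curve

# Sum the straight-line lengths of the finite elements along the curve
length = np.hypot(np.diff(x), np.diff(x_prime)).sum()
print(f"{length:.0f} million (light) years, ratio {length / ct0:.3f}")  # ~15,800; ratio ~1.148
```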

Note also that I asked a question about dark energy in Taking Another Look at the Universe.  We can use these finite elements to take a look at the apparent Hubble parameter value at all points along this curve as the universe expands, and we get a curve that looks like this (once normalised):


The “apparent” H is based on a set of calculations, using a change in the age of the universe of a millionth of 1% and the consequent change to the values of x'.  Normalisation involves multiplying the change in x' (so Δx') by x0/x0', noting that the distance is expressed in terms of x.  This is effectively equivalent to shortening the blue curve in the first figure to a length of 13,800 million (light) years.

The shape of the “apparent H” curve is of particular interest.  Consider it with respect to the discussion in The Problem(s) with the Standard Cosmological Model and the eras discussed at the Scale Factor page at Wikipedia.

There really are only two eras observable in the chart, from about 7 billion years ago to now (to the left) corresponding to the “dark-energy-dominated era” and the period before that (to the right) corresponding to the “matter-dominated era”.  The radiation-dominated era and the purported era of inflation (plus era that preceded it) are not distinguishable at the scale used.

The chart indicates a very similar situation to that posited with the introduction of dark energy, but without requiring any actual dark energy.  The most recent era appears to have acceleration.  The only times that the apparent Hubble value is equivalent to the inverse of the age of the universe are at the transition between the “dark-energy-dominated” and “matter-dominated” eras, and maximally long ago/(apparently) far away.

Note that a lack of dark energy is consistent with the mass of the universe being ~10⁵³ kg (the mass one would expect in a FUGE universe that is 13.787 billion years old).

---

I do acknowledge that, even after being normalised, the “apparent” values of H in the recent past/near vicinity are very high.  This may be worthy of further investigation.

Thursday 18 April 2024

Taking Another Look at the Universe

It might be a big claim here, but I suspect that we might be looking at the universe incorrectly.

Generally, we tend to think of the universe a little like this:

I’m not saying that this is entirely wrong (so long as the circle sort of represents a sphere), but this is not how we see the universe.  What we see off in the distance, ~13.787 billion light years away, is the cosmic microwave background that originated no more than 380,000 light years away from us.

It could be more accurate to represent how we view the universe as like this:

That’s not to say that we are outside the universe, per se, but we are certainly not in the universe that we see.  The universe that we see is in the past.  It is merely a trick of perspective that the universe appears to be all around us even though the furthest reaches of what we can see (apparently 13.787 billion light years away) only arose about 380,000 light years from where we are now.

However, not even this is correct.  There is a relationship between how long it took light from an event to arrive where we are, where it started from, and where that location is now.  This relationship is given by the pair of equations x'=(ct0-x).x/ct0 and x=ct, where t0 is the time since the instanton, t is the time since the event, and x and x' are the distances to where the event took place (actual and observed, respectively).

This graph illustrates the concept (noting that we are considering a FUGE universe):

Interestingly, the length of that blue curve from intercept to intercept on the x-axis … is ~1.148 times that of the length of the x-axis between intercepts – if laid flat against the x-axis, that would be 15,800 million years, or 15.8Gyr which, when multiplied by the speed of light, is well within the range of 15.7±0.4 Glyr as used by Lineweaver and Egan (see FUGE Entropy).

So, the question must be asked, is the apparent variability of the scale factor merely an artifact of our observation of the universe?  And if so, does it explain what is currently explained by the introduction of dark energy (noting that small values of Δx translate to values of Δx' that start off at about 1.4 times Δx and decrease towards the top of the curve to equivalence [at 6.9 billion years ago] and then increase again)? 
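
On that parenthetical point, my reading is that the ~1.4 factor refers to the along-curve (apparent-distance) element per unit of Δx, which is √(1+(dx'/dx)²) for the curve x'=(ct0-x).x/ct0; a quick sketch (mine) of that ratio shows the behaviour described:

```python
import numpy as np

ct0 = 13.787                              # billions of (light) years
for x in (0.1, 3.0, 6.9, 10.0, 13.7):     # billions of years ago
    slope = 1 - 2 * x / ct0               # dx'/dx for x' = (ct0 - x).x/ct0
    ratio = np.hypot(1.0, slope)          # apparent (along-curve) distance per unit of x
    print(x, round(ratio, 3))
# ~1.40 for recent events, falling to 1.00 at ct0/2 (~6.9 billion years ago), then rising again
```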

---

Note the section of the graph that relates to the “dark-energy-dominated era” (using the description from the Space Telescope Science Institute’s HubbleSite page on dark energy* – “About halfway into the universe’s history — several billion years ago — dark energy became dominant and the expansion accelerated”):

Is it possible that what appears to be dark energy could be an artefact of observation?

---

* Note that on the Wikipedia page on the scale factor, the section on the dark-energy-dominated era indicates that this era began when the universe was about 9.8 billion years old, but the reference is from a 2006 book whereas the HubbleSite page was updated in late 2022 so, while it has a more vague reference, it should be considerably more current.  There is also a claim by the Department of Energy (which seems to be paraphrased from a 2022 article by post-doctoral cosmology researcher Luz Ángela García at space.com) that “somewhere between 3 and 7 billion years after the Big Bang, something happened: instead of the expansion slowing down, it sped up. Dark energy started to have a bigger influence than gravity. The expansion has been accelerating ever since”.  Ethan Siegel has the dating at “about 6-to-9 billion years ago”.  Other sources of varying levels of authoritativeness give the figure as about 7 billion years ago (for example Eric Lindner from the Supernova Cosmology Project – but there is no date on the page, so it’s difficult to assess whether this is based on recent work or was just a good guess from as long ago as 2010).

Tuesday 16 April 2024

FUGE Entropy

So, I had toyed with the idea of trying to work out the entropy of a FUGE universe by considering states.  First, I went with a very rough approximation similar to this:


The first instance has only one possible state, noting that it’s representing an instanton (which, yes, should be a sphere of radius r=lPl, but this is just a rough approximation).  Then there’s the second instance and it gets tricky.

The simplistic (also known as “wrong”) way to look at it is to imagine placing two instantons in two of the available slots.  There are, therefore, Ω=8×7=56 different configurations or, more generally, Ω=n³×(n³-1)×…×(n³-(n-1)).  This is approximately equivalent to Ω=(n³-(n-1)/2)ⁿ – with the approximation error decreasing as the value of n gets larger (by the time that n=10, the error in the approximation is only 0.004%).  Note also that as n increases such that n³>>(n-1)/2, n³-(n-1)/2 ≈ n³.

If we consider these as microstates (which we should not, because it is wrong to do so), then we could say that the entropy is:

S=kB.logΩ=kB.log((n³)ⁿ)=3n.kB.log(n)

Then if n=æ≈8.07×10⁶⁰, because we are assuming a FUGE universe (see also below), S=1.5×10⁶³.kB=2.04×10⁴⁰ J/K.

Casting around the internet, I find that the entropy of the universe is of the order of 10¹⁰⁰-10¹⁰⁴ J/K.  Therefore, there is something wrong with my approximation.  I did note that the magnitude of the error is close to æ, since 2.04×10⁴⁰×8.07×10⁶⁰=1.64×10¹⁰¹, hinting that S ∝ n².
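
For reference, the figures above can be reproduced with a few lines of Python; note that reproducing them appears to require reading “log” as log base 10, so that is what this sketch (mine) does:

```python
import math

kB = 1.380649e-23        # Boltzmann constant, J/K
ae = 8.07e60             # æ: the age of the universe in Planck times

# Simplistic estimate: Ω ≈ (n^3)^n, so S = kB.log(Ω) = 3n.kB.log(n), with log read as log10
S = 3 * ae * kB * math.log10(ae)
print(f"S ~ {S:.2e} J/K")              # ~2.04e+40 J/K
print(f"S x ae ~ {S * ae:.2e} J/K")    # ~1.6e+101 J/K: the shortfall is roughly a factor of æ
```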

Remember that I used a simplistic approach.  I used notional cubes, rather than a sphere that expands.  Also, you clearly can’t just place instantons into slots, or rather, once you get a volume that is greater than one instanton (or one Planck sphere), you also have the option to spread the energy out.  And this increases the number of possible states.  The question then is by how much is the number of possible states increased?

Using the same sort of approach as in The Conservatory - Notes on the Universe, it would appear that the number of arrangements of mass-energy distribution could be quantised, which means that we cannot just shave infinitesimal amounts off the mass-energy in an instanton and redistribute it.

Consider then that, for each value of n, it would be possible to split each instanton’s worth of mass-energy into n components and then the number of states would be in terms of those components.  In the second instance above, assuming for ease that the components cannot be collocated, this would be Ω=8×7×6×5=1680.  For the third, it would be Ω=27×26×25×24×23×22×21×20×19=1.7×10¹².  More generally, this is approximately equivalent to Ω=(n³-(n²-1)/2)^(n²).  Again, when n³>>(n²-1)/2, we can remove the second term, so:

S=kB.logΩ=kB.log((n³)^(n²))=3n².kB.log(n)

And so, after substituting n=æ≈8.07×10⁶⁰, S=1.2×10¹²⁴.kB=1.66×10¹⁰¹ J/K.  So we end up in the right ballpark, but are we close enough?
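
Again, these figures can be checked quickly (with the same caveat about reading “log” as log base 10); the small-n counts and the final estimate come out as quoted:

```python
import math

kB = 1.380649e-23
ae = 8.07e60

# Small-n checks: n^2 components placed in n^3 slots, no collocation
n = 2; print(math.prod(range(n**3, n**3 - n**2, -1)))           # 8x7x6x5 = 1680
n = 3; print(f"{math.prod(range(n**3, n**3 - n**2, -1)):.1e}")  # 27x26x...x19 ~ 1.7e+12

# Refined estimate: Ω ≈ (n^3)^(n^2), so S = 3n^2.kB.log10(n)
S = 3 * ae**2 * kB * math.log10(ae)
print(f"S ~ {S:.2e} J/K")   # ~1.6e+101 J/K, within the usually quoted 1e100 to 1e104 ballpark
```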

---

Then I thought about it in a different way.  An instanton is, effectively, a black hole and the entropy of a black hole is given by:

SBH=kB.A/(4lPl²)

The area on the surface of a sphere is A=4πr², so for an instanton A=4πlPl² and so:

Sinstanton=kB.4πlPl²/(4lPl²)=kB.π=4.34×10⁻²³ J/K

A FUGE universe is equivalent to a black hole at any time and, when the age of the universe is t=æ.tPl, the radius is r=æ.ctPl=æ.lPl, so:

Suniverse=kB.4π(æ.lPl)²/(4lPl²)=kB.π.æ²

Given that for t=13.787 billion years, æ≈8.07×10⁶⁰, that would make the entropy Suniverse≈2.04×10¹²².kB=2.83×10⁹⁹ J/K.  This seems to be a bit higher than normally calculated, where the value tends to be in the order of kB×10¹⁰³ but, by strange coincidence, Charley Lineweaver (of The Mass of Everything fame) co-wrote a paper in 2010 with Chas A Egan – A Larger Estimate of the Entropy of the Universe – which has, in the abstract, the following (where k is kB): “We calculate the entropy of the current cosmic event horizon to be SCEH=2.6±0.3×10¹²²k, dwarfing the entropy of its interior, SCEH int=1.2(+1.1/−0.7)×10¹⁰³k.”

The difference appears to be due to a different method of calculation, which effectively uses a different radius: they used 15.7±0.4 Glyr, as compared to my 13.787 Glyr.  Note that 15.7²/13.787²=1.297 and 1.297×2.04=2.65.
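
For completeness, here is that comparison worked through in a few lines (mine):

```python
import math

kB = 1.380649e-23
ae = 8.07e60                                  # æ for t = 13.787 billion years

S = math.pi * ae**2                           # S_universe = kB.π.æ², in units of kB
print(f"{S:.2e} kB = {S * kB:.2e} J/K")       # ~2.05e+122 kB = ~2.8e+99 J/K

# Rescale to Lineweaver and Egan's 15.7 Glyr cosmic event horizon radius
print(f"{S * (15.7 / 13.787)**2:.2e} kB")     # ~2.65e+122 kB, cf. their 2.6±0.3 x 10^122 kB
```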

The question then is, why was the cosmic event horizon set at 15.7±0.4 Glyr?  Looking at equation (46) in the paper, it seems to be based on a variable and/or non-unity scale factor, noting that Figure 1 includes text that indicates an age of the universe of 13.7Gyr.  (See also next post.)

---

The bottom line is that, if this is a FUGE universe, then the current entropy is ~2.83×10⁹⁹ J/K.  It is also worth noting that, when expressed in terms of Planck units, the entropy of the universe at any time t=n.tPl has a magnitude of π.n².

We could introduce a “Planck entropy”, being the entropy of a Planck black hole (also known as an instanton), SPl=kBπ(J/K).  In terms of such a derived unit, the current entropy is effectively the square of the age:

Suniverse=æ².SPl

Sunday 14 April 2024

A Dark Question

Dr Becky Smethurst put a video out last week about a possible resolution to the “Hubble Tension”/“Crisis in Cosmology”.  The work has not yet been published but is covered in a talk by Wendy Freedman; it is interesting to note that the result that the JWST people arrived at is H0=69.1±1.3 km/s/Mpc (which corresponds with a Hubble time of TH=14.15±0.27 billion years).
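
The correspondence between the two figures is just the inversion of the Hubble parameter; for example (my own quick conversion):

```python
Mpc_in_km = 3.0857e19      # kilometres in a megaparsec
Gyr_in_s  = 3.1557e16      # seconds in a billion years

for H0 in (70.4, 69.1, 67.8):                    # km/s/Mpc, i.e. 69.1±1.3
    print(round(Mpc_in_km / H0 / Gyr_in_s, 2))   # ~13.89, ~14.15, ~14.42 billion years
```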

It was quite timely because I was already thinking about expanding on something I was talking with someone about in the past week.

Imagine that soon after Edwin Hubble had identified the redshift of distant objects (in the 1920s), sufficiently advanced telescopes were developed and used to determine the value of the Hubble parameter to be close to 70km/s/Mpc (didn’t happen until the 1990s).  Say then that someone had quickly worked out that ~70km/s/Mpc is the inverse of ~14 billion years (fitting excellently with the age of the oldest known star, although its age was only determined to fit nicely after revision to models in 2015 and 2021).  Then, a short time later, someone else was fiendishly clever enough to use the technology available at the time to measure the geometry of the universe and determine that it is flat, meaning that the density of the universe is critical (this wasn’t really determined until 2000 with analysis of the BOOMERanG experiment results from 1997 and 1998).

So, in this hypothetical world we would have had, in about 1930, all the details necessary to conclude that our universe is a FUGE universe.  A FUGE universe starts out as an “instanton”, effectively a Planck black hole of half a unit of Planck mass-energy with a radius of one unit of Planck length, adding half a unit of Planck mass-energy and expanding its radius by one unit of Planck length every unit of Planck time.  Such a universe has a Hubble parameter that is the inverse of its age and has critical density throughout its life (meaning that it is, has always been and will always be flat).
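
The bookkeeping behind that last claim is easy to verify.  Half a Planck mass per Planck time is mPl/(2tPl)=c³/(2G), so a FUGE universe of age t has M=c³t/(2G) and r=ct; a short sketch (mine) shows that the resulting density equals the critical density for H=1/t, at any age:

```python
import math

G   = 6.674e-11            # m^3 kg^-1 s^-2
c   = 2.998e8              # m/s
Gyr = 3.1557e16            # seconds in a billion years

t = 13.787 * Gyr                            # age of the FUGE universe (any value works)
M = c**3 * t / (2 * G)                      # half a Planck mass added per Planck time
r = c * t                                   # radius grows by a Planck length per Planck time
rho = M / ((4.0 / 3.0) * math.pi * r**3)

H = 1.0 / t                                 # FUGE: Hubble parameter is the inverse of the age
rho_crit = 3.0 * H**2 / (8.0 * math.pi * G)

print(f"M ~ {M:.2e} kg")                              # ~8.8e+52 kg (the ~10^53 kg figure)
print(f"rho = {rho:.3e}, rho_crit = {rho_crit:.3e}")  # identical: always at critical density
print(f"H ~ {H * 3.0857e19:.1f} km/s/Mpc")            # ~70.9 km/s/Mpc at 13.787 billion years
```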

Now say that in this hypothetical world, about 30 years after the FUGE universe model was established, someone discovers the cosmic microwave background (CMB).  Analysis of this raises a bit of a mystery because the CMB has an unexpectedly high level of isotropy.

Under these conditions, would it be reasonable to posit inflation (about 15 years after the discovery of the CMB)?  Note that one of the motivations for inflationary theory would be missing in our hypothetical world, because the flatness problem would not exist – critical density (and thus flatness) of the universe is perfectly explained by the FUGE model.  The other motivations also have other potential explanations: gravity may suffice to explain the homogeneity of the horizon problem and the magnetic-monopole problem only relates to the absence of hypothetical particles (the standard approach, when finding that your hypothesis predicts the existence of some non-existent thing, is to reassess your hypothesis rather than engage in a form of special pleading – especially after 90 years have passed with no observational evidence).

Note also that in a hypothetical world which has accepted the FUGE model, we have a very simple chronology – with smooth expansion of the universe over ~14 billion years to arrive at a Hubble parameter value that is the inverse of ~14 billion years and a density that matches the observed (critical) density.  In order to arrive at the value of the Hubble parameter, after having introduced inflation, we have to posit a much more complex chronology with at least three phases: smooth FUGE-like expansion for a fraction of a second (grand unification epoch), inflationary expansion for a fraction of a second (during which mass-energy would have had to have been added at a much higher rate if critical density were to be maintained) and an approximately 14 billion year-long phase in which the expansion was precisely that necessary to make the universe today look like it had only undergone FUGE-like expansion.

Personally, I don’t think it would be reasonable.

Our situation is actually worse than described above because, in the Standard Model, there are five phases: FUGE-like expansion (grand unification epoch), inflation, two periods of reduced expansion (less than FUGE-like: radiation dominated and matter dominated) and a current period of accelerated expansion (greater than FUGE-like) at a rate necessary to make the universe today look precisely like it had only undergone FUGE-like expansion – a situation that would not have been the case since a fraction of a second after the instanton arose and won’t be the case ever again (because the explanation for observed accelerated expansion is that we are in a dark-energy-dominated era [other explanations are available] and such domination by dark energy is unlikely to suddenly dissipate in order for us to return to FUGE-like expansion on an on-going basis and we are unlikely to return to the conditions of earlier putative eras of reduced expansion [the radiation dominated and matter dominated eras]).

Is it truly reasonable to have such outrageous fiddling of the universe, given the option of the FUGE model (or something like it)?

Wednesday 10 April 2024

Thinking Problems - Transmission

This is a follow-up to Thinking Problems – Lab Leak.  One could have thought that, by now, the issues of COVID would have faded into the background, but no.  Misinformation about the COVID vaccine is still circulating.

In discussions with JP, there was a common claim that “they” had said that the vaccine would prevent transmission.  For example, in August 2021, JP was housebound because he was worried about a local outbreak.  I asked about his vaccine status and his reply indicated that there would have been little consolation in having a second jab if he could still spread it.  A month later he was claiming that the “initial focus was on preventing spread”.

The problem is that the issue is very complicated.  I know that I am going to overly simplify things here, but I do so with the intent of getting past an apparent blockage on the part of some of the more conspiracy minded among us.

For a virus-based disease, there is a sequence of events somewhat like this:

When thought of like this, it is clear that having a vaccine cannot help with certain stages.  You are either exposed or you are not.  With infection, that’s more a question of whether you ingested the virus or not.  Here things are a bit blurry because there is you and there are your cells.  There is also the virus and there are virions.  It’s possible that a virion (one particle of the virus) got into you, but did not enter a cell (thus infecting it) before being excreted or destroyed.  Did you (the human) get infected by the virus?  You certainly got closer than if you were merely exposed to the virus (ie sitting in a room in which virions were floating around in the air that you breathed, but you didn’t happen to breathe in one).

What about if one or a few of your cells did get infected, but your immune system immediately identified the threat and destroyed the infected cells before they could set up their virus replication process?  You didn’t contract the disease, your body as a whole didn’t get infected, but you were partially infected.

What about if you did get widespread infection of cells by the virus, your immune system swung into gear mounting an effective response, but you never got any symptoms – meaning that, strictly speaking, you never developed the associated disease?  This is non-symptomatic infection, which in hindsight appears to have happened with considerable frequency.  Usually, being non-symptomatic means you are not contagious.  But not always (as the Typhoid Mary case demonstrates).

I am going to just highlight a grey area between infection by the virus and development (or contraction) of the disease.  For the purposes of this argument, I am counting disease as including the non-symptomatic who produce enough virions to be contagious.

If viruses didn’t cause disease, we probably wouldn’t care about them.  It is worth noting though that not all the symptoms of an infection are due to the pathogen (virus or bacteria) per se – some of them are the immune system fighting against the infection (fevers for example).

The job of a vaccine is to prepare the immune system for fighting a specific pathogen (or suite of pathogens).  The better prepared the immune system is, the less likely it is that the disease will take hold.  This can range from preventing symptoms entirely, through making the symptoms less severe, to reducing the time that it takes for the immune system to eliminate the disease.

Viruses are particularly nasty because they take over the cells of hosts and redeploy them to replicate virions.  It’s rarely a friendly take-over, with the replication machinery set to keep working until the cell bursts, releasing thousands of virions which go on to infect new cells.  Quickly, the body is riddled with virions which then get into various liquids in the body, including those in the lungs, meaning that when an infected person breathes out, there are virions lurking in droplets that we inevitably spread about us.

This is transmission in the schema above.  The virus effectively uses us to spread itself around us in a fog of about 1.5-2 metres (as is most visibly noticeable on a cold day).  But note that transmission does not mean reception (exposure or infection).

If you have a viral disease, the only way to prevent transmission is to prevent droplets getting out and to another person.  The right sort of mask, when worn properly, can do that.  Or keeping away from others (social distancing).  Or not going out in public (isolating).  The vaccine will not help you if you already have the disease.  Your having been vaccinated will also not help if it is not you who has the disease – it won’t stop someone else transmitting.

What the vaccine will do is increase the likelihood, if you get virions into your body, that your immune system will prevent an infection progressing to disease, reduce the seriousness of the disease if you can't prevent it (and possibly reduce the number of virions you produce that can then be transmitted to someone else) and shorten the period in which you have the disease (and are contagious).

In that sense, the vaccine can certainly minimise spread of the disease.

But it will never prevent contagious people from transmitting the virus, nor will it necessarily prevent you developing some form of the disease if you are infected (although it's much more likely to be mild, or even asymptomatic, rather than severe).

That’s not to say that there aren’t sterilising vaccines or other treatments – ones that are hugely effective and prevent you from producing virions.  It’s simply that the COVID vaccines were never advertised as such.  The effort was all about preventing severe disease, which is why they are described as COVID-19 vaccines, not SARS-CoV-2 vaccines.

Tuesday 9 April 2024

A Tiny Error in All Objects and Some Questions

 

In The Mass of Everything, I referred to the paper All objects and some questions and most specifically the image below:


That figure had a long text below it which includes the following statement: “The smallest possible object is a Planck-mass black hole indicated by the white dot labeled ‘instanton’ (Ref. 20). Its mass and size are (m,r)=(mP,lP).”  Ref 20 is a paper by Carr and Rees called “The anthropic principle and the structure of the physical world” published by Nature in 1979 and not easily accessed.  I do note that Carr wrote a later paper, Does Compton / Schwarzschild duality in higher dimensions exclude TeV quantum gravity?, in 2018 which includes this image:

The paper also includes this statement: “The Compton and Schwarzschild (radius) lines intersect at around the Planck scales, RP = √(ħG/c³) ∼ 10⁻³³ cm, MP = √(ħc/G) ∼ 10⁻⁵ g”.  Note the lack of precision in Carr’s paper.  This is, as it turns out, fully justified.

The Schwarzschild radius is given by rS=2GM/c², so where the radius is the Planck length, rPl=√(ħG/c³), we get:

rS=2GM/c²=√(ħG/c³)

So,

M=√(ħG/c³).c²/(2G)=√(ħc/G)/2=mPl/2

Therefore, the instanton must be (m,r)=(mP/2,lP) or, possibly, (m,r)=(mP,2lP), but it cannot be (m,r)=(mP,lP).
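
A quick numerical check (mine, using CODATA-style values) of the same point:

```python
import math

G    = 6.67430e-11         # m^3 kg^-1 s^-2
c    = 2.99792458e8        # m/s
hbar = 1.054571817e-34     # J.s

l_Pl = math.sqrt(hbar * G / c**3)       # Planck length
m_Pl = math.sqrt(hbar * c / G)          # Planck mass

print(round(2 * G * (m_Pl / 2) / c**2 / l_Pl, 6))   # 1.0: half a Planck mass has rS = lP
print(round(2 * G * m_Pl / c**2 / l_Pl, 6))         # 2.0: a full Planck mass has rS = 2lP
```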

Alternatively, Lineweaver and Patel could have written “The smallest possible object is in the range of a Planck-mass black hole indicated by the white dot labeled ‘instanton’ (Ref. 20). Its mass and size are of the order of the Planck mass and length.”  And the chart would have to be updated to put approximations against the mP and lP that intersect on the instanton.

Or, and this would be more appropriate, put a “/2” after mP in both instances and reword to “The smallest possible black hole has a Planck-length radius indicated by the white dot labeled ‘instanton’ (Ref. 20), and mass of one half of a Planck-mass.”

Sunday 7 April 2024

Another Fine Mess You Have Got Yourself Into, Luke Barnes

In his New Atlantic article, The Fine Tuning of Nature's Laws, Luke Barnes provided this image:

Below it was explanatory text:

“What if we tweaked just two of the fundamental constants? This figure shows what the universe would look like if the strength of the strong nuclear force (which holds atoms together) and the value of the fine-structure constant (which represents the strength of the electromagnetic force between elementary particles) were higher or lower than they are in this universe. The small, white sliver represents where life can use all the complexity of chemistry and the energy of stars. Within that region, the small “x” marks the spot where those constants are set in our own universe.”

While he doesn’t specify clearly, I think he is making an error here.  Unfortunately he doesn’t talk much about charge (both mentions are in reference to what the electromagnetic force is, not about charge per se), but it seems like he might be suggesting that it could be possible to change the fine-structure constant without changing the elementary charge.  I have a vague recollection of having even read as much, making it less of a suggestion and more of an explicit statement, but for the life of me I can no longer find it.

Even if he is not making such a claim, he is still putting the cart before the horse.  The electromagnetic force between elementary charges is determined by the magnitude of the elementary charge, full stop.  It has nothing to do with the value of the fine structure constant, which is merely a representation of the (squared) ratio of the elementary charge to the Planck charge (α(e)=e²/qPl², where the (e) subscript highlights that the electromagnetic coupling constant [also known as the fine structure constant] is calculated on the basis of the elementary charge).  He should perhaps be talking about the value of the elementary charge, and not the fine structure constant.

Sure, if a hypothetical elementary charge were z times the actual elementary charge (ehyp=z.e), then (at the same separation, r) the repulsive electromagnetic force between two protons would be z² times as strong – and the attractive force between an electron and a proton would also be z² times as strong.

For two protons, it seems that the maximum value of ehyp may well be the Planck charge, noting that it was calculated in SI World and Planck World that the strong force is more than sufficient to hold two Planck charges together at a distance of a femtometre.  This means that there is another way to look at the fine structure constant – that is, as a representation of how finely tuned the strong force is not.  It is quite a bit stronger than it needs to be (perhaps of the order of a thousand times).  Alternatively, if there is a natural limit to the possible charge on subatomic particles at the Planck charge, then it is possible that the elementary charge could have any non-zero magnitude below that of the Planck charge, giving the fine structure constant any non-zero value below unity.

Consider then an electron and a proton bound in a hydrogen atom.  The electron can be thought of as being prevented from spiralling into the nucleus by the balance of forces (there is also an argument from the basis of the kinetic/potential energy balance but note that this argument, as presented, uses a leap that is not explained and is thus not accounted for adequately – see also the Bohr model which co-incidentally points towards the nature of the leap). 

For hydrogen (noting that the subscript used here is to emphasise that we are talking about an electron):

me.ve²/re=e²/(4πε0.re²)

me.ve²=e²/(4πε0.re)

Note that 2π.re=λe so

me.ve²=2π.e²/(4πε0.λe)

e²/(4πε0)=me.λe.ve²/2π

but note that pe=me.ve and λe=h/pe, so me.λe=h/ve and so

e²/(4πε0)=h.ve/2π=ħ.ve

Thus

ve=e²/(4πε0.ħ)

Note that this lines up with the value calculated here for a hydrogen atom where the principal quantum number n=1.

Now consider that α(e)=e²/(4πε0ħc) and qPl=√(4πε0ħc); we have

ve=(e²/qPl²).c=α(e).c

It follows, therefore, that

vhyp=αhyp.c

This clearly places a natural limit on the charge of an electron such that 0<αhyp<1, meaning that 0<ehyp<qPl (assuming non-zero mass, otherwise “≤” might apply at the upper end).  This is a form of mathematical confirmation of the intuition obtained from consideration of the case of two protons.

Note the immutability of this equation.  If you change the magnitude of the elementary charge, you change the magnitude of the electromagnetic coupling constant and therefore you change the speed of the electron.
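
Plugging in measured values (my check, not part of the derivation above) shows the relation holding for the actual electron:

```python
import math

e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J.s
c    = 2.99792458e8        # speed of light, m/s

v_e   = e**2 / (4 * math.pi * eps0 * hbar)       # ve = e²/(4πε0ħ)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)   # fine structure constant

print(f"v_e     = {v_e:.4e} m/s")        # ~2.188e+06 m/s
print(f"alpha.c = {alpha * c:.4e} m/s")  # the same value
print(f"alpha   = {alpha:.6f}")          # ~0.007297, i.e. ~1/137
```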

Looking back at an earlier equation and multiplying through by ħc/ħc

me.ve²=ħc.e²/(4πε0ħc.re)

Recalling that α(e)=e²/(4πε0ħc) and that ve=α(e).c

me.(α(e).c)²=ħc.α(e)/re

me.α(e).c=ħ/re

Rearranging for re

re=ħ/(me.α(e).c)

Compare this with the Bohr radius which is given by

a0=4πε0ħ²/(me.e²)

Multiplying through by c/c and noting that α(e)=e²/(4πε0ħc)

a0=ħ.(4πε0ħc/e²)/(me.c)

a0=ħ.(1/α(e))/(me.c)=ħ/(α(e).me.c)=re

By extension, we see that

rhyp=ħ/(mhyp.αhyp.c)
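
As a sanity check (mine) of the non-hypothetical version, plugging measured values into re=ħ/(me.α(e).c) and into the standard Bohr radius formula gives the same result:

```python
import math

e    = 1.602176634e-19     # C
eps0 = 8.8541878128e-12    # F/m
hbar = 1.054571817e-34     # J.s
c    = 2.99792458e8        # m/s
m_e  = 9.1093837015e-31    # electron mass, kg

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

r_e = hbar / (m_e * alpha * c)                        # re = ħ/(me.α(e).c)
a_0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)     # Bohr radius
print(f"r_e = {r_e:.4e} m, a_0 = {a_0:.4e} m")        # both ~5.292e-11 m
```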

---

The implication here is that there is some flexibility in the related values for our hypothetical electron, αhyp, mhyp and rhyp.  We have already established that 0<αhyp<1.  Looking at the extremes:

If αhyp→0, and mhyp has any value, then

rhyp→∞

If αhyp→1, and mhyp=mPl=√(ħc/G), then

rhyp→ħ/(√(ħc/G).c)=√(ħG/c³)=lPl

If αhyp→1, and mhyp>mPl=√(ħc/G), then

rhyp<lPl

If αhyp→1, and mhyp→0, then

rhyp→∞

In other words, the orbital radius of the electron could (depending on the choices for the other two values) be anything greater than the Planck length.  The mass of the electron has a soft limit at the Planck mass, but could have higher values if the fine structure constant were sufficiently low.  Note that we would eventually run into problems with low values of the fine structure constant due to gravitational attraction swamping the electromagnetic force, meaning that there is another soft limit hidden in there.  Also, there is a limit due to the size of the nucleus, meaning that rhyp would have to be significantly above the femtometre scale.  The magnitude of the fine structure constant is limited to between 0 and 1, as explained above.

In reality, the only possible driver of any fine tuning here, if anything, is the value of the strong coupling constant, but this only affects particles in the nucleus, and it is more than sufficient to bind two particles with a unit of Planck charge each as close as a femtometre to each other.  The tuning, such as it is, is with respect to the separation at which the force is maximised – but even then, this is about 10⁴ times tighter than the electron orbit.

It appears, therefore, that the much vaunted “fine tuning” is, in fact, pretty damn coarse.

---

I don’t know enough about the (residual) strong force to work out if there are any natural limitations to its strength.  But the valid magnitude for the fine structure constant is certainly limited to between zero and unity, so Barnes’ image should at least look like this:


Since nothing happens in the region above where the strong constant has a strength of 1, we could safely ignore that space – which Barnes sort of does by squeezing the range above 10 into a region that is about half that of 1-10.  Just keep in mind that this is arguably a hypercorrection:

Barnes’ scale is strangely pseudo-logarithmic, centred on unity with more than one half of it taken up by values, on both axes, between 0.1 and 10.  At first glance (especially prior to correction) it seems that the line above the “carbon-impossible” region might have a shape that is merely an artifact of his selection of scale.  But with the cut-down version, we can see clearly that this isn’t the case: the point [0.1,0.1] sits on the line, but neither [0.01,0.01] nor [1,1] do.


While there is very limited data, I suspect that what Barnes did was use a combination of logarithms and square roots of the offset from unity to construct his scale.  I don’t know why he did that.  If he’d not used such a strange scale, he could still have made his point, perhaps even more strongly.  He could have had something more like this:

Note that I’ve just used his scale and plotted the intersections; I’ve not tried to reproduce the curves.  I am not commenting on his claims in the coloured sections per se, but I have added my caveat to try to prevent the image being misused.  Remember, you cannot change the fine-structure constant without affecting the elementary charge – by definition.

Why didn’t Barnes present his case more like this?

I suspect that the problem is that it would appear to make his case too strongly and that would have attracted closer, unwanted sceptical scrutiny.  There is good reason to label apologists as “liars for Jesus”, Barnes among them.

---

Interestingly, in a later paper, Barnes does not mention the fine-structure constant at all (although he does use the symbol α without saying what it means; it is used in a claim about the mass of a proton).