Sunday 12 March 2023

Problems with Accelerated Expansion

This is a continuation of the post Accelerated Expansion, which followed on from Extended Consistency Principles. In the latter, I argued that we should not expect to be in a privileged era, which is to say we should not – when looking out on the universe – observe a situation that has never happened before and will never happen again, other than as strictly required for us to exist so that we may make those observations.  But there are what I consider to be major contraventions of these principles.

The age of the universe appears, at this time, to be the simple inverse of the Hubble parameter.  If there had been only an extremely short period of inflation, the deviation from this situation would be so small that inflation would be within the noise, undetectable given the uncertainty inherent in our measurement techniques.  If there had been a relatively short period of deceleration in the radiation-dominated era, lasting about 47,000 years, then the effect of this would also be within the noise and thus undetectable.  It should be noted, however, that the age of the universe would never have been equal to the simple inverse of the Hubble parameter once inflation had commenced.  (That is to say that although the Hubble parameter may well have been related to the age of the universe, that relationship would not have been simple at that time.)
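
As a quick check on the claim that t ≈ 1/H today, here is a minimal sketch in Python (71 km/s/Mpc being the rough average of the measured values referred to below, not a definitive figure):

```python
# Convert a Hubble parameter in km/s/Mpc into its inverse in billions of years,
# to see how close 1/H is to the measured age of the universe (~13.77 Gyr).
KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

def hubble_time_gyr(h_km_s_mpc):
    """Return 1/H in billions of years for H given in km/s/Mpc."""
    h_per_second = h_km_s_mpc / KM_PER_MPC
    return 1.0 / h_per_second / SECONDS_PER_GYR

print(hubble_time_gyr(71.0))   # ~13.8 billion years
print(hubble_time_gyr(67.4))   # ~14.5 billion years (Planck Collaboration value)
```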

If there were then a period of less extreme deceleration from the end of the radiation-dominated era until about four billion years ago, as described in Decelerated Expansion, then the currently measured values of the Hubble parameter would have occurred when the universe was about 9 billion years old, or as much as 5 billion years ago (for H=74 km/s/Mpc).  At no point during the matter-dominated era would the Hubble parameter have been the inverse of the age of the universe.
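
A minimal sketch of this, assuming the standard matter-dominated relation H = 2/(3t) (consistent with the deceleration used below and in Decelerated Expansion):

```python
# During matter-dominated deceleration, H = 2/(3t), so the age at which H would
# have had any given value is t = 2/(3H).  Check the currently measured values.
KM_PER_MPC = 3.0857e19
SECONDS_PER_GYR = 3.156e16
AGE_NOW_GYR = 13.77

def matter_era_age_gyr(h_km_s_mpc):
    """Age (in Gyr) at which H would equal the given value, if H = 2/(3t)."""
    h_per_second = h_km_s_mpc / KM_PER_MPC
    return 2.0 / (3.0 * h_per_second) / SECONDS_PER_GYR

for h in (71.0, 74.0):
    t = matter_era_age_gyr(h)
    print(h, round(t, 1), round(AGE_NOW_GYR - t, 1))
# 71 km/s/Mpc -> ~9.2 Gyr old, ~4.6 Gyr ago
# 74 km/s/Mpc -> ~8.8 Gyr old, ~5.0 Gyr ago
```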

Then, as the chronology of the universe is understood, something happened to make the value of the Hubble parameter, right now, extremely close to the inverse of the age of the universe (with the average of the measured values, along with the most recent measurement, lying pretty much precisely on the inverse of the age of the universe).  And, unless the acceleration of the universe stops right now, the universe will never again have a Hubble parameter that is equal to the inverse of its age (in fact, looking only at the value of the Hubble parameter, the universe would appear increasingly younger than it actually is, were we to assume that t=1/H).

There are a few ways, given the assumptions above, that the Hubble parameter could be at its current value:

  • the matter-dominated era deceleration was as given (dH/dt=-3H²/2, so that H=2/(3t)) but ended about 4.6 billion years ago, rather than 4 billion years ago, and thereafter dH/dt=0, meaning that w=-1 and that the Hubble parameter has been about 71 km/s/Mpc ever since – and thus only coincidentally is precisely the inverse of the age of the universe (see the numerical sketch after this list).  That would make this a privileged era and there would be no accelerated expansion.

  • the matter-dominated era deceleration was as given above and ended about 4 billion years ago, and thereafter there was accelerated expansion at a higher rate than that provided by the Planck Collaboration, so that the Hubble parameter (which naturally tends to decrease as the universe ages) would be raised from about 66 km/s/Mpc to the current value of about 71 km/s/Mpc, rather than to the Planck Collaboration’s value of 67.4 km/s/Mpc.  As calculated above, that would require an equation of state parameter of about w=-1.13.  This would make this a privileged era and retain accelerated expansion … but it requires some fancy footwork by the universe to make things perfect just in time for us to observe it.  Alternatively, we happen to have a Hubble parameter that is just coincidentally within about 5% of the inverse of the age of the universe.

  • the matter-dominated and the dark-energy-dominated eras had neither deceleration nor acceleration and dH/dt=-H² throughout.  This would mean that, at 13.77 billion years of age, the Hubble parameter within our universe would be the inverse of that, about 71 km/s/Mpc.  This would mean that we do not live in a privileged era (at least not with respect to the Hubble parameter).
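
A minimal numerical sketch of the first and third options, assuming the matter-era relation H = 2/(3t) and a measured age of about 13.77 billion years (the second option would need a numerical fit of w and is not reproduced here):

```python
# Check the first and third scenarios numerically.
KM_PER_MPC = 3.0857e19
SECONDS_PER_GYR = 3.156e16
AGE_NOW_GYR = 13.77

def h_km_s_mpc(inverse_h_gyr):
    """Convert a Hubble time in Gyr into H in km/s/Mpc."""
    return KM_PER_MPC / (inverse_h_gyr * SECONDS_PER_GYR)

# Scenario 1: matter-era deceleration (H = 2/(3t)) ends when H reaches 71 km/s/Mpc,
# i.e. at t = (2/3) * 13.77 Gyr, after which H stays fixed.
t_end = (2.0 / 3.0) * AGE_NOW_GYR
print(round(t_end, 2), round(AGE_NOW_GYR - t_end, 2))  # ~9.18 Gyr old, ~4.6 Gyr ago

# Scenario 3: H = 1/t throughout, so today H is simply the inverse of the age.
print(round(h_km_s_mpc(AGE_NOW_GYR), 1))  # ~71 km/s/Mpc
```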

There is the question of the density of the universe.  If the universe is flat and dH/dt=-H², then it follows that the mass (or rather mass-energy) of the universe is increasing with time, because M=(c³/2G)·t (equivalently E=(c⁵/2G)·t).  This means that for every unit of Planck time (tPL=√(ħG/c⁵)), M changes by half a unit of Planck mass (mPL=√(ħc/G)), which I’ve mentioned before.  (Planck energy is a derived value, using EPL=mPLc²=√(ħc⁵/G).)
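
A minimal numerical check of that claim, using standard values for c, G and ħ:

```python
# If M = (c^3 / 2G) * t, then dM/dt = c^3 / (2G).  Compare that rate with
# half a Planck mass per Planck time.
import math

c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s

dM_dt = c**3 / (2 * G)                    # kg per second
planck_mass = math.sqrt(hbar * c / G)     # ~2.18e-8 kg
planck_time = math.sqrt(hbar * G / c**5)  # ~5.39e-44 s

print(dM_dt)                              # ~2.02e35 kg/s
print(0.5 * planck_mass / planck_time)    # ~2.02e35 kg/s, the same rate
```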

There’s a certain elegance to an invariant increase in the mass of the universe – which, as described here, may be twinned – at a rate of one unit of Planck mass-energy per unit of Planck time.  While I cannot claim that the universe needs to be elegant, if the universe has not increased its mass-energy this way, then we are – again – in a privileged era in which our universe’s mass-energy (our half of it, if you like) is precisely what you’d get if half a unit of Planck mass-energy had been entering the universe in every unit of Planck time for the entire age of the universe.

Note that this invariant process would ensure that the universe remained flat throughout.  For the universe to remain flat during periods of deceleration and acceleration, the rate at which mass-energy enters the universe would have to vary.  This is not necessarily a problem, since the precise mechanism by which mass-energy is forced into, channelled into, sucked into or generated in the universe is unclear.  It could just be an artefact of a total zero energy balance as the universe expands, so that more or less mass-energy would appear depending on the expansion rate.  But, unarguably, such a universe would be less elegant.

There is also the mathematics that gets us to dH/dt=-H² (multiplied by some constant), which implies very strongly that H=(constant)/t.  I don’t know how we get away from the problem that, if the value of the constant changes at various times – due to conditions that change over time, such as the varying levels of (or domination by) radiation, matter and dark energy – then you have a situation where, in reality, (constant)=f(t).  In which case the implication that, at t=9.8 billion years, H=66.5 km/s/Mpc (implied by the Planck Collaboration’s w=-1.03) is highly questionable and would require another level of fancy footwork by the universe.
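
To illustrate the problem, here is a rough piecewise sketch, assuming the era boundaries and parameters mentioned in these posts (a radiation era to about 47,000 years, a matter era to about 9.8 billion years, then w=-1.03 to the present).  If dH/dt=-cH² with a c that changes from era to era, then 1/H is the running integral of c over time, not the current c multiplied by t:

```python
# Piecewise model: dH/dt = -c * H^2 with c = (3/2)(1 + w) in each era.
# Integrating gives 1/H(t) = integral of c dt (taking 1/H -> 0 as t -> 0),
# which is not the same as c(now) * t once c has changed.
KM_PER_MPC = 3.0857e19
SECONDS_PER_GYR = 3.156e16

eras = [  # (end time in Gyr, w for the era)
    (4.7e-5, 1/3),    # radiation-dominated, c = 2
    (9.8,    0.0),    # matter-dominated, c = 3/2
    (13.77, -1.03),   # dark-energy-dominated (Planck Collaboration w)
]

inverse_h_gyr = 0.0
t_start = 0.0
for t_end, w in eras:
    c = 1.5 * (1.0 + w)
    inverse_h_gyr += c * (t_end - t_start)
    h_now = KM_PER_MPC / (inverse_h_gyr * SECONDS_PER_GYR)
    print(f"at t = {t_end} Gyr: H = {h_now:.1f} km/s/Mpc, 1/H = {inverse_h_gyr:.2f} Gyr")
    t_start = t_end
# This gives H of about 66.5 km/s/Mpc at 9.8 Gyr and about 67.3 km/s/Mpc at 13.77 Gyr,
# with 1/H of about 14.5 Gyr, noticeably different from the actual age of 13.77 Gyr.
```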

Then there is dark energy.  According to the standard model, we are in a dark-energy-dominated phase.  Note that “dark energy” doesn’t necessarily mean that there’s some sort of light-challenged energy, but rather that there is a phenomenon that needed some explanation – and the term “dark energy” was assigned to the as yet unknown cause.  The two most common descriptions of dark energy resolve down to the notion that there is a distribution of very low density energy throughout the universe, on the order of 6×10⁻¹⁰ J/m³.  But note that this density is invariant.

Consider a universe in which there is a constant amount of mass-energy; it would look a bit like this:

[Image: expansion history of a universe with a constant amount of mass-energy (E. Siegel/Beyond the Galaxy, via bigthink.com)]
With dark energy (which can be considered an overlay), it is thought to look more like this:

[Image: expansion history with dark energy included (E. Siegel/Beyond the Galaxy, via bigthink.com)]

(Note that these images are snipped from a larger image at bigthink.com which credits them to E. Siegel/Beyond the Galaxy.  I’ve not used that larger image because it seems to make an error in the charts that appear to the right of the second of these images.  It could just be a simplification for the purposes of explaining a larger [or different] truth, or what is known as a “lie to children”.)

Now, if the entirety of the standard model is correct, meaning that the entire universe is currently 93 billion light years across (so having a radius of about 46.5 billion light years), and that there is a density of dark energy of 6×10⁻¹⁰ J/m³ which makes up about 68% of the total energy density (which would therefore be about 8.8×10⁻¹⁰ J/m³), then the total mass-energy of the universe would be about 3.2×10⁷¹ J, equivalent to a mass of about 3.5×10⁵⁴ kg, a mass for which the Schwarzschild radius is about 550 billion light years.

Note that the Schwarzschild radius that correlates with a density of 8.8×10⁻¹⁰ J/m³ is 13.5 billion light years.
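
A minimal check of the figures in the last two paragraphs, using the 46.5 billion light year radius and the 8.8×10⁻¹⁰ J/m³ total energy density given above:

```python
# Check (a) the total mass-energy of a 46.5 billion light year radius sphere at
# 8.8e-10 J/m^3 and its Schwarzschild radius, and (b) the radius of a sphere
# whose average density is 8.8e-10 J/m^3 when it is exactly a black hole.
import math

c = 2.998e8                     # m/s
G = 6.674e-11                   # m^3 kg^-1 s^-2
METRES_PER_LY = 9.461e15
rho_energy = 8.8e-10            # J/m^3 (total energy density)
radius_m = 46.5e9 * METRES_PER_LY

volume = (4.0 / 3.0) * math.pi * radius_m**3
energy = rho_energy * volume                 # ~3.1e71 J
mass = energy / c**2                         # ~3.5e54 kg
r_s = 2 * G * mass / c**2                    # Schwarzschild radius for that mass
print(energy, mass, r_s / METRES_PER_LY)     # ~5.5e11 ly, i.e. ~550 billion light years

# (b) the Schwarzschild radius that corresponds to this density:
# set R = 2G * (rho * 4/3 * pi * R^3 / c^2) / c^2 and solve for R.
rho_mass = rho_energy / c**2
r_density = math.sqrt(3 * c**2 / (8 * math.pi * G * rho_mass))
print(r_density / METRES_PER_LY / 1e9)       # ~13.5 billion light years
```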

Think about that for a moment.  The notion is that there is dark energy (the density of which is invariant), dark matter (the density of which relates to the inverse cube of the scale factor), baryonic matter (the density of which also relates to the inverse cube of the scale factor) and radiation (the density of which, at least during the radiation era, related to the inverse fourth power of the scale factor).  These all combined just perfectly so that, now, when the universe is about 13.7 billion years old, the density is pretty much precisely that of a Schwarzschild black hole of a radius of 13.7 billion light years, making it look like the universe had been expanding at one light year per year, and maintaining a critical density (ρc=3H²/8πG where H is and has always been and always will be the inverse of the age of the universe), which it would need to do if it were flat (and, if Sean Carroll is correct, if the universe is ever flat then it always has been and always will be flat).  How incredible is that?  Or should I say, how credible is that?
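
A minimal sketch of that coincidence, assuming H is exactly the inverse of the age: the critical density for such an H corresponds exactly to a black hole whose Schwarzschild radius is the age multiplied by the speed of light.

```python
# For a flat universe with H = 1/t, the critical density is rho_c = 3H^2/(8*pi*G).
# The Schwarzschild radius of a sphere with that average density works out to c/H,
# i.e. c*t, one light year of radius per year of age.
import math

c = 2.998e8                     # m/s
G = 6.674e-11                   # m^3 kg^-1 s^-2
METRES_PER_LY = 9.461e15
SECONDS_PER_GYR = 3.156e16

age_gyr = 13.77
H = 1.0 / (age_gyr * SECONDS_PER_GYR)                  # per second

rho_c = 3 * H**2 / (8 * math.pi * G)                   # critical (mass) density, kg/m^3
r_s = math.sqrt(3 * c**2 / (8 * math.pi * G * rho_c))  # Schwarzschild radius for that density

print(rho_c * c**2)                   # ~8.5e-10 J/m^3
print(r_s / METRES_PER_LY / 1e9)      # ~13.77 billion light years, i.e. c times the age
```

The ~8.5×10⁻¹⁰ J/m³ that comes out of this differs slightly from the 8.8×10⁻¹⁰ J/m³ used above, which is why the earlier figure came out at 13.5 rather than 13.77 billion light years.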

Penultimately, cosmic acceleration was proposed to explain certain observations, and there exist other potential explanations to account for those observations (also here).  I am not saying that any of those explanations is correct, merely pointing out that the lack of acceleration is not an insurmountable problem in itself.  Its removal would just mean that we would need to look for other, more satisfactory explanatory mechanisms for what has been observed.

And, finally, there is evidence that the methodology that led to the notion of dark energy in the first place was problematic.  Basically, we should expect the universe to be homogeneous and isotropic only at sufficiently large scales (because the universe is lumpy at smaller scales, such as stellar systems, galaxies, galaxy clusters and so on) – about 200-300 megaparsecs, or one billion light years.  The work that led to the notion of dark energy assumed homogeneity and isotropy at 100 megaparsecs, which doesn’t sound too far a stretch, but the analysis showed that the apparent acceleration of the universe disappears once the data is rescaled.  We should not rest on that too much though, because future research may reveal that the data is significant at the correct scale and some explanatory mechanism may still be required.
