Wednesday, 20 April 2022

Digesting a Paper on Flatness (Part 3)

See Part 1 to understand what this is about.

--

The partition function of a statistical ensemble can be represented by a path integral over configurations that are periodic in imaginary time.

The mention of a path integral indicates that we are talking about the partition function in relation to quantum field theory here rather than statistical mechanics, or as Wikipedia helpfully puts it:

In quantum field theory, the partition function Z[J] is the generating functional of all correlation functions, generalizing the characteristic function of probability theory.

Um, ok.  That doesn’t really help.

I think it’s entirely possible to get hung up on this sentence for overly long.  For the moment, therefore, suffice it to say that we are talking about a partition function.  A partition function is the normalising constant in the equation for the probability of a microstate.

Think about statistical mechanics of, for example, a gas.  A gas consists of a multitude of particles (molecules usually, atoms if it’s a noble gas).  Knowing the microstate would require an exhaustive cataloguing of all the molecules, how many there are, what they are, where they are and what momentum they all have.  Knowing that would give you other characteristics of the gas (pressure and temperature for example).  But we never know all that and the microstate is changing all the time.  Instead, we can think of the likelihood of an ensemble of microstates (a bunch of them).  The sum of all probabilities has to be 1 – hence the normalising factor.
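To make that concrete, here is the textbook form (standard statistical mechanics, nothing specific to the paper): the probability of finding the system in microstate i with energy E_i at temperature T is

p_i = e^(−E_i/(k_B·T)) / Z,  where  Z = Σ_i e^(−E_i/(k_B·T))

and k_B is Boltzmann’s constant.  Dividing by Z is precisely what makes all the probabilities sum to 1, which is why the partition function is described as a normalising factor.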

What I think they are getting at here is that some microstates are more likely than others, and as you integrate over more microstates you arrive at a more likely overall state (or “path”) for the system to be in, and eventually you can say “I don’t know what the precise microstate is at any one time, but the temperature of the gas is this and its pressure is that”.

Go even deeper and think about things at the quantum level rather than the molecular level, and you have the same sort of situation.  It is not possible to know the precise microstate (quantum state, maybe) of a system, especially when you have superposition of different states that don’t manifest until there’s been a measurement, but you can integrate over the likelihoods of states to arrive at things that you can say about the overall state.  The sum of the probabilities of all quantum states is again going to be 1, and you therefore need a normalising factor, which would be the partition function.  The partition function would, therefore, tell you something about the likelihood of each ensemble of states (or configurations).

What I expect happens is that there is a peak of probability at what constitutes reality (which is a bit fuzzy across a relatively narrow range) – other states are not impossible, but they are very unlikely.

Why this occurs in imaginary time is not immediately obvious.  It might be worth noting what imaginary time is though.

There is what is called a Wick rotation, which converts the geometry of spacetime from Lorentzian to Euclidean.  This converts the metric for spacetime from ds² = −dt² + dx² + dy² + dz² to ds² = dτ² + dx² + dy² + dz², by substituting t = iτ where τ is imaginary time – the substitution works because i² = −1, so −dt² becomes −(i·dτ)² = dτ².  Note that this sort of metric is implied in On Time where I talk about an invariant “spacetime speed” (although I deliberately collapsed down the spatial components to just one x dimension for ease of calculation).

The phrase “periodic in imaginary time” is a reference to the alignment of temperature (in statistical mechanics) with oscillations in imaginary time (in quantum mechanics) – in that the related equations are of the same form.
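The underlying correspondence (standard quantum statistical mechanics, not specific to the paper) is easy to state: quantum time evolution involves the operator e^(−iHt/ħ), while a thermal ensemble at temperature T is weighted by e^(−H/(k_B·T)).  Substituting the imaginary time t = −iτ turns the first into e^(−Hτ/ħ), which matches the second when τ = ħ/(k_B·T).  So a system in equilibrium at temperature T behaves like a quantum system that is periodic in imaginary time, with period

τ_period = ħ/(k_B·T)

and that period–temperature identification is exactly the “alignment” mentioned above.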

Hawking and collaborators first used these methods to investigate black hole thermodynamics, a topic which has since burgeoned into holographic studies [10, 11] and experimental tests using analog systems [12, 13]. We shall follow Hawking et al.'s original approach here [14, 15, 16, 17].

This is a reference to the idea that the entropy of a black hole relates to its surface and not its volume (this seems rather obvious when it’s clear that time and space go to zero at the surface of a black hole – there isn’t anything “inside” a black hole in terms of our universe, everything that is going on with it is going on either at or above the surface [where particles, if there are any, are ripped apart and thus give off radiation]).
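For what it’s worth, the formula that encodes the surface-not-volume idea is the Bekenstein–Hawking entropy, which depends only on the horizon area A:

S = k_B·c³·A / (4·G·ħ)

Nothing in it refers to any interior volume – double the area and you double the entropy.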

In the cosmological setting, it is an excellent approximation to treat the radiation as a relativistic fluid, in local thermal equilibrium at a temperature declining inversely with the scale factor.

I think this is just talking about radiation (composed of photons) being conceptually similar to a fluid that moves very fast.  If you’ve ever heard a term like “awash in photons” (probably in science fiction), then you have the idea of a relativistic fluid.  It should be noted that there are equations for fluid flow, which are analogous to equations for electricity, because electrons in a wire act a bit like molecules of oil in a hydraulic system and the same analogy can be made to radiation.
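As for the temperature “declining inversely with the scale factor”, the standard results (textbook cosmology) for radiation are

ρ_radiation ∝ a⁻⁴  and  T ∝ 1/a

where a is the scale factor: expansion dilutes the photons by a factor of a³ and redshifts each one by a further factor of a, and the radiation temperature falls as 1/a.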

The instantons we present are saddle points of the path integral for gravity: real, Euclidean-signature solutions to the Einstein equations for cosmologies with dark energy, radiation and space curvature.

We’re talking about four-dimensional manifolds again.  I do not believe that the saddle points mentioned are points in those manifolds; rather, they are points of high likelihood (and thus reality) among the possible (micro or quantum) states, and the argument is that those states manifest gravity of the sort that we observe in our universe.

The associated semiclassical exponent iS/ħ is real, large and positive, and may be interpreted as the gravitational entropy.

The exponent iS/ħ appears in the equation for the “generating functional Z[J]”, where a generating function is a way of representing the sum of an infinite sequence of numbers, but in this case we are representing a space of functions – it’s basically a description of the machine (think of a black box with an input and an output) that does something to something.  For what it’s worth, this is that equation in its standard form:

Z[J] = ∫DΦ e^((i/ħ)(S[Φ] + ∫d⁴x J(x)Φ(x)))

It’s written slightly differently in some sources, with ħ set to 1:

Z[J] = ∫DΦ e^(i·S[Φ] + i∫d⁴x J(x)Φ(x))

Note that e^(ix) traces out the unit circle in the complex plane as x increases in magnitude from zero – by Euler’s formula, e^(ix) = cos(x) + i·sin(x) – passing through 1 (where x = 2nπ), i (where x = (2n+½)π), −1 (where x = (2n+1)π) and −i (where x = (2n+3/2)π), where n is any integer.  Therefore, with either equation, iS/ħ acts as a phase.

In these equations, Φ refers to a scalar field defined on our manifold (the d⁴x tells us that we are integrating over a four-dimensional spacetime) – and the ∫DΦ is telling us to “integrate over all possible classical field configurations Φ(x) with a phase given by the classical action S[Φ] evaluated in that field configuration”.  J or J(x) is an auxiliary function, referred to as a current.

But what does all this mean?

In the words of Vivian Stanshall, about three o’clock in the morning, Oxfordshire, 1973: In layman's language, “It's blinking well baffling. But to be more obtusely, buggered if I know. Yes, buggered if I know.”

--

A couple of clarifications:

Z[J] is defined by a functional integral (an integral over a space of functions) while S is an action functional.  An action functional is related to the stationary-action principle, which when put very, very simply (perhaps to the extent of being simplistic) is about things not doing more than they have to.  If an object is moving in a particular direction at a particular speed (so, with a velocity) it will continue to move at that velocity unless acted upon by a force.  We know about a lot of forces that act on objects to change an object’s velocity: gravity, resistance (air, friction), other objects colliding with it, and so on.  Digging into the concepts of relativity, each object has its own frame (of reference) in which it is stationary (with respect to itself).
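For completeness, the textbook statement of the stationary-action principle is that the action

S[q] = ∫ L(q, q̇, t) dt

(where L is the Lagrangian – for simple mechanics, kinetic energy minus potential energy) is stationary along the path actually taken: δS = 0.  Working out what δS = 0 requires reproduces the familiar equations of motion.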

In a sense, every single object in the universe has a spacetime speed (note, not velocity) which is invariant.  The actions between objects could be worked out by considering how the spacetime speed is distributed between their relative temporal speed and their relative spatial speed (see On Time).  Effectively, every single object in the universe could be considered as stationary (different types of “stationary”, admittedly, but stationary all the same).

The statement “(t)he associated semiclassical exponent iS/ħ is real, large and positive” is interesting in itself.  Given that i is the imaginary marker and ħ is a real, small and positive constant (about 10⁻³⁴ J·s), for iS/ħ to be real and positive, S itself must be imaginary (not complex) and negative – writing iS/ħ = r with r real and positive gives S = −iħr.  Negative is understandable, if it’s representing gravitational entropy which is understood to be negative … but imaginary?  Not quite sure about that.

--

Hopefully it’s quite obvious that this series is more of an open pondering session rather than any statement of fact about what the authors of Gravitational entropy and the flatness, homogeneity and isotropy puzzles intended to convey.  If I have misinterpreted them, then I’d be happy to hear about it.

Thursday, 7 April 2022

Digesting a Paper on Flatness (Part 2)

See Part 1 to understand what this is about.

--

I skipped “gravitational instantons” in Part 1 because they seem to warrant their own post.  They were raised in the following context:

Here, in an appropriate time slicing we find the solutions describe new gravitational instantons, one for each value of the macroscopic parameters, allowing us to calculate the gravitational entropy.

Clearly there’s more in there than just gravitational instantons.

An instanton is described as “a classical solution to equations of motion with a finite, non-zero action, either in quantum mechanics or in quantum field theory”.  A gravitational instanton is an extension of the concept (or notion): “a four-dimensional complete Riemannian manifold satisfying the vacuum Einstein equations”.

So, while an instanton is sometimes referred to as a pseudoparticle, a gravitational instanton is more like a 4-dimensional topological space.  For a manifold, think of a surface, that’s a 2-dimensional manifold.  It can be flat (as in a plane) or curved (as in the surface of a sphere).  So spacetime, which is four dimensional, could be described by a gravitational instanton (or perhaps a range of gravitational instantons).
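To unpack that definition slightly (standard usage, as far as I can tell): “Riemannian” means the metric signature is (+, +, +, +) – all four directions behave like space, which is what a Wick rotation to imaginary time produces – as opposed to the Lorentzian signature (−, +, +, +) of ordinary spacetime; and “satisfying the vacuum Einstein equations” means

R_μν = 0

(or R_μν = Λ·g_μν once a cosmological constant Λ is included), i.e. a curved four-dimensional shape that solves gravity’s equations with no matter in it.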

The term “macroscopic parameter” is not specific to this topic, but could be referring to temperature, pressure and entropy – as these are features of many particles in aggregate, as opposed, for example, to a quantum level parameter such as spin or charge.  This would make sense as the sentence finishes with a mention of gravitational entropy.

What is “gravitational entropy”?  As I understand it, there is a delicate ballet between entropy in general and the contribution to total entropy of a system as provided by gravity.  Imagine a free gas in empty space – it will tend to expand, and thus entropy is increased.  However, we know that that doesn’t happen at macro scales, because gas (and dust etc) will clump into galaxies, stars, planets, asteroids and comets (and everything in between).  So the idea is that gravity itself is effectively an entropy increaser such that a clump of gas in a star represents more entropy than the same gas just floating around dispersed. How much entropy there is in the universe (or a system) – due to gravity – is the value that the authors of the paper appear to be trying to calculate.

It's worth returning to the term “instanton” here for a moment.  Another description of an instanton is a saddle point, where there are two equally valid directions that something can go (in reference to quantum tunnelling between equally valid states).  There’s an implication of a saddle point with the notion of gravitational entropy – there are two potential, diametrically opposed ends to the universe that are posited.  Either it clumps back up, with all the mass/energy crushed down to a singularity (the big crunch) or it expands forever becoming increasingly disparate (the big rip).  As a saddle point, the universe could be balanced between those two options – if it is flat, which it appears to be.  But note: those two options both represent increasing entropy, so the question might be – is gravitational entropy sufficiently large as to drive the big crunch or sufficiently small as to allow the big rip?  Or just right so that neither happens?  It seems that if the universe is flat (remembering that the definition is such that the universe is balanced on a knife edge between those two options), then that would point towards the third option.
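A minimal picture of a saddle point, in case it helps: the surface f(x, y) = x² − y² is stationary at the origin, curving upwards along x and downwards along y – neither a peak nor a trough, but a point balanced between two opposed “downhill” directions, which is the sense in which a flat universe could sit poised between opposed fates.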

In reading about negative entropy (to see if there was a possibility that the total entropy is balanced out by twinned universes – there isn’t), I noted that where there is local negative entropy it relates to an energy increase.  For a salient example, to lift a mass in a gravitational field, you need to add energy to the system (effectively giving the mass potential energy).  If mass-energy is entering the universe – what effect does this have on entropy?  Is the expansion more than enough to cover it such that the second law of thermodynamics is not violated?  Or would the fact that the universe would not be a “closed system” (due to energy coming in) be sufficient?

--

Hopefully it’s quite obvious that this series is more of an open pondering session rather than any statement of fact about what the authors of Gravitational entropy and the flatness, homogeneity and isotropy puzzles intended to convey.  If I have misinterpreted them, then I’d be happy to hear about it.

Monday, 4 April 2022

Digesting a Paper on Flatness - Introduction (Part 1)

I was pointed to the paper Gravitational entropy and the flatness, homogeneity and isotropy puzzles by the secondary author, Latham Boyle, in response to my question as to whether his team had got any further with considering flatness in respect to an earlier paper The Big Bang, CPT, and neutrino dark matter (ignore the date in the vertical text to the left of the pdf and consider instead the date of the first version, 23 Mar 2018).

This paper is a tough, maybe even totally impenetrable read for a non-professional, interested amateur like myself.  Therefore, if I have any hope of working through it, I will need a structured method for extracting meaning from the text.  This, along with following posts, is that structured method.

--

Introduction:

Pictures of the earth from space reveal it to be remarkably round and smooth, with a curvature radius much larger than the distances we explore in our everyday lives. Hence, on those scales, it is often a good approximation to treat the earth’s surface as flat. The explanation of this flatness involves thermodynamics. First, gravity pulls matter inwards, so its potential energy is minimised in a spherical configuration. Second, relative motions of the earth’s constituent atoms generate friction: as a mountain is pulled downwards or a rock falls, gravitational potential energy is converted into heat. Even if the earth is regarded as a closed, conservative system, there are vastly more ways of distributing its internal energy among its ∼ 10⁵⁰ atoms as heat than there are of creating more complicated, less spherical geometries. Taken together, gravity, friction and the earth’s many atoms explain why it is locally flat [1].

This seems fair enough: massive bodies in space tend towards roundness (although not necessarily perfect sphericality, due to the effects of rotation) and hence towards local flatness of their surfaces.

In this Letter, we provide a similar, entropic explanation for the observed flatness, homogeneity and isotropy of the cosmos. Our argument rests on a new calculation of gravitational entropy, along the lines advocated by Hawking and others in the context of black hole thermodynamics [2, 3, 4].

An entropic explanation is one that rests on the notion that in a closed system entropy tends to increase – per the second law of thermodynamics.  That is why “potential energy is minimised” as per the first paragraph: the released energy is spread among the atoms as heat.  Gravitational entropy, on this account, is the entropy associated with settling towards the minimum gravitational potential energy state.
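Just to make the quoted mountain-and-rock sentence concrete (my own illustrative numbers, not the paper’s): a rock of mass m falling a height h converts gravitational potential energy m·g·h into heat, and dumping that heat into surroundings at temperature T increases the entropy by roughly

ΔS ≈ m·g·h/T

For 1 kg falling 10 m at T ≈ 300 K, that is ΔS ≈ 98 J / 300 K ≈ 0.3 J/K – an enormous number in microstate terms, since every k_B ≈ 1.4 × 10⁻²³ J/K of entropy multiplies the count of available microstates by a factor of e.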

Flatness, in cosmological terms, means the absence of (spatial) curvature – which isn’t particularly useful as a definition.  Per COSMOS at Swinburne University, a flat universe is one that is perfectly balanced between eventual collapse and eternal expansion – and this is a question of density.
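The usual way to express that (textbook cosmology again) is via the density parameter

Ω = ρ/ρ_crit,  where  ρ_crit = 3H²/(8πG)

and H is the Hubble parameter.  Ω > 1 means a closed universe headed for collapse, Ω < 1 an open universe expanding forever, and Ω = 1 the flat, knife-edge case – observations place Ω very close to 1.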

Homogeneity, in cosmological terms, is smoothness at a large scale.  While it is clear the universe has lumps in it at smaller scales (clusters of galaxies, galaxies, solar systems, planets, individual people, molecules and so on), at a grand scale, the universe is surprisingly smooth.  Together with isotropy, this constitutes the cosmological principle – which is the notion that we humans are not privileged observers.  No matter where one is in the universe, things look pretty much the same (at a grand scale).

Isotropy is invariance to orientation.  Again, at large scales, it doesn’t matter what direction you look in to observe the universe, things look the same in every direction – as if we were at the centre of a smooth, monochrome bubble.

Note that the combined flatness, homogeneity and isotropy of the universe is not an assumption about it, it’s an observation, and therefore something to be explained.

I don’t believe that it is necessary to delve into the references here as they are just acknowledgements by the authors that the work that follows has been presaged by earlier physicists.  The links are provided in case anyone else wants to look at that preceding work.

The calculation is made possible by our new approach to the boundary conditions for cosmology, implementing CPT symmetry and analyticity at the bang, quantum mechanically, to solve many puzzles [5, 6, 7, 8].

The term “boundary conditions for cosmology” appears to be a reference to the balancing act that the universe is engaged in – its flatness for example – which has been referred to by others as “fine tuning”.  More broadly however, boundary conditions are the constraints which permit solutions to equations – in this case those equations would be those that describe the universe.

CPT symmetry is the symmetry associated with charge (positive-negative), parity (see the following) and time (future-past).  Parity is related to chirality or handedness but more broadly to a change in sign (and hence direction) – though it should be understood as different to a rotation.  A parity transformation is akin to creating a mirror image rather than turning something around.  Note that in the Standard Model parity on its own is not a symmetry of our universe (the weak interaction violates it) – which is why there is an implication of a second, paired universe in which antimatter predominates.
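Spelled out, the three transformations act as follows (standard definitions):

C (charge conjugation): particle ↔ antiparticle, so charge q → −q
P (parity): (t, x, y, z) → (t, −x, −y, −z), a mirror inversion of space
T (time reversal): (t, x, y, z) → (−t, x, y, z)

CPT symmetry is the statement that applying all three together leaves the laws of physics unchanged, even though C, P and T individually (and in pairs) can be, and are, violated.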

The term “analyticity” is again about descriptive equations which, in this case, are such that the related “Taylor series (for each) about x0 converges to the function in some neighbourhood for every x0 in its domain”.  What I take from this is that the universe is contiguous and complete, meaning that it doesn’t have locations in which there are sudden breaks or discontinuities.  Another way to say that is that the universe is “smooth” (in addition to being flat).  Yet another way, perhaps less formal and precise, and at the danger of being simplistic, is to consider what it takes to analyse something with confidence – basically you would not want to have elements of the relevant data missing or uninterpretable.
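For reference, the Taylor series in question is the standard one: a function f is analytic at x₀ if

f(x) = Σₙ f⁽ⁿ⁾(x₀)·(x − x₀)ⁿ/n!

converges to f in some neighbourhood of x₀.  Functions with jumps or kinks fail this test, which is the sense in which analyticity encodes “no sudden breaks”.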

The term “the bang” appears to be a stylistic affectation on the part of the authors – most of us call it “the big bang”.  Realistically, given the current size of the universe (and the amount of energy in it now compared to then), the big bang was not really that big a bang.

The puzzles referred to appear to be:

  • apparent CPT symmetry breaking, 
  • dark matter, 
  • the arrow of time, 
  • and vacuum energy and the Weyl anomaly (both have links to symmetry breaking)

--

We outline the connection to Penrose’s classical Weyl curvature hypothesis [9] in the conclusion.

This will be addressed in the conclusion (as by the time I work on that I will almost certainly have forgotten that I covered it here) but … the Weyl curvature hypothesis “is rooted in a concept of the entropy content of gravitational fields” and assumes a very low initial entropy of the cosmological gravitational field.  Note that Weyl curvature is related to the curvature of spacetime.

It might be worth noting that Penrose has a conception of the universe in which it is cyclic (incorporating the Weyl curvature hypothesis), with separate aeons or instances or (timelike) sectors of the universe.  Effectively, the “future timelike infinity” of one aeon is the big bang of the next – our future eternity is functionally equivalent to the zero point (or singularity) of the next universe, from which it emerges.  This has some aspects of similarity to my model, since an event horizon, although it appears to be a surface from the outside, is effectively a point from which all the mass/energy that passes through it would emerge into another universe.  The main difference we have is that I don’t think that it would be possible to reach the “future timelike infinity” because by that point the universe would have infinite mass/energy … and that probably comes with some attendant problems.  What makes more sense to me is that when the pressure on the universe to expand evaporates (because the total mass/energy from the “previous” universe is expended), this leads to an effective collapse, a rescaling and thus the initiation of a new universe (or rather a new pair of universes).

Note that the event horizon concept is such that mass/energy that enters our universe does so across its entirety, making the universe inherently homogeneous – negating the need for inflation as an explanation for the homogeneity of the cosmic microwave background.  (Penrose’s conformal cyclic cosmology also implies no inflation.)

Using these new boundary conditions, we showed that cosmologies with radiation, dark energy and curvature are periodic in imaginary proper time, with a Hawking temperature given by that of the corresponding de Sitter spacetime [7]. This is a strong hint that the solutions should be interpreted thermodynamically.

The term “periodic in imaginary proper time” makes sense to me, but I might be misunderstanding it since it aligns so closely with my model (so I might be motivated to misinterpret).  Imaginary time is time that is perpendicular to real time.  Remember there are “real numbers” and “imaginary numbers”, where an imaginary number is i with a value attached to it, so 2i is an imaginary number of magnitude 2 – there are a few examples in physics and engineering where imaginary numbers have very real usefulness, so don’t think of them as “made up” – the thing to remember is that an imaginary number represents orthogonality to the real number line.

The term “proper time” has some meaning in relativity, but the use here is a bit perplexing.  In a sense, time is already imaginary, being orthogonal to the spatial dimensions (look at complex Minkowski spacetime for example) – but proper just implies the time a clock following a timelike world line would record, or “dilated time”.
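For reference, proper time in relativity is the time accumulated by a clock along its own worldline:

τ = ∫ √(1 − v²/c²) dt

so a moving clock records less elapsed time than a stationary one – the “dilated time” referred to above.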

I’m tempted to think that this is a reference to “actual” imaginary time, and that what is being referred to is a sequence of time sectors which are all imaginary in relation to each other (that is, orthogonal to each other).  But I accept that this might well be wrong.  It may become clear as we get deeper into the paper.

A Hawking temperature is a reference to the amount of Hawking radiation from a black hole: the smaller a black hole is, the higher its Hawking temperature – meaning that micro black holes dissipate quickly, and supermassive black holes don’t.  The reference to a Hawking temperature does seem to imply the involvement of a black hole.
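The standard formula for a black hole of mass M is

T_H = ħ·c³/(8π·G·M·k_B)

with M in the denominator – hence “smaller black hole, hotter”.  For de Sitter spacetime, though, the analogous temperature is set by the Hubble rate H rather than by any mass, T_dS = ħ·H/(2π·k_B): it is a horizon temperature, so no actual black hole needs to be involved.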

Here, in an appropriate time slicing we find the solutions describe new gravitational instantons, one for each value of the macroscopic parameters, allowing us to calculate the gravitational entropy. For a given, positive dark energy density, with radiation included the gravitational entropy can be arbitrarily large. As the total entropy is raised, the most likely universe becomes progressively flatter. Furthermore, inhomogeneous or anisotropic perturbations are suppressed. Hence, to the extent that the total entropy exceeds the de Sitter value, the most probable universe is not only flat, but also homogeneous and isotropic on large scales.

I am going to skip over “gravitational instantons” – not because they are easy but because they are hard and appear important.  The “macroscopic parameters” should become clear as we get into the body of the document.

The remainder of the paragraph basically just indicates that as the universe ages, it should – due to increasing entropy (and thus disorder) – become increasingly homogeneous and isotropic, implying that we should not be surprised to find ourselves in such a universe.  What the de Sitter value of entropy actually is (other than its notation, S_Λ) … is unclear.  It may become clearer as we get deeper into the paper.  Some casting around various papers indicates that it might be a reference to a bound below which de Sitter spacetime is not stable, but as far as I can see the fame of de Sitter entropy is quite limited.  It’s not clear what the ramifications for the early universe (of low entropy) would be.
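One standard formula that S_Λ may denote (my inference, not something stated at this point in the paper): inserting the de Sitter horizon area A = 12π/Λ into the Bekenstein–Hawking formula gives

S_Λ = 3π·k_B·c³/(Λ·G·ħ)

so the entropy is fixed entirely by the cosmological constant Λ – the smaller Λ is, the larger the horizon and the larger this (finite) entropy.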

--

With that we are a whole two paragraphs into the paper – or about half a page – with the exception of “gravitational instantons”, which I will look into next.  And these were the easy paragraphs!

--

Hopefully it’s quite obvious that this series is more of an open pondering session rather than any statement of fact about what the authors of Gravitational entropy and the flatness, homogeneity and isotropy puzzles intended to convey.  If I have misinterpreted them, then I’d be happy to hear about it.