See Part 1 to understand what this is about.
--
The mention of a path integral indicates that we are talking about the partition function in relation to quantum field theory here rather than statistical mechanics, or as Wikipedia helpfully puts it:
In quantum field theory, the partition function Z[J] is the generating functional of all correlation functions, generalizing the characteristic function of probability theory.
Um, ok. That doesn’t really help. I think it’s entirely possible to get hung up on this sentence for too long. For the moment, therefore, suffice it to say that we are talking about a partition function.
A partition function is the normalising constant applied to the
probability equation for a microstate.
Think about
statistical mechanics of, for example, a gas.
A gas consists of a multitude of particles (molecules usually, atoms if
it’s a noble gas). Knowing the
microstate would require an exhaustive cataloguing of all the molecules, how
many there are, what they are, where they are and what momentum they all
have. Knowing that would give you other
characteristics of the gas (pressure and temperature for example). But we never know all that and the microstate
is changing all the time. Instead, we
can think of the likelihood of an ensemble of microstates (a bunch of
them). The sum of all probabilities has
to be 1 – hence the normalising factor.
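As a concrete (and entirely made-up) toy example, the Boltzmann weights and the normalising constant can be sketched in a few lines of Python – the energies and temperature here are arbitrary numbers chosen purely for illustration:

```python
import math

# Toy system: a handful of microstates with made-up energies (arbitrary units).
energies = [0.0, 1.0, 1.0, 2.0, 3.5]
kT = 1.0  # temperature, in the same arbitrary units

# Unnormalised Boltzmann weight for each microstate.
weights = [math.exp(-E / kT) for E in energies]

# The partition function Z is the normalising constant: the sum of all weights.
Z = sum(weights)

# Probability of each microstate: weight / Z.
probs = [w / Z for w in weights]

print(Z)
print(sum(probs))  # the probabilities now sum to 1
```

Note that the lowest-energy microstate comes out as the most likely one, which is the “some microstates are more likely than others” point in miniature.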
What I think they are getting at here is that some microstates are more likely than others, and as you integrate more microstates, you arrive at a more likely overall state (or “path”) for the system to be in, and eventually you can say “I don’t know what the precise microstate is at any one time, but the temperature of the gas is this and its pressure is that”.
Go even deeper and
think about things at the quantum rather than molecule level, and you have the
same sort of situation. It is not
possible to know the precise microstate (quantum state, maybe) of a system,
especially when you have superposition of different states that don’t manifest
until there’s been a measurement, but you can integrate the likelihood of
states to arrive at things that you can say about the overall state. The sum of the probabilities of all quantum
states is going to be 1 again, and you therefore need a normalising factor,
which would be the partition function.
The partition function would, therefore, tell you something about the
likelihood of each ensemble of states (or configurations).
What I expect
happens is that there is a peak of probability at what constitutes reality
(which is a bit fuzzy across a relatively narrow range) – other states are not impossible,
but they are very unlikely.
Why this occurs in
imaginary time is not immediately obvious.
It might be worth noting what imaginary time is though.
There is what is called a Wick rotation, which converts the geometry of spacetime from Lorentzian to Euclidean. This converts the metric for spacetime from ds² = −(dt)² + dx² + dy² + dz² to ds² = (i·dt)² + dx² + dy² + dz² (because i² = −1). Note that this sort of metric is implied in On Time where I talk about an invariant “spacetime speed” (although I deliberately collapsed down the spatial components to just one x dimension for ease of calculation).
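The sign flip at the heart of this rewrite is easy to check numerically with Python’s complex numbers – the interval values below are arbitrary, chosen only for illustration:

```python
# Check that (i*dt)^2 = -(dt)^2, the sign flip behind the Wick rotation.
dt, dx, dy, dz = 0.3, 0.1, 0.2, 0.4  # arbitrary spacetime intervals

lorentzian = -dt**2 + dx**2 + dy**2 + dz**2       # ds^2 with signature (-,+,+,+)
rewritten = (1j * dt)**2 + dx**2 + dy**2 + dz**2  # the same thing, via (i*dt)^2

print(lorentzian, rewritten.real)  # identical, and the rewrite has no imaginary part
```

The two expressions are numerically identical, which is the whole point: multiplying the time interval by i is what lets the Lorentzian minus sign be absorbed into an (imaginary) time coordinate.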
The phrase “periodic
in imaginary time” is a reference to the alignment of temperature (in statistical
mechanics) with oscillations in imaginary time (in quantum mechanics) – in that
the related equations are of the same form.
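That alignment of forms can be made concrete. In units where ħ and Boltzmann’s constant are both 1 (a simplifying assumption, purely for illustration, as are the energy and temperature values), the quantum evolution factor e^(−iEt) evaluated at the imaginary time t = −iβ reproduces the Boltzmann weight e^(−βE) from statistical mechanics:

```python
import cmath
import math

# Units where hbar = Boltzmann's constant = 1 (a simplifying assumption).
E = 2.0        # energy of some state (arbitrary)
T = 0.5        # temperature (arbitrary)
beta = 1.0 / T

# The quantum evolution phase e^{-iEt}, evaluated at the imaginary time t = -i*beta ...
t_imaginary = -1j * beta
phase = cmath.exp(-1j * E * t_imaginary)

# ... equals the statistical-mechanics Boltzmann weight e^{-beta*E}.
boltzmann = math.exp(-beta * E)

print(phase.real, boltzmann)  # the same number
```

So inverse temperature in statistical mechanics plays the same role as an interval of imaginary time in quantum mechanics, which is the alignment the phrase is gesturing at.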
Hawking
and collaborators first used these methods to investigate black hole
thermodynamics, a topic which has since burgeoned into holographic studies [10, 11] and experimental tests using analog
systems [12, 13]. We shall follow Hawking et al.'s original approach here [14, 15, 16, 17].
This is a reference
to the idea that the entropy of a black hole relates to its surface and not its
volume (this seems rather obvious when it’s clear that time and space go to
zero at the surface of a black hole – there isn’t anything “inside” a black
hole in terms of our universe, everything that is going on with it is going on
either at or above the surface [where particles, if there are any, are ripped apart
and thus give off radiation]).
In
the cosmological setting, it is an excellent approximation to treat the
radiation as a relativistic fluid, in local thermal equilibrium at a
temperature declining inversely with the scale factor.
I think this is
just talking about radiation (composed of photons) being conceptually similar
to a fluid that moves very fast. If you’ve
ever heard a term like “awash in photons” (probably in science fiction), then you
have the idea of a relativistic fluid. It
should be noted that there are equations for fluid flow, which are analogous to
equations for electricity, because electrons in a wire act a bit like molecules
of oil in a hydraulic system and the same analogy can be made to radiation.
The
instantons we present are saddle points of the path integral for gravity: real,
Euclidean-signature solutions to the Einstein equations for cosmologies with
dark energy, radiation and space curvature.
We’re talking about four-dimensional manifolds again. I do not believe that the saddle points mentioned are in those manifolds, but are rather points of high likelihood (and thus reality) among the possible (micro or quantum) states, and the argument is that those states manifest gravity of the sort that we observe in our universe.
The
associated semiclassical exponent iS/ħ is real, large
and positive, and may be interpreted as the gravitational entropy.
The exponent iS/ħ appears in the equation for the “generating functional Z(J)”, where a generating function is a way of representing the sum of an infinite sequence of numbers, but in this case we are representing a space of functions – it’s basically a description of the machine (think of a black box with an input and an output) that does something to something. For what it’s worth, this is that equation (taken from here) – note that they have an erroneous close bracket after the final x, but I’ve removed it for the replication below:
Z(J) = ∫ DΦ exp[(i/ħ)(S[Φ] + ∫ d⁴x J(x)Φ(x))]
It’s written slightly differently here (with no erroneous close bracket).
Note that e^(ix) cycles eternally between 1, i, −1 and −i as x increases in magnitude from zero (1 where x = 2nπ, i where x = (2n+1/2)π, −1 where x = (2n+1)π and −i where x = (2n+3/2)π, where n is any integer). Therefore, with either equation, iS/ħ is a phase offset.
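This cycling is easy to verify with Python’s complex-number support (n = 3 here, but any integer gives the same four values):

```python
import cmath
import math

# e^(ix) cycles through 1, i, -1 and -i as x steps through quarter-turns of pi.
n = 3  # any integer works
checks = [
    (2 * n * math.pi,          1 + 0j),
    ((2 * n + 0.5) * math.pi,  0 + 1j),
    ((2 * n + 1) * math.pi,   -1 + 0j),
    ((2 * n + 1.5) * math.pi,  0 - 1j),
]
for x, expected in checks:
    value = cmath.exp(1j * x)
    print(f"e^(i*{x:.3f}) = {value.real:+.3f}{value.imag:+.3f}i (expected {expected})")
```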
In these equations, Φ refers to a scalar field – which is our manifold again (the d⁴x tells us that we are integrating over four-dimensional spacetime) – and the DΦ is telling us to “integrate over all possible classical field configurations Φ(x) with a phase given by the classical action S[Φ] evaluated in that field configuration”. J or J(x) is an auxiliary function, referred to as a current.
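The functional integral itself – over all possible field configurations – is not something you can evaluate on a laptop, but a zero-dimensional toy of it is, where the “field” is just a single number φ. The sketch below is my own toy, not anything from the paper: I’ve picked a Euclidean-style action S[φ] = φ²/2, for which Z(J) = ∫ dφ exp(−S[φ] + Jφ) has the exact answer √(2π)·exp(J²/2), and the brute-force “sum over configurations” matches it:

```python
import math

# Zero-dimensional toy of the generating functional: the "field" is one number phi,
# the Euclidean-style action is S[phi] = phi^2 / 2, and
#     Z(J) = integral of exp(-S[phi] + J*phi) dphi
# has the exact closed form sqrt(2*pi) * exp(J^2 / 2).

def Z(J, half_width=10.0, steps=100_000):
    """Brute-force the integral on a grid (the 'sum over all configurations')."""
    dphi = 2 * half_width / steps
    total = 0.0
    for k in range(steps):
        phi = -half_width + (k + 0.5) * dphi
        total += math.exp(-phi**2 / 2 + J * phi) * dphi
    return total

J = 0.3  # an arbitrary "current" value
exact = math.sqrt(2 * math.pi) * math.exp(J**2 / 2)
print(Z(J), exact)  # the brute-force sum matches the closed form
```

The real thing replaces the single number φ with an entire field configuration Φ(x), which is why the integral becomes a functional one – but the role of J as a knob you can turn to generate correlation functions is already visible in the toy.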
But what does all this mean?
In the words of Vivian Stanshall, about
three o’clock in the morning, Oxfordshire, 1973: In layman's language, “It's
blinking well baffling. But to be more obtusely, buggered if I know. Yes,
buggered if I know.”
--
A couple of clarifications:
Z(J) is
a functional integral (space of functions) while S is an action
functional. An action
functional is related to the stationary-action principle, which when put very, very simply (perhaps
to the extent of being simplistic) is about things not doing more than they
have to. If an object is moving in a particular direction at a particular speed (so, with a velocity) it will continue to move at that velocity unless acted upon by a force. We know about a lot of forces that act on objects to change an object’s velocity: gravity, resistance (air, friction), other objects colliding with it, and so on. Digging into the concepts of relativity, each object has its own frame (of reference) in which it is stationary (with respect to itself).
In a sense, every single object in the
universe has a spacetime speed (note, not velocity) which is invariant. The actions between objects
could be worked out by considering how the spacetime speed is distributed
between their relative temporal speed and their relative spatial speed (see On Time). Effectively,
every single object in the universe could be considered as stationary
(different types of “stationary”, admittedly, but stationary all the same).
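The stationary-action principle itself can be illustrated with a toy calculation (all the numbers here are made up): discretise the path of a free particle between two fixed endpoints, compute the action S = Σ ½mv²·dt over the segments, and note that the straight-line (constant-velocity) path beats any wiggled version of itself:

```python
import math

# Discretised action for a free particle of mass m going from x = 0 at t = 0
# to x = X at t = T:  S = sum over segments of (1/2) * m * v^2 * dt.
m, T, X, N = 1.0, 1.0, 1.0, 100
dt = T / N

def action(path):
    """path is a list of N+1 positions at equally spaced times."""
    S = 0.0
    for a, b in zip(path, path[1:]):
        v = (b - a) / dt          # velocity on this segment
        S += 0.5 * m * v**2 * dt  # kinetic-energy contribution to the action
    return S

straight = [X * i / N for i in range(N + 1)]        # the classical (constant-velocity) path
wiggly = [x + 0.05 * math.sin(4 * math.pi * i / N)  # same endpoints, plus a wobble
          for i, x in enumerate(straight)]

print(action(straight), action(wiggly))  # the straight line has the smaller action
```

Any deviation from constant velocity adds to the v² term somewhere without being able to subtract it elsewhere, which is the “not doing more than they have to” idea in numbers.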
The statement “(t)he associated semiclassical exponent iS/ħ is real, large and positive” is interesting in itself. Given that i is the imaginary marker and ħ is a real, small and positive constant (about 10⁻³⁴ J·s), that would make S itself imaginary (not complex), large (or even just average) and negative – that is, of the form S = −iσ for some real, positive σ, so that iS = σ comes out real and positive. Negative is understandable, if it’s representing gravitational entropy, which is understood to be negative … but imaginary? Not quite sure about that.
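For what it’s worth, the arithmetic in that paragraph checks out with a made-up magnitude for S: if S is negative imaginary, then iS/ħ comes out real and positive (and large, because ħ is so small):

```python
hbar = 1.054571817e-34  # reduced Planck constant, in J*s

sigma = 1e-30           # made-up magnitude for the action, in J*s
S = -1j * sigma         # a purely imaginary, "negative" action

exponent = 1j * S / hbar
print(exponent)  # real, positive, and large because hbar is tiny
```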
--
Hopefully it’s quite obvious that this series is more of an open pondering session than any statement of fact about what the authors of Gravitational entropy and the flatness, homogeneity and isotropy puzzles intended to convey. If I have misinterpreted them, then I’d be happy to hear about it.