Thursday, 7 December 2023

A 4D Black Hole?

One feature of a universe undergoing Flat Universal Granular Expansion (FUGE) is that the density of the universe is the critical density, and that this density is equal to the density of a Schwarzschild black hole whose radius is the age of the universe multiplied by the speed of light.

A hidden assumption there is that of a 3D black hole, but the universe has at least one more dimension, since time and space are interchangeable under general relativity (see On Time for an example of this interchangeability).  So … the question arises: what happens with a black hole in more than 3 dimensions?

There’s some disagreement as to whether black holes are already 4D, because they exist in a 4D universe (with three dimensions of space and one of time) and they persist over time.  While most say yes, of course, I have seen the occasional person say no, of course not – basically because there is an inherent assumption that we only talk about spatial dimensions in relation to objects.  We can’t imagine a 4D object.  That’s not really the question I was pondering though.  I was thinking about how mass would be distributed in a spatially 4D black hole, working from the intuition that, because there is more volume in the 3D surface of a 4D glome than in a sphere, there would need to be more mass.

This is going about things the wrong way though.

Consider a standard Schwarzschild black hole, a mass (M_S) that is neither charged nor rotating, in a volume defined by its radius (r_S = 2GM/c²).  This is the radius at which the escape velocity is the speed of light, meaning that nothing, not even light, can escape.

We can show this easily – to escape, a body must have more kinetic energy than the magnitude of its gravitational potential energy, or

K + U(r) = ½mv² − GMm/r > 0

 

So, at the limit where the kinetic energy is not quite enough,

½v² = GM/r

 

Since we are talking about the limit where v = c at r_S, rearranging we get:

r_S = 2GM/c²

 

Note that we are effectively talking about vectors here: a body must have a velocity, not just a speed, in order to escape.  In other words, the body must have a velocity greater than v = c directed along the separation from the mass M.  Every point on the sphere defined by the radius r_S around the centre of mass M_S defines such a vector from the centre of the sphere.  The sphere, however, is not really relevant to each body that is or is not able to escape.  All that matters is the vector from the centre of the mass M to the body’s own centre of mass.
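As a quick sanity check of that algebra (a minimal Python sketch using rounded values for G and c, with the Sun’s mass as an example; the function name is mine, purely for illustration):

```python
# Minimal check of r_S = 2GM/c^2, with rounded constants in SI units.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius at which the (classically derived) escape velocity reaches c."""
    return 2 * G * mass_kg / c**2

M_sun = 1.989e30     # kg
print(schwarzschild_radius(M_sun))   # ~2.95e3 m, the familiar ~3 km for a solar mass
```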

The same applies to a 4D black hole and also to a hypothetical 2D black hole.  The limit will always be given by r_S = 2GM/c², meaning that the mass does not change.  A 4D (along with a hypothetical 2D) black hole has the same mass as a 3D black hole.

---

Note that if we talk in terms of natural Planck units, the equation resolves to r_S = 2M, because both G and c are effectively just exchange rates between units, and it can be seen that the radius is two Planck lengths for every Planck mass.  This could have been intuited from the Mathematics for Imagining a Universe, where the conclusion that “mass … is being added to the universe at the rate of half a Planck mass … per Planck time” is reached, noting that, in that scenario, the universe is expanding at the speed of light.
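A small sketch of that claim, assuming the usual definitions of the Planck length and Planck mass (rounded values for G, c and ħ):

```python
import math

# The Schwarzschild radius of one Planck mass, measured in Planck lengths.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
hbar = 1.055e-34     # J s

planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_mass = math.sqrt(hbar * c / G)        # ~2.2e-8 kg

r_S = 2 * G * planck_mass / c**2
print(r_S / planck_length)                   # ~2.0, i.e. r_S = 2M in Planck units
```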

---

Some may note that there is a problem.  In On Time, I noted that the equation for kinetic energy is an approximation.  It is noted elsewhere that E_k = ½mv² is a first (or second) order approximation.

As v → c, the approximation becomes less valid, as E_k = mc²·(1/√(1 − v²/c²) − 1) tends towards infinity.  The derivation above explicitly uses v = c, so … it doesn’t work.  (Other than the fact that, obviously, it does.)

So, perhaps it’s better to come at it from the other side.  From outside of the black hole, using the relativistic effects of gravitation.

Time is dilated and space is contracted (according to normal parlance) with proximity to a mass – strictly speaking, in relation to the gravitational potential rather than the force of gravity.  The relevant equation is this:

t_0 = t_f·√(1 − (2GM/c²)/r)

 

Note that t_0 is the “proper time” between two events observed from a distance of r from the centre of the mass M, and t_f is the “coordinate time” between the same two events observed at a distance of r_f such that the gravitational acceleration experienced (a_f = GM/r_f²) is effectively zero (and assuming that no other effects are in play).  This can be made even more complicated by considering that the events must be either collocated or equidistant from both observation locations, but we could just be a little less stringent, and say that t_f is “normal time” and that t_0 is “affected time” (that is, affected by gravitation due to proximity to the mass M).

As r decreases with increasing proximity to the mass M, the affected time between standard events (in the normal frame) gets shorter until, when 1 − (2GM/c²)/r = 0, all normal events have no separation at all.  This represents a limit, as you can’t get less than no separation (and what you would get instead, if something were to approach the mass M more closely, is the square root of a negative number).

The value of r at this limit is, of course, r = r_S = 2GM/c².  Note that while we have used the value r, we were not specifically talking about a radius, merely a separation.  Therefore the same principles as mentioned above apply.
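A minimal sketch of how the affected time shrinks as r approaches r_S (with r expressed as a multiple of r_S, so that the factor is just √(1 − r_S/r); the function name is illustrative):

```python
import math

# The ratio t_0/t_f = sqrt(1 - r_S/r) for various separations r (in units of r_S).
def affected_over_normal(r_over_rS):
    return math.sqrt(1 - 1 / r_over_rS)

for r in (10, 2, 1.1, 1.01, 1.0001):
    print(r, affected_over_normal(r))
# The ratio tends to zero as r -> r_S; any closer and the term under the
# square root goes negative, which is the limit described above.
```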

---

But, of course, that equation is predicated on “the Schwarzschild metric, which describes spacetime in the vicinity of a non-rotating massive spherically symmetric object”, so it already assumes r_S = 2GM/c².

---

So … we have to go deeper.  Looking at relativistic kinetic energy, we can see that it’s not quite so simple.  At relativistic speeds, kinetic energy (K, to remain consistent with above) is given by:

K = √(p²c² + m²c⁴) − mc²

 

Where p is the linear momentum, p = mγv, and γ = 1/√(1 − v²/c²), which approaches infinity as v → c.  Substituting into and rearranging the above, we get:

K = √((mγv)²c² + m²c⁴) − mc² = √(m²γ²v²c² + m²c⁴) − mc²

K = mc²·(√(γ²v²/c² + 1) − 1)

But since γ² = 1/(1 − v²/c²), and just focussing on the term within the square root for the sake of clarity:

γ²v²/c² + 1 = (v²/c²)/(1 − v²/c²) + 1 = v²/(c² − v²) + 1

= v²/(c² − v²) + (c² − v²)/(c² − v²) = c²/(c² − v²) = 1/(1 − v²/c²)

And so:

K = mc²·(√(1/(1 − v²/c²)) − 1) = mc²·(γ − 1) = mγc² − mc²

 

Which is the same equation as shown in On Time (just the terminology is different), which puts us back to where we were before.
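To put some numbers on the problem, here is a short sketch comparing the exact K = mc²·(γ − 1) with ½mv² at various fractions of c (per kilogram of mass; the speeds chosen are arbitrary):

```python
import math

# Ratio of relativistic kinetic energy to the (1/2)mv^2 approximation.
c = 2.998e8   # m/s

def k_relativistic(v):
    gamma = 1 / math.sqrt(1 - v**2 / c**2)
    return c**2 * (gamma - 1)          # per kilogram

def k_classical(v):
    return 0.5 * v**2                  # per kilogram

for fraction in (0.01, 0.1, 0.5, 0.9, 0.99):
    v = fraction * c
    print(fraction, k_relativistic(v) / k_classical(v))
# The ratio is ~1 for v << c but grows without bound as v -> c, which is why
# using (1/2)mv^2 = GMm/r right at v = c is questionable.
```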

---

So, while the intuitive solution has some procedural difficulties, we really need to go via the derivation of the Schwarzschild metric to see that r_S is the radius at which the metric is singular (in this case meaning that there is a division by zero).

It is interesting that, in this case, using an approximation that is based on the assumption that v<<c works perfectly in a situation where v=c.  Why that is the case is currently unclear to me.

---

Update: Having thought about it a little more and looking at some other views on why the factor of 2 is in the Schwarzschild radius, my suspicion is that the curvature of space in the region around a black hole is such that the escape velocity at any given distance from the black hole’s centre of mass (where r ≥ r_S) is given by v = √(2GM/r), even if that is the result you might reach simplistically (and by making incorrect assumptions about the value of kinetic energy at relativistic speeds).

Here's my thinking – the escape velocity at the Schwarzschild radius is, due to completely different factors, of a value such that the following equation is true: ½mv² = GMm/r, where m is irrelevant because it cancels from both sides of the equation, M is the mass of the black hole, v is the escape velocity (in this case c) and r is the distance between the body in question and the centre of the mass M (in this case the Schwarzschild radius).  At a nominal distance r very far from the centre of the mass M, the kinetic energy of a mass m at escape velocity v is indistinguishable from ½mv² and is equal in value to the gravitational potential energy GMm/r, so the equation ½mv² = GMm/r is true (noting that, again, m is irrelevant).

Since we have picked a nominal distance, we could conceivably pick another distance which is slightly closer to the centre of the black hole and have no reason to think that ½mv² = GMm/r is no longer true.  As we inch closer and closer to the black hole, there is no point at which we should expect the equation to suddenly no longer hold, particularly since we know that it holds at the very distance from the black hole at which escape is simply no longer possible because we bump up against the universal speed limit of c.

I accept that it’s not impossible that the escape velocity might gradually depart from the classical equation (becoming lower in magnitude due to the fact that we must consider relativity), only to pop back up to agreement with classical notions right at the last moment.  I don’t have a theory to account for that, and have no intention to create one, but my not having a theory for something doesn’t make it untrue.  It just seems unlikely.  To me.
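For what it's worth, here is a small sketch of the classical formula itself (the very formula whose validity at these speeds is in question), rewritten as v/c = √(r_S/r): it rises smoothly to c at r = r_S, with no jump anywhere along the way.

```python
import math

# Classical escape velocity as a fraction of c, with r in units of r_S.
# Since r_S = 2GM/c^2, v = sqrt(2GM/r) becomes v/c = sqrt(r_S/r).
def escape_velocity_over_c(r_over_rS):
    return math.sqrt(1 / r_over_rS)

for r in (1, 1.5, 2, 10, 100, 1e6):
    print(r, escape_velocity_over_c(r))
```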

Sunday, 5 November 2023

The Mass of Everything

There is a paper in the American Journal of Physics that got some news attention in mid-October 2023.  The version that I first saw excitingly implied that it provided evidence that the universe is a black hole, which naturally caught my eye. (Anton Petrov also put out a video about it.)

 

This is the image that was being heralded:


 

Up in the top right corner, we can see the Hubble radius, which is the age of the universe times the speed of light (which is also the speed of light divided by the Hubble parameter).  Above that and to the left is a region marked as “forbidden by gravity”, basically indicating that anything in this region of the chart would be denser than the densest possible black hole of that size (a non-rotating black hole).

 

We will return to that, but the first thing that leapt out at me about this chart was the fact that atoms, the Covid virus, an unspecified bacterium, an unspecified flea, the (average) human, an unspecified whale, planets, moons, the Earth, main sequence stars and the Sun are all in a straight line.  That line seems to have a gradient of 3 on the plot of log₁₀M against log₁₀r (that is, M ∝ r³) with little or no offset.  This makes eminent sense, since there’s an established relationship between the mass of something and its volume, mediated by its density.  What might not be so immediately obvious is the question of the offset.

 

Let us introduce the density as ρ, and note that we are talking about a radius, such that the associated volume is 4πr³/3, so M = ρ·4πr³/3 = (ρ·4π/3)·r³ and therefore

 

log₁₀M = log₁₀((ρ·4π/3)·r³) = log₁₀(ρ·4π/3) + log₁₀(r³) = 3·log₁₀r + log₁₀(ρ·4π/3)

 

If there were little or no offset, this would imply that log₁₀(ρ·4π/3) ≈ 0 and so ρ·4π/3 ≈ 1, or ρ ≈ 0.25 (in units in which water has a density of 1, i.e. g/cm³).  However, in the paper, it’s noted that the line is consistent with the density of water (1 g/cm³).  This represents an offset of log₁₀(4π/3) ≈ 0.6.
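A quick check of that offset (assuming, consistent with the above, units in which water has a density of 1, i.e. grams and centimetres):

```python
import math

# Offset of the log-log line for objects of a common density rho:
# log10(M) = 3*log10(r) + log10(rho * 4*pi/3).
def offset(rho):
    return math.log10(rho * 4 * math.pi / 3)

print(offset(1.0))    # water: ~0.62, the ~0.6 offset mentioned above
print(offset(0.25))   # ~0.02, i.e. effectively no offset
```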

 

I looked at the background data (provided in a zip file) and noted that they estimated the radius of spherical humans, blue whales and fleas using the following process – to get the radius, take the length and divide by 2, which gives you a "ball", then divide by 2 again because we want the radius, not the diameter, of the ball.  The figures that they used were as follows (where yellow fill means that the values are received, and white means that they are calculated, noting that the virion here is a Covid virus particle):

 


I went to a bit more effort and got (using the same colour coding):

 


Even though the virion and human densities in their estimates are very wrong, and the whale's is somewhat wrong, there was very little effect on the relationship:

 

 

Note that the blue line in my chart represents black holes of different mass and the thin red line is log₁₀M = 3·log₁₀r + 0.6, with the dots near it coming from the table above.  The other dots, left to right, are the Milky Way galaxy, the current Hubble radius and the “observable universe” (inflated radius of 46.5 billion light years).
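As a rough cross-check of those two lines, here's a sketch using illustrative figures of my own for the "spherical human" (about 70 kg and 1.7 m long, halved twice as per the paper's process), rather than the exact values from their data, plus the Hubble-radius dot calculated from the critical density:

```python
import math

# Rough cross-check of the two lines, working in grams and centimetres.
G = 6.674e-8           # gravitational constant in cgs units, cm^3 g^-1 s^-2
c = 2.998e10           # speed of light, cm/s
LY_CM = 9.461e17       # centimetres per light year

def water_line(log_r):
    """The thin red line: log10(M) = 3*log10(r) + 0.6."""
    return 3 * log_r + 0.6

# A "spherical human": 1.7 m long, halved twice to get a radius, mass 70 kg.
human_r = 170 / 4                     # cm
human_M = 70e3                        # g
print(math.log10(human_M), water_line(math.log10(human_r)))
# ~4.85 versus ~5.49: a little under the line (the implied density is less
# than water), but indistinguishable from it on the scale of the chart.

# The Hubble-radius dot: mass from critical density times volume, compared
# with the mass of a black hole of that radius, M = c^2 * r / (2G).
rho_c = 9.4e-30                       # critical density, g/cm^3
r_H = 13.787e9 * LY_CM                # Hubble radius, cm
M_from_density = rho_c * (4 * math.pi / 3) * r_H**3
M_black_hole = c**2 * r_H / (2 * G)
print(M_from_density / M_black_hole)  # ~1: the dot sits on the black hole line
```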

 

We’ll get back to the “observable universe” later.

 

The second thing that leapt out at me was that there are other apparent lines on the chart, albeit shorter ones.  These lines have the same gradient but a different intercept with log₁₀M = 0.  There’s a vague line created by globular clusters, a more distinct line for galaxies and clusters of galaxies, and then a line for super-clusters right up against critical density (the density of the Hubble radius).  The offset for galaxies seems to imply a density in the order of about 10⁻²⁵ g/cm³.  Note that there are voids illustrated that are below the critical density, which makes sense: if some areas are denser than the overall density of a larger region, then other areas must be less dense – and these would be voids.

 

---

 

So … the observable universe.  According to calculations performed by Ned Wright (although he might not be the originator), the observable universe has a radius of about 47 billion light years – if one includes the assumption of accelerating expansion from about 5 billion years ago, as required to get the appearance, today, of constant expansion at the speed of light since 13.787 billion years ago, due to the disturbances caused by a period of inflation and two different periods of slower expansion.  In The Problem(s) with the Standard Cosmological Model, I explained why I have issues with that explanation, but if we accept that the observable universe is that big, and note that the argument for that radius is based on the assumption of a critical density which is pretty much the density that we observe, then we have a problem.

 

The large orange dot in my chart above shows where the observable universe plots to, given its density and its radius (and thus its volume).  It can be seen to sit above the black hole line and thus in the “forbidden by gravity” zone, or (a little less clearly):

 

 

And, if you read Ned Wright further, you will note that he goes on to say that the universe, as a whole, is more than 20 times the volume of the observable universe.  This is a low-end estimate if space.com is to be believed, given that they report a measurement of 7 trillion light years across, or 3.5 trillion light years as a radius.  They also suggest that another possible figure is 10²³ light years, but this is clearly muddled, since the universe would have expanded faster than the speed of light during any inflation event (not at the speed of light).  Plotting these three values as well, maintaining the same critical density (which is an assumption that is common to all three), we get:


 

So, something seems wrong here.  It does make me wonder why, given that the paper says “we plot all the composite objects in the Universe: protons, atoms, life forms, asteroids, moons, planets, stars, galaxies, galaxy clusters, giant voids, and the Universe itself”, they don’t plot the universe itself, not even the “observable universe” … unless the authors’ view is that the Hubble radius is the radius of the universe.

 

---

 

One issue raised by various people, including Anton Petrov, is that for the universe to be (inside) a black hole there could be nothing outside of it, which introduces the issue of how the density would suddenly plummet to zero at the boundary.  Generally, it is thought that the density outside of the Hubble radius is the same as inside (because we are not privileged, our part of space is not special) – and this is the basis on which I chart the “observable universe” above the black hole line.

 

Note that Anton at one point misrepresents the Hubble radius as the radius of the “observable universe”.  The Hubble radius is 13.787 billion light years, not 93 billion light years.

 

Note also that, just after 11:00, he says that when a black hole gets big enough you can technically go inside it and feel nothing, so I am not convinced by the argument that the region outside the Hubble radius would have to have zero density for us to be in a black hole.

 

If the region inside the Hubble radius constitutes a black hole (which it does, since its mass and radius satisfy the Schwarzschild relationship), then it’s possible to have black holes inside of black holes, since there’s a supermassive black hole at the centre of many galaxies and probably many other, smaller black holes in other locations.  Consider then the possibility that the Hubble radius itself is inside another, larger black hole.

 

Say, for example, that there’s a greater black hole the size of the “observable universe”, 93 billion light years across.  What would its density be?  The calculation for the density is ρ = 3c²/(8πGr²), which for a radius of 46.5 billion light years is 8.3×10⁻³¹ g/cm³, or a little under one tenth of the critical density of the universe of ρ_c = 9.4×10⁻³⁰ g/cm³, while having a volume that is 38 times greater.

 

Then consider the nesting of many effective black holes between the Hubble radius and the radius of the “observable universe” and think of one that is only slightly larger than the Hubble radius – 15 billion light years.  The density of that black hole would be 8.0×10⁻³⁰ g/cm³, or about 85% as dense as where we find ourselves, with a total volume that is about 30% greater.
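Those densities follow directly from ρ = 3c²/(8πGr²); a minimal sketch with rounded constants, converting from kg/m³ to g/cm³:

```python
import math

# Density of a Schwarzschild black hole of radius r: rho = 3c^2 / (8*pi*G*r^2),
# converted from kg/m^3 to g/cm^3 (divide by 1000).
G = 6.674e-11           # m^3 kg^-1 s^-2
c = 2.998e8             # m/s
LY = 9.461e15           # metres per light year

def bh_density_g_cm3(r_metres):
    return 3 * c**2 / (8 * math.pi * G * r_metres**2) / 1000

print(bh_density_g_cm3(13.787e9 * LY))   # ~9.4e-30, the critical density
print(bh_density_g_cm3(46.5e9 * LY))     # ~8.3e-31, the "observable universe" radius
print(bh_density_g_cm3(15e9 * LY))       # ~8.0e-30, the 15 billion light year case
```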

 

So then the question is, how dense would it have to be beyond the Hubble radius to still constitute a black hole at 15 billion light years out?  I estimate that it would be in the order of 30% of the critical density.  And out to the 46.5 billion light year radius of the “observable universe”, the average would be about 6%.  The bottom line is that the density doesn’t need to suddenly drop to exactly zero, even though we do introduce an apparent problem with privilege, since the region inside the Hubble radius is special (due to being towards the centre of a zone that is of higher density than the surrounding area).
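The shell estimates can be checked in a few lines, using the fact that a black hole's mass scales linearly with its radius while volume scales with the cube (everything is expressed relative to the Hubble-radius values, so the result comes out as a fraction of the critical density):

```python
# Average density required in the shell between the Hubble radius and a larger
# "black hole" radius, as a fraction of the critical density.  Mass and volume
# are both normalised to the Hubble-radius values.
def shell_density_fraction(r_outer_over_hubble):
    x = r_outer_over_hubble
    extra_mass = x - 1            # black hole mass scales linearly with radius
    extra_volume = x**3 - 1       # volume scales with the cube of the radius
    return extra_mass / extra_volume

print(shell_density_fraction(15 / 13.787))     # ~0.31, about 30% of critical
print(shell_density_fraction(46.5 / 13.787))   # ~0.06, about 6% of critical
```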

 

This is not, however, the only solution.  The other solution, more consistent with the notion of FUGE, and the implication of the paper’s writers, is that there’s nothing outside of the Hubble radius – not even empty space.  The problem with that is that it also seems to indicate that we are in a privileged location, because we appear to be at the centre of such a universe - unless the geometry is such that all points within appear to be central.

Wednesday, 4 October 2023

Light that is faster than light?

In the paper Double-slit time diffraction at optical frequencies, the authors describe using “time slits” to demonstrate interference between two pulses of light that are separated in time.  This was interpreted by Astrum as indicating that light can travel faster than the speed of light.

 

I’ve not been able to find anyone else who believes this, nor any paper on time slits following up on the one indicated above that concludes that light travels faster than the speed of light.  Nor does the paper make clear that the setup described by Astrum is what they actually had.

 

However, it’s an interesting thing to think about.  The question that immediately came to mind for me was: what was the temporal separation between the time slits, and how does that compare to the spatial separation between the source of the light pulses and the location where the time slits were instantiated (using a metamaterial that swiftly changes from mostly transparent to mostly reflective and back – or, as they put it, “creating” time slits “by inducing an ultrafast change in the complex reflection coefficient of a time-varying mirror”)?

 

This is the image that Astrum uses to illustrate the concept (noting that none of the light illustrated here is claimed to travel faster than light, that bit comes later in the video):


We actually have enough information to work out approximately how far the transmitter and target must be from the time-varying mirror.  The slits are separated, in one instance and according to the paper, by 2.3 picoseconds.  The transmitter is at very slightly more than 4 picolightseconds from the time-varying mirror, or a little over 1 mm.  There is a mention of separations of 800 femtoseconds, which would reduce all of these by a factor of four, and 300 femtoseconds (when the slits begin to merge), by another factor of about 2.5.
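For reference, here are those time scales converted into light-travel distances (a trivial sketch, just to make the geometry concrete):

```python
# Converting the quoted time scales into light-travel distances.
c = 2.998e8                      # speed of light, m/s

def light_distance_mm(seconds):
    return c * seconds * 1000    # metres -> millimetres

print(light_distance_mm(2.3e-12))   # slit separation of 2.3 ps  -> ~0.69 mm
print(light_distance_mm(4e-12))     # 4 "picolightseconds"       -> ~1.2 mm
print(light_distance_mm(800e-15))   # 800 fs                     -> ~0.24 mm
print(light_distance_mm(500e-15))   # ~0.5 ps reflective window  -> ~0.15 mm
```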

 

I suspect that this is not actually the case.  I suspect that the source-mirror separation is going to be in the range of 10cm, at least.  This is two orders of magnitude greater.  It could be as much as a metre or more, adding another order of magnitude.

 

Note also that the period of increased reflectivity is in the order of about 0.5 picoseconds (or 500 femtoseconds):

 

The implication is not trivial because Astrum has created an image in which the second pulse is initiated after the first pulse has already been reflected (watch the video for clarification, the image has been simplified to illustrate his point) and the metamaterial has gone back to being transparent.

 

I think it’s more likely to be the case that, when the second pulse is transmitted, the reflection-state for the first pulse has not even commenced.  Revisit the image above and move the source away by a factor of 100.  Even a factor of 10 would put the transmission of the second pulse before the period in which the metamaterial is reflective for the first pulse.

 

Why does this matter?

 

First, we need to think about another problem.  Let’s pretend that it’s ok to do what Einstein did and have a thought experiment in which we imagine riding on a beam of light.  Some physicists don’t like you doing this, so we may need to be careful.

 

Say we are travelling in a vacuum parallel to an enormous ruler that is L_0 = 1 light second long.  How long is that ruler in our frame?  Consider the ruler to be stationary (and pretend for the briefest moment that the question “relative to what?” doesn’t come up) so that we, riding on the beam of light, are travelling at v = c relative to it until we hit a target at the far end.

 

The equation for length contraction is L = L_0·√(1 − v²/c²), meaning that the length of the ruler, in our frame, the frame of the beam of light (or photon), is 0 light seconds.  The time taken to travel the full length of the ruler is 0 seconds.  The same applies if we double the length of the ruler, and keep on doubling it, or halve it and so on.  Irrespective of how long the ruler is, as soon as the beam of light starts travelling along it, within its own frame, it has already finished travelling along it.  It’s as if the beam of light simply teleported from the source at one end of the ruler to the target at the other.
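A minimal sketch of that contraction for speeds approaching c, with the final entry being the limiting case under discussion:

```python
import math

# Length of the L_0 = 1 light-second ruler in the traveller's frame,
# L = L_0 * sqrt(1 - v^2/c^2), for speeds approaching c.
L_0 = 1.0   # light seconds

def contracted_length(v_over_c):
    return L_0 * math.sqrt(1 - v_over_c**2)

for f in (0.9, 0.99, 0.9999, 1.0):
    print(f, contracted_length(f))
# At v = c the ruler has zero length in that frame, and the trip takes zero time.
```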

 

Now remember that we are on a beam of light.  A beam consists of a multitude of photons, each travelling through the vacuum at the speed of light, c.  And imagine that there are some motes of dust in the way, halfway along the ruler, some of which are struck by photons which therefore only travelled 0.5 light seconds (in the ruler’s frame), in a travelling-frame period of 0 seconds, getting to the mote as soon as it sets off.

 

How does this happen?  How does each photon “know” to travel only halfway along the ruler (which has no length anyway in its frame) and not the full length (or to just keep going)?

 

One possibility (in the weak sense of that word) is that each photon does in fact teleport from starting position to final position – with a delay due to the maximum speed at which information propagates.  But this implies an ability to predict the future, since photons only hit the motes of dust that are there at the time that the path of the light intersects them, so they would have to predict where to teleport to.  We can put that idea aside.

 

The idea that comes to mind is that the photon is effectively smeared across the entirety of its path until it is caused to decohere by an interaction with something (hence the need to specify “speed in a vacuum”).

 

The consequence of this is that, so long as there is a spacetime path from source to target, some element of the photon takes it.  And there’s no limitation on whether that path is time-like (Δx/Δt < c), space-like (Δx/Δt > c) or light-like (Δx/Δt = c).  What it won’t do, however, is go back in time, as the imagery produced by Astrum implied, when he presented this:

 

 

I would understand it more like this:



Note that the dashed horizontal lines are there to emphasise that the source events are in the past of the reflection events (the tall red boxes) and the reflection events are in the past of the capture events (the screens).  I have also emphasised that the source and screens are persistent over time (in the y-axis) and don't move (so unchanging in the x-axis).


There is always a potential path between the source and the screen, dependent on the state of the metamaterial (indicated as a black line when transparent and red when reflective - using the same protocol as Astrum) at the time (in the laboratory frame) that the beam of light gets there.  There is no need to postulate that photons went backwards in time in anyone’s frame.

 

The light blue and light green shaded areas indicate the spacetime region over which the light beam and individual photons are smeared, terminating at the screen event when and where the photons decohere.  Interference would result from where those shaded areas overlap.

 

So, there’s a hypothesis.  Can it be tested?


---


Oh, and in answer to the question of the title ...  Yeah, nah.  I don't think so.


Also, there's a characteristic of square waves that may not be well understood by many.  The more square a wave looks, the more overlapping sine waves (harmonics at ever higher frequencies) are required to generate it.  The ramification of this is that, in the frequency domain, a square wave is very wide - smeared out, one could say - and the more extreme that is (so if you have a short duty cycle square wave), the more spread out the frequencies are.
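Here's a rough numerical illustration of that spreading, using a single rectangular pulse and a discrete Fourier transform (the sample count and the 1% threshold are arbitrary choices of mine, purely to show the trend):

```python
import numpy as np

# How "square" pulses spread out in the frequency domain: the shorter the
# duty cycle, the more frequency components are needed at a significant level.
fs = 1000                      # samples per period (arbitrary units)
t = np.arange(fs)

def spectrum_width(duty_cycle, threshold=0.01):
    pulse = (t < duty_cycle * fs).astype(float)     # one rectangular pulse
    spec = np.abs(np.fft.rfft(pulse))
    spec /= spec.max()
    return int(np.sum(spec > threshold))            # significant frequency bins

for duty in (0.5, 0.1, 0.01):
    print(duty, spectrum_width(duty))
# The count of significant frequency components grows as the duty cycle shrinks.
```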


It'd be interesting to know how these frequencies travelled, and whether, as a consequence of turning on and off, a bow wave and wake were transmitted, and whether they could interact to cause interference without the need to posit the smearing of photons (although such a scenario would not resolve the issue of how a photon "knows" where to stop if blocked by something in its path).


---


Note that, from just before 15:00 in the video, Astrum talks about lightning finding its "optimum route" and implies that the optimum path for a photon might involve travelling backwards in time (see about a minute or so earlier, when he states "light always travels the path of least time").  I reject the latter notion, but the idea that photons are smeared over the spacetime between source and target is similar to the notion of finding the optimum path, with the photon effectively sampling the entire range of options.  So, in that sense, it would be the process of sampling options that leads to interference.

Sunday, 1 October 2023

Thinking Problems - Lab Leak


This is Fu (it's his name in PowerPoint).  He's our nominal Patient O (also sometimes styled as 0, or Zero) for Covid-19, caused by the virus SARS-CoV2.  Behind him is a potential other person in the chain, we can call him Fu2, he's a hypothetical intermediate human carrier of the virus who didn't come down with Covid-19 - who may or may not exist.  As they collectively are the portal of the virus into humanity, we can just refer to the Fu/Fu2 nexus as Fu, just keeping in mind that there may have been that human-human mechanism right at the start.

We don't know how Fu got infected with SARS-CoV2, but there are some theories, indicated by the lines.

It could be entirely natural, noting that there are some variants of that, some of which have the virus being shared between different animal vectors as it evolved (some of which might have been human).  That's what the additional dotted box means.  Fu interacted with an animal in the wild, at a market or somewhere else that had the virus and got Covid-19.

SARS-CoV2 could have been genetically engineered in a lab and then Fu could have been deliberately infected with it.  This would imply that SARS-CoV2 had been developed as a biological weapon.

Alternatively, there could have been infection from a petri dish, test tube or surface in a lab where the virus was being genetically engineered, as a biological weapon, in a gain of function effort to develop better methods for treating coronaviruses more widely (vaccines, antivirals and the like) or just out of scientific curiosity (i.e. pure research).

Finally, there could have been a crossover from an animal infected with SARS-CoV2 that was being treated, dissected, studied or whatever in a lab.  This may have been with the intent to develop a biological weapon, or do some gain of function for benign reasons, but in this case there had not (yet) been genetic engineering carried out.

Note that the purple arrows are pointing at the boxes, not at any of the other arrows.  The amount of evidence for each event is nominal; the size of the bubble could also relate to the quality of evidence, rather than mere quantity.  Note that it's evidence, not proof.  Some evidence might support multiple possibilities.

---

I think I have captured all the possibilities being thought of seriously.  Even if there is some bizarre vector, like aliens or the New World Order doing the genetic engineering and deliberately injecting Fu, this still falls into the category "Genetic Engineering".  Same with a god doing it, it's just that the technology would be different (supernatural genetic engineering).  If there is something that I have missed, I am more than happy to go through it and try to weave it in.

Note that even with genetic engineering, there was still a natural origin of the base virus that was being fiddled with.  So, there is naturally going to be a lot of evidence for natural origins.  I'm not really thinking about evidence that supports all cases, just delta evidence.  Those cases are (arrow type):

  • purely natural – Natural Origins→Fu (large red)
  • simple leak from a lab – Natural Origins→Leak from a Lab→Fu (small orange)
  • deliberate infection – Natural Origins→Genetic Engineering→Fu (tiny grey)
  • complex direct leak from a lab – Natural Origins→Genetic Engineering→Leak from a Lab→Fu (large green)
  • complex indirect leak from a lab – Natural Origins→Genetic Engineering→Leak from a Lab→Natural Origins→Fu (small blue) – so we can think of zoonosis as “natural”, in a sense, even if the virus were to be tinkered with at some point.

There is one other that I identified after I put the image together, namely Natural Origins→Leak from a Lab→Natural Origins→Fu.  The notion here is that the virus was transferred from where it normally is (in a bat, in a cave, somewhere in southern China) to a lab, got into another animal (a pangolin, a civet cat or one of those adorable raccoon dogs), and then that other animal became the vector for transmitting SARS-COV2 into humans.

There is also the possibility of a pre-SARS-COV2 virus being carried from a lab to the animal (via an intermediate human infection), with mutation(s) then happening in an animal or range of animals – resulting in a variant that became known as the Wuhan strain of SARS-COV2.

I’m not specifying a lab, although there are two candidates that seem more reasonable than any others given the location of the first outbreak – Wuhan Institute of Virology and the Wuhan Centre for Disease Control (about a quarter of a kilometre from the Huanan Seafood Market [also variously known as the Huanan Wholesale Market and Huanan Wholesale Seafood Market]).  It’s somewhat less likely that any leak occurred at another of the many labs in large cities in China and then got carried to Wuhan to break out there.  About as likely as Chinese authorities deliberately releasing a deadly virus on the doorstep of their major virology institute.

---

The problem, as I see it, is that the light blue ellipse encompasses what some people refer to as a "lab leak", also indicated by the larger green arrow – implying genetic engineering in a lab with an accidental release, possibly of a biological weapon but, at the very least, some questionable gain of function research.  Then they take any evidence that there might have been a leak from a lab as evidence for genetic engineering, which it isn't.

I suspect that there's a similar problem on the other side in that initial discussions of a "lab leak" included the assumption that it encompassed both a leak from a lab and genetic engineering, so they weren't counting direct transmission from an animal to a human inside a lab (or even just a SARS-CoV2 sample from an animal, onto a surface or into a test tube and thence to a human) as a "lab leak".  So they were saying that a "lab leak" was considered extremely unlikely where, in reality, a leak from a lab is entirely possible and they should have said more clearly that genetic engineering is extremely unlikely (for various reasons) but not entirely impossible.

It isn't helped by the fact that dog-whistles are used on both sides, and the one term sometimes means quite different things.

---

If something seems unclear, please let me know.

Tuesday, 19 September 2023

The Patrick T Brown Debacle - An Own Goal or Something More Sinister

On 5 September 2023, the climate scientist Patrick T Brown published an article at “The Free Press” which implied that he had perpetrated a Sokal-style hoax on the journal Nature.  His explicit claim was that an “unspoken rule in writing a successful climate paper (is that the) authors should ignore—or at least downplay—practical actions that can counter the impact of climate change.”

 

Note the title of the article, “I Left Out the Full Truth to Get My Climate Change Paper Published”, and the by-line, “I just got published in Nature because I stuck to a narrative I knew the editors would like. That’s not the way science should work.”  Note also that the link includes the word “overhype”, indicating that the editor had a different title in mind.  This is another claim in itself, although it doesn’t really appear in the text.

 

The paper he co-authored was Climate warming increases extreme daily wildfire growth risk in California.

 

This all raises some key questions.  Who is Patrick T Brown?  Where does he hail from?  And are his claims reasonable?

 

Patrick T Brown is, among other things, a co-director of the Climate and Energy Group at the Breakthrough Institute.  This institute, established by Michael Shellenberger and Ted Nordhaus, is focused on “ecomodernism”, which tends to be in favour of using technology to solve problems – replacing fossil fuels with nuclear energy (not entirely bad), but resisting anything approaching efforts to minimise current reliance on fossil fuels.  To be cynical, they appear to be in the “we don’t need to worry about climate change because we can fix it with our technology” camp of climate deniers.

 

If it were true that there was a real effort by academia to squash efforts to address climate change, this would indeed be a problem.  We should do all the research: research into the impact of human activities on the climate (which we can more easily moderate), into the effects of climate change, and into ways of mitigating those effects.  However, there are journals which address different aspects of science.

 

What does Nature publish?  According to their website:

 

The criteria for publication of scientific papers (Articles) in Nature are that they:

  • report original scientific research (the main results and conclusions must not have been published or submitted elsewhere)
  • are of outstanding scientific importance
  • reach a conclusion of interest to an interdisciplinary readership.

 

Note that they don’t indicate that they publish articles on technological developments (which is where much of the detail on efforts to mitigate climate change would be expected to appear).  However, there is a journal in the Nature stable precisely for that, the open access journal npj Climate Action.  So, the question is, did Patrick T Brown do any original scientific research into other contributions to climate change?  He doesn’t say, so we don’t know.

 

Does Nature refuse to publish papers on natural contributions to climate change?  No – see, for example, Contribution of natural decadal variability to global warming acceleration and hiatus and Indirect radiative forcing of climate change through ozone effects on the land-carbon sink.  Admittedly these are old (about a decade), but there’s no indication that there is new original research into other factors that has been rejected.  There are newer papers on the effect of the release of methane due to melting permafrost, such as this one from 2017: Limited contribution of permafrost carbon to methane release from thawing peatlands.

 

Did Nature give any indication that they didn’t want to publish a paper that talked about other drivers of climate change?  No, the opposite in fact.  His co-author, Steven J Davis (reported at phys.org), said “we don't know whether a different paper would have been rejected.  … Keeping the focus narrow is often important to making a project or scientific analysis tractable, which is what I thought we did. I wouldn't call that 'leaving out truth' unless it was intended to mislead—certainly not my goal.”

 

Nature provides visibility of the peer review comments, available here, and in those comments, there are references to other factors “that play a confounding role in wildfire growth” and the fact that “(t)he climate change scenario only includes temperature as input for the modified climate.”  Two of the reviewers rejected the paper, but neither of them did so on the basis that it mentioned other factors than anthropogenic climate change.

 

In the rebuttal to the reviewer comments, the authors wrote:

 

We agree that climatic variables other than temperature are important for projecting changes in wildfire risk. In addition to absolute atmospheric humidity, other important variables include changes in precipitation, wind patterns, vegetation, snowpack, ignitions, antecedent fire activity, etc. Not to mention factors like changes in human population distribution, fuel breaks, land use, ignition patterns, firefighting tactics, forest management strategies, and long-term buildup of fuels.

Accounting for changes in all of these variables and their potential interactions simultaneously is very difficult. This is precisely why we chose to use a methodology that addresses the much cleaner but more narrow question of what the influence of warming alone is on the risk of extreme daily wildfire growth.

 

We believe that studying the influence of warming in isolation is valuable because temperature is the variable in the wildfire behavior triangle (Fig 1A) that is by far the most directly related to increasing greenhouse gas concentrations and, thus, the most well-constrained in future projections. There is no consensus on even the expected direction of the change of many of the other relevant variables.

 

So the decision to make the study very narrow, in their (or his) own words, was made on the basis of ease and clarity, not to overcome publishing bias.  Perhaps Patrick T Brown was lying.  But there would be little point, since the paper’s authors write:

 

Our findings, however, must be interpreted narrowly as idealized calculations because temperature is only one of the dozens of important variables that influences wildfire behaviour.

 

So, that’s true.  Like much of science, it’s all about trying to eliminate confounding factors and working out what the effect of one factor is (or a limited number of factors).  In this case, the authors have (with the assistance of machine learning) come to the staggering conclusion that if forests are warmer and drier, they burn more.  The main criticism that could be made is that Nature published a paper with such a mundane result.  However, the mechanism, using machine learning, is potentially interesting.  It could easily contribute to modelling – both in predicting the outcomes of various existing models and potentially by being redeployed to improve existing models (or create new and better models).

 

It’s a bizarre situation.  Why did Patrick T Brown, as a climate scientist, do this?  Maybe he has been prevented from publishing something in the past.  Perhaps his new institute (or group) has been prevented from publishing something.  That would be interesting to know.

 

Or is it something else?

 

Well, if you search hard enough, you can find that Patrick T Brown has posted at Judith Curry’s blog back when he was a PhD student.  And if you look at Judith Curry, you will find that she is what Michael Mann labelled a delayer – “delayers claim to accept the science, but downplay the seriousness of the threat or the need to act”.

 

Is it merely coincidence that the Breakthrough Institute for whom Patrick T Brown works, and his fellow ecomodernists, are also the types who appear to accept the science, but downplay the seriousness of the threat of climate change and the need to act, or at least criticise all current efforts to act?

 

---

 

My own little theory is that Patrick T Brown was not so much involved in scoring an own goal in the climate science field, but that he was attempting deliberate sabotage.

Wednesday, 13 September 2023

A Further Departure from MOND

Looking more closely at Milgrom’s Scholarpedia entry on MOND, I found something else that I didn’t like.  It was the method by which he arrives at an equation that I used in the previous post, A Minor Departure from MOND, namely g (in the MOND regime) = √(GMa0)/r.

I was walking the dogs actually, mulling over things, and realised that I couldn’t for the life of me remember how I arrived at that equation.  I must have seen it, got stuck in a mental alleyway and just automatically applied it.  Very embarrassing.

While it works, and seems to work better from one perspective with the different value of a0, it won’t wash if there’s no derivation.  And there’s no derivation.  This is the numerology that I was complaining about a few posts ago.

What Milgrom writes is: “(…) A0 is the “scale invariant” gravitational constant that replaces G in the deep-MOND limit.  The fact that only A0 and M can appear in the deep-MOND limit dictates, in itself, that in the spherically symmetric, asymptotic limit we must have g ∝ (MA0)^(1/2)/r, since this is the only expression with the dimensions of acceleration that can be formed from M, A0, and r.”  The term A0 had been introduced earlier in the text: “A0 is the “scale invariant” gravitational constant that replaces G in the deep-MOND limit. It might have been more appropriate to introduce this limit and A0 first, and then introduce a0 ≡ A0/G as delineating the boundary between the G-controlled standard dynamics and the A0-controlled deep-MOND limit.”
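To make the dimensional claim concrete, here's a small sketch of the deep-MOND acceleration g = √(GMa0)/r against the Newtonian value, using Milgrom's canonical a0 ≈ 1.2×10⁻¹⁰ m/s² (not the value discussed in the previous post) and an illustrative solar mass at a galactic-scale radius:

```python
import math

# Deep-MOND acceleration g = sqrt(G*M*a0)/r versus Newtonian g_N = G*M/r^2.
G = 6.674e-11        # m^3 kg^-1 s^-2
a0 = 1.2e-10         # m/s^2, Milgrom's canonical value
M = 1.989e30         # kg, one solar mass (illustrative)
r = 3.086e20         # m, about 10 kiloparsecs (illustrative)

g_newton = G * M / r**2
g_mond = math.sqrt(G * M * a0) / r
print(g_newton, g_mond)    # the deep-MOND value is far larger where g_N << a0

# The corresponding flat rotation speed in the deep-MOND limit, v^4 = G*M*a0:
v_flat = (G * M * a0) ** 0.25
print(v_flat)              # ~355 m/s for one solar mass; scaling as M^(1/4),
                           # ~2e5 m/s (200 km/s) for a 1e11 solar-mass galaxy
```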

The problem I have is that, in Towards a physical interpretation of MOND's a0, I considered the critical density of our universe, and that very specifically uses the Gravitational Constant (G), and I considered the gravitational acceleration at the surface of a Schwarzschild black hole with the same density as that critical density, and that equation also very specifically uses G.  However, the resultant acceleration would be right on the border between “the G-controlled standard dynamics and the A0-controlled deep-MOND limit”, so there’s an issue right there.

There’s also an issue with the fact that forces are vector quantities – in the case of gravity, directed towards the centre of mass (as the resultant of the summing and cancellation of sub-forces created by every element of the mass).

When considering the surface of a black hole, the gravitational force is towards the centre of the mass of the black hole.  Now, in earlier posts, I have indicated that the density of the universe is the same as the density of a Schwarzschild black hole with a radius equivalent to the age of the universe times the speed of light.  What I have never said, at any point, is that the universe is inside a black hole.

My position has been more that the universe *is* a black hole, which may seem rather esoteric, but the point is that I don’t consider there to be an outside in which there would be a black hole inside of which our universe would sit.  To the extent that there is a universe in which our universe is nestled, that “outer” universe is on the other side of the Big Bang.  So it’s not so much a “where” question, but rather a “when” question.

But even then, it’s not correct to say that the “outer” universe is in our past, because time in that universe was/is orthogonal to our time, and in the same way the spatial dimensions of the “outer” universe were/are orthogonal to our spatial dimensions.

(I know this is difficult to grasp initially, but this video may go some way to explaining a version of the concept.)

This introduces another issue.  If we could, in any way, consider our universe to be a black hole in an “outer” universe, then our universe would be smeared across the surface of that black hole and any gravitational force due to the total mass of that black hole would be orthogonal to our spatial dimensions.

So, while it’s tempting to consider a value of a0 that is linked to the mass of a black hole with the dimensions of a FUGE universe, it doesn’t seem supportable.

I had tried a method, considering the curvature of the “fabric of spacetime”, but I suspect that it introduces more problems than it solves.


An image like this illustrates curvature of two dimensions, but it represents curvature of three dimensions.  We could eliminate another dimension, to get something like this:

In this image, the notional gravitation that a0 would represent would be a vector field throughout with a downwards trajectory.  Without a mass deforming spacetime, that vector field would be orthogonal to it, but with any deformation, there would be a component that is not orthogonal.

It made sense at the time, since it does tie the effect of a gravitational force that should be uniform throughout the universe to a mass that is deforming spacetime but I don’t have any confidence that it works, since the upshot would be additional deformation, which could have a potential runaway effect.

Someone else might have an idea as to how this could work, even if it seems to me to be a dead-end.