Wednesday 4 October 2023

Light that is faster than light?

In the paper Double-slit time diffraction at optical frequencies, the authors describe using “time slits” to demonstrate interference between two pulses of light that are separated in time.  This was interpreted by Astrum as indicating that light can travel faster than the speed of light.

 

I’ve not been able to find anyone else who believes this, nor any follow-up paper on time slits that concludes that light travels faster than the speed of light.  Nor does the original paper make clear that the set-up described by Astrum is what the authors actually had.

 

However, it’s an interesting thing to think about.  The question that immediately came to mind for me was: what was the temporal separation between the time slits, and how does that compare to the spatial separation between the source of the light pulses and the location where the time slits were instantiated (using a metamaterial that swiftly changes from mostly transparent to mostly reflective and back – or, as they put it, “creating” time slits by inducing an ultrafast change in the complex reflection coefficient of a time-varying mirror)?

 

This is the image that Astrum uses to illustrate the concept (noting that none of the light illustrated here is claimed to travel faster than light; that bit comes later in the video):


We actually have enough information to work out approximately how far the transmitter and target must be from the time-varying mirror.  The slits are separated, in one instance and according to the paper, by 2.3 picoseconds.  The transmitter is at very slightly more than 4 picolightseconds from the time-varying mirror, or a little over 1 mm.  There is a mention of separations of 800 femtoseconds, which would reduce everything by a factor of about three, and 300 femtoseconds (when the slits begin to merge), which would reduce it by another factor of about 2.5.
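As a rough sanity check of those unit conversions (my own back-of-the-envelope sketch, using nothing from the paper beyond the time separations quoted above):

```python
# Convert the time separations discussed above into distances.
# Only the value of c is fixed; the time figures are those quoted above.
c = 299_792_458  # speed of light in m/s

for label, seconds in [
    ("2.3 ps slit separation", 2.3e-12),
    ("4 picolightseconds (implied source-mirror distance)", 4.0e-12),
    ("800 fs slit separation", 800e-15),
    ("300 fs slit separation", 300e-15),
]:
    print(f"{label}: {c * seconds * 1000:.2f} mm")

# 2.3 ps -> ~0.69 mm, 4 ps -> ~1.20 mm ("a little over 1 mm"),
# 800 fs -> ~0.24 mm, 300 fs -> ~0.09 mm
```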

 

I suspect that this is not actually the case in the experiment.  I suspect that the source-mirror separation is going to be in the range of 10 cm, at least – two orders of magnitude greater than the ~1 mm implied by the illustration.  It could be as much as a metre or more, adding another order of magnitude.

 

Note also that the period of increased reflectivity is of the order of 0.5 picoseconds (or 500 femtoseconds):

 

The implication is not trivial, because Astrum has created an image in which the second pulse is initiated after the first pulse has already been reflected and the metamaterial has gone back to being transparent (watch the video for clarification; the image has been simplified to illustrate his point).

 

I think it’s more likely to be the case that, when the second pulse is transmitted, the reflection-state for the first pulse has not even commenced.  Revisit the image above and move the source away by a factor of 100.  Even a factor of 10 would put the launch of the second pulse well before (below, on the diagram) the period in which the metamaterial is reflective for the first pulse.
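To make that concrete, here is a rough timeline, assuming (and this is my assumption, not anything stated in the paper) a source-mirror separation of 10 cm, together with the 2.3 ps slit separation and the ~0.5 ps reflectivity window mentioned above:

```python
# Rough timeline for the two pulses, taking t = 0 as the launch of the first pulse.
# The 10 cm separation is an assumption; the other figures are from the discussion above.
# I also assume the first slit opens when the first pulse reaches the mirror.
c = 299_792_458        # m/s
separation = 0.10      # assumed source-mirror distance (m)
slit_gap = 2.3e-12     # time between the two pulses / time slits (s)
window = 0.5e-12       # duration of each reflective period (s)

transit = separation / c          # one-way travel time to the mirror
second_pulse_leaves = slit_gap    # second pulse launched 2.3 ps after the first

print(f"one-way transit time:    {transit * 1e12:6.1f} ps")
print(f"second pulse leaves at:  {second_pulse_leaves * 1e12:6.1f} ps")
print(f"first reflection window: {transit * 1e12:6.1f} to {(transit + window) * 1e12:6.1f} ps")

# At 10 cm the second pulse leaves the source roughly 330 ps before the mirror
# becomes reflective for the first pulse; even at 1 cm it still leaves ~31 ps early.
```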

 

Why does this matter?

 

First, we need to think about another problem.  Let’s pretend that it’s ok to do what Einstein did and have a thought experiment in which we imagine riding on a beam of light.  Some physicists don’t like you doing this, so we may need to be careful.

 

Say we are travelling in a vacuum parallel to an enormous ruler that is L0 = 1 light-second long.  How long is that ruler in our frame?  Consider the ruler to be stationary (and pretend for the briefest moment that the question “relative to what?” doesn’t come up) so that we, riding on the beam of light, are travelling at v = c relative to it until we hit a target at the far end.

 

The equation for length contraction is L = L0√(1 − v²/c²), meaning that the length of the ruler, in our frame, the frame of the beam of light (or photon), is 0 light-seconds.  The time taken to travel the full length of the ruler is 0 seconds.  The same applies if we double the length of the ruler, and keep on doubling it, or halve it, and so on.  Irrespective of how long the ruler is, as soon as the beam of light starts travelling along it, within its own frame, it has already finished travelling along it.  It’s as if the beam of light simply teleported from the source at one end of the ruler to the target at the other.
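Spelling that out numerically (just the standard length-contraction formula, with v expressed as a fraction of c):

```python
# Contracted length of the L0 = 1 light-second ruler as seen from a frame
# moving past it at speed v = beta * c, using L = L0 * sqrt(1 - v**2 / c**2).
from math import sqrt

L0 = 1.0  # ruler's rest length, in light-seconds

for beta in (0.9, 0.99, 0.9999, 1.0):
    L = L0 * sqrt(1.0 - beta**2)
    print(f"v = {beta:.4f} c  ->  L = {L:.6f} light-seconds")

# 0.9 c -> ~0.44, 0.99 c -> ~0.14, 0.9999 c -> ~0.014, and at v = c the length is exactly 0.
```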

 

Now remember that we are on a beam of light.  A beam consists of a multitude of photons, each travelling through the vacuum at the speed of light, c.  And imagine that there are some motes of dust in the way, halfway along the ruler, some of which are struck by photons that therefore only travelled 0.5 light-seconds (in the ruler’s frame), in a travelling-frame period of 0 seconds – each photon getting to its mote as soon as it sets off.

 

How does this happen?  How does each photon “know” to travel only halfway along the ruler (which has no length anyway in its frame) and not the full length (or to just keep going)?

 

One possibility (in the weak sense of that word) is that each photon does in fact teleport from its starting position to its final position – with a delay due to the maximum speed at which information propagates.  But this implies an ability to predict the future, since photons only hit the motes of dust that are there at the time that the path of the light intersects them, so they would have to predict where to teleport to.  We can put that idea aside.

 

The idea that comes to mind is that the photon is effectively smeared across the entirety of its path until it is caused to decohere by an interaction with something (hence the need to specify “speed in a vacuum”).

 

The consequence of this is that, so long as there is a spacetime path from source to target, some element of the photon takes it.  And there’s no limitation on whether that path is time-like (Δx/Δt<c), space-like (Δx/Δt>c) or light-like (Δx/Δt=c).  What it won’t do, however, is go back in time, as the imagery produced by Astrum implied when he presented this:

 

 

I would understand it more like this:



Note that the dashed horizontal lines are there to emphasise that the source events are in the past relative to the reflection events (the tall red boxes), and the reflection events are in the past relative to the capture events (the screens).  I have also emphasised that the source and screens are persistent over time (along the y-axis) and don't move (so unchanging along the x-axis).


There is always a potential path between the source and the screen, dependent on the state of the metamaterial (indicated as a black line when transparent and red when reflective – using the same convention as Astrum) at the time (in the laboratory frame) that the beam of light gets there.  There is no need to postulate that photons went backwards in time in anyone’s frame.

 

The light blue and light green shaded areas indicate the spacetime region over which the light beam and individual photons are smeared, terminating at the screen event when and where the photons decohere.  Interference would result where those shaded areas overlap.

 

So, there’s a hypothesis.  Can it be tested?


---


Oh, and in answer to the question in the title ...  Yeah, nah.  I don't think so.


Also, there's a characteristic of square waves that may not be well understood by many.  The more square a wave looks, the more component sine waves (harmonics) are required to generate it.  The ramification of this is that, in the frequency domain, a square wave is very wide - smeared out, one could say - and the more extreme that is (so if you have a short duty-cycle square wave), the more spread out the frequencies are.
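One way to see that smearing is to compare the spectra of two rectangular pulses of different widths.  The numbers below are arbitrary and purely illustrative, but the pattern (shorter in time means wider in frequency) is the point:

```python
# Compare how far out in frequency the spectrum of a rectangular pulse stays
# significant, for pulses of different widths.  Shorter pulse -> wider spectrum.
import numpy as np

n = 4096  # total number of samples in the record

def significant_bandwidth(pulse_width):
    """Index of the highest frequency bin still above 10% of the spectral peak."""
    signal = np.zeros(n)
    signal[:pulse_width] = 1.0                 # rectangular pulse of the given width
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum /= spectrum.max()
    return int(np.max(np.nonzero(spectrum > 0.1)))

for width in (512, 64, 8):
    print(f"pulse width {width:4d} samples -> significant out to bin {significant_bandwidth(width)}")

# The 8-sample pulse spreads its energy over far more frequency bins than the
# 512-sample pulse - narrowing in the time domain smears the frequency content out.
```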


It'd be interesting to know how these frequencies travelled, and whether, as a consequence of the turning on and off, a bow wave and wake of sorts were transmitted, and whether they could interact to cause interference without the need to posit the smearing of photons (although such a scenario would not resolve the issue of how a photon "knows" where to stop if blocked by something in its path).


---


Note, from just before 15:00 in the video, Astrum talks about lightning finding its "optimum route" and implies that the optimum path for a photon might involve travelling backwards in time (see about a minute or so earlier, when he states "light always travels the path of least time").  I reject the latter notion, but the idea that photons are smeared over the spacetime between source and target is similar to the notion of finding the optimum path, with the photon effectively sampling the entire range of options.  So, in that sense, it would be the process of sampling options that leads to interference.

Sunday 1 October 2023

Thinking Problems - Lab Leak


This is Fu (it's his name in PowerPoint).  He's our nominal Patient O (also sometimes styled as 0, or Zero) for Covid-19, caused by the virus SARS-CoV-2.  Behind him is a potential other person in the chain, whom we can call Fu2 – a hypothetical intermediate human carrier of the virus who didn't come down with Covid-19 and who may or may not exist.  As they collectively are the portal of the virus into humanity, we can just refer to the Fu/Fu2 nexus as Fu, keeping in mind that there may have been that human-to-human mechanism right at the start.

We don't know how Fu got infected with SARS-CoV-2, but there are some theories, indicated by the lines.

It could be entirely natural, noting that there are some variants of that theory, some of which have the virus being shared between different animal vectors as it evolved (some of which might have been human) – that's what the additional dotted box means.  In this scenario, Fu interacted with an animal that had the virus – in the wild, at a market or somewhere else – and got Covid-19.

SARS-CoV-2 could have been genetically engineered in a lab, and then Fu could have been deliberately infected with it.  This would imply that SARS-CoV-2 had been developed as a biological weapon.

Alternatively, there could have been infection from a petri dish, test tube or surface in a lab where the virus was being genetically engineered – as a biological weapon, in a gain-of-function effort to develop better methods for treating coronaviruses more widely (vaccines, antivirals and the like), or just out of scientific curiosity (i.e. pure research).

Finally, there could have been a crossover from an animal infected with SARS-CoV-2 that was being treated, dissected, studied or whatever in a lab.  This may have been with the intent to develop a biological weapon, or to do some gain-of-function work for benign reasons, but in this case there had not (yet) been any genetic engineering carried out.

Note that the purple arrows are pointing at the boxes, not at any of the other arrows.  The amount of evidence for each event is nominal; the size of the bubble could also relate to the quality of evidence, rather than mere quantity.  Note that it's evidence, not proof.  Some evidence might support multiple possibilities.

---

I think I have captured all the possibilities being thought of seriously.  Even if there is some bizarre vector, like aliens or the New World Order doing the genetic engineering and deliberately injecting Fu, this still falls into the category "Genetic Engineering".  The same goes for a god doing it; it's just that the technology would be different (supernatural genetic engineering).  If there is something that I have missed, I am more than happy to go through it and try to weave it in.

Note that even with genetic engineering, there was still a natural origin of the base virus that was being fiddled with.  So, there is naturally going to be a lot of evidence for natural origins.  I'm not really thinking about evidence that supports all cases, just delta evidence.  Those cases are (with arrow type in parentheses):

  • purely natural – Natural Origins→Fu (large red)
  • simple leak from a lab – Natural Origins→Leak from a Lab→Fu (small orange)
  • deliberate infection – Natural Origins→Genetic Engineering→Fu (tiny grey)
  • complex direct leak from a lab – Natural Origins→Genetic Engineering→Leak from a Lab→Fu (large green)
  • complex indirect leak from a lab – Natural Origins→Genetic Engineering→Leak from a Lab→Natural Origins→Fu (small blue) – so we can think of zoonosis as “natural”, in a sense, even if the virus were to be tinkered with at some point.

There is one other that I identified after I put the image together, namely Natural Origins→Leak from a Lab→Natural Origins→Fu.  The notion here is that the virus was transferred from where it normally is (in a bat, in a cave, somewhere in southern China) to a lab, got into another animal (a pangolin, a civet cat or one of those adorable raccoon dogs), and then that other animal became the vector for transmitting SARS-CoV-2 into humans.

There is also the possibility of a pre-SARS-CoV-2 virus being carried from a lab to the animal (via an intermediate human infection), with mutation(s) then happening in an animal or range of animals – resulting in a variant that became known as the Wuhan strain of SARS-CoV-2.

I’m not specifying a lab, although there are two candidates that seem more reasonable than any others given the location of the first outbreak – the Wuhan Institute of Virology and the Wuhan Centre for Disease Control (about a quarter of a kilometre from the Huanan Seafood Market [also variously known as the Huanan Wholesale Market and the Huanan Wholesale Seafood Market]).  It’s somewhat less likely that any leak occurred at another of the many labs in large cities in China and then got carried to Wuhan to break out there – about as likely as Chinese authorities deliberately releasing a deadly virus on the doorstep of their major virology institute.

---

The problem, as I see it, is that the light blue ellipse encompasses what some people refer to as a "lab leak", also indicated by the larger green arrow – implying genetic engineering in a lab with an accidental release, possibly of a biological weapon but, at the very least, of some questionable gain-of-function research.  They then take any evidence that there might have been a leak from a lab as evidence for genetic engineering, which it isn't.

I suspect that there's a similar problem on the other side, in that initial discussions of a "lab leak" included the assumption that it encompassed both a leak from a lab and genetic engineering, so they weren't counting direct transmission from an animal to a human inside a lab (or even just a SARS-CoV-2 sample from an animal, onto a surface or into a test tube and thence to a human) as a "lab leak".  So they were saying that a "lab leak" was considered extremely unlikely when, in reality, a leak from a lab is entirely possible, and they should have said more clearly that genetic engineering is extremely unlikely (for various reasons) but not entirely impossible.

It isn't helped by the fact that dog-whistles are used on both sides, and that the same term sometimes means quite different things.

---

If something seems unclear, please let me know.