In the paper Double-slit time diffraction at optical frequencies, the authors describe using “time slits” to demonstrate interference between two pulses of light that are separated in time. This was interpreted by Astrum as indicating that light can travel faster than the speed of light.
I’ve not been able to find anyone else who believes this, nor any follow-up paper on time slits that concludes that light travels faster than the speed of light. Nor does the paper make clear that the set-up described by Astrum is the one they had.
However, it’s an interesting thing to think about. The question that immediately came to mind for me was: what was the temporal separation between the time slits, and how does that compare to the spatial separation between the source of the light pulses and the location where the time slits were instantiated (using a metamaterial that swiftly changes from mostly transparent to mostly reflective and back – or, as they put it, “creating” time slits by “inducing an ultrafast change in the complex reflection coefficient of a time-varying mirror”)?
This is the image that Astrum uses to illustrate the concept
(noting that none of the light illustrated here is claimed to travel faster
than light, that bit comes later in the video):
We actually have enough information to work out approximately how far the transmitter and target must be from the time-varying mirror. The slits are separated, in one instance and according to the paper, by 2.3 picoseconds. The transmitter is slightly more than 4 picolightseconds from the time-varying mirror, or a little over 1 mm. There is also mention of separations of 800 femtoseconds, which would reduce everything by a factor of about three, and of 300 femtoseconds (when the slits begin to merge), by another factor of about 2.5.
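The conversions above from picoseconds to distances can be sketched in a few lines (the function name is mine, and the values are the ones quoted from the paper):

```python
# Back-of-envelope conversion of quoted timings into light-travel distances.
C = 299_792_458.0  # speed of light in vacuum, m/s

def light_travel_distance_mm(seconds):
    """Distance light covers in vacuum in the given time, in millimetres."""
    return seconds * C * 1e3

print(f"2.3 ps slit separation : {light_travel_distance_mm(2.3e-12):.2f} mm")
print(f"4 ps source-mirror gap : {light_travel_distance_mm(4e-12):.2f} mm")
print(f"800 fs separation      : {light_travel_distance_mm(800e-15):.2f} mm")
print(f"300 fs separation      : {light_travel_distance_mm(300e-15):.2f} mm")
```

The 4 ps figure does indeed come out at a little over 1 mm.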
I suspect that this is not actually the case; the source-mirror separation is more likely to be in the range of 10 cm at least, which is two orders of magnitude greater.
It could be as much as a metre or more, adding another order of
magnitude.
Note also that the period of increased reflectivity is on the order of 0.5 picoseconds (or 500 femtoseconds):
The implication is not trivial, because Astrum has created an image in which the second pulse is initiated after the first pulse has already been reflected and the metamaterial has gone back to being transparent (watch the video for clarification; the image has been simplified to illustrate his point).
I think it’s more likely that, when the second pulse is transmitted, the reflection state for the first pulse has not even commenced. Revisit the image above and move the source away by a factor of 100. Even a factor of 10 would put the second pulse below the period in which the metamaterial is reflective for the first pulse.
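This timing claim is easy to check, assuming the 2.3 ps slit separation quoted from the paper and treating the source-mirror distance as a free parameter (the function name and the sample distances are mine):

```python
# A quick check of when the second pulse leaves the source relative to
# when the first pulse can begin reflecting off the time-varying mirror.
C = 299_792_458.0          # speed of light in vacuum, m/s
SLIT_SEPARATION = 2.3e-12  # s, from the paper

def second_pulse_precedes_first_reflection(source_mirror_distance_m):
    """True if the second pulse leaves the source before the first pulse
    has even reached the mirror (i.e. before its reflection can commence)."""
    first_pulse_arrival = source_mirror_distance_m / C
    return SLIT_SEPARATION < first_pulse_arrival

print(second_pulse_precedes_first_reflection(0.0012))  # ~1.2 mm, as inferred above
print(second_pulse_precedes_first_reflection(0.10))    # 10 cm
```

Even at the minimal ~1.2 mm separation, a 2.3 ps gap means the second pulse is emitted before the first pulse reaches the mirror; at 10 cm the margin is two orders of magnitude larger.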
Why does this matter?
First, we need to think about another problem. Let’s pretend that it’s ok to do what
Einstein did and have a thought experiment in which we imagine riding on a beam
of light. Some physicists don’t like you
doing this, so we may need to be careful.
Say we are travelling in a vacuum parallel to an enormous ruler that is L₀ = 1 light second long. How long is that ruler in our frame? Consider the ruler to be stationary (and pretend for the briefest moment that the question “relative to what?” doesn’t come up) so that we, riding on the beam of light, are travelling at v = c relative to it until we hit a target at the far end.
The equation for length contraction is L = L₀√(1 − v²/c²), meaning that the length of the ruler in our frame, the frame of the beam of light (or photon), is 0 light seconds.
The time taken to travel the full length of the ruler is 0 seconds. The same applies if we double the length of
the ruler, and keep on doubling it, or halve it and so on. Irrespective of how long the ruler is, as
soon as the beam of light starts travelling along it, within its own frame, it
has already finished travelling along it.
It’s as if the beam of light simply teleported from the source at one end of the ruler to the target at the other.
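The contraction can be tabulated with a short sketch of L = L₀√(1 − v²/c²) (the function name is mine):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def contracted_length(l0, v):
    """Length of an object of rest length l0, as measured from a frame
    moving at speed v relative to it: L = L0 * sqrt(1 - v^2/c^2)."""
    return l0 * math.sqrt(1.0 - (v / C) ** 2)

L0 = C * 1.0  # a 1 light-second ruler, expressed in metres
for fraction in (0.5, 0.9, 0.99, 0.9999, 1.0):
    print(f"v = {fraction:.4f} c -> L = {contracted_length(L0, fraction * C):.6g} m")
```

The last line of output is the case in the text: at v = c the ruler's length in the travelling frame is exactly zero, however long it is at rest.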
Now remember that we are on a beam of light. A beam consists of a multitude of photons, each travelling through the vacuum at the speed of light, c. And imagine that there are some motes of dust in the way, halfway along the ruler, some of which are struck by photons that therefore travelled only 0.5 light seconds (in the ruler’s frame), in a travelling-frame period of 0 seconds, each getting to its mote as soon as it sets off.
How does this happen?
How does each photon “know” to travel only halfway along the ruler
(which has no length anyway in its frame) and not the full length (or to just
keep going)?
One possibility (in the weak sense of that word) is that each
photon does in fact teleport from starting position to final position – with a
delay due to the maximum speed at which information propagates. But this implies an ability to predict the
future, since photons only hit the motes of dust that are there at the time
that the path of the light intersects them, so they would have to predict where
to teleport to. We can put that idea
aside.
The idea that comes to mind is that the photon is effectively smeared across the entirety of its path until it is caused to decohere by an interaction with something (hence the need to specify “speed in a vacuum”).
The consequence of this is that, so long as there is a spacetime path from source to target, some element of the photon takes it. And there’s no limitation on whether that path is time-like (Δx/Δt < c), space-like (Δx/Δt > c) or light-like (Δx/Δt = c). What it won’t do, however, is go back in time, as the imagery produced by Astrum implied when he presented this:
I would understand it more like this:
Note that the dashed horizontal lines are there to emphasise that the source events are in the past of the reflection events (the tall red boxes) and the reflection events are in the past of the capture events (the screens). I have also emphasised that the source and screens are persistent over time (the y-axis) and don't move (so unchanging on the x-axis).
There is always a potential path between the source and the screen, dependent on the state of the metamaterial (indicated as a black line when transparent and red when reflective - using the same protocol as Astrum) at the time (in the laboratory frame) that the beam of light gets there. There is no need to postulate that photons went backwards in time in anyone’s frame.
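The three path types mentioned above can be captured in a trivial classifier, using units in which c = 1 (times in seconds, distances in light-seconds); the function name is mine:

```python
def classify_path(dx, dt):
    """Classify a spacetime separation by comparing |dx| with |dt|,
    in units where c = 1 (so |dx/dt| < 1 means slower than light)."""
    if abs(dx) < abs(dt):
        return "time-like"
    if abs(dx) > abs(dt):
        return "space-like"
    return "light-like"

print(classify_path(0.5, 1.0))  # ordinary sub-luminal travel
print(classify_path(2.0, 1.0))  # would require faster-than-light travel
print(classify_path(1.0, 1.0))  # a photon's path through vacuum
```

On the smearing picture, elements of the photon occupy all three kinds of path, but every path still runs forwards in time in the laboratory frame.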
The light blue and light green shaded areas indicate the
spacetime region over which the light beam and individual photons are smeared,
terminating at the screen event when and where the photons decohere. Interference
would result from where those shaded areas overlap.
So, there’s a hypothesis.
Can it be tested?
---
Oh, and in answer to the question of the title ... Yeah, nah. I don't think so.
Also, there's a characteristic of square waves that may not be well understood by many. The more square a wave looks, the more overlapping and slightly off-set sine waves are required to generate it. The ramification of this is that in the frequency domain, a square wave is very wide - smeared out, one could say - and the more extreme that is (so if you have a short duty cycle square wave), the more spread out the frequencies are.
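That spreading in the frequency domain can be demonstrated with a naive DFT over one period of a rectangular pulse train (the function names and the 1% threshold are my own choices for this sketch):

```python
import math

def dft_magnitudes(samples):
    """Naive DFT magnitudes for the non-negative frequency bins."""
    n_samples = len(samples)
    mags = []
    for k in range(n_samples // 2):
        re = sum(s * math.cos(2 * math.pi * k * n / n_samples)
                 for n, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * n / n_samples)
                 for n, s in enumerate(samples))
        mags.append(math.hypot(re, im) / n_samples)
    return mags

def spectral_spread(duty, n_samples=128):
    """Count frequency bins holding at least 1% of the peak magnitude,
    for one period of a rectangular pulse train with the given duty cycle."""
    wave = [1.0 if n < duty * n_samples else 0.0 for n in range(n_samples)]
    mags = dft_magnitudes(wave)
    peak = max(mags)
    return sum(1 for m in mags if m >= 0.01 * peak)

print(spectral_spread(0.5))   # 50% duty cycle: fewer significant bins
print(spectral_spread(0.05))  # 5% duty cycle: energy spread across many more bins
```

The shorter the duty cycle, the more bins carry significant energy – the "smeared out" spectrum described above.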
It'd be interesting to know how these frequencies travelled and whether, as a consequence of the mirror turning on and off, a bow wave and wake were transmitted, and whether these could interact to cause interference without the need to posit the smearing of photons (although such a scenario would not resolve the issue of how a photon "knows" where to stop if blocked by something in its path).
---
Note, from just before 15:00 in the video, Astrum talks about lightning finding its "optimum route" and implies that the optimum path for a photon might involve travelling backwards in time (see about a minute or so earlier, when he states "light always travels the path of least time"). I reject the latter notion, but the idea that photons are smeared over the spacetime between source and target is similar to the notion of finding the optimum path, with the photon effectively sampling the entire range of options. So, in that sense, it would be the process of sampling options that leads to interference.