"...the first trapped-ion space clock..."
Ahem. Let me fix that:
...the first trapped-ion clock IN SPAAAAAAAAACE.
NASA has demonstrated the first trapped-ion space clock in a move that could pave the way for real-time navigation in deep space. Reported in Nature, the results of more than a year's experimentation have shown the ion optical clock has outperformed current space clocks by an order of magnitude. NASA engineers believe the …
"The paper claims that variations in radiation, temperature, and magnetic fields did not seem to limit the performance of the clock, making it suitable for operation in the extreme environment of space."
One presumes any effects of time dilation due to gravity or acceleration are not of concern here?
I.e. they only want an accurate time in the frame of reference of the vessel it's on, rather than comparing it to Earth's frame of reference?
Might have been interesting to at least give a nod in that direction...
I'm wrong about that: it can in fact do a little better. Since it can measure the non-inertialness (linear acceleration, rotation) of its frame, it could, if it wanted, correct for that, assuming it's in flat spacetime. So for instance in the twin 'paradox', if the twin who gets accelerated measures their acceleration and integrates it up suitably, they can predict the time told by a clock the other twin is carrying.
What it can't do is deal with curvature, since it can't measure that (well, not easily, and not unless it's very large, or the curvature is very high indeed).
I would assume it is important to take gravity into account for deep space clocks. GPS satellites near Earth already need special and general relativity corrections.
Unless you have a funny definition of 'closed system', no, they can't. They can in special relativity: because you know spacetime is flat, you can simply integrate your accelerations and rotations to know both where you are and how fast you are moving with respect to some inertial frame (this is just what an IMU does, really, although IMUs are probably not generally relativistically corrected, and, just like an IMU, you'd end up needing to recalibrate yourself all the time). Once you have sorted things out in one inertial frame it's easy to get to any other.
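The "integrate your accelerations" idea is just dead reckoning. A minimal, non-relativistic sketch of the two integrations involved (function name and numbers are purely illustrative):

```python
# Dead-reckoning sketch of "integrate your accelerations": one integration
# for velocity, a second for position, relative to the starting inertial
# frame. Non-relativistic toy; names and numbers are illustrative.

def dead_reckon(accels, dt, v0=0.0, x0=0.0):
    """Integrate accelerometer samples (m/s^2) taken every dt seconds."""
    v, x = v0, x0
    for a in accels:
        v += a * dt   # first integration: velocity
        x += v * dt   # second integration: position
    return x, v

# Constant 1 m/s^2 for 10 s: expect v ~ 10 m/s and x ~ 50 m.
x, v = dead_reckon([1.0] * 10_000, dt=0.001)
```

As the comment notes, errors accumulate with every step, which is exactly why a real IMU needs frequent recalibration.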
In the presence of gravity things become impossible to do locally, since curvature cannot be measured locally (aka the equivalence principle is true). You can do it globally if you know what the field is, but even then things become horrible, since there are no global inertial frames and local inertial frames have varying time differences.
Obviously that is exactly the problem you need to solve if you're using clocks for navigation, but the clock can't solve that.
I pulled the following figures for the orbital radius of the Earth and Mars from Google:
Earth: 149.6 million km
Mars: 227.9 million km
Quick back-of-the-envelope maths says (assuming perfectly circular orbits in the same plane) that they would be 78.3 million km apart at closest approach and 377.5 million km apart when on opposite sides of the Sun.
This may vary a little when taking orbital eccentricities, relative inclinations and other effects into account, but is good for a ballpark figure
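The back-of-the-envelope sums above, spelled out (assuming, as stated, perfectly circular coplanar orbits):

```python
# Closest and farthest Earth-Mars separations, assuming perfectly circular,
# coplanar orbits (figures in km, from the mean radii quoted above).
r_earth = 149.6e6  # mean orbital radius of Earth
r_mars = 227.9e6   # mean orbital radius of Mars

closest = r_mars - r_earth    # same side of the Sun: 78.3 million km
farthest = r_mars + r_earth   # opposite sides of the Sun: 377.5 million km
```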
[Note: too lazy to re-look this up for free, in case of memory issues. Feel free to correct and/or improve this answer's accuracy.]
I've read that light takes something like 8 minutes to get to Earth from the Sun. This article says up to 20 minutes, so if Earth were at its furthest from Mars, on the other side of the Sun, the equation would be (8 + 8 + m = 20) minutes, meaning at its closest it would be (m = 4) minutes.
Edit: rough math, but this seems to fit what 'My-Handle' said: 377.5 / 78.3 ≈ 4.8212...
In the event that Earth and Mars are both at aphelion and Earth is also at “apareon” (making Mars at “apgeon”), the distance would be around 401.3 Gm (22 lightminutes + 18.6 lightseconds).
In the event that Earth and Mars are both at perihelion and Earth is also at “periareon” (making Mars at “perigeon”), the distance would be around 59.605 Gm (3 lightminutes + 18.8 lightseconds).
Whether either of these events could possibly occur or not is a question that astroboffins could answer better than I could.
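Converting those Gm figures into light-travel time is a one-liner; a quick check of the minutes-and-seconds values quoted above (taking c as 0.299792458 Gm/s):

```python
# Light-travel time for the aphelion/perihelion distances quoted above.
C_GM_PER_S = 0.299792458  # speed of light in Gm/s

def light_time(distance_gm):
    """Return (whole minutes, leftover seconds) of one-way light time."""
    return divmod(distance_gm / C_GM_PER_S, 60)

m_far, s_far = light_time(401.3)      # ~22 min 18.6 s
m_near, s_near = light_time(59.605)   # ~3 min 18.8 s
```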
> " NASA's Deep Space Atomic Clock loses one second every 10 million years, as proven in controlled tests on Earth."
Proven? You mean it's been running for 10 million years? It will still be running in 5 million years' time and will have lost half a second?
I suspect they mean "lost time at a rate of one second per 10 million years over a trial period", which is a very different matter.
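Right: the headline figure is a rate, not a completed 10-million-year trial. Expressed as a fractional frequency offset it works out to roughly 3 parts in 10^15, which is the kind of number a trial period of months can actually establish:

```python
# "One second every 10 million years" as a fractional frequency offset.
# At this rate the clock would indeed lose ~0.5 s over 5 million years.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 s

fractional_drift = 1.0 / (10e6 * SECONDS_PER_YEAR)  # ~3.2e-15
```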
"There is just one catch: time. That is, the current Deep Space Atomic Clock has a predicted lifespan of three to five years, well short of what might be needed for a mission to Mars and beyond."
Simple - just take two with you. After 3-5 years, switch on the new one and throw the other one away.
Yes, I understand that it can take a long time for signals to reach a spacecraft, so *real time* updates are not possible. But surely they are not necessary? So long as the signal delay (and also the change in signal delay per unit time) is known accurately, the time correction simply takes that delay into account, and the onboard standard atomic clock can be accurately set no matter how long the signal takes to reach it. The on-board clock can then be used for all real-time position calculations. The on-board atomic clock just has to keep time sufficiently well to be accurate enough between updates. Even if the position and time delay are not initially known with sufficient precision, a series of "pings" to & from the spacecraft can determine the delay and speed of the craft very accurately, even if the round-trip delay of each "ping" is weeks in duration.
"So long as the signal delay (and also change in signal delay per unit time) is known accurately"
That's the point - the signal delay isn't known accurately unless you've agreed clocks at the start of the journey.
"I sent this message at X"
"I received this message at Y"
The delay is only Y minus X *if* you can be sure that neither clock has drifted.
You send a signal at time X and another a short time later at time Y
The spacecraft responds to signal X and Y giving the time they were received according to its uncorrected on-board clock
The ground station uses the delay (as measured by the accurate ground clock) between sending X and receiving the reply to X to compute the distance to the spacecraft as it was when it received the signal. The processing delay between receiving the signal and transmitting the reply must be taken into account, but this is a known constant.
The ground station does the same when it receives the reply to signal Y. This will provide the velocity vector of the spacecraft relative to Earth.
Together with knowing the approximate position & direction of the spacecraft, doing the above over several iterations will allow the computation of successively more accurate estimates of position & direction until the required level of accuracy is achieved. It also allows the ground station to send an accurate clock update that takes the delay into account.
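A toy, one-dimensional version of the two-ping scheme described above, assuming the craft recedes radially and the onboard processing delay is a known constant (every number here is illustrative, not a real mission value):

```python
# Toy 1-D two-ping ranging: distance from each round trip, then a crude
# radial-velocity estimate from the change between pings. The onboard
# processing delay is assumed to be a known constant, as described above.
C = 299792.458  # speed of light, km/s

def range_from_ping(round_trip_s, processing_s):
    """Distance (km) at the moment the craft received the ping."""
    return C * (round_trip_s - processing_s) / 2.0

# Two pings sent 100 s apart; round trips measured on the ground clock:
d1 = range_from_ping(1000.000, processing_s=0.5)
d2 = range_from_ping(1000.001, processing_s=0.5)
radial_velocity = (d2 - d1) / 100.0  # km/s, crude first estimate
```

Iterating this with better position guesses is what tightens the estimate, as the comment says.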
Actually this is a non-obvious problem.
The way things work currently is more-or-less what you say: the position of a spacecraft is measured almost without using a clock on the spacecraft at all. You bounce a signal off it from the DSN on Earth and measure, on Earth, how long that took using a really good clock. You then transmit that information to the spacecraft, which, given its rather grotty clock, can now know where it is, based on where it was when the signal hit it, the time elapsed on the grotty clock, and its velocity (which can be worked out by iterating this process). You can also use this to correct the grotty clock on the spacecraft.
This works, but it's nasty for at least two reasons.
First of all the transmit-listen-time-transmit cycle ties up DSN bandwidth which is very expensive, especially if you want to run lots of spacecraft: you need to go through this cycle for each spacecraft you're running and it takes a long time. You could maybe interleave spacecraft but only if you can train the dish on the ground fast enough.
Secondly, this really doesn't work very well if you don't have a very good idea of the rate of change of velocity. If, for instance, the spacecraft has just done some course correction, then it's spat out some slightly-unknown amount of propellant at a slightly-unknown velocity in a slightly-unknown direction, and although it can now work out pretty much where it is and where it's going, 'pretty much' probably isn't good enough, and it needs to wait for another fix from Earth, which is going to take from seconds (near the Moon) to many hours (near Pluto). That's ... not great if you're doing some close approach or landing on a planet or moon: you don't have hours. Well, that problem perhaps can be solved by super-precise engines and super-precise inertial measurement units on the spacecraft. But it gets worse: if you're doing some close approach to a planet or moon, you're in a slightly-unknown gravitational field which you can't measure: you can't know exactly what your acceleration (in the Newtonian sense) is. And that field may have all sorts of lumps in it (the Moon is notoriously bad in this sense), and, well, you're close to a planet or moon: being in the wrong place or having the wrong velocity may be very, very bad indeed.
So what you want to be able to do is both to solve the first problem without chewing up DSN bandwidth, and to solve the second problem at all.
So the way you do this is that you fly a super-precise clock in the spacecraft. And then you just send one-way timestamped pings from the DSN. Now, so long as you know the various corrections you need to apply to the clock (which can be sorted out in advance based on computed position & velocity unless your spacecraft is very massive!), you just need to listen to these pings, know their arrival time from your super-good clock, and do the computation locally.
And this solves both problems: the DSN is no longer tied up in these long cycles and can transmit timestamps really frequently to potentially lots of spacecraft (it doesn't have to listen to them so the antenna can be a lot less directional); and because there's no long delay and the pings can be really frequent a spacecraft can now know where it is almost immediately.
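The one-way scheme really is that simple once the clock is good enough; a sketch, assuming the onboard clock is held in sync with the ground clock to within the error budget (names and numbers illustrative):

```python
# One-way ranging sketch: with a good onboard clock, distance drops
# straight out of a single timestamped ping, no DSN listen cycle needed.
# Assumes onboard and ground clocks agree to within the error budget.
C = 299792.458  # speed of light, km/s

def one_way_range(sent_s, received_s):
    """Distance (km) from one-way light time, given synchronised clocks."""
    return C * (received_s - sent_s)

# A ping stamped t = 0 arrives 500 s later by the onboard clock:
d = one_way_range(0.0, 500.0)  # ~1.5e8 km, roughly the Earth-Sun distance
```

This is the same trick a GPS receiver uses, which is why a constellation of these around Mars gets you most of the way to a Martian GPS.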
Finally once you have a bunch of these things in orbit around Mars, say, you have the start of a real GPS system for Mars, which is going to help everything.
But to do this you need really good clocks in the spacecraft. The clocks in GPS satellites get corrected from really good clocks on the ground frequently, and you don't want to be doing that for some spacecraft near Mars because it starts eating DSN time again. Hence these.
"you bounce a signal off it from the DSN on Earth and measure, on Earth, how long that took using a really good clock"
An extension of the problem is you don't always know with exact certainty how long the "bounce" process took (i.e. the time taken for the onboard computer to receive the ping, process it, and trigger a transmission back). Even if you know with certainty how many CPU cycles that took, your timing is based on the clock frequency of the onboard CPU, which isn't accurate enough either.
"
Even if you know with certainty how many cpu cycles that took, you're timing is based on the clock frequency of the onboard cpu, which isn't accurate enough either.
"
Then you don't base the delay on the CPU cycles of the CPU clock. You use the CPU to put the reply into a buffer, but use the atomic clock as a timer to decide when to release the buffer to the transmitter.
Or have a dedicated CPU (or hardware state machine) for handling pings, which has its clock derived from the on-board atomic clock instead of a normal Xtal oscillator.
Then you make sure that's not in the loop: either you simply replay the signal you got after some fixed delay, with no hairy digital processing at all, or you read the reply into a buffer which you then read and transmit from using an accurate-enough clock (not the CPU clock, probably), timing from the moment the signal was received.
No computers involved in the round trip measurement. Deep Space spacecraft fly what's known as a coherent transponder. It takes the input signal (e.g. at 7.15 GHz) and generates an output signal at precisely 880/749 * input frequency (usually around 8.45 GHz). Or, at X band up (7.15) and Ka-band down (at ~32 GHz) or other combinations. On the ground, the phase of the received signal is compared with a reference from a hydrogen maser which was used to generate the transmitted signal. Typically, over a path to, say, Jupiter, the time delay (and hence the range) can be resolved on the order of cm, and the range rate (Doppler) to mm/s. That is, on a billion kilometer path (1e12 meters), phase changes corresponding to a few cm can be measured - call it 1 part in 1E14.
The time delay through the transponder and the rest of the radio system is measured on the ground, before launch, over temperature, so that factor can be taken into account.
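The turnaround arithmetic in the comment above is worth spelling out: the downlink is phase-coherent at exactly 880/749 times the uplink, so the ground can compare received phase against the same hydrogen-maser reference that generated the transmission. A quick check of the quoted frequencies:

```python
# DSN coherent-transponder turnaround: downlink = 880/749 * uplink, so the
# received phase can be compared against the maser reference that generated
# the uplink. X-band figures from the comment above.
from fractions import Fraction

TURNAROUND = Fraction(880, 749)  # X-band up / X-band down turnaround ratio

uplink_ghz = 7.15
downlink_ghz = float(TURNAROUND) * uplink_ghz  # ~8.40 GHz, i.e. X-band down
```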