Is this fake news or just another smear campaign?
Google's turned on a set of public network time protocol (NTP) servers. You'll find the servers at time.google.com, which resolves to 216.239.35.0, a rather less pretty IP address than the 8.8.8.8 and 8.8.4.4 Google uses for its public domain-name system (DNS) servers. There's also time2.google.com at 216.239.35.4, time3. at …
You don't use the calendar time set by NTP for anything other than human-readable output. Any program that is remotely sensitive to time differences should be using the monotonic API, which is just a value incrementing at a constant rate, completely unaffected by leap seconds or any other adjustment to the calendar time.
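To make the distinction concrete, here's a minimal Python sketch (the helper name is my own, not from the comment) of timing an operation with the monotonic clock rather than calendar time:

```python
import time

def timed_call(fn):
    """Measure a duration with the monotonic clock, which only moves
    forward at a steady rate and ignores NTP steps and leap seconds."""
    start = time.monotonic()
    result = fn()
    elapsed = time.monotonic() - start  # safe even if wall time is stepped
    return result, elapsed

# time.time() (calendar time) is for display only; never subtract two
# wall-clock readings to time an operation.
result, elapsed = timed_call(lambda: sum(range(1000)))
print(f"result={result}, took {elapsed:.6f}s")
```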
What about sub-second units?
Taking it as you have said, for events which are timed at the milli- or microsecond level, you could have an event which occurred after another appearing before it. Otherwise, you would need to either "smear" that last second or repeat the final milli/micro/nano/picosecond.
The "smearing" approach is probably the most sensible method in the vast majority of cases.
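As a rough illustration of how a linear smear works (the window length and leap epoch here are made up for the example; Google's published smear runs noon to noon around the leap second), each second inside the window is stretched slightly so the clock never steps or repeats:

```python
def smeared_offset(t, leap_epoch, window=86400.0):
    """Return the smear offset (in seconds) applied at wall time t.

    The extra leap second is spread linearly over `window` seconds
    centred on leap_epoch, so every smeared second is slightly long
    (by 1/window), but time never steps backwards or repeats.
    """
    start = leap_epoch - window / 2
    if t <= start:
        return 0.0
    if t >= start + window:
        return 1.0
    return (t - start) / window

leap = 1_483_228_836  # hypothetical epoch of a leap second
assert smeared_offset(leap - 86400, leap) == 0.0   # before the window
assert smeared_offset(leap, leap) == 0.5           # halfway through
assert smeared_offset(leap + 86400, leap) == 1.0   # fully applied
```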
You mean for those code monkeys who don't know/don't care and don't test?
First point: your software should not crash if time is stepped anyway. What happens then if a machine is off-network for a while and then adjusted to the correct time (manually or by NTP)?
Second point: if you depend on precise time then do it properly! This is not a new issue; it has been documented and implemented in sane systems since the late 1970s. And for those who really need continuous time-scales (e.g. for computing time differences that are correct in any absolute sense) we already have TAI or, even simpler, GPS time.
Just use a private NTP server. Have a couple that lock to GPS with rubidium reference oscillators, and have one that has been nobbled so the precise 1 pulse per second is derived directly from the GPS receiver; the NMEA data from the receiver is intercepted, so you can feed false data and leap-second flags to the server (SyncServer S200).
(Just don't let it out of your network! Not that a good NTP client should ever be happy with a single stratum 1 server, so it would soon spot a problem. Had this in the past with Microsoft servers that supposedly had an NTP service running: as they generate timestamps in software rather than from hardware interrupts, the clients all rejected them because they jittered excessively, so they were flagged as illegitimate servers and blacklisted!)
How the heck do you test a leap-second event from NTP?
By the power of Google searching, first on the list:
Or if you are looking for an easy to deploy commercial solution:
What makes this all very frustrating is that there are already perfectly good solutions to the leap second problem.
If OS developers wrote their OSes to use International Atomic Time instead of UTC as their base timescale, the OS would never need to deal with a leap second.
And there are perfectly good libraries for converting TAI to, e.g., UTC that already handle leap seconds, can do accurate time calculations, etc. One such example is the SOFA library from the IAU.
Like everything else it cannot predict leap seconds, but an OS is already well placed to receive library updates as part of its regular maintenance. Why not this one too? And if every developer used TAI instead of UTC to represent time values then all their calculations would always be correct, with conversion to UTC for display being the only thing that'd be wrong in the absence of updates.
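A minimal Python sketch of that idea, with a deliberately truncated leap-second table (a real implementation would load the full, regularly maintained table, which is what SOFA's Dat routine encapsulates; the helper names here are my own):

```python
import datetime

# TAI - UTC, in seconds, effective from the given UTC date (truncated).
LEAP_TABLE = [
    (datetime.datetime(2012, 7, 1), 35),
    (datetime.datetime(2015, 7, 1), 36),
    (datetime.datetime(2017, 1, 1), 37),
]

def tai_minus_utc(when):
    """Return TAI-UTC in seconds for a given datetime (table is partial)."""
    offset = 34  # value before the first table entry, for illustration
    for effective, delta in LEAP_TABLE:
        if when >= effective:
            offset = delta
    return offset

def tai_to_utc(tai_dt):
    """Convert a naive TAI datetime to UTC. This ignores the edge case
    of the leap second itself, which needs the 23:59:60 representation."""
    return tai_dt - datetime.timedelta(seconds=tai_minus_utc(tai_dt))

print(tai_to_utc(datetime.datetime(2020, 6, 1, 12, 0, 37)))
```

An OS shipping this table as part of regular updates is exactly the maintenance path the comment describes: calculations on the TAI timescale stay correct even if the table goes stale, and only the UTC rendering drifts.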
All sane OSes already handle the leap second properly, except when some code monkey changes it and does not test it, and NTP has this built in (it announces the leap a day in advance so the kernel can step as needed without an NTP packet at the precise change point).
No, this is simply a sop to shitty coders who do not understand the basics of precise time-keeping that have been this way for 40-odd years, i.e. for longer than most of them have lived.
"Like everything else it cannot predict leap seconds, but an OS is already well placed to receive library updates as part of its regular maintenance."
Unless the OS (like XP and earlier) is at EOL. Or the OS is meant to operate in a fixed, non-upgradeable capacity such as an embedded device?
"Unless the OS (like XP and earlier) is at EOL. Or the OS is meant to operate in a fixed, non-upgradeable capacity such as an embedded device?"
You mean the same devices that have been working with leap seconds for years and years now? Let's face it, Windows' default is to update the time using SNTP once per week! So it steps time every week and your system just gets on with life. So what is the beef about a correctly applied 1-second step every ~18 months?
"If OS developers wrote their OSes to use International Atomic Time instead of UTC as their base timescale, ..."
At least according to the documentation, Windows has used "seconds since 1601" as its base timescale for the last twenty years and UNIX has used "seconds since 1970" for rather longer. It has always been my impression that a conversion to UTC is purely a user-interface thing for the benefit of meatware. Any programmer baking Babylonian time-keeping conventions into their design really needs whacking over the head with a two-by-four clue-stick.
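That separation is easy to see in a small sketch: arithmetic happens on the epoch timescale, and UTC only appears at the display boundary (the timestamp value here is arbitrary):

```python
from datetime import datetime, timezone

# Arithmetic on the OS timescale (seconds since 1970) needs no calendar.
deadline = 1_700_000_000          # an epoch timestamp
one_week = 7 * 24 * 60 * 60       # pure arithmetic, no calendar involved
later = deadline + one_week

# Only here, for the benefit of the meatware, does UTC enter the picture:
print(datetime.fromtimestamp(later, tz=timezone.utc).isoformat())
```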
Google could well have added its servers to the NTP time pool, which was established to make it as simple as possible to establish a time base. There's the baseline pool.ntp.org, and if you need a narrower scope, simply prepend your two-character country code (like us.pool.ntp.org).
As for other options, that depends on where you are. For example, WWV (the US clock radio station out of Colorado) is tricky to pick up east of the Appalachians, especially during the day.
Google could well have added its servers to the NTP time pool
No, no, and thrice no! Because Google's NTP servers will be telling the wrong time for about a day after every leap second.
Now you might not care, and many others don't care, because all they want is some sort of time-of-day indicator. But heaven help you if you need millisecond or better accuracy for anything like financial HFT, log file forensics or any number of science applications.
There's also the simple fact that it's Google, and I don't even want to give them a ping that I'm interested in accurate time (you probably guessed from that I'm not using their DNS either :) ).
Actually, I'd be confirming to them that I'm clueless, because the whole time infrastructure relies on everyone doing the exact same. Google doing its own with something so seriously critical instead of collaborating with a large, globally established infrastructure with established processes and protocol for adjustments is only going to cause problems for the people that fall for it.
I'd rather draw stabilised NTP from a number of pool.ntp.org elements or GPS than Google, thanks. I need to keep things running, which means staying compatible with a collaborative approach.
I don't have time for this :).
Google are doing their own thing internally, and have been doing so for a while.
Now that others have access to the compute power in their bit barns, they need a way to sync time between their on- and off-site resources, and running GoogleTime (tm) seems like a sensible option. I'd still suggest pool servers for most people, but for those with ties into the Google systems that need accurate sync (not necessarily time, but sync), this is a good solution.
For a normal home user then the conventional approach of a leap second actually doesn't work too badly - we do seem to have leap seconds figured out reasonably well...
Having computers run on a non-sidereal-dependent clock does seem like a good idea though...
"Now you might not care, and many others don't care, because all they want is some sort of time-of-day indicator. But heaven help you if you need millisecond or better accuracy for anything like financial HFT, log file forensics or any number of science applications."
If your application is SO time-sensitive as to require BOTH precise AND accurate time to less than a second (in the case of HFT, to within 1 µs), then you can probably justify the expense of your own authoritative time source.
"Google could well have added its servers to the NTP time pool
No, no, and thrice no! Because Google's NTP servers will be telling the wrong time for about a day after every leap second."
I think the assumption in this is that Google could have done that and then implemented the leap second along with everyone else instead of having the Google Second.
The problem there is that GPS signals can drift due to atmospheric interference (that's also why your GPS fix tends to drift even when you stand still). They're only good for casual time synchronization, in which case, if you have an internet connection, it's easier just to sync to a time pool, since the traffic is so light even a dialup connection can handle it.
For high-precision, high-accuracy demands, you're probably going to need your own source for consistency.
"the speed of light is inconsistent in atmosphere"
That is why military GPS used two frequencies, to compensate for ionospheric electron-density effects. You can get the same with newer systems like Galileo, and from differential GPS, etc.
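For the curious, the standard first-order correction is the "ionosphere-free" linear combination of the two pseudoranges. A sketch with illustrative numbers (the range and delay values are made up; the carrier frequencies are the published GPS L1/L2 ones):

```python
# Ionospheric delay scales roughly as 1/f^2, so two pseudoranges on
# different frequencies let you solve for and cancel it.
F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def iono_free(p1, p2):
    """Combine L1/L2 pseudoranges (metres) so that first-order
    ionospheric delay cancels out."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

true_range = 20_200_000.0          # metres, a typical GPS satellite range
iono_l1 = 5.0                      # metres of delay at L1 (made up)
iono_l2 = iono_l1 * (F1 / F2)**2   # the same delay is larger at L2
p_if = iono_free(true_range + iono_l1, true_range + iono_l2)
assert abs(p_if - true_range) < 1e-6  # the delay has cancelled
```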
But if you need µs or better time, it's a challenge for the OS, etc., to respond and stamp the network packets with sufficient stability. For that sort of job you use PTP instead:
The 'struct tm' that holds broken-down time (year, minutes, seconds, etc.) has allowed tm_sec to go from 0-60 (instead of 0-59) for this very reason since before I first touched Unix 25 years ago, and presumably from day one for Linux. So Android and iOS should be perfectly fine. As for Windows, who the hell knows?
Now some applications may not be coded properly to expect that extra second and get a tm->tm_sec == 60, but this is hardly the fault of the OS, it is the fault of the application!
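A tiny Python sketch of the application-side fix (Python's own struct_time likewise permits tm_sec up to 61, while datetime rejects 60; the helper name and the clamping policy are my own inventions):

```python
from datetime import datetime

def from_broken_down(year, mon, day, hour, minute, sec):
    """Build a datetime, tolerating the leap second tm_sec == 60.

    An application that blindly feeds broken-down time into datetime
    blows up on a leap second, because datetime only accepts 0-59;
    handling second 60 is the application's job, not the OS's.
    """
    if sec == 60:
        # Represent 23:59:60 as the last representable instant instead
        # of crashing; real code might carry a leap-second flag along.
        return datetime(year, mon, day, hour, minute, 59, 999999)
    return datetime(year, mon, day, hour, minute, sec)

print(from_broken_down(2016, 12, 31, 23, 59, 60))
```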
I'd suggest if you've got something that critically relies on sub-second accuracy then you shouldn't be relying on the OS clock and NTP.
If you don't have your own counter, at the very least you need code to handle fluctuations in the length of a second, because all sorts of things can affect the clock cycles of off-the-shelf hardware running off-the-shelf OSes.
That's not correct. Google use this for their NoSQL solution. Update one comes in at 11:42:10.100 and update two comes into a different server at 11:42:10.102, so update two wins in the eventually consistent model. If they don't do the time smearing, then events can get out of order and applications end up with incorrect data.
You may not agree with the approach of using smeared NTP over their large numbers of servers to perform this but it works very well for them and they seem to have managed distributed transactions better than almost anyone else.
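To be clear, Google's actual machinery is far more sophisticated than this, but the ordering hazard the post describes can be sketched with a plain last-write-wins merge:

```python
def lww_merge(store, key, value, timestamp):
    """Last-write-wins: keep the value with the highest timestamp."""
    current = store.get(key)
    if current is None or timestamp > current[1]:
        store[key] = (value, timestamp)

store = {}
lww_merge(store, "row", "update-1", 10.100)
lww_merge(store, "row", "update-2", 10.102)   # genuinely later, wins
assert store["row"] == ("update-2", 10.102)

# If the clock steps backwards (a repeated leap second), a genuinely
# newer update can carry a smaller timestamp and be silently discarded.
# A smeared clock never repeats, so this cannot happen.
lww_merge(store, "row", "update-3", 9.500)    # newer, but stamped earlier
assert store["row"] == ("update-2", 10.102)   # update-3 was lost
```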
Well, since the problem is due to the effects of tides caused by the Moon, take the military option and Nuke the Moon!
"We stand today on a great threshold of History. After this war with Luna, the lunanites will no longer be able to steal our precious Earth's rotation! >click<."
I read about Google's approach some time ago, and for non-sub-second-critical apps it's a better solution than stepping by 1 sec. If you do need the sub-second stuff then TAI is the way to go. Of course, even things like HFT need to ultimately tie to real-world time but it's much more important to have a unique time ordering of transactions (no, relativity doesn't (yet) apply!) and the reference back to real-world time can be done after the fact from logs, e.g. when litigation requires it.
I've configured my home network now to have two internal NTP servers referenced to the Google ones & then everything else talks to them. I'll see what happens at New Year, probably set up a client with logging on & talking to NPL or Linx.
The article reads as if it's a "new Google thing", which it isn't.
As long as it's done to the standards and everyone follows the same standard, then there should be no problem here. RFC 5905 defines NTPv4, which includes handling of leap time.
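For reference, RFC 5905 carries that leap warning in the top two bits of the first byte of every packet; a minimal decode (the example byte is constructed by hand, not captured from a real server):

```python
# Byte 0 of an NTPv4 packet: LI (2 bits), version (3 bits), mode (3 bits).
LEAP_MEANINGS = {
    0b00: "no warning",
    0b01: "last minute of the day has 61 seconds",
    0b10: "last minute of the day has 59 seconds",
    0b11: "alarm: clock not synchronised",
}

def decode_ntp_header(first_byte):
    """Split byte 0 of an NTPv4 packet into (leap, version, mode)."""
    leap = (first_byte >> 6) & 0b11
    version = (first_byte >> 3) & 0b111
    mode = first_byte & 0b111
    return leap, version, mode

# 0x64 = 0b01_100_100: LI 01 (leap insertion pending), version 4, mode 4 (server)
leap, version, mode = decode_ntp_header(0x64)
assert (leap, version, mode) == (1, 4, 4)
print(LEAP_MEANINGS[leap])
```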
I agree with the other posts stating that this should come from the reference clocks out,
This is a bad idea. Google are free to use whatever system they like internally, but all public-facing NTP servers should agree. Google is deliberately making its servers give the wrong time for 20 hours. If someone uses a mix of Google and non-Google NTP servers for their time then the results will depend on which version of time is in the majority in their list.
Our Linux servers encountered synchronization issues during the last leap second. We were using NTP to sync to the NTP pool servers as well as Google's smeared servers. Needless to say, on the day Linux rejected both time sources. Moral of the tale: do not sync to both at the same time! See:
The problem is we are using an artificial time constant, the second, to measure a variable time activity, the Earth year.
Just go back to the old system of defining a second based on a fraction of a year. Then all that needs to be done is agree annually the length of the second that will be used next year to keep us on track.
It's that old computing problem of using a data index pointer as a real world definer.
Problem is, the second has transcended the year, and an exact measurement of the second is necessary for various scientific and non-scientific purposes. Like it or not, the second is not relative to the year anymore. It's now relative to the speed of light, which IS constant in a vacuum and can apply extraterrestrially as well.
Biting the hand that feeds IT © 1998–2020