How is this a fix for "Linux"? It is a patch to the optional XFS filesystem, which, yes, is included in the Linux kernel along with multiple other filesystems. XFS is not even used by default when installing any major Linux distro; it has to be manually selected.
Linux 5.10 to make Year 2038 problem the Year 2486 problem
The forthcoming Linux 5.10 looks like it will include further fixes for the Year 2038 problem, aka Y2K38. The flaw means that many systems can’t conceive of dates beyond 03:14:07 UTC on 19 January 2038. Y2K was caused by systems representing years with two digits, so that a year ending in two zeroes would be read as 1900. …
COMMENTS
-
Monday 19th October 2020 09:05 GMT Martin an gof
It's long been the default on openSUSE for /home, which by default was created as a separate partition (root is BTRFS). Latest versions of openSUSE no longer propose a separate partition, I'm not entirely certain why. The official line is that
Placing it on a separate directory makes it easier to rebuild the system in the future, or allows to share it with different Linux installations on the same machine.
Never quite understood why that's easier in a directory on the same partition as root than in a separate partition. If you tell the installer that you do want a separate partition for /home it will still default to XFS.
M.
-
Wednesday 21st October 2020 08:32 GMT Martin an gof
Re: A Place for Everything, and Everything in it's Place
A separate partition could be an entirely separate disc though - this is less useful now that "big enough" SSDs are coming down in price, but in the days when I could only afford 64GB or so, having that available for root (and software installs) with /home elsewhere was a good enough compromise. And as someone already pointed out, with LVM you can expand storage relatively easily.
Having home completely separate to root does mean that in the event of a catastrophic failure (and I have had several over the years) which requires a wipe and re-format, user data is safe.
M.
-
Wednesday 21st October 2020 06:52 GMT Anonymous Coward
Re: A Place for Everything, and Everything in it's Place
"Really can't imagine any rationale for many distros to ditch the separate /home partition. It's one of the best things about installing Linux."
Yep, pretty much that. Especially true on SteamOS, since /home is used "only" for games data/execs.
You *will* fill it up, I guarantee that!
-
Monday 19th October 2020 13:41 GMT Anonymous Coward
"It's been default in RHEL since 7"
Maybe, but what about every other distro out there? For example, SteamOS is on ext4, so ext4 will have to be patched as well, plus, indeed, every other FS out there...
SUSE is on Btrfs. ZFS is also quite a thing, if we dismiss the licensing issues.
XFS is only one of many filesystems, and having it patched is only a tiny step towards Linux going fully Y2038-free.
-
-
Monday 19th October 2020 14:26 GMT Dazed and Confused
Most of XFS has been 2038-safe for eons. Haven't checked what's broken, but:
[dazed@microg82 ~]$
[dazed@microg82 ~]$ cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
[dazed@microg82 ~]$
[dazed@microg82 ~]$ grep store1 /proc/mounts
/dev/mapper/vg_microg82-store1 /mnt/store1 xfs rw,seclabel,nosuid,nodev,relatime,attr2,inode64,sunit=128,swidth=512,noquota 0 0
[dazed@microg82 ~]$
[dazed@microg82 ~]$ cd /mnt/store1/share/dazed/tmp
[dazed@microg82 tmp]$
[dazed@microg82 tmp]$ touch -t 203901020304 ts
[dazed@microg82 tmp]$ ls -l ts
-rw-rw-r--. 1 dazed dazed 0 Jan 2 2039 ts
[dazed@microg82 tmp]$
No worries
[dazed@microg82 tmp]$
[dazed@microg82 tmp]$ touch -t 250001020304 ts2
[dazed@microg82 tmp]$ ls -l ts2
-rw-rw-r--. 1 dazed dazed 0 Jan 2 2500 ts2
[dazed@microg82 tmp]$
Given the origin of XFS, I'd have expected it to be pretty clean.
-
Monday 19th October 2020 19:23 GMT NullNix
64-bit time_t on 32-bit platforms is not a thing which has been around for "ages": indeed, the user interface (well, programmer interface, like -D_FILE_OFFSET_BITS) for 64-bit time_t on 32-bit platforms was only finalized earlier this year and has basically not trickled out to anyone yet.
The major advantage of this fix is that it can be applied to existing filesystems with a single traversal over the inodes to fix them up. Going to true 64-bit time_t would require a mkfs (which means most systems would never do it).
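For anyone who wants to see what they're actually getting, a quick sketch (the compile line and file name are just examples, and it assumes a glibc new enough to honour the macros mentioned above):

/* check_time_t.c - hypothetical example file name.
 * Build for a 32-bit target with something like:
 *   gcc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 check_time_t.c -o check_time_t
 * Without the two -D flags, a 32-bit build gets a 32-bit time_t and the
 * post-2038 value below no longer fits. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));

    long long want = 2147483648LL;   /* 2^31 seconds: one second past the Y2K38 cutoff */
    time_t t = (time_t)want;
    if ((long long)t == want) {
        struct tm *tm = gmtime(&t);
        printf("2^31 seconds -> %04d-%02d-%02d: post-2038 dates fit\n",
               tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday);
    } else {
        printf("time_t could not hold 2^31: still a 32-bit time_t\n");
    }
    return 0;
}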
-
Tuesday 9th February 2021 23:18 GMT NullNix
With the release of glibc 2.33 (install with care, I found several bugs and the fixes haven't hit the release branch quite yet), it has now trickled down! This has of course instantly broken OpenSSH because it didn't have all the necessary syscalls in its seccomp filter list... (patch submitted).
-
Monday 19th October 2020 14:08 GMT Anonymous Coward
Re: No need to imagine
2100: Windows ME. (Last version of Windows based on DOS.)
2108: FAT timestamps.
4105: Outlook.
So, no. Not Windows that anyone uses today. Windows proper will next experience a timestamp issue in the year 30,828.
And as for Windows not being around in 80 years time, $DEITY forbid we're still using something based on POSIX a hundred years from now.
-
Monday 19th October 2020 08:59 GMT Phil O'Sophical
Re: Glad to see the legacy of Silicon Graphics living on
while Solaris still has unresolved date issues.
Which ones?
Solaris has been using 64-bit time_t for 10+ years, and should not be bitten by Y2038. Try it.
UFS as an on-disk filesystem will have issues, which are difficult to fix without creating compatibility issues with taking/restoring backups (and I'd expect that also to be true for Linux filesystems with 32-bit timestamps), but that is one of the reasons that Solaris replaced UFS with ZFS a decade ago.
-
-
Monday 19th October 2020 07:59 GMT Grease Monkey
The Y2K problem was known almost since people started recording years as two digits, but the response to it was basically "we'll have stopped using these systems by 1999". Whereas of course I suspect that most of the programmers coding for two-digit years in the sixties and seventies were really thinking "I'll be long retired by then so I don't give a shit". Then of course it just became normal and that's how people did things. Whenever a voice was raised in dissent for the next thirty years it was drowned out. People were still coding two-digit years well into the nineties. Then of course come 1999 huge budgets were expended on fixing code or, in some cases, entirely replacing software or hardware when things couldn't be changed.
You would think that lessons would have been learned, but the approach to this (and other) date rollover issues proves that they weren't.
-
Monday 19th October 2020 08:14 GMT Anonymous Coward
"we'll have stopped using these systems by 1999"
What they didn't factor in was the reluctance of the PHBs to spend any money on maintaining the infrastructure/code. Same old story even today. Leave it until the last minute, or even until it breaks, before they take their heads out of their arses....
-
Monday 19th October 2020 08:32 GMT Anonymous Coward
To this day, I think the most important Y2K fix ever published was the one from the insurance industry around 1998, which basically said "We will not pay out for any disasters caused by date-related calculation errors". As a bonus, this covered 2038 as well...
That got the attention of Upper Management!
-
Monday 19th October 2020 16:53 GMT Martin Gregorie
Some of us mainframers, at least those of us programming ICL kit in the 1960s, 70s and 80s, were used to storing dates as days since 31Dec1889 in 24-bit words, which works well, leap years and all, into the 22nd century.
We still had problems with Y2K, but that was due to the CODASYL gang, which decreed that the ONLY way a COBOL program could access the computer's clock to get the date was with the statement
ACCEPT CURRENT-DATE FROM SYSTEM-DATE.
where CURRENT-DATE was required to have 6 digits that would be filled by a date in the format YYMMDD, and SYSTEM-DATE was a system-defined name specific to the operating system and/or compiler. Unsurprisingly, as the century was not part of this CODASYL definition until sometime in the 1990s, almost all COBOL programs did the same, and consequently they hard-coded the century wherever it was required to be shown. Most programs written in assemblers and, I think, PL/1, together with a lot of 4GL systems, shared this limitation, and these were the systems that caused the Y2K panic.
The only COBOL system I was personally associated with that was written in the early 1980s and dodged that bullet had to deal with a wide range of date formats, some inexact, e.g. 'flourished 980AD' (i.e. they were alive then but we don't know when they were born or died). The required date range extended from the pre-Christian era into the future, to handle planned events. Precision varied equally widely: Euripides was alive in 55BC, birth and death dates unknown; Turold wrote the Song of Roland some time in the 11th century; while John Cage was born 5Sep1912 and died 12Aug1992. We needed to invent our own date representations to make this work, so we stored dates as Xccyymmdd, where X was a code representing the format required for this type of date and controlled both input and display as well as validation rules. Precision was simple - we just set the day, month and year to spaces if they weren't known and the date display code formatted it appropriately.
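For illustration only, a rough C sketch of what an Xccyymmdd-style field might look like; the format codes and the handling here are invented, not the real system's rules:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Rough sketch of an Xccyymmdd date field as described above.
 * X is a format/validation code; spaces in ccyy/mm/dd mean "not known".
 * The codes and the handling are invented for illustration only. */
struct flexdate {
    char format;   /* e.g. 'E' exact, 'C' circa, 'F' flourished (invented codes) */
    int  year;     /* -1 when unknown */
    int  month;    /* -1 when unknown */
    int  day;      /* -1 when unknown */
};

static int field(const char *s, int len)
{
    char buf[8];
    memcpy(buf, s, (size_t)len);
    buf[len] = '\0';
    return (buf[0] == ' ') ? -1 : atoi(buf);   /* leading space means "not known" */
}

static int parse_flexdate(const char *rec, struct flexdate *d)
{
    if (strlen(rec) < 9)
        return -1;                    /* record too short */
    d->format = rec[0];
    d->year   = field(rec + 1, 4);    /* ccyy */
    d->month  = field(rec + 5, 2);    /* mm   */
    d->day    = field(rec + 7, 2);    /* dd   */
    return 0;
}

int main(void)
{
    struct flexdate d;
    if (parse_flexdate("F0980    ", &d) == 0)   /* "flourished 980AD": month/day unknown */
        printf("format %c: year %d, month %d, day %d\n",
               d.format, d.year, d.month, d.day);
    return 0;
}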
-
Monday 19th October 2020 16:07 GMT Martin
Us system programming types used binary.
Which of course worked, but needed extra software (which takes space, which we couldn't afford) to display the date. And also, of course, it took extra time to execute, which mattered with real-time systems.
It was always an interesting balancing act, trying to get as much out of a microcontroller as you could with as little cost as possible. It was great fun, but I don't think I'd want to go back to those days!
-
Monday 19th October 2020 21:32 GMT John Brown (no body)
"Us system programming types used binary. Years up to 65,535 in two bytes, no problem. Even one byte got you past 2200 with a 1970 epoch."
I once wrote a video rental management system back in the early 80s and I couldn't afford a whole byte for the year, let alone two! Even then, I knew the system would not be in use in 10 years' time (it wasn't!), so I used 4 bits each for month and year, 5 bits for the day, and the remaining three bits for rental status. Film titles were stored using a limited character set (uppercase letters, numbers and some symbols) at 6 bits per character, with a couple of functions to convert a data string to and from the simple compression format. Storage was a pair of 180KB floppies and only 48K RAM to play with (or whatever was left after LDOS loaded on a TRS-80).
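For the curious, roughly what that packing looks like in C. The field widths follow the description above, but the bit ordering is a guess, and the original would of course have been hand-rolled for the TRS-80:

#include <stdio.h>
#include <stdint.h>

/* Sketch of the 16-bit packed rental record described above:
 * 4 bits year-within-decade, 4 bits month, 5 bits day, 3 bits rental status.
 * The field order is arbitrary - it isn't specified in the post. */
static uint16_t pack(unsigned year, unsigned month, unsigned day, unsigned status)
{
    return (uint16_t)(((year   & 0x0Fu) << 12) |
                      ((month  & 0x0Fu) <<  8) |
                      ((day    & 0x1Fu) <<  3) |
                       (status & 0x07u));
}

static void unpack(uint16_t rec, unsigned *year, unsigned *month,
                   unsigned *day, unsigned *status)
{
    *year   = (rec >> 12) & 0x0Fu;
    *month  = (rec >>  8) & 0x0Fu;
    *day    = (rec >>  3) & 0x1Fu;
    *status =  rec        & 0x07u;
}

int main(void)
{
    unsigned y, m, d, s;
    uint16_t rec = pack(4, 10, 19, 2);   /* e.g. year "4" (1984), 19th October, status 2 */
    unpack(rec, &y, &m, &d, &s);
    printf("record 0x%04X -> year %u, month %u, day %u, status %u\n",
           rec, y, m, d, s);
    return 0;
}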
-
-
Monday 19th October 2020 19:39 GMT martinusher
Hardly a big deal. The original rationale for using two (BCD) digits is that it only used two columns on a punched card. There are a number of techniques for squeezing too much number into too little space (it's a common problem in the fixed-point world); it just requires some ingenuity and a code tweak. It was never the "End Of The World As We Know It" scenario that it was hyped up to be, with aircraft crashing, power grids collapsing and darkness reigning supreme over the Earth (worst case with the power grid scenario would be that the lights stay on but they couldn't bill us for the power).
The worst that will happen to most systems that still use a 32-bit seconds counter for a clock is that they will suffer a temporary glitch as the clock wraps over (try it). As the article points out, we should have moved on by then; if the systems absolutely need to keep absolute time then they'll have long since shifted to a 64-bit time counter.
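It is easy enough to try without waiting 18 years. A minimal C sketch of the wrap, assuming a host with a 64-bit time_t so the wrapped value can still be displayed:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* What a signed 32-bit seconds counter does at the Y2K38 boundary.
 * Run on a host with a 64-bit time_t so both values can still be formatted. */
static void show(const char *label, int64_t secs)
{
    time_t t = (time_t)secs;
    char buf[32];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&t));
    printf("%s %s UTC\n", label, buf);
}

int main(void)
{
    show("last 32-bit second:", INT32_MAX);   /* 2147483647 -> 2038-01-19 03:14:07 */
    show("one tick later    :", INT32_MIN);   /* wraps to -2147483648 -> 1901-12-13 20:45:52 */
    return 0;
}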
-
-
Monday 19th October 2020 08:39 GMT Peter Gathercole
Linux kernel
After much digging through the Linux include files, you can see that the time_t type on 64-bit kernels is defined as __SYSCALL_SLONG_TYPE, which appears to be a signed long integer. On x86_64, this is 8 bytes, or 64 bits.
It's been like this in the kernel for a long time (can't be arsed to go back through the kernel history).
On AIX (legacy UNIX, so who would patch that?), time_t has been directly defined as a long int since about AIX 5.1 (available before Y2K), and I'm pretty certain they carried that through into the filesystem code (this tends to happen automagically when the source is recompiled on a 64-bit system, unless explicitly turned off) by the types being defined in system-wide #include files.
So the kernel has been fixed on Linux and many UNIXes for a long time. There's been a range of tricks deployed to allow 32-bit binaries that are still running to pick up a 32-bit time_t. This code will still break, but who is likely to be running binaries compiled for 32-bit systems in 2038? That would be real legacy code.
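If you'd rather not dig through the headers, a compile-time check does the job; a minimal sketch, assuming a C11 compiler for _Static_assert:

#include <stdio.h>
#include <time.h>

/* Fail the build if time_t is still 32 bits on this target (C11). */
_Static_assert(sizeof(time_t) >= 8, "time_t is not 64-bit on this target");

int main(void)
{
    printf("time_t is %zu bits wide here\n", sizeof(time_t) * 8);
    return 0;
}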
-
Monday 19th October 2020 09:07 GMT FIA
Re: Linux kernel
On AIX (legacy UNIX, so who would patch that?), time_t has been directly defined as a long int since about AIX 5.1 (available before Y2K), and I'm pretty certain they carried that through into the filesystem code (this tends to happen automagically when the source is recompiled on a 64-bit system, unless explicitly turned off) by the types being defined in system-wide #include files.
Problem is, you can't just start widening on-disk data structures with a recompile. If your time_t is 32 bits in your on-disk data structures, you'll need to rewrite your on-disk data.
The issue is unlikely to be with OS-level stuff; it's all the compiled software that's assuming it's a 32-bit value that will bite people.
This code will still break, but who is likely to be running binaries compiled for 32-bit systems in 2038? That would be real legacy code.
Lots of people, I expect. Not spending money is a powerful motivator. :) I'm currently working on a codebase that is nearly 30 years old. We're modernising it (i.e. rewriting it bit by bit), but I expect the existing code to still be running 10 years down the line, and that's 32-bit.
In the 70s computers were advancing at a frenetic pace, even in the 90s when I was starting out I remember being in awe of a minidisc player I had that had more processing power than the desktop I could've bought for 10 times the price just 2 or 3 years previously.
Those days are gone; we're on the gentler part of the progression curve in computing now and, like many other things, we need to start assuming the things we're building will be around for decades.
-
Monday 19th October 2020 09:40 GMT Peter Gathercole
Re: Linux kernel
I agree, which is what the last sentence was all about, and I also agree about the space in assigned structures.
But when it comes to filesystems, for example, there's been a bit of a tweak that allows the mounter code to identify whether the filesystem was created using a 32-bit or 64-bit time_t. Provided you go through the OS to acquire the info contained in things like the inodes, it's possible to allow the system call to decide how to identify and present the data, keeping the function in the core part of the OS.
Anything that directly accesses this data without the OS's involvement would need special attention, however, as would any code managing its own data files.
But in the next 18 years, we will not be running 32-bit processors (support for 32-bit Intel is due to be removed from the kernel quite soon, if it hasn't been already), and I would be surprised if any system, or even code, now running will still be running when the time comes without at least recompilation. It would be really clumsy system management to not have re-created filesystems before then either.
Because of the nature of the system call interface being changed and the way that dynamic linking works, x86 Linux is not quite as tolerant when running old code (if the version of a shared library changes on a system, quite often old binaries fail to load and execute) as some other UNIX variants (I ran a binary I compiled in 1995 on a 32-bit AIX 4.1.2 system on a 64-bit system running AIX 5.3 a few years back, and it still ran perfectly).
I worked through the 1999-2000 transition on UNIX systems, and know that in my first job in 1981/2 (not on UNIX), some of the code I created definitely would not cope with the 2-digit year rollover (I did point it out, but I was just a junior programmer). I would be interested in knowing whether anybody had any problems with parking ticket fines in the Borough of Rushmoor around Y2K, because that is the main system I worked on (although I did also work on DLO) in the fairly miserable year I was there.
I will be retired by 2038, but I hope to be mentally able enough (and still interested) to be able to say "I told you so!"
-
Monday 19th October 2020 13:38 GMT Anonymous Coward
Re: Linux kernel
"But in the next 18 years, we will not be running 32 bit processors"
Wanna bet? There are literally millions of 8- and 16-bit embedded devices still out there, never mind 32-bit, so I would bet a significant sum on plenty of embedded 32-bit systems still running embedded Linux in 2038 and beyond. Linux doesn't just run on x86 PCs.
-
Monday 19th October 2020 18:04 GMT SImon Hobson
Re: Linux kernel
But in the next 18 years, we will not be running 32 bit processors (support for 32 bit Intel is due to be removed from the kernel quite soon, if it's not already)
I wouldn't count on that - I am currently designing an 8-bit (yes, EIGHT-bit) system. As it's for the heating controls in the house, I won't be throwing it away and replacing it with something newer and shinier in 2 or 3 years' time - but then it won't be handling dates at all. I'm also involved at work with systems where they have to consider the possibility of components becoming obsolete between the design being frozen and actually going into service (could be getting on for a decade, with any changes needing a very expensive refresh of the safety case), and having a planned service life of several decades. I strongly suspect (I'm not involved in that side of things) that since "more complex" means "much harder to prove safety", these won't be using the latest processors available.
-
Tuesday 20th October 2020 21:44 GMT Anonymous Coward
Re: Linux kernel
Sorry? The (C) type doesn't matter; they're just reserved bytes and the type can be changed to suit when needed. So long as they're in useful byte multiples such as 2, 4 or 8 and properly packed, then you're usually sorted. I would suggest you and the clueless know-nothings who modded me down stick to JSON or XML and leave the binary side to those of us who know what we're doing.
-
-
Monday 19th October 2020 13:40 GMT Ken Hagan
Re: Linux kernel
"Problem is you can't just start widening on disk data structures with a recompile. If your time_t is 32 bits in your on disk data structures you'll need to rewrite your on disk data."
Actually no. 2038 is the limit of a signed 32-bit value, but if we are talking about on-disk metadata, there is no need to handle dates prior to 1970 and so the 32-bit value can be read as unsigned. That punts the problem out to 2106. Whilst it is nice that the XFS maintainers are looking at the issue, I'm not really sure there is a story here.
And since others have noted that 64-bit software (including the kernel) is already using a 64-bit time_t, we are now really only worrying about 32-bit user-space software being confused by the sudden appearance of a file creation date from around 1902.
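A quick worked example of the 2106 figure, assuming a 64-bit time_t host for the conversions:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* The same 32 bits read two ways: signed runs out in 2038; unsigned
 * (no pre-1970 dates needed on disk) runs out in 2106. Assumes a host
 * with a 64-bit time_t so both values convert cleanly. */
int main(void)
{
    time_t signed_limit   = (time_t)INT32_MAX;    /* 0x7FFFFFFF */
    time_t unsigned_limit = (time_t)UINT32_MAX;   /* 0xFFFFFFFF */

    printf("signed 32-bit limit  : year %d\n", gmtime(&signed_limit)->tm_year + 1900);   /* 2038 */
    printf("unsigned 32-bit limit: year %d\n", gmtime(&unsigned_limit)->tm_year + 1900); /* 2106 */
    return 0;
}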
-
Monday 19th October 2020 19:41 GMT NullNix
Re: Linux kernel
Can't do that. Real users might have used touch to set file times to any date in the currently-valid range, so we have to expand the range in a compatible fashion, not just slide it along. (Sure, maybe you could say "bugger any users doing such crazy things", but that's the difference between a hobby filesystem and a bulletproof one. :) )
There are also (mostly-invisible) timestamps in places like the quota format that needed handling (that one was handled by reducing its precision by a factor of four, quadrupling its range with almost certainly zero visible impact on any users ever).
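Just to illustrate the precision-for-range trick (this is not the actual XFS quota encoding, purely a sketch of the idea): store the expiry in 4-second units and the same 32 bits reach four times as far.

#include <stdio.h>
#include <stdint.h>

/* Illustration only - NOT the real XFS quota format. Storing a grace-period
 * expiry in 4-second units instead of 1-second units quadruples the range
 * of a 32-bit field, at the cost of up to 3 seconds of precision. */
static uint32_t encode_expiry(uint64_t secs_since_epoch)
{
    return (uint32_t)(secs_since_epoch >> 2);
}

static uint64_t decode_expiry(uint32_t stored)
{
    return (uint64_t)stored << 2;
}

int main(void)
{
    uint64_t expiry = 4102444800ULL;            /* 2100-01-01 00:00:00 UTC, past the old limit */
    uint32_t on_disk = encode_expiry(expiry);
    printf("stored 0x%08X, decodes to %llu (error at most 3 seconds)\n",
           (unsigned)on_disk, (unsigned long long)decode_expiry(on_disk));
    return 0;
}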
-
Monday 19th October 2020 12:41 GMT J27
Re: Linux kernel
This was my thought too: "wasn't this already fixed in 64-bit Linux?" I checked, and was coming down to storm into the comments to bring it up. But you beat me to it.
By 2038 I can't imagine this is going to be a big issue; it'll probably be like Y2K, when only the worst-designed, most legacy systems were affected.
-
Tuesday 20th October 2020 18:43 GMT Henry Wertz 1
Re: Linux kernel
Linux has had a 64-bit time_t for a long time too; trick is, you've got some clocks that will roll from 2038 to 1970 (or 1900), you've still got Linux on 32-bit platforms (IBM POWER was 64-bit from the start, although supporting 32-bit code), etc. As I say in another post, they were thinking they had 2038 fixed by 1999 or so. Hopefully IBM will thoroughly test setting some systems to 2038 to see what happens; I could easily see everything being in good shape, or I could see what happened with Linux, where you'd ASSUME everything is fine and it's not.
-
Friday 23rd October 2020 09:12 GMT Peter Gathercole
Re: Linux kernel @Henry
Actually, it depends on what you define as POWER. The original RIOS processors released in 1989/1990 were 32-bit, and 64-bit was introduced with the PowerPC processor architecture extensions, with the PowerPC 620 and RS64 processors (as well as the APACHE processor from the AS/400 people in Rochester) being the first 64-bit processors in the extended family.
The mainstream processors that have only ever been 64-bit since their inception were the DEC Alpha and Intel Itanium (although was Itanium really mainstream?).
But my point is that any code that has been or will be recompiled on a more modern system before 2038 will have a 64-bit time_t (and the associated C library calls) by default, unless someone takes great pains to prevent it. It is likely to be only binaries that have not been recompiled which will have problems.
Of course, there may well be code that, instead of using the system definitions of various structures and types, defines them itself, but that would have been poor programming that will probably rattle itself out whenever the code is ported between systems. It's always been poor practice, since the very early days of UNIX systems, to hard-code system properties in your code rather than using the system-defined types.
I misspoke about no 32-bit processors in 2038, but I would like to point out that many embedded processors probably couldn't give a hoot about whether they have the correct date and time. Not sure what would happen during the actual rollover, though.
It is interesting. I had an IBM 6150 AIX system (the one before the RS/6000), whose support ended in the mid 1990s, and I ran some quick checks on it in 1999. I found that the only thing that didn't work properly was actually setting the date with the date command. Even the RTC that the system had worked properly. I don't think it would have coped with the 2038 rollover, but that system is now long gone.
-
-
Monday 26th October 2020 14:49 GMT Someone Else
Re: Linux kernel
After much digging through the Linux include files, you can see that the time_t type on 64-bit kernels is defined as __SYSCALL_SLONG_TYPE, which appears to be a signed long integer. On x86_64, this is 8 bytes, or 64 bits.
It's been like this in the kernel for a long time (can't be arsed to go back through the kernel history).
And yet, they're still using signed time_t's, when a negative time_t value is invalid.
-
-
Monday 19th October 2020 08:49 GMT pavel.petrman
Sigh... the K notation again.
Where did the "aka Y2K38" come from? As is usual, when a nice and functional thing created by engineers and used by engineers lands in the hands of laymen without two layers of protective insulation, cringeworthiness ensues. Just like, for example, semantic version numbering (remember the 2.0 craze followed by the current 4.0 folly?) or the Internet itself, the order-letter notation (or does it have an actual name?) used to great advantage by engineers somehow leaked to the Instagram-using youth, for whom the letter K seemed to work much better than the digit 0. I couldn't care less if they kept using it just for amusement, but it came full circle somehow, and now whenever I see the letter K in a significant position, especially following the digit 2, I must ask explicitly whether it really means K or is just a fancy zero. Otherwise I can't be sure whether the value is 2038 (cool new Instagram number format) or 2380, which is what it had meant for several decades before Instagram ruined it. Fuc0 it, people, Y2K was not Y2K00!
-
Monday 19th October 2020 16:20 GMT Kristian Walsh
Re: Sigh... the K notation again.
Heh.. Not surprised one of the ones above me got deleted. I’m pretty sure I know what it was too. Our Electronics lecturer taught it as “look, there’s also another one that will guarantee you remember the order, but for god’s sake, don’t ever say it out loud”
-
Monday 19th October 2020 18:11 GMT SImon Hobson
Re: Sigh... the K notation again.
Might have been the same one I got taught as an impressionable apprentice - but back then you could just about get away with saying it if you were careful who was around you. These days I'm not sure it's even safe to think it; just getting as far as "1" could get you in trouble!
-
Monday 19th October 2020 11:09 GMT Lars
Re: Sigh... the K notation again.
Kilogram reveals it all.
"Kilo is a decimal unit prefix in the metric system denoting multiplication by one thousand (103). It is used in the International System of Units, where it has the symbol k, in lower case.
The prefix kilo is derived from the Greek word κιλό (kiló), meaning "thousand". It was originally adopted by Antoine Lavoisier's research group in 1795, and introduced into the metric system in France with its establishment in 1799.
In 19th century English it was sometimes spelled chilio, in line with a puristic opinion by Thomas Young."
-
This post has been deleted by its author
-
Friday 23rd October 2020 09:23 GMT Peter Gathercole
Re: Sigh... the K notation again.
Things like capacitors are now often marked as 4u7 (using the letter as a decimal point as well as a scaling factor), something I didn't realize until I started using miniature bead capacitors and surface mount components.
Having said that, I'm looking at the schematic for a NAD7020 HiFi receiver (circa 1977-1984), and I see C421 having a value of 2n2 (2.2nF) and R425 as 4K7 (4.7 kOhm), so I guess it has been used for a while. But it looks like different parts of the schematic were prepared by different people, because it's not consistent!
-
Monday 19th October 2020 13:38 GMT J.G.Harston
Re: Sigh... the K notation again.
They do, but the convention in engineering is that the unit multiplier can be used to replace the decimal point, viz:
1.2R -> 1R2
3.6K -> 3K6
6.8M -> 6M8
so
2038 -> 2.038K -> 2K038
Yes, Y2K38 would be the year 2380 +/- 5.
Plus, 238 isn't a preferred value! Should be 220 or 270, or go to E24 to get 240. ;)
-
-
This post has been deleted by its author
-
Monday 19th October 2020 09:46 GMT Uplink
Future
Sounds like Oracle hit a problem with timestamps set in the future already and needed a quick fix, but didn't want to waste precious disk space either.
This should be taken as one of the first signs that this problem is starting to rear its ugly head and can't be put off much longer for software and structures that haven't been updated to use 64-bit time yet.
-
Monday 19th October 2020 10:32 GMT Simone
Short memories...?
There have been several examples of industries that have bought an expensive machine tool (e.g. a CNC machining centre) that is driven by a program running on Windows 95, possibly using the parallel printer port to print documentation. These have been bought assuming decades of use, as heavy machinery was usually 'built to last'; things that wear out, such as bearings, were standard parts and could be replaced. The high cost of the machine was depreciated over a long time to justify its purchase.
The fact that software is no longer supported, both the operating system and the programs (usually the provider of these has gone out of business), does not matter to these companies. You may think it is short-sighted, but the cost of a new machine would bring on bankruptcy, and the software does work. A survey after the Y2K bug, in Jan 2001, found that 80% of organisations were running Windows 95; it went EOL in Jan 2003.
It is not always easy to stay on the upgrade train. It is not easy to guarantee what the expected end of life of an expensive piece of equipment will be.
-
Monday 19th October 2020 15:43 GMT DS999
Re: Short memories...?
Equipment like that doesn't depend on the year. I'd be more concerned with slightly newer gear that requires, or at least supports, networking. That's far more likely to be a problem down the road than a Windows 95 machine humming away doing Windows 95 things, even if it had had a Y2K bug and you had to set its clock back every few years when it ran over the limit.
-
Monday 19th October 2020 18:17 GMT SImon Hobson
Re: Short memories...?
Actually you'd be surprised how much of this sort of equipment is networked so that the designer sat in his office can generate a machine program from the CAD file and send it directly to the machine down in the workshop - definitely a step up from sneakernet with 3.5" (or even 5 1/4") floppies, or 9600bps serial links.
Without going into the heavy machinery market there are problems. At my last job I recall a client upgrading their phone system, and after well under a decade they were down to keeping a laptop around un-updated just to be able to manage the system.
-
Monday 19th October 2020 20:38 GMT Anonymous Coward
Re: Short memories...?
"[...] you'd be surprised how much of this sort of equipment is networked [...]"
Many years ago a customer's business Remote Job Entry link (max 9600bps) to the data centre was getting corrupted transmissions very often. The errors were so weird they kept finding new bugs in the comms code.
It eventually transpired that the customer end included a long internal wire driven by line drivers. The unshielded cable was laid across the floor of their arc welding workshop.
-
Monday 19th October 2020 11:11 GMT Dan 55
2486 vs 19000
There was already a suggestion in 2014 to make timestamps on XFS last until 19000. Maybe Oracle think they can get more out of their support contracts by fixing a problem twice?
-
Tuesday 20th October 2020 10:45 GMT Anonymous Coward
I've just submitted my one byte Linux date patch.
All dates are covid work-from-home friendly (so no point in having a time portion, I mean who is keeping track of that nowadays), and are encoded in a single byte as well as being human-readable. The encoding scheme is as follows:
L - last week/month/year/whenever.
Y - yesterday
T - today or tomorrow or just pretty soon really.
N - next week/month/year/whenever.
P - whenever the next mortgage, gas bill or credit card payment is due. They'll remind you.
D - the time interval between standing up in your Zoom call and remembering that your bottom half is not as well attired as your top half.
I'll post any feedback here.
-
Tuesday 20th October 2020 18:40 GMT Henry Wertz 1
Good to get on this
Good to get on this! I remember, back in 1999, some patches being put into the Linux kernel to "fix" the 2038 bug; it was considered to be a solved problem! Like "let's fix this bit of time-handling code; done!" It looked reasonable, was reviewed by the kernel people at the time and considered a done deal.
Turns out, when people started working on the 2038 bug again within the last year or two, that the 1999 fixes to handle clock rollover DID NOT WORK AT ALL. Once that was fixed, there were some places where you'd set something (near the 2038 cutoff) for 1 second in the future or whatever, and it'd instead schedule for 4 billion seconds in the future, or possibly 2^32+1 so it'd never reach it (this included stuff like process scheduling, so at the 2038 rollover apparently the entire system would lock hard). Some filesystems only support the 32-bit UNIX timestamp, so they're probably SOL (I was surprised to find one of my home computers is still using ext3, so vulnerable to the year 2038 bug); some supported past 2038 but the support was not in-kernel (like this XFS case). It's turned out to take many, many more patches than I think anyone was expecting!