Can someone...
help me understand why people aren't running 64 bit OSs already?
Is it hardware, or the apps are still 32bit, or a combo of both?
Linux fans intent on holding back the years will be delighted to hear that the upcoming version 5.6 of the kernel should see 32-bit systems hanging on past the dread Y2038. Arnd Bergmann, an engineer working on the thorny Y2038 problem in the Linux kernel, posted to the mailing list that, yup, Linux 5.6 "should be the first …
Linux still runs fine on usable old laptops, even ones that can't do 64-bit. I have 2 of those [one has Win 7 on it out of necessity].
The older of the 2 laptops I have has Linux on it. When I started a new contract last year, I needed Linux to do RPi-related development. They didn't have any Linux machines so I brought in a 2003-ish Toshiba laptop with Linux on it. With that old thing I was able to get work done [good luck doing work under Win-10-nic, it was SO in the way it was pathetic]. I quickly set up the RPi to run X11 applications remotely on the laptop (using the 'export DISPLAY' trick) and I was quickly getting things done using pluma to edit files directly on the RPi from the keyboard+mouse+display on the laptop. After a couple of weeks of that they found an old computer nobody was using and I put Linux on it. [quite obviously they were "sold" on the idea and maybe a *bit* embarrassed that I was using a 17 year old laptop in lieu of a "modern" computer running Windows 10, and getting MORE done that way]
In any case, 32-bit devices _definitely_ have a use. Why throw them away if they still help you get things done???
This post has been deleted by its author
This is one of the most bizarre comments I've seen in ages... back when AMD first made 64-bit CPUs mainstream, Open Source software was practically the only software available to take advantage of them! (It had of course been running on other 64-bit architectures for years, unlike the vast majority of commercial Windows software.)
Most Solaris binaries were 32-bit rather than 64-bit - for SPARC binaries there is rather little to be gained from making /ls/ a 64-bit binary rather than a 32-bit binary. x86 is unusual in that the performance hit of throwing around 64-bit chunks of memory rather than 32-bit chunks is outweighed by the architectural changes (including lots of new registers). See "All the world's a VAX" (although it wasn't directly related).
Having spent time compiling open-source software in the days when 64-bit was new, it wasn't always as easy as re-compiling; there were often explicit or implicit assumptions about the size of a word that required source code changes.
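To illustrate the sort of assumption that bit people, here's a contrived example (not from any particular codebase - any 32-bit-era code that stuffed pointers into ints had this problem):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    void *p = &p;

    /* Classic 32-bit era assumption: a pointer fits in an unsigned int.
       Harmless on ILP32, but on a 64-bit target it silently throws away
       the top half of the address. */
    unsigned int truncated = (unsigned int)(uintptr_t)p;

    /* The portable type for holding a pointer value is uintptr_t. */
    uintptr_t full = (uintptr_t)p;

    printf("sizeof(long)=%zu sizeof(void *)=%zu\n", sizeof(long), sizeof(void *));
    printf("truncated=%#x full=%#jx\n", truncated, (uintmax_t)full);
    return 0;
}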
As for 32-bit commercial binaries, sometimes the vendor isn't inclined to release a 64-bit version or even if they are, the business isn't prepared to pay for an upgrade.
As an almost lone *nixer in a Windows shop, I got really cross when they got all excited about Windows x64, having been running Solaris on 64-bit SPARC for years myself.
Still, things are so much through the looking glass now, given that I am installing PowerShell on my Linux VMs so I can automate snapshots, and I can run bash natively on a Windows workstation.
DEC Alpha supported 64-bit apps in OpenVMS
Even 32-bit VMS from the 1970s used 64-bit time values, counting from the Smithsonian astrophysical epoch of 17th November 1858. It will finally have a problem at 2:48:05.47am on July 31st 31086, and although it would be nice to think there might be a VAX still running then I think it's unlikely. There was a bug filed against VMS years ago complaining that the 4-digit date field used for display will overflow on New Year's day 10000. It was closed with "Will be fixed in a future major architecture".
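That 31086 figure checks out if you assume the usual description of VMS system time (a signed 64-bit count of 100-nanosecond ticks from 17 November 1858); a rough back-of-the-envelope sketch:

#include <stdio.h>

int main(void)
{
    /* OpenVMS system time: signed 64-bit count of 100 ns ticks,
       counted from 17-Nov-1858 (the Smithsonian / MJD epoch). */
    double max_ticks = 9223372036854775807.0;          /* 2^63 - 1 */
    double seconds   = max_ticks * 100e-9;             /* ~9.22e11 s */
    double years     = seconds / (365.2425 * 86400.0); /* ~29,228 years */

    /* 1858.88 is roughly 17 November 1858 as a fraction of a year. */
    printf("range ~= %.1f years, rolling over around AD %d\n",
           years, (int)(1858.88 + years));             /* ~31086 */
    return 0;
}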
It will finally have a problem at 2:48:05.47am on July 31st 31086, and although it would be nice to think there might be a VAX still running then I think it's unlikely.
Oh, I dunno. There may be one stashed away in a museum somewhere, that knows how to create 120V/240V 50-60Hz AC power (whatever the heck that was...). If such a thing exists, that VAX will run.
"Oh, I dunno. There may be one stashed away in a museum somewhere, that knows how to create 120V/240V 50-60Hz AC power (whatever the heck that was...). If such a thing exists, that VAX will run."
Well... V'Ger was still running in the 23rd century, and there are rumours that it met an early Borg Collective before coming back to Earth. So one could postulate that the Borg run on VAX systems?
"And since most Linux Apps are Open Source a lot of them haven't switched to 64 bits yet."

You have that exactly the wrong way around. Since most Linux apps are Open Source, they will acquire the word length of the compiler with which they are built. If you never stray beyond your distribution's pre-compiled packages, they will already have the same word length.
It's proprietary apps on Linux that used to be compiled 32-bit only, and now sometimes are compiled 64-bit only.
"It's proprietary apps on Linux that used to be compiled 32-bit only, and now sometimes are compiled 64-bit only."
in a few cases, old versions that were shipped binary-only (let's say you bought a license for an old version of "something" and you don't want to pay for a NEW one) might not work, but you can always run Linux in a VM with an older kernel just for THAT...
The 'Eagle CAD' version I'm using is like that. I had a license before it was bought by Autodesk that lets me use the 'free' version for commercial purposes. So I still use that version because I don't want subscription licensing. But if I had to I'd use a VM with an older Linux, and it would still work just fine.
[actually I run it on FreeBSD, too, with its Linux compatibility stuff, and if I design more circuit boards for clients I'll probably have to convince the companies I do the work for to get the subscription version - I've done 2 boards so far with a recent client, using my purchased license, as an independent contractor, with the older version, which I'd like to keep using...].
Of course, if Autodesk did a similar one-time license that was more sensible for small-time contract people that only OCCASIONALLY do board designs, I'd be interested... (this reminds me to surf around their web site to see if such an offer already exists)
* admittedly I had to 'hack' a symlink or two for shared libs that had a minor version change, and FreeBSD doesn't have "that version". The symlink to the slightly newer library works fine. No problems noted.
Lots of 32bit boxes are built into and controlling some very very expensive and very difficult to replace hardware. It's either physically in situ somewhere awkward and/or replacement requires a chain of other hardware being changed at the same time.
I will lay odds that come 2106 there will still be some forgotten 32-bit stuff plodding away doing something important, but which has been buried under a building for half a century.
Yup - we have a machine weighing about 18 tonnes, £250k to replace, running Win2000 on its local control station. Another, about £100k, runs NT on the front. The backends are PLCs (e.g. Fanuc/Siemens). I fully expect future generations to just reset the clocks and carry on. These sorts of machines are often sold on (2 of our old ones went overseas) for a very, very long life.
I fully expect future generations to just reset the clocks and carry on.
The AT&T 6300 PCs had an oddball clock chip, where you could only set the clock for dates from January 1984 to December 1991. I had to "fix" one in January 1992, and there already was a clockfix utility to set an offset for the system clock to allow you to set dates later. Apparently it will still work too (just have to set the appropriate multiple of 8). Although that's compensating for a hardware issue, while this is OS & software.
reminds me of the PDP-11 clock+date thing. For Y2K there were patches made. For grins I worked on patching an older version for RT-11 that I had source for. The published mechanism works at the OS level, but the software itself can't handle it very well. And I only had source for the kernel... [other people have done re-writes of the user software, though, which DO fix the problem there as well]. yeah, ancient computer emulation using 'simh' for fun and nostalgia. [studying the old code, though, can be educational]
has anyone offered to write a Linux-based OS solution for that old hardware? I have to wonder whether there are less expensive solutions that could be less than 1/10 of that [something like "me working on it for a couple of months"]. In theory, you could port C/C++ applications that were written for Windows using existing libraries and toolkits. I have proved the concept by turning MFC applications into POSIX applications using wxWidgets. It's not 1:1 but there is a correlation that, with a reasonable level of effort, can be ported in a month or two [per application] when _I_ do the work. I'm sure there are others who could even do better than me with this sort of thing. Or it could become a pure GTK or Qt application, by carefully re-writing just the UI and dealing with any other windows-isms using some kind of compatibility lib [home-grown or already written, whichever].
In any case, 250k (USD or GBP) is a bit steep. That's like an overpaid team of "paid by the line" coders working for a top-heavy consulting firm. An indie can probably do it for less than 1/4 that... and if you could patch Wine or even CentOS to do the job, you're getting closer to the "1/10 of that" mark I initially suggested.
I deal with a lot of embedded kit in the day job, a lot of this stuff is ancient, predates 64-bit hardware and can't be easily replaced. Either it's too expensive (research grants to buy kit having long since run dry), the manufacturer has gone bust (so no updates available anyway) or the equipment needs an expensive recalibration after any upgrade to ensure that readings are still accurate - vital if experiments rely on the results from the device. Killing a ten-year study in year nine "because security" won't make you any friends.
Some of this kit is frighteningly long-lived, one or two of these things run MS-DOS, let alone Windows !
Because the box attached to the side of <industrial machine> has a 386 processor.
Industrial machinery has a much longer lifetime than any computer and often includes an embedded computer as the control system. These systems can't be replaced and often no one knows what the hell they are.
About 10 years ago a member of the IT team at my then employer brought a circuit board to me from a Flexographic printing press that had problems. The regular maintenance team had no idea what it was so asked IT because "it looks like a computer". They managed to recognise an i386 on the board but had no idea what was wrong with it or how to fix it other than "it won't save settings." I worked writing software and repairing embedded devices on production lines for customers so they eventually decided to bring it to me. I replaced the BIOS battery (which was soldered to the board) which fixed the problem and that press is probably still running today.
Don't get me started on Dallas chips. Horrid things, plus a lot of them are out of production now. They were extensively used in SGI workstations and I've seen folk go to the level of using a Dremel to cut away the housing at the side of the Dallas unit to find the internal power pins, then solder a CR2032 holder to them to keep the clock running.
Incidentally, IRIX will be an issue. It may be 64bit, but the time variable is 32bit. Still plenty of SGI workstations out there in use for industry, plus a lot of MRI machines from the 90s used Indigo2s as the controlling workstation. 2038 will be interesting!
Can't SGI workstations use OpenVMS or one of the BSD's? Just a thought...
when I look at THIS page:
https://en.wikipedia.org/wiki/IRIX
I think "that looks a LOT nicer than Windows 10 !!!"
(from the above link)
"Much of IRIX's core technology has been open sourced and ported by SGI to Linux, including XFS."
so maybe Linux, then?
Proper battery, or a Dallas chip? Horrible bloody things.
At least the Dallas chips mainly just die quietly. The "barrel" batteries in Amiga, Mac (?) and other 90's kit are notorious for leaking when they die, corroding the system board around them. They have spot-welded leads and are soldered on the boards. Whoever thought having batteries permanently soldered to the board was a good idea should be made to lick a few of those damaged boards clean.
Happened to a Mac Performa I had in storage. Checked it one month, worked fine. Checked it again 6 months later, wouldn't even switch on, never mind boot. Opened it up and the battery had leaked all over the PCB, corroding everything. Had to scrap the machine, it was beyond saving. My own fault for not checking, but still bloody annoying that they'd put the battery in a place where it could cause such damage and for using a battery that would expire in that manner.
Because management at some companies will not port applications to 64-bit because they don't think it is important (been there, seen it, and dealt with it before they had made a definitive decision one way or another).
However, if something is embedded and does not require 64 bits, why port to 64-bit?
The time_t issue has been known about for a very long time and should have been dealt with a long time ago at the application level, regardless of 32- or 64-bit.
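For what it's worth, one way an application can protect its own data regardless of the platform's time_t width is to widen to an explicit 64-bit type at the storage boundary. A minimal sketch (hypothetical, not from any particular project) - it obviously still depends on the kernel and libc handing out sane values after 2038, but at least the stored format never needs to change:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Store timestamps on disk or on the wire as an explicit signed 64-bit
   count of seconds since the Unix epoch, whatever width the platform's
   time_t happens to be. */
typedef int64_t wide_time_t;

static wide_time_t now_wide(void)
{
    return (wide_time_t)time(NULL);   /* widen immediately */
}

int main(void)
{
    wide_time_t t = now_wide();
    printf("seconds since epoch: %lld\n", (long long)t);
    return 0;
}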
It depends on what you mean by "32-bit CPUs are enough": the extra word length is only really useful for heavier number crunching, but quite a lot of software benefits from the extra memory space. The only reason the two go together the way they do is because of the flat memory space model that prevailed for so long.
@Oh Matron! It is a fair question but downvotes are just a price you pay for wading in to the commentards on a Friday.
Various folk so far have put in points about non-upgradable hardware or software, but one aspect I know of is VM86 (virtual 8086) mode, which is only available in the 32-bit operating modes of the x86 family. AMD dropped it for the x64 CPU extensions, presumably thinking no one would need it. But if you want to easily support 16-bit applications (and there are way more of them than you might imagine, not just legacy DOS games but loads of old special software in industry) the simplest and fastest emulators use VM86 mode to emulate 8086-style CPU operation. There is an x64 version of dosemu but it is not (yet? ever?) supporting some stuff like interrupt passing.
This is exactly the same reason for recent Windows machines dropping 16-bit support (as the ntvdm relied on VM86 and MS are too poor to pay for the effort to work around it).
AMD dropped it for the x64 CPU extensions, presumably thinking no one would need it. But if you want to easily support 16-bit applications (and there are way more of them than you might imagine, not just legacy DOS games but loads of old special software in industry) the simplest and fastest emulators use VM86 mode to emulate 8086-style CPU operation.
That is very true. Eventually it won't matter anymore though, because 64-bit CPUs are becoming fast enough that you can emulate the old machines completely in software at speeds which exceed the original hardware. Just as you can run MS-DOS in QEMU (software only -- no KVM) and you can run a Commodore 64 emulated on a Raspberry Pi, eventually we will get to the point where software emulation of 32-bit x86 runs so fast that it isn't worth it for AMD and Intel to continue supporting it in hardware. This will happen long before 2038.
Heck, the only reason they still support Real Mode at all is because a lot of 64-bit systems are still built to boot using BIOS instead of UEFI. That's likely to be sorted out long before 2038 as well.
"
This is exactly the same reason for recent Windows machines dropping 16-bit support (as the ntvdm relied on VM86 and MS are too poor to pay for the effort to work around it)."
Not too sure about recent... the 64-bit editions of XP (both Itanic and x86-64) had zero support for 16-bit stuff, and that was getting on for 20 years ago. You weren't even allowed to run the command prompt in full screen.
"help me understand why people aren't running 64 bit OSs already?
"Is it hardware, or the apps are still 32bit, or a combo of both?
It's a bit of both, really.
We humans evolved from flatworms with simple nerve structures, but flatworms found permanent niches and never died out. So it is with processor scaling. Many early pocket calculators were 4-bit, for all I know some still are. If you still want to run Linux on 16-bit, you will need a pretty ancient kernel but they are still around. For many uses, such as IoT devices or industrial controllers, 32-bit is quite a good compromise between economy and power, so it is still popular enough for Linus to maintain compatibility and for, say, Debian to offer both 32- and 64-bit builds.
Some old apps with unique functionality are still available only for 32-bit. One option here is a 64-bit host with 32-bit emulation, but that does not always work and sometimes it is easier to keep the 32-bit hardware going.
BTW, it is typical of El Reg commentards that a perfectly sensible question gets downvoted by a lot of opinionated goons; sorry, I can't do much about that, save ripping the piss out of their mixed-metaphor kneejerk willies (and I have been collecting so few lately, I am obviously not challenging enough preconceptions, so let's hope they oblige me with a few more).
If you still want to run Linux on 16-bit, you will need a pretty ancient kernel but they are still around.
Linux has never run on 16-bit computers. Torvalds wrote Linux on a 386-based PC clone, and it really requires a flat, at least 32-bit, address space and memory management hardware.
If you absolutely want to try a unix-style system on a 16-bit PC, one option is old versions of Minix.
The Microsoft Xenix system ran on the 16-bit 286, and I actually used it at home at one time, after finding the installation floppies in a dumpster at my workplace.
Perhaps a more relevant question is to ask why Linux adopted the Unix convention and epoch start date...
Another perhaps more relevant question might be, why did Linux go with a signed number of seconds since "the beginning of the epoch"? (As noted in the article, this only pushes out the problem for somewhat less than a century, but what possible use is "negative time", anyway -- to the best of my knowledge, using a negative time_t to represent times earlier than the beginning of the epoch is not supported.)
>Another perhaps more relevant question might be, why did Linux go with a signed number of seconds since "the beginning of the epoch"?
Because we all learned the lazy habit from K&R, who decided that strlen should return an int (was unsigned int even a thing in 1978?). That allowed all of us, me included, not to care very much about doing what was clearly the right thing. After all, we were never going to have more than 32,767 of anything. And an extra 9 characters is so hard to type.
Despite the bitter tone, I'm serious here. strlen in K&R C is where it all starts.
> Another perhaps more relevant question might be, why did Linux go with a signed number of seconds since "the beginning of the epoch"
Because sometimes you have to handle dates prior to 00:00:00 UTC on 1 January 1970. A positive integer indicates number of seconds after epoch, a negative number indicates number of seconds before the epoch.
1970 seems a long way away now, but when Unix was just getting started it was very much the near present, and people did want to store dates prior to 1970 on their machines. Linux took a lot of Unixisms, and the epoch seconds time was one of them, so for compatibility it followed the same concept.
Using signed integers, in this case 32-bit (there were other bit widths for time_t) with the "00:00:00 UTC, 1 January 1970" epoch, allowed for a date range from December 1901 to January 2038, which seemed like a decent compromise at the time.
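That range falls straight out of the arithmetic; on a host with 64-bit time_t (and a libc that accepts negative values, e.g. glibc) you can print both ends of the signed 32-bit window:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Extremes of a signed 32-bit seconds-since-1970 counter. */
    time_t lo = (time_t)INT32_MIN;   /* -2147483648 */
    time_t hi = (time_t)INT32_MAX;   /*  2147483647 */

    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&lo));
    printf("min: %s UTC\n", buf);    /* 1901-12-13 20:45:52 */
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&hi));
    printf("max: %s UTC\n", buf);    /* 2038-01-19 03:14:07 */
    return 0;
}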
It doesn't even have to be a 64-bit OS, it just needs to be able to store a 64-bit value, and the problem of how to handle 16-bit values on 8-bit CPUs was solved a very long time ago.
OK, there may have to be a couple more operations when handling the data when it's spread across more than a register's width, but that's part of the reason there is a carry flag in the CPU.
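For anyone who hasn't written one, here's roughly what that multi-word arithmetic looks like in C; the carry detected by the comparison below is what an ADD/ADC instruction pair does on the actual hardware:

#include <stdint.h>
#include <stdio.h>

/* A 64-bit value held as two 32-bit words, the way you would hold it
   on a CPU with only 32-bit registers. */
struct u64_parts { uint32_t lo, hi; };

static struct u64_parts add64(struct u64_parts a, struct u64_parts b)
{
    struct u64_parts r;
    r.lo = a.lo + b.lo;
    /* If the low word wrapped, carry one into the high word. */
    r.hi = a.hi + b.hi + ((r.lo < a.lo) ? 1u : 0u);
    return r;
}

int main(void)
{
    struct u64_parts t   = { 0xFFFFFFFFu, 0u };  /* 2^32 - 1 */
    struct u64_parts one = { 1u, 0u };
    struct u64_parts sum = add64(t, one);
    printf("hi=%#x lo=%#x\n", sum.hi, sum.lo);   /* hi=0x1 lo=0 */
    return 0;
}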
32-bit linux is more suited to embedded systems like your wifi router
And I think there are more devices than desktops running Linux, unless you count Android as a 'desktop', but even THEN, I wouldn't be surprised if more than half of the Android stuff is 32-bit (especially on ARM). So in reality, there are a LOT of systems using 32-bit Linux.
32-bit makes sense up to 4GB, because it's slightly faster and slightly smaller (code footprint and RAM usage) depending upon how it's written.
Personally I've been making noise about 64-bit time_t for a while now. I'm glad they're doing it. I want to see the BSDs follow suit on this [and no doubt they will, since it makes so much sense].
More than likely it won't affect anything for end-users. Users who upgrade their kernels will almost universally be updating userland packages as well. And package maintainers will just need to make sure everything is recompiled for the new kernel with a kernel version dependency (I've seen that kernel version dependency before, with Debian, years ago, but I forget why it was needed).
All good!
Bergmann also cautioned that there were a few interfaces that could not be changed "in a compatible way", and needed to be configured to use CLOCK_MONOTONIC, which doesn't suffer from that 1 January 1970 epoch issue but has challenges of its own, or an unsigned 32-bit timestamp, which could choke in 2106.
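For reference, CLOCK_MONOTONIC is read through the same clock_gettime() interface as wall-clock time; a minimal sketch of the difference:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec wall, mono;

    /* Wall-clock time: seconds since the 1970 epoch; can be stepped by
       NTP or the admin, and is what the Y2038 limit applies to. */
    clock_gettime(CLOCK_REALTIME, &wall);

    /* Monotonic time: seconds since some unspecified point (typically
       boot); never steps backwards, so it's the right choice for
       timeouts and intervals, but useless for calendar dates. */
    clock_gettime(CLOCK_MONOTONIC, &mono);

    printf("realtime:  %lld s\n", (long long)wall.tv_sec);
    printf("monotonic: %lld s\n", (long long)mono.tv_sec);
    return 0;
}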
Still, by then most of us will have been enveloped by the sweet, sweet embrace of oblivion. Or will be flying around in rocket-powered robo-braincases/frantically treading water (insert your own apocalyptic scenario here).
I can see it now, the year is 2106. Someone's grandchildren are flying around in rocket-powered robo-braincases/frantically treading water when suddenly the rocket's computer goes bleep! and they crash into an asteroid/are stranded in the middle of the Pacific.
Won't anyone think of the grandchildren?!
A good example of a popular 32-bit OS that most people here might still use is Raspbian. Yes, the recent hardware may be aarch64, but the OS is still 32-bit armv6.
The reasons cited are:
"It's been well publicised for some years that Raspbian is 32bit. This is for a number of reason - firstly backwards compatibility with previous models and the Pi0. We only need one distro and it runs on all devices. This means a lot less maintenance work - we are a small team. Secondly, related, we are a small team! There's quite a bit of work involved with moving the entire distro over the 64bit, including some rather tricky work on anything that talks to the GPU, which is 32bit.
That said, we now have a test 64bit kernel - search this forum for information, but this will be combined with a 32bit userland, for the reason mentioned above. - the amount of work needed to update all the libraries that talk to the GPU."
What will actually happen when the timestamp wraps around?
I can see individual packets in transit as the wraparound happens getting lost, if a router thinks it has already dealt with them; and after the wraparound, human-readable timestamps will appear incorrect until the C library is recompiled with a new "zero" date (which will be in the future, now timestamps are "negative"). But missed packets just get automatically retried. It's conceivable a bad protocol implementation could require a whole file to be retried, if the fragments can't be reassembled properly; but really, it's just an extreme case of packets arriving out-of-order (with the first batch apparently coming about 140 years after the second batch) which happens all the time anyway, every time a packet gets corrupt along the way and has to be retried (that's why we split things into packets in the first place). In fact, it probably used to happen a lot more in the early days, when networking was all highly experimental and hardware was more temperamental and less reliable. The first three layers have already been pretty thoroughly tested, under harsher conditions than generally prevail today.
It's not as though you can't download a copy of the software and test it on a scrap machine with a deliberately wrong system time .....
It really depends on the code. You might get timer loops locking up for ~67 years depending on how they are doing the time comparison / arithmetic. You might get date conversion failing (or maybe just working), again depending on how the libc code (or others) implemented the year/mon/day etc to time_t (and vice-versa) conversion.
Best case, it simply works. More likely you need to reboot post-Y2038 to reset timers and then it's fine. Worst case, some code never works after that date. Test, test, and test again!
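A contrived illustration of the timer-loop case (pretending time_t is still a signed 32-bit count - not real production code):

#include <stdint.h>
#include <stdio.h>

/* Pretend time_t is still a signed 32-bit count of seconds. */
typedef int32_t time32_t;

static int timer_expired(time32_t now, time32_t deadline)
{
    /* Naive comparison: works until 'now' wraps from +2^31-1 to -2^31
       in January 2038, after which every deadline armed before the wrap
       looks more than a century in the future. */
    return now >= deadline;
}

int main(void)
{
    time32_t deadline = INT32_MAX - 5;   /* armed 5 s before the rollover */
    time32_t now      = INT32_MIN + 60;  /* one minute after the rollover */

    printf("expired? %d\n", timer_expired(now, deadline));  /* 0: stuck */
    return 0;
}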
Well, if you have a backup program that does incrementals based on the dates of files that are on a file system with 32-bit time_t you might just find your backups taking a very long time, as everything suddenly appears newer than the last backup...
Don't wait for 2038 to fix it, though. Just as with Y2K, it's causing problems now. Banks handling 25-year mortgages have been seeing problems since 2013.
OOooh. 25 years seems short. So, could I use this to get a buffer overflow of
a) + 32 bit int into my account in £££?
b) - 32 bit int into my account in £££?
I have a feeling, I'd not make much of a good turn of it either way. I remember one College student getting up in the morning to find his account -64bit overdrawn. Weirdest thing was, it did so twice before the bank noticed.
I will post my comment again from another thread...
I had a VCR-embedded-on-a-CRT that borked itself, because its calendar only goes to 2020. It merrily reset itself to the year of its production, 1990, and kept on chugging. If it worked, you'd just need to sync it up to the leap year...
So, you don't need to go very far to find unsuspecting gear that might suffer from some form of Y2K syndrome. Embedded RTCs with a calendar that were never designed to get past a certain date could be installed anywhere. Those CASIO wristwatches built in the 70's with permanent calendars are likely candidates, for example.
This stuff will either break outright, or reset itself to the date of production and keep on going, like a car odometer... By the way, a car odometer is the first Y2K-suffering object ever, and very few people realized it.
Again... not just computers: purely mechanical objects can have Y2K-style issues too, that very few people might know or care about.
Beer, because it takes some time to make and drink a decent brew.
> "Those CASIO wristwatches built in the 70's with permanent calendars are likely candidates, for example."
Permanent calendars are most likely to fail in 2100 if they have a simple divisible-by-four rule for determining leap years. You should have kept your receipt.
Those of us who have had many instances of the standard model of Casio wristwatch since the 1970s already know that, in a Leap Year, 28th February is followed by 1st March on their watch. Having to reset the date every four years is hardly a good reason to return the watch!
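For the record, the cheap divisible-by-four rule mentioned a couple of posts up and the full Gregorian rule first disagree in 2100 (a century year not divisible by 400); a quick sketch:

#include <stdio.h>

/* The shortcut a cheap calendar chip (or watch) might use. */
static int leap_naive(int year)
{
    return (year % 4) == 0;
}

/* The full Gregorian rule. */
static int leap_gregorian(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main(void)
{
    /* 2000, 2020 and 2096 agree; 2100 is where the cheap rule
       wrongly inserts a 29th of February. */
    int years[] = { 2000, 2020, 2096, 2100 };
    for (int i = 0; i < 4; i++)
        printf("%d: naive=%d gregorian=%d\n",
               years[i], leap_naive(years[i]), leap_gregorian(years[i]));
    return 0;
}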
I had a VCR-embedded-on-a-CRT that borked itself, because its calendar only goes to 2020. It merrily reset itself to the year of its production, 1990, and kept on chugging. If it worked, you'd just need to sync it up to the leap year...
Right. In many (not all) cases it's very possible to just say "the date on that machine is wrong, ignore it" and just get on with your life.
Still, by then most of us will have been enveloped by the sweet, sweet embrace of oblivion. Or will be flying around in rocket-powered robo-braincases/frantically treading water (insert your own apocalyptic scenario here).
Nah, I want my brain transplanted into the body of a buxom Japanese schoolgirl...
Good question - especially as XFS is the default for openSUSE's home folders, which I have on half a dozen computers. Going to have to look it up. Wikipedia seems to confirm this in the info bar, but the reference given doesn't yield an obvious answer after two minutes' searching...
M.