Continuous Incrementation...
... / Continuous Deprecation
This February Google put out Chrome 98, closely followed by Mozilla releasing Firefox 97. Soon both will hit version 100. The memory of the web industry is short. This has happened before: when Opera reached version 10 in 2009, it caused problems, and just three years later, Firefox 10 faced similar issues. And it will happen …
An 8-bit signed integer would work for a while (Chrome would only overflow it at version 128), just like a quick and dirty solution to 2-digit year dates was to treat any year under 50 as being in the 2000s and anything from 50 onwards as being in the 1900s. It brought its own problems with it, but got into numerous programs anyway (even into some Y2K 'solutions').
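For anyone who never had to maintain one of these, the windowing trick looks roughly like the TypeScript sketch below - the pivot of 50 is the one from the comment above, not from any particular program.

    // Two-digit-year "windowing": years below the pivot are read as 20xx,
    // the rest as 19xx. The pivot of 50 is the cut-off mentioned above.
    const PIVOT = 50;

    function expandTwoDigitYear(yy: number): number {
      if (yy < 0 || yy > 99) {
        throw new RangeError("expected a two-digit year");
      }
      return yy < PIVOT ? 2000 + yy : 1900 + yy;
    }

    // expandTwoDigitYear(22) === 2022, expandTwoDigitYear(85) === 1985
    // ...and the scheme breaks again once real years reach the pivot, in 2050.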
Given that Chrome releases occur monthly (with the occasional patch) and Firefox isn't far behind, is there really any point in using anything other than, say, 2022.02.17 as a version number? It makes everything easy to understand and, provided the leading zeroes are kept, avoids ordering problems until the Y10K bug bites us or interplanetary colonisation (and thus the need for a truly universal date/time format) necessitates a change.
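For what it's worth, the scheme does sort cleanly as long as those leading zeroes stay in - a quick TypeScript sketch, using just the format suggested above:

    // Build a date-based version with leading zeros kept, so plain string
    // comparison orders releases correctly -- no 98-vs-100 surprise until year 10000.
    function dateVersion(d: Date): string {
      const pad = (n: number) => String(n).padStart(2, "0");
      return `${d.getUTCFullYear()}.${pad(d.getUTCMonth() + 1)}.${pad(d.getUTCDate())}`;
    }

    console.log(dateVersion(new Date(Date.UTC(2022, 1, 17)))); // "2022.02.17"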
Or stop changing the major version number for every point release. Software authors/publishers seem to have "version envy" and all want bigger numbers. I'm not sure exactly what they are compensating for, but it's probably something small and inconsequential.
This. I fully expect Chrome to switch any day now to incrementing its version number 10 or 100 at a time, because "we've updated Chrome from version 98 to version 198" sounds so much more impressive, doesn't it, than "we've updated Chrome from version 98 to version 98.0.1".
Shortly followed by Firefox, because Mozilla seem determined to cater to those people who don't want Chrome, by making Firefox as Chrome-like as possible.
Better still, use some form of hashref of the code. (Recommended by the late, great Joe Armstrong.) Bugger incremental versions, branching in the repository makes a nonsense of that.
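If anyone fancies playing with that idea, a rough sketch using Node's built-in crypto module - the file path is purely illustrative, and the shortened 12-character form is just a git-ish convention, not anything Armstrong prescribed:

    import { createHash } from "node:crypto";
    import { readFileSync } from "node:fs";

    // "Version" a build by the hash of its contents rather than a counter.
    function contentVersion(path: string): string {
      const digest = createHash("sha256").update(readFileSync(path)).digest("hex");
      return digest.slice(0, 12); // shortened, git-style
    }

    console.log(contentVersion("./dist/app.js")); // e.g. "3f7a0c9d1b2e"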
is there really any point in using anything other than say 2022.02.17 as a version number?
While I (mostly) wholeheartedly agree with you, that does preclude continuous integration builds from being released more often than once a day. Some development teams (that I've heard of) push releases live every few hours. So a revision number in addition to the date would be necessary for those: 2022.02.18.01
Meaningless version numbers are useless. All version numbers should be release date based. And when I say all, I'd include hardware as well.
Curiously, while Google Chrome is approaching 100 of goddess knows what, Google Android does it right. Android 12 is the twelfth yearly update, and they have just released the monthly (and user-friendly named) February Security Update.
No, that's not the best approach. It works if you have one set of releases and everyone uses one of those and ideally the latest one. It works less well if you have different styles of releases. Here are a few examples.
You have a program that gets updated from time to time. Sometimes, you make a big change that breaks the workflow. Maybe you charge users for that big update. Maybe you just want to let them postpone making the shift until they're more comfortable that early adopters didn't find it broken. You might release updates for both versions. If the version numbers look like 2.2.391 and 3.0.5, it's clear which one they'll use. If they look like 2022.02.16.15.30.1 and 2022.02.16.17.38.1, how do you know which you want?
You can also run into string problems. How much precision do you need in the date version numbers? Leading zeros or not? Do you separate the date components or keep them together like in the ISO format? What do you do for beta versions? The major.minor.patch format is usually shorter and clearer, even if one of the numbers gets large.
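For comparison, the usual way major.minor.patch strings get ordered - a TypeScript sketch that compares components numerically, and deliberately ignores pre-release tags like "-beta", which is exactly the sort of wrinkle the paragraph above is getting at:

    // Compare dotted numeric versions component by component, as numbers,
    // so 3.0.5 sorts after 2.2.391 and 100 sorts after 99.
    function compareVersions(a: string, b: string): number {
      const as = a.split(".").map(Number);
      const bs = b.split(".").map(Number);
      const len = Math.max(as.length, bs.length);
      for (let i = 0; i < len; i++) {
        const diff = (as[i] ?? 0) - (bs[i] ?? 0);
        if (diff !== 0) return Math.sign(diff);
      }
      return 0;
    }

    console.log(compareVersions("3.0.5", "2.2.391")); // 1
    console.log(compareVersions("100.0", "99.0.1"));  // 1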
Microsoft had included version checks in all versions of Windows so that newer versions could taunt previous versions. Then they released Win8 & suddenly realized they had a problem. The next logical release would be "Win9" but they already HAD Win95 & Win98, so that obviously wouldn't work.
MS then skipped 9 & went straight to Win10, hoping the version check thing would still work. After all 10 is newer than 8, right?
Now we've got browsers about to hit version-100 problems in the same vein as MS had with Windows, and yet folks act surprised: "Who could have seen this coming?"
*FacePalmSighs*
Who wants to join me for a pint? Maybe we can drown the VoicesInOurHeads that keep ranting & raving about the idiots... =-/
As I understood it, there's a lot of code around that pulled the Windows friendly version string and looked to see if the first character was '9', because 95 and 98 were pretty similar.
Two decades on, when they've dropped the names (95, 98, ME, XP, Vista...) and are just using version numbers...oh, hang on, what happens if the first character is a '9'?
Oh, well balls, that's not right!
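The oft-repeated (and possibly apocryphal) check is usually described as something like this - a sketch, not a line from any real codebase:

    // Lazy check: treat any product name starting with "Windows 9" as the 95/98 family.
    function isWin9x(productName: string): boolean {
      return productName.startsWith("Windows 9");
    }

    isWin9x("Windows 98"); // true, as intended
    isWin9x("Windows 9");  // also true -- reputedly one reason there was never a Windows 9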
WinME was a stopgap. Home users who installed Win2000 on their machines suddenly had to start dealing with NT-line driver and compatibility issues, and they couldn't "downgrade" back to 98 without a reformat and losing all of their data. So Microsoft hurriedly released "ME", still built on the Win9x codebase but dressed up to be more consumer friendly, as a hold-over until they could get XP fully tested and released.
God, I hated tech support in the early 2000's
"ME was built on Win9x."
Exactly. I remember that, before it launched, the *original* plan was for the next NT-based version of Windows (which ultimately became Windows 2000) to entirely replace the DOS-based line and become the "mainstream" version of Windows for all users (*).... except that they never quite managed that.
Compatibility issues et al (IIRC) meant Windows 2000 wasn't quite ready to take over, and it took a little longer until the NT-based Windows XP came out and they were able to entirely ditch the DOS-underpinned versions.
In hindsight, I assume that was the only real reason for the (still DOS-based) Windows ME's existence and why it was so pointless and short-lived: it was little more than a stopgap and backup plan.
(It might also explain why Windows 2000 has a more "consumer-friendly" style of name that *sounds* like it's the direct replacement for 95 and 98.)
(*) Something which its Wikipedia article appears to confirm I remembered correctly.
How is decimal digit overflow still a problem?
If you think you are a good dev, and you wrote customer-facing code that is affected by this issue, then you are wrong. You should not assume that a number you don't control will fit in X number of digits, especially if you know that it is just going to increase over time. Why does anybody even need to say that?
If you are looking at a number encoded as a string, your number one goal in life should be to parse it into a numeric data type as quickly and cleanly as possible. If you don't know for sure that the number will fit in one byte for as long as the universe exists (or at least until the next time you can recompile your code), then use 2 bytes, and so on up through 4 and 8. This is both obvious and not hard.
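In the browser-version context, that boils down to something like the TypeScript sketch below - the UA string is illustrative, not pulled from a real browser:

    // Parse the major version out of the string and compare it as a number.
    const ua = "Mozilla/5.0 (X11; Linux x86_64) Chrome/100.0.0.0 Safari/537.36"; // illustrative
    const major = Number.parseInt(ua.match(/Chrome\/(\d+)/)?.[1] ?? "0", 10);

    console.log(major >= 100);  // true -- numeric comparison keeps working at 100, 1000, ...
    console.log("100" >= "99"); // false -- the string comparison this whole thread is about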
JavaScript's silent type conversions strike again.
... be sure that at some point in the future, marketeers will decide that non-numerical strings [1] are cooler, and start identifying browser versions that way instead :-)
[1] or possibly even emojis, or go the full Monty and use blockchains or NFTs :-)
"You think you are a good dev," and yet you persist in using Eich's abomination. It's not like it's the only option any longer. (Of course somebody has to use that crap to implement the other languages, but that's their problem.)
A bad workman blames his tools, you say? That's because a good workman buys from Snap-On, not Wilko.
Because a lot of people who write code for a living are, em, full-stack developers. They have to work with a lot of technologies, some of which they are competent in and others, well, not so much. Don't get me wrong, some full-stack developers are amazing people who have mastered multiple fields and code efficiently & quickly. And then there are those who pick it up as they go along. Why bother learning SQL when you can use an ORM? Saves lots of time. Aren't Javascript frameworks the greatest thing ever to have happened?
Companies who hire them don't seem to want people who have specialised in a particular field, or maybe these companies are content with software that runs 'meh' but at least runs. And more hardware can always be bought to compensate for the meh-ness of it all.
Then again, the Chrome and Firefox maintainers could emulate early Linux kernel numbering. In 1993 it felt like Zeno was numbering the kernels: http://www.oldlinux.org/Linux.old/docs/history/0.99.html. One hundred and five versions, from "0.99" (13 Dec 1992) to "0.99.15j" (2 Mar 1994), separated "0.98.6" from "pre-1.0". Perhaps "Chrome 99.99991" could follow "Chrome 99.9999" -- or should it be "Chrome 99.99999"?
.. and testing for user agent is stupid .. It can be spoofed
If you are foolish enough to use JS for anything fundamental, then test for the particular functionality, not a browser/version (hint: your website should not break if a user has JS disabled; it should gracefully fall back to decent functionality, just missing a few bells and whistles - e.g. if I get a blank page when JS is disabled, and view-source shows little in the way of content but lots of JS trying to pull in data, then I **** off to a different site instead).
Do a proper job & code to support most browsers - it will also lead to a codebase/architecture that's simpler & easier to maintain.
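A minimal sketch of what "test for the particular functionality" means in practice - IntersectionObserver is just an example API, and the fallback is whatever degraded-but-working path the site already has:

    // Feature-detect: ask whether the API you actually need exists,
    // and fall back gracefully if it doesn't -- no UA string parsing involved.
    if ("IntersectionObserver" in window) {
      // lazy-load images / defer the bells and whistles
    } else {
      // load everything up front: slower, but nothing breaks
    }

    // ...as opposed to sniffing something fragile like:
    // if (navigator.userAgent.includes("Chrome/51")) { ... }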
.. and testing for user agent is stupid .. It can be spoofed
Several years back I set my User-Agent to that for IE6 to test a particular web site and then got called away before resetting it. By the time I got back to my desk I'd forgotten this. Most sites worked fine, but the next time I went to Google it was like being dropped through a time warp to circa 2000.