Give me about 20 minutes to become familiar with each acronym
And then I'll post my resume on the webs.
WTF, even MVS/MVM weren't mentioned? VM360?
How about TSR (Sidekick)?
Oh well, I think my old skills aren't in alignment anymore.
IBM introduced several significant new elements for its Linux server stack last month: support for KVM on its z Systems mainframes, Linux-only models in both the z Systems and Power Systems ranges, and a new purchasing model. The most technically interesting new development is mainframe support for KVM, the Linux kernel’s …
Why are you downvoting him? He's right.
No, he isn't.
First: "acronym" is widely used for initialisms and other abbreviations that are normally pronounced as individual letters rather than as a word.
Second: etymologically, "acronym" simply means "across-name". It implies nothing about pronunciation. An initialism is a name.
Third: You're both committing the prescriptivist fallacy. Since English lacks a recognized authority, English usage can only validly be claimed to be "wrong" on teleological grounds. If an utterance achieves its purpose, the usage is good enough.[1]
[1] Even this is an oversimplification, since it suffers from naive intentionalism. A more precise formulation would consider the communicative and intellectual value of meaning-production without assuming a dependency on intention.
Etymological fallacies I can buy. "Decimate" is my favourite.
Merriam-Webster seems to accept that initialisms are considered acronyms these days, so fair enough, I'll try not to automatically consider people who misuse it under-educated.
I'm sure it's only a matter of time before they converge "your" and "you're", "their", "they're" and "there", and you won't mind that will you?
One thing about the article was puzzling.
I know that FORTRAN has an EQUIVALENCE statement, so I can imagine that some x86 higher-level language programs may be endian-dependent, and thus the fact that the POWER supports little-endian data may assist converting programs with a simple recompile.
However, that things like two-byte address displacements in instructions are constrained to be big-endian... the only way that should matter to a program is if it involves self-modifying code.
Now, one can declare code pointers in C. But since C is a higher-level language, one normally wouldn't know exactly what machine-language instructions they point to, so I'm not aware of anyone (except perhaps virus writers and enthusiasts doing insane things for the fun of it) using that for purposes of self-modifying code.
So, while this little-endian feature may make little-endian purists happier about the POWER architecture, I'm mystified as to how this helps in converting higher-level language programs.
"So, while this little-endian feature may make little-endian purists happier about the POWER architecture, I'm mystified as to how this helps in converting higher-level language programs."
Same here. The article says something along the lines of "now all you have to do is recompile". Well, that's all you _ever_ had to do to "port" from little-endian code to big-endian code. It's the endianness of the _data_ that matters and sometimes requires source code changes to deal with. Like John said: as long as you're not doing something stupid like writing self-modifying code, the endianness of the _code_ is irrelevant.
I believe things go wrong with endianness of code when you get involved in casting types in C and its derivatives. The details are obscure and dull – I once looked it up, but it has pretty much fallen out of my head now.
That sort of coding always leads to trouble anyway, so stick to F77 is my advice.
I think it's not so much things like self-modifying code, as making life comparatively easy for systems software maintainers who regularly have to treat code as data and vice versa. Having mismatched endianness is a serious turn-off for compiler etc. maintainers, and I'd imagine that IBM would far rather not have to devote resources to tweaking other people's codebases, and then having to argue that the tweak was in everybody's interest.
In practical terms, I've seen far fewer problems on big-endian chips running Linux etc. than I have on those architectures with strict alignment requirements, e.g. SPARC and (some) ARM.
p.s. John, like the crypto etc. on your site ;-)
"A bit like a wrinkly grey-haired T-800, you find yourself rooting for it."
I am wrinkly, and definitely grey-haired, but after the IBM mainframe execs dragged the glorious AS/400 brand through the dirt (let's call it eServer iSeries; no, just iSeries; no, call it "i for business"; no, just call it "i"), I feel no desire to root for IBM anymore.
I do miss the stone cold consistent TLA commands compared to the childish Unix/Linux nonsense, but the RPG language not at all. A good ride, and thank you Dr Frank Soltis for all the fish.
Come on, IBM *ALWAYS* uses different terminology for everything. Who else would call a transparency a "foil"?
Yeah. I have a dataset somewhere on my Winchester with a whole bunch of these from my days at Big Blue.
Some are included in The New Hacker's Dictionary, which of course is worth perusing anyway just as a repository of computing folklore. There's also the IBM Jargon and General Computing Dictionary, but I find that, despite its size, it isn't very comprehensive. It has "foil" but not "Winchester", for example (though it does have "fixed disk"), and it has "data set" only as two words, and then only metastatically.
No, the Rockhopper Mainframe goes only up to 10TB RAM, which is not the equivalent of a Xeon. An 8-socket Intel Xeon E7 v3 goes up to 12 TB RAM, surpassing IBM Mainframes.
BTW, I don't understand why any new customer would want to use IBM Mainframes to run Linux. x86 is much, much cheaper, and the Intel CPUs are at least twice as fast as Mainframe CPUs. So Mainframe CPUs are much slower, and Mainframes cost vastly more than x86. So what is the use case for running Linux on a Mainframe instead of on x86 servers? I don't get it. Sure, Mainframes have much better RAS and I/O throughput, but is that worth paying $millions for?
If you want reliability, go with IBM.
If you want performance, go with IBM. The Intel architecture (that crud called a Northbridge and Southbridge) is inherently slow.
CPU performance isn't bad - but I/O is horrible. 12TB of memory sounds impressive... until you find out it takes 15-20 minutes to load a database into memory, or to get it out of memory when you have to shut down (which can actually take longer, due to having to scatter the data to appropriate storage locations).
Before you use the word "cheaper", you might want to ask CIOs and such how cheap is it to have to upgrade or replace hundreds, maybe even thousands, of x86 servers. Such equipment is really only "cheap" when it's used in small numbers; in the aggregate, in large numbers, it can get quite expensive - to the point of making mainframe(ish) systems look like relative bargains. Add in large management overhead and relatively low utilization factors and such related to server farms and it starts to become no contest.
The Mainframe CPUs are much slower than a high-end x86 Xeon CPU. Typically a high-end x86 CPU is 2x faster than the fastest Mainframe CPU. Considering that the largest IBM Mainframe sports 24 sockets, there is no way 24 of those slow Mainframe z13 CPUs can replace more than an 8-socket x86 server - if we talk about CPU performance. I/O-wise the Mainframe is much faster than x86, but CPU-wise, Mainframes don't stand a chance.
IBM claims a Mainframe can replace 1,500 x86 servers, but it turns out that all the x86 servers are antique Pentium 3 machines with 256MB RAM, and they all idle, while the Mainframe is fully loaded at 100%. But what happens if a few of the x86 servers start to work? The Mainframe CPUs could never keep up with a loaded x86 CPU. There is a reason IBM never releases benchmarks comparing x86 to Mainframes: Mainframes are slower, yet you pay much more for the abysmal CPU performance. No, if you need CPU performance, you don't go to Mainframes; if you need I/O, you do. Mainframes don't stand a chance against an 8-socket x86 server today, which costs a fraction as much.