Re: Why didn't he use RSCS?
Because at that time RSCS would have required FEP hardware and probably a pair of bisync modems.
52 publicly visible posts • joined 30 Mar 2010
It's free, standardized and Usenet works just fine at scale. Also no stupid line length limits. If your idea fits into 160 characters, you haven't put enough thought into it.
Got my popcorn bucket ready -- looking forward to fewer containers of that which promotes growth.
Me as well. I have about 150 students currently, in multiple sections, and recently taught a structured programming course for the first time in decades. Their comments on the course were "why didn't someone teach us this before? This is the easiest way to write understandable code I've ever seen."
The COBOL 2002 spec relaxed the line-length and column-location restrictions to allow free-form code, so most of the current compilers no longer require the 80-column format.
To me, what's interesting is the attention paid to backward compatibility -- programs written to COBOL 66 standards still compile and work correctly in 2022 WITHOUT change. Ain't broke, don't fix it.
It's also interesting that HBCUs (historically black colleges and universities) are investing heavily in teaching this tech to students -- guaranteed jobs for their grads.
At times, I really start to resent the Reg "ancient tech" labeling of older technology that does exactly what it's supposed to do. COBOL is (IMHO) the first example of a domain-specific programming language, and for the problems it was designed for, there is no better choice. Admiral Hopper herself put it perfectly: "If people are going to describe their problems in English, the computer should be taught to understand English."
Don't knock it if you haven't used it. You can learn COBOL in less than a day, and it works. Is it good at everything? No. Does it allow business people to express problems in language they can understand and end up with workable code? Absolutely. Is it on every useful platform known to man? Yes. Not many things can say that.
> You can't agile deliver, with sprints, on what are effectively safety critical systems
This. "Fail fast" application development is *failure* where lives are at stake.
One thing that seems to be missing from modern development is the role of the business analyst (someone who understands the business AND application development well enough to write implementable software requirements). Those people aren't cheap, but they can be developed if the business invests in them as a priority. If those people are good, Agile vs waterfall vs whatever is a non-issue.
Since this takes the whole topic of software licensing off the user's plate (if you're willing to accept IBM products for the various tasks), there's no negotiation involved. IBM gets the best deal for itself, and you pay an arbitrary price depending on how much processing time and resources you use. Timesharing v.2.
My only real question is why z/OS plays such a large role (other than the POK z/OS uber alles crowd). The combination of z/VM and Linux is a lot easier to sell to the cloudies and easier for them to understand; it's more like lots of discrete boxes, so you don't have to think about it much.
It's not so much new customers as it is existing customers -- the ones that don't rewrite the world for every BS popular trend -- getting to tell the PHBs "ooh, shiny, buzzword compatible!" while keeping the computing that still gets the paychecks printed on time. Nobody gets hurt, and there's no change in risk for the corporate crowd.
The point is that the glossy magazines are selling the shift from capex spending to opex spending, and this is a way for IBM to maintain and market its flagship operating system and hardware to the bean counters who've bought into that (nonsense) worldview.
Something I've never completely understood: isn't there a provision in UK law about products and services being "fit for purpose"? IANAL, but this would definitely seem to be a demonstrable failure to deliver on "fit for purpose". Can someone elaborate on that?
10 quid says you're right with all of these. The amount of kludge required to handle the transition is going to be so large as to force them to kill the project after tens of millions of pounds pissed away.
There's a reason why mainframes continue to exist -- they work. Glossy magazine BS may work, but you're going to spend a lot more getting it to meet SLAs than you expected.
One possible answer: cost of support people. For a majority of small businesses, this could be managed without support staff that you have to pay and give benefits to. There are a fair number of people that don't need more than this offering and a QuickBooks install (since QB went web-based, you could even skip that). No more faffing about with hardware or upgrades or anything like that, and you can bet someone will build an ARM-based dedicated appliance with just enough horsepower to run the RDP client to access it, kind of like the Wyse RDP terminal.
Welcome back to timesharing. You'll figure out that it's more cost efficient to own your own, but until then, the meter is running.
I really fail to see why there's such a fuss about the availability of COBOL programmers. It's not a difficult language to learn if you're an English speaker, and the formatting rules are no worse than Python's whitespace-significant syntax. The "shortage" says more about what managers think programmers should cost than about finding the talent.
Reality check to Sarah: I do Linux device driver development for very nice pay on a machine that is literally on another continent, separated by two oceans and 4 satellite links. That machine cannot be located nearer to me; my living room isn't big enough and doesn't have sufficient power to run it, there are no better communications options available, and I don't have several million dollars for the equipment. Cross-compiling is not sufficient to deal with issues so close to the bare metal. If you want to change my tools, I have some non-negotiable requirements: 1) the tool must be publicly documented in full, with zero closed-source or binary-only bits at all; 2) it must be compilable and fully functional on non-PC hardware without a GUI of any kind; and 3) it must be completely functional and usable over a low-speed, high-latency network with significant round-trip times for character echo. If your 'improvements' can't cope with a 2-3 second RTT, then I'm not interested.
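To put rough numbers on that RTT requirement, here's a back-of-the-envelope sketch. The keystroke and line counts are illustrative assumptions, not measurements from my link:

```python
# Rough effect of round-trip time on remote interactive work.
# All figures below are illustrative assumptions, not measurements.

def session_overhead(rtt_s: float, round_trips: int) -> float:
    """Seconds spent purely waiting if each operation needs a
    full round trip before the next one can start."""
    return rtt_s * round_trips

# A tool that round-trips per keystroke over a 2.5 s RTT path,
# for a hypothetical 200-keystroke edit:
per_keystroke = session_overhead(2.5, 200)

# A line-oriented tool doing the same edit in 10 line submissions:
per_line = session_overhead(2.5, 10)

print(per_keystroke)  # 500.0 -- over eight minutes of dead air
print(per_line)       # 25.0
```

That's the whole argument for line-mode, plain-text tooling in one division: anything chatty at the character or widget level is unusable at satellite latencies.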
Plain text is used because it works seamlessly over highly degraded communications scenarios and Linux has to be buildable and supported on non-PC systems. Until you can deal realistically with that concept, you've got no business trying to change things that work. It's also characteristic of really large development projects that new developers have to learn the conventions being used by the project; if you're not a team player you don't have any business being there. If new developers can't learn how the project operates and the tooling involved, then there isn't much hope for them to progress successfully.
TL;DR: Ain't broke, don't fix it. Your proposal neither works nor advances the development process in an environment that has to live on so many different platforms -- and those environments are getting more diverse, not less. The current setup does. Working code beats style in every conceivable case.
> I disagree. You do it right the first time and having the licenses you need when commissioning
> the system is very important. If you wait to sort it out, the line goes down in a short period of
> time because it doesn't get taken care of and then you have a full compliment of staff standing
> around looking at screens notifying them that management has borked up something else.
Preach it, brother.
I'll add that you also design and script the d*rn thing so it's fire-and-forget to do an install. It may take a little longer to roll out, but you're not wasting pub time on figuring out what some yahoo forgot to do this time. Borked machine? Fix: wipe the machine and reprovision. Installers that insist on a GUI are evil and anyone designing such a crock should be forced to provision it for a medium-sized university to learn the error of their ways.
The Google folks don't get out much, do they? If you have to deal with non-Unix systems, ssh and sftp are not widely available, and implementing them is not a trivial coding exercise. FTP has been mandated as part of the TCP client suite since the 1970s, and can be relied on to be present everywhere (you may not run it, but you can bet the business does).
For Mike, NJE over TCP isn't that hard to implement. It works just fine, supports encryption, and is completely documented. Unlike ssh/scp/sftp. If you have to look at the source code of another implementation, it's not ready to be a proper Internet standard.
> People with disabilities can't "just move on" as easily as those without.
> I realise it may be quite a stretch for you to consider that people who are
> legally blind just cannot see the world in the same way that you do.
Amen, brother.
It's a fun party trick to blindfold the loudest proponent of Win10 in a room and say "Good luck." as he tries to actually navigate that steaming pile of dreck. Every software designer should spend a week using nothing but the "accessibility interfaces"; it's vastly enlightening as to what really needs to be done.
Main reason (and the thing that drove wide acceptance) for these machines is office environments where different people want different kinds of drinks at different times. Instead of having 3 or 4 different pots to serve the decaf/weak/strong/tea drinkers and having to wire, clean and maintain different brewers (and listen to the bitching about cross-contamination and drinking coffee that may have been sitting on the burner for *hours*), you have one machine, and always have fresh (relatively) coffee of exactly what the user of the moment wants. No fuss, no muss, everybody gets what they want when they want it, even if it's not the ne plus ultra of the type. It's passable (mostly), and minimum whining. Once the K-Cup machines became popular in the offices, people wanted to take them home. Simple as that (same reason we all have to deal with Microsoft Word -- it's not the best tool, but it's the tool we encounter most often, and there's a lot of pressure to not reinvent the wheel. Inertia is a powerful force.)
The DRM thing is just stupid. Keurig has the dominant market lock on these machines as the lowest common denominator (many of the better machines like the Tassimo gear have simply dropped off the market in many places). The K-cup is simple and ubiquitous, and Keurig is still getting a decent volume of the market. It's a classic example of the Hayes modem or VHS videotape format phenomenon -- may not be the best option, but it is low cost and ubiquitous. Introducing a new format with no observable improvement is just silly -- as Keurig is finding out the hard way. There's no advantage to the K2 cups -- so you can make a pot instead of one cup? Defeats the original purpose. The more options there are for feeding the machine, the more market share Keurig retains for the machines and the design.
>>> easy to replicate, by the way.
>> It was, actually.
>So, problem solved ?
Given that Gavin created this article from his interpretation, I'd wonder who thought there *was* a problem. We're not done.
Wrt replication, the main value of having the same source as the one used in the Sun OpenSolaris release was to credibly be able to sign code modules and not have to do all the crypto certification that the business types want for commercial systems. They're funny about that, and given that the Intel releases had a binary-only version (that's what Nexenta and the other OpenSolaris variants used), we were hoping that we'd also be able to have the same binary-only version available. No such luck, apparently.
Wrt the secret admirer, Gavin quoted someone in the article who chose to remain nameless and who seems to be the source of all this Sturm und Drang about the z port. I was wondering who it was, and what he or she thought the point was.
Someone must be pretty scared to produce that much FUD.
-- db
Well said, Robert.
I'm certainly not happy with the current situation, but I'm not crying in my beer either.
I'm a little surprised by the vitriol of some of the posters here. It's a setback, but it's not a permanent one. And the source is out the gate. They can't call it back. So, if they don't want to play, OK. If one reporter misinterprets a situation based on an unsigned note, whatever. We've got people to talk to.
It's also one of the funniest things about the Z port -- libi18n is owned by IBM. Sun had to get IBM's permission to let us compile it for IBM's own hardware. So the flow went from SNA->Sun->IBM->Sun->SNA, generating lots of lawyer food along the way.
If someone really wants to build a SysV-compatible Unix, there is that alternative, and IBM seems to have bought a clue on licensing intellectual property.
--d b
> Thanks for that clarification, David Boyes, but it does seem to confirm that you
> are still being right royally fcuked, although you seem to be bending over and
> resigning yourself to it.
Won't argue that point at all. I was pretty steamed during that discussion with the Oracle person who's contacting the various project leads.
But, as I noted in another post, they can't kill the project because it's not theirs to kill. It's mine to decide, and it ain't over yet. It's under review, and we had a strong business case when we started, and we *still* have a strong business case for doing it. Asking for more voices is just good strategy, especially when some of those voices write 6 and 7 figure checks to Oracle regularly.
If Larry's reading this, I'd be very glad to discuss the project and the business case with him or anyone at Oracle. We're still willing to do it.
> Adding memory to your mainframe? $6000/GB. Adding memory to your
> SPARC server? $100/GB. Which would you willingly choose?
The one that's already paid for and amortized into my budget. The one that doesn't take up any more space, or cooling or power, or management infrastructure. The one that doesn't take any more people to deploy and manage.
Sometimes it's not about acquisition cost. It's about how much it costs to maintain the environment around it over time. HVAC and buildings and people are a LOT more expensive than any computer system these days. We have a pretty diverse data center (everything from an SGI Origin to a Superdome to an E15K to a Symbolics, plus the obligatory Intel horde). We have several new SPARC boxes in the lab. Know which ones hurt the most? The Intel and PA-RISC boxes. Heat output vs performance on those systems is far, far worse than on the others. Our z system supports about 50 virtual machines, and consumes about half the power and cooling of the Intel plant.
To the poster's point, I like the newer SPARCs from a hardware designer's point of view -- it's a stable architecture and a good general-purpose CPU. But I'd go bankrupt if we replaced the Z: I couldn't afford the cooling bills for the quantity of SPARC gear it would take, and replacing the Z with that much SPARC or Intel hardware would also require me to expand my data center. Suddenly that cheap hardware isn't so cheap after all.
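The quoted $/GB figures only capture acquisition. Here's a toy total-cost comparison to make the point; the $/GB numbers come from the quote above, but every opex figure is a made-up placeholder, not real site data:

```python
# Toy total-cost-of-ownership comparison over a planning horizon.
# The $/GB acquisition prices come from the post being replied to;
# the capacity, opex, and horizon figures are hypothetical.

def tco(gb, price_per_gb, annual_opex, years):
    """Acquisition cost plus a flat annual running cost."""
    return gb * price_per_gb + annual_opex * years

# 256 GB added to an already-installed, already-amortized mainframe:
# no new floor space, cooling, or headcount, so incremental opex ~ 0.
mainframe = tco(256, 6000, annual_opex=0, years=5)

# Same capacity on new commodity boxes: the RAM is cheap, but assume
# (hypothetically) $400k/yr extra in power, cooling, space, and staff.
commodity = tco(256, 100, annual_opex=400_000, years=5)

print(mainframe)  # 1536000
print(commodity)  # 2025600
```

Plug in your own opex numbers; the point is simply that once the environment around the box costs more than the box, the $/GB sticker comparison stops deciding anything.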
As far as I'm concerned, user-programmable opcodes would be the way to go for the future. Why have only a single instruction set? Why shouldn't I just microprogram the set I want, and run whatever I want (modulo device emulation)? Stay tuned for the next drop of OpenSolaris for Z -- there's a lot of neat new stuff in there that explores that idea.
> Can't they just fork their own "Zolaris" ? Apparently they want to continue to get Oracle
> code for their z/Solaris version, which they do not get. libc should also be pretty
> easy to replicate, by the way.
It was, actually. The bits that are still closed source are:
libi18n (owned by IBM, needed to build a kernel; replaced by a cleanroom implementation)
the cryptographic code (which is export controlled, and licensed from another company)
POSIX versions of a few common utilities (replaced with the GNU equivalents, as was also done on the Intel platform in the Intel OpenSolaris)
the extra bits that come from other parts of Sun that don't do open source (not really part of the OS)
So, in short, the same missing bits in all the other OpenSolaris distributions, except for the one that Sun produced where they could use their existing licenses for the missing bits. Funny thing, that.
I'd be interested to know who this secret admirer is.
> If there was *one* person working on this port, and he's SO IMPORTANT to
> your business, why can't you afford to hire him? Or are you just upset that your
> free lunch is now gone, and he now has to be on your payroll and not Sun's/Oracle's?
Ad hominem snark aside, the simple truth (other than the fact that he's happily retired) is that his half of the partnership was to navigate the Sun bureaucracy and get code to where it needed to go based on his position in Solaris Engineering. If he no longer works for Sun/Oracle, there's not much chance that he can still do that. From conversations with some friends in the Bay Area, apparently the bloodletting at Solaris Engineering was pretty thorough. (Gun. Head. Fire, IMHO -- reminds me of the Compaq/DEC merger.)
As for the other 5 people working on this project, well... they're still here. As are their paycheck expenses.
--d b
I would, but he's decided to retire and goof off for a while. I can respect that. I'm told that some people don't work 18 hours a day. It's a strange thing to consider.
I've been paying all the rest of the people (and doing late nights and weekends myself) who work on this project full time for two+ years. I'd seriously consider paying someone at Oracle to work with us -- but that assumes that Oracle is willing to do so. Which is the exact point we're discussing, and where Gavin went a little overboard with his quote editing.
We (SNA) want to be a partner, not a supplicant. The Oracle half of the partnership is no longer available, and it doesn't seem likely that a new contact will be appointed, or that they want the partnership to continue. I do, but it takes two to work together. I don't think that's too much to ask.
-- db
Was anyone serious? I'd say so. Main reason: most sites this would be interesting for are facing facility upgrades to add more x86 capacity, which is not low cost. They already own the Z hardware -- because there's still no credible replacement for high-transaction-rate z/OS-based systems, except, maybe, Solaris (hmm) -- and it's paid for.
To be fair: Oracle is NOT blocking the project; it was never theirs TO block. Evaluating the project is what I would expect them to do with a new acquisition. Whether they continue to participate is another question, WHICH IS THE ONLY THING UNDER DISCUSSION.