Posts by cschneid
95 publicly visible posts • joined 18 Oct 2009
Honeywell, I blew up the qubits: Thermostat maker to offer cloud access to 'world's most powerful quantum computer' within months
If you're writing code in Python, JavaScript, Java and PHP, relax. The hot trendy languages are still miles behind, this survey says
Re: COBOL
Most of the StackOverflow COBOL questions seem to be from students.
Two former co-workers were telling me a story about one of them (a 30-year COBOL veteran) working his way through some CICS COBOL code that used raw sockets to talk to an external provider, with the other (a 20-year Java veteran) looking over his shoulder. The Java vet had no trouble keeping up, to the mild annoyance and impressed surprise of the COBOL vet.
COBOL is just another programming language. It's not hard to learn, it's not hard to understand, it's just out of fashion despite being really good for its problem space.
You don't write a regex engine in COBOL. You don't implement the Quicksort algorithm in COBOL. I mean, you probably could, but that's not its problem space so just don't.
If you're VISA or American Express doing OLTP and you need serious speed, reliability, recoverability, and securability then COBOL and CICS on a z15 are your jam.
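One concrete reason COBOL suits that problem space is its native fixed-point decimal arithmetic: money stays exact. A minimal sketch of the same idea in Python (standing in here purely for illustration), using the standard `decimal` module; the COBOL picture clause in the comment is just an example of how COBOL declares such a field:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so money drifts:
float_total = sum(0.10 for _ in range(100))           # 100 ten-cent charges
print(float_total == 10.0)                            # False

# Fixed-point decimal -- what COBOL does natively with a field like
# PIC 9(7)V99 COMP-3 -- keeps the cents exact:
decimal_total = sum(Decimal("0.10") for _ in range(100))
print(decimal_total == Decimal("10.00"))              # True
```

Which is why "just rewrite it in JavaScript" makes the finance people nervous.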
Built to last: Time to dispose of the disposable, unrepairable brick
I hate computers
Things I learned from Y2K (pt 87): How to swap a mainframe for Microsoft Access
Re: A System/38 aint no mainframe, boy!
System/360 -> System/370 -> System/390 -> System z -> IBM Z
That's my recollection, not backed up by anything; Wikipedia disagrees, and I say it's wrong, in no small part because it simply redirects System z to IBM Z and retcons the z900 as the latter.
None of this is helped by the conflation and confluence of architecture names and marketing names.
System/38 is not a mainframe. A static copy of a database is not a replacement for the source system from which the copy was obtained. I wonder if a copy of the CD was made by any of the staff.
In deepest darkest Surrey, an on-prem SAP system running 17-year-old software is about to die....
Lack of received wisdom
<sarcasm/>
I thought the received wisdom these days was that the solution to a Government IT problem, any IT problem really, is Outsourcing, DevOps, and Cloud.
Repeat after me, in every meeting, at every opportunity: Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud. Outsourcing, DevOps, and Cloud.
Your training is now complete. Pick up your diploma at the printer on your way out.
'I am done with open source': Developer of Rust Actix web framework quits, appoints new maintainer
PL and TL
> The episode demonstrates that expert developers are often not expert in managing the human relations aspect of projects that can become significant.
Prior to leaving the codeface for a seat by the fire, the last place I worked was developing the concept of a Project Lead and a Technical Lead who would work in tandem. The former would arrange meetings, record notes, keep the project schedule, communicate with the user community and with management, and handle the myriad of complexities that surround a project. The latter was the architect, designer, and editor-in-chief for the myriad of complexities that are the project.
The Curse of macOS Catalina strikes again as AccountEdge stays 32-bit
AppSheet. Gesundheit! Oh, we see – it's Google pulling no-code development into a cloudy embrace
Visual Programming
This cycle has been repeating itself since at least the early 90s (if you include the Information Center concept, then a decade prior). From a "catalog of parts," drag and drop icons representing {database, dumb terminal, files in various formats, et al.} and "wire" them together to produce a result. Paint a GUI and connect it to the inputs and outputs of those icons.
Security is hard. Data integrity is hard. Compliance is hard. Maintenance, ownership, and governance are necessary.
There seems to be a misconception that the bottleneck in application development is a lack of bodies to do the work, and that any old body will do.
Ditch Chef, Puppet, Splunk and snyk for GitLab? That's the pitch from your new wannabe one-stop DevOps shop
Santayana
[Sijbrandij:] "It is not so much that applications get moved, because that's super costly. But you could say all new applications will be on a different cloud from now."
And thus it was acknowledged that IT shops must encompass the skill sets to indefinitely drag along the baggage of applications developed for what previous generations thought was to be the one and only platform, each with its own quirks and foibles, never actually migrating to a single underpinning, forever dealing with integrations reliant on tenuous agreements and bits of string.
Just like their ancestors, they are victims of management decrees. "All new systems will be built in PL/I." "All new systems will be built with a client/server architecture." "All new systems will target OS/2 as the client." "All new systems will be web based." "All new systems will be cloud based." "All new systems will target AWS." And so on and so forth, ad infinitum, from now until the end of time, world without end, forever and ever.
IBM looks to boost sales the same way it has for 65 years – yes, it's a new mainframe: The z15
The results are in… and California’s GDPR-ish digital privacy law has survived onslaught by Google and friends
Re: "Will never happen, because, free speech. "
I believe the Citizens United decision established that it does.
Now that's what we're Tolkien about: You need one storage system to rule them all and in the darkness bind them
not so much counterpoint as supplemental
"A single version of the truth for your business is a lofty yet essential goal to maximize business opportunity." This is the author's conclusion, with which it is difficult to disagree. Some of what precedes this is, however, at odds with history as I experienced it.
Many organizations started out with one single version of the truth in a centralized database. Then came the 1980s, and relatively quickly many people had hitherto-unheard-of computing power on their respective desktops, and the ignorance to equate capability with ability.
Much has been made of the "PC Revolution" and the empowerment of the end user whilst slathering pejoratives on centralized IT. Suffice it to say that organizational culture kept these two essential parts of the whole at loggerheads.
The move away from a single source of truth was due, not to caution, but to the perceived neglect from central IT. The end users needed to perform what-if analysis, and central IT was not forthcoming with applications to do that. Enter Lotus 1-2-3, dBase, et al., and what came to be known as "shadow IT."
There was no source code management, there was no test version of the "database"; these were not IT people and they knew not of these things. Ignorance can be remedied, but no one saw fit to do so.
And just why was central IT not providing a single application to access the single source of data? Management prioritized those requests far enough down the queue that they were never addressed.
Again, I don't disagree with the conclusion that "[...] it's understandable to have secondary data sprinkled everywhere. It's also a smart move to unify it into a single source of truth."
And again, I think some things are missing from the provided two routes to the truth: data administration and governance. It is essential that whoever is accessing the one true source of data understand what it is they are getting. If there's a "current status" column, as of when is it current? Also, GDPR, PII, HIPAA, SOX, et al.
Pentagon makes case for Return of the JEDI: There's only one cloud biz that can do the job and it starts with an A (or rhymes with loft)
Elsewhere on this site...
LzLabs kills Swisscom’s mainframes – but it's not the work of a vicious BOFH: All the apps are now living on cloud nine
Interesting. One of the advantages of CICS is its resource management, where an application can update a DB2 table, a VSAM file, an IMS segment, and then send an MQ message only to encounter a problem, abend, and all those updates never happened. LzLabs claim to be able to do the same.
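That all-or-nothing behaviour, a unit of work spanning several resource managers, can be sketched in miniature. This is a toy Python illustration of the semantics only, not the CICS API; every name in it is made up:

```python
class UnitOfWork:
    """Toy model of CICS-style syncpoint semantics: updates registered
    during a unit of work are applied together at commit, or discarded
    entirely on rollback (the abend case)."""

    def __init__(self):
        self._pending = []                 # deferred (resource, action) pairs

    def update(self, resource, action):
        self._pending.append((resource, action))

    def commit(self):                      # the syncpoint: apply everything
        for resource, action in self._pending:
            action(resource)
        self._pending.clear()

    def rollback(self):                    # the abend: "it never happened"
        self._pending.clear()


db2_table, vsam_file, mq_queue = [], [], []

uow = UnitOfWork()
uow.update(db2_table, lambda t: t.append("row"))
uow.update(vsam_file, lambda f: f.append("record"))
uow.update(mq_queue, lambda q: q.append("message"))
uow.rollback()                             # problem encountered before syncpoint
print(db2_table, vsam_file, mq_queue)      # [] [] [] -- no partial updates
```

The real thing is two-phase commit across genuinely independent resource managers, which is rather harder than clearing a list; that difficulty is exactly why the LzLabs claim is interesting.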
There is much talk of load modules, but no mention of program objects, which are the format of any COBOL application recompiled with IBM Enterprise COBOL v5+. That may not matter, as LzLabs seemingly has an emulation layer. I say seemingly because their product data sheets are not available to the hoi polloi.
Customers are, however, still stuck with one vendor, just as they were with their IBM Z. Also, I didn't see a mention of cost comparisons. I presume LzLabs is cheaper, at least for the honeymoon period, taking into account TCO and not just TCOWICAFE (Total Cost Of What I Can Account For Easily).
I wonder about SMF, which is useful for post-event analysis.
It seems like an awful lot of effort is being put into mitigation of a perceived problem: lack of mainframe skills. I think it's probably cheaper to just train the new staff, but that would make them skilled labor instead of fungible resources.
This move by Dropbox will reduce users' files to tiers: Rarely, regularly accessed data now kept separate
'Java 9, it did break some things,' Oracle bod admits to devs still clinging to version 8
Santayana
In the mainframe space, famous for its backward compatibility, IBM sells a COBOL compiler and has for the last half century or so. In ~1985, when IBM introduced their VS COBOL II compiler, which implemented at least some of the 1985 standard, they broke some things that worked in their OS COBOL compiler, which VS COBOL II replaced. Not everything that broke was standards-related, but still, VS COBOL II was a complete rewrite of the COBOL compiler from the ground up. It was kind of a big deal to migrate from one product to the next.
Over the next 30 years IBM continued to release new major and minor versions of their COBOL compiler under various names (COBOL/370, COBOL for MVS and VM, COBOL for OS/390 and VM, et al.), implementing new features as they went. These products all had essentially the same "engine" powering source code parsing and object code generation. Upgrades were essentially a doddle.
Fast forward to the release of v5 of IBM Enterprise COBOL. This product breaks some things because it is a complete rewrite from the ground up. And customers are surprised to discover they have code that has relied on decades-old non-standards-conforming undocumented behavior which went away in the rewrite. It's kind of a big deal to migrate from versions prior to v5 to v5 or later.
That it only took Java ~20 years to reach a version/release that broke things is the surprise, what with time moving at the speed of the internet and all.
Apple: Trust us, we've patented parts of Swift, and thus chunks of other programming languages, for your own good
Official: IBM to gobble Red Hat for $34bn – yes, the enterprise Linux biz
Hey cool, you went serverless. Now you just have to worry about all those stale functions
Slack bots have the keys to your processes. What could go wrong? Well...
The many-faced god of operational excellence, DevOps and now 'site reliability engineering'
Causes of software development woes
IBM melts down fixing Meltdown as processes and patches stutter
Remember CompuServe forums? They're still around! Also they're about to die
Guess who's now automating small-biz IT jobs? Yes, it's Microsoft
Java SE 9 and Java EE 8 arrive, 364 days later than first planned
Blame Canada? $5.7m IBM IT deal balloons to $185m thanks to 'an open bag of money'
Oh, the things Vim could teach Silicon Valley's code slingers
Is security keeping pace with continuous delivery?
UK's Universal Credit IT may go downhill soon, warns think tank report
Why we should learn to stop worrying and love legacy – Fujitsu's UK head
magpies
[W]e shouldn't necessarily assume something is irrelevant because it is old.
Lest we become magpie developers.
More succinctly, in the words of William Inge...
There are two kinds of fools: one says "This is old, therefore it is good"; the other says, "This is new, therefore it is better."
Cutting edge security: Expensive kit won't save you
follow the (lack of desire to spend) money
Corporation X will not be willing to pay for the skills outlined in the article until a well-publicised breach occurs. Staff possessing those skills will then be acquired and kept until the next round of redundancies. Repeat until this is so commonplace it is no longer news.
It's cheaper to hire someone who shouts random quotes from a NIST manual.
You, yes YOU: DevOps' people problem
Hey techies! Ever wanted to adopt a Congresscritter? Now's your chance
Typical duties may include:
Briefing Members and staff about technology issues
Writing legislation
Preparing for hearings or markups
Meeting with stakeholder groups and building coalitions
[the above is from the Fellowship's website]
People with a CompSci background might be okay at that first item, the rest seem to require skill sets antithetical to a CompSci background.
When the Schmidt hits The Man: Look what the NSA made Google do
Job for IT generalist ...
We don't want your crap databases, says Twitter: We've made OUR OWN
IT departments are BRATTY TEENAGERS
The secret to getting rich in 2012: Open APIs
Cobol cabal will take over THE WORLD Australia
IT recruiters warn over migration caps
One possible translation
One possible translation of "we can't find those skills locally" is "none of the locals is willing to work for the crap wages we're willing to pay."
Research at the University of California, Davis showed this in the H-1B ramp-up to Y2K in the USA. Source: http://heather.cs.ucdavis.edu/h1b.html.
Yes, it's from 1998.
IBM countersues Neon over zPrime accelerator
Capacity based pricing
The root problem is capacity-based pricing. That, and the tug-of-war between vendors (including IBM) wanting to squeeze as much money out of their customers as possible and the customers wanting to pay £0.00 for software. But that's present in most markets.
Get rid of the abomination that is capacity-based pricing and there is no need for specialty processors, zPrime doesn't have a raison d'être, and everyone lives happily ever after in mainframe-land. Neon's obviously talented staff can write some other clever and useful product.
Welcome to the out-of-control decade
Danger lurks in the clouds
faith, et al.
Ah, the cloud, a faith-based computing initiative. Have faith that the provider actually knows what they’re doing with respect to backup, security, redundancy, etc. Have faith that the provider won’t be purchased by another company in order to kill the service in favor of the acquirer’s – the one you deliberately didn’t choose for reasons of your own. Have faith that the provider is actually a responsible business – reputation in the Internet Age meaning having a corporate history that can be measured in months.
Me? I’m an atheist.
On the (forgive me) client side, the concept of degrading function gracefully will apparently have to be resurrected. Despite what your "mobile apps for dummies" book told you, memory isn't unlimited, persistent storage isn't unlimited, the network isn't always available, or as fast as you'd like, or as reliable as you'd like.
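The "degrade gracefully" idea above can be sketched as a fetch that retries a flaky network and then falls back to a cached value rather than failing outright. A minimal Python sketch; every name in it is hypothetical:

```python
import time

def fetch_with_fallback(fetch, cache, key, retries=2, backoff=0.0):
    """Try the (possibly unavailable) network a few times; on persistent
    failure, serve the last cached value instead of failing outright."""
    for attempt in range(retries + 1):
        try:
            value = fetch(key)
            cache[key] = value            # refresh the cache on success
            return value, "live"
        except OSError:                   # network down, slow, or unreliable
            if attempt < retries:
                time.sleep(backoff)       # simple fixed back-off between tries
    if key in cache:
        return cache[key], "stale"        # degraded, but still functional
    raise RuntimeError("no network and nothing cached")


def always_down(key):                     # simulate the network being gone
    raise OSError("network unreachable")

cache = {"profile": {"name": "cached copy"}}
value, status = fetch_with_fallback(always_down, cache, "profile")
print(status)                             # stale
```

The app keeps working on yesterday's data instead of presenting a spinner of death, which is what "degrading function gracefully" used to mean before unlimited everything was assumed.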
T-Mobile's compensatory offering of $100 (presumably that's USD) is interesting. Woefully inadequate, but interesting. Exactly how much is that industry insider's private number worth?