The mainframe is dead, long live the mainframe!
LzLabs kills Swisscom’s mainframes – but it's not the work of a vicious BOFH: All the apps are now living on cloud nine
Swiss software upstart LzLabs says its first customer has successfully kicked the mainframe habit and moved all of its big iron applications into the cloud – without having to rewrite or recompile any code. Swisscom, the country’s largest telco, has replaced 2,500 MIPS (Millions of Instructions Per Second) worth of IBM …
COMMENTS
Thursday 16th May 2019 13:00 GMT Caff
Re: mainframe switch
My only experience is with Enterprise Server from Micro Focus; it ran well and could run JCL, batch, and CICS regions. The COBOL was just exported and wrapped up as a DLL.
Would be interested to know what the pricing/feature comparison is like, but trying to get that out of those companies publicly is nigh on impossible.
Thursday 16th May 2019 13:00 GMT Caff
Re: Yes but--
Generally a rewrite is too expensive or risky, so the legacy COBOL is wrapped up for each function/feature. New applications are then written against the new system, and you wait for the legacy parts to die off naturally. How long that takes depends on the timescale of the business: banking and pension products have such long lifetimes that the code lasts decades.
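The wrap-and-wait approach described above is often called the strangler fig pattern. A minimal sketch (function and feature names are hypothetical) of a facade that routes each business function to either the legacy wrapper or its rewritten replacement:

```python
# Strangler-fig facade: each feature routes to either the legacy COBOL
# wrapper or the rewritten implementation. As features are migrated,
# entries flip from legacy to new; when none remain, the legacy system
# can be retired.

def legacy_calc_interest(account_id):
    # Stand-in for a call into the wrapped legacy COBOL module.
    return f"legacy-interest:{account_id}"

def new_calc_pension(account_id):
    # Rewritten implementation on the new platform.
    return f"new-pension:{account_id}"

ROUTES = {
    "calc_pension": new_calc_pension,       # already migrated
    "calc_interest": legacy_calc_interest,  # still legacy
}

def dispatch(feature, account_id):
    # Callers only ever see the facade, never which side serves them.
    return ROUTES[feature](account_id)

print(dispatch("calc_pension", "A1"))   # new-pension:A1
print(dispatch("calc_interest", "A1"))  # legacy-interest:A1
```

The point of the pattern is that the cut-over per feature is a one-line routing change, not a big-bang migration.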
Thursday 16th May 2019 15:00 GMT Bronek Kozicki
Re: Yes but--
I do not think COBOL itself is that much of a problem. The age of the hardware platform is, and the fragility of the code is, but the choice of language necessitates neither. Lots of people apparently use PHP for serious applications, despite the fact that it is much worse than COBOL.
Friday 17th May 2019 15:03 GMT Anonymous Coward
Re: Yes but--
Based on the picture, their mainframe is a z14, so it can't be more than two years old; the "age" of the mainframe is not the issue. IBM's Z systems are designed with darn near redundant everything: if one component fails, there is another that is either active or in hot standby. Code is fragile on any and all platforms. We started a migration from our mainframe to x86 under Linux using Java 10 years ago because the code on the mainframe was too fragile to change; if you changed one program it could cause 5-20 other programs to break. The programs on the mainframe were 25+ years old.
Three years after going into production with the new Java-based system, the director in charge of the migration said we had to come up with a better way, because after just three years the Java code was too "brittle": if we changed one thing in one program, the whole system would just die.
Thursday 16th May 2019 23:40 GMT alanplayford
Re: Yes but--
Not really missed the point, Keith, but consider this...
Usually migration has involved recompiling the source, but can you trust that the source is totally up-to-date?
More to the point, it buys valuable time to consider modernization paths, using cheaper resources, to enable legacy stuff to be re-written in modern languages which CAN be supported now and in the future.
Saturday 18th May 2019 20:12 GMT A.P. Veening
Re: Yes but--
Usually migration has involved recompiling the source, but can you trust that the source is totally up-to-date?
I was part of a team that solved a similar problem a couple of years ago. We solved it by running everything in tandem for a while and comparing the results (automated). The night jobs were easy: we just restored a backup made immediately before the night run and kicked it off whenever we felt like it, setting the appropriate system date and time on the new system. And yes, we caught a couple of differences; the problem was usually that somebody had modified the sources on production without backporting those changes. As a result of our efforts, IBM learned how to do something it had considered impossible: a one-step OS/400 migration from version 5.3 to 7.1.
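The tandem-run idea above is easy to sketch: feed the same inputs to both systems and automatically diff the outputs. A minimal illustration (both "systems" here are toy stand-ins; the deliberate divergence simulates the unbackported production change described above):

```python
# Parallel (tandem) run: the same records go through the old and the
# new system, and any output mismatch is recorded for investigation.

def old_system(record):
    # Legacy behaviour.
    return record["amount"] * 2

def new_system(record):
    # The >= 100 branch simulates a production fix that was never
    # backported to the sources the new system was built from.
    amt = record["amount"]
    return amt * 2 if amt < 100 else amt * 2 + 1

def compare_run(records):
    mismatches = []
    for rec in records:
        a, b = old_system(rec), new_system(rec)
        if a != b:
            mismatches.append((rec, a, b))
    return mismatches

diffs = compare_run([{"amount": 10}, {"amount": 150}])
print(diffs)  # [({'amount': 150}, 300, 301)]
```

In practice the comparison runs over restored backups, exactly as the comment describes, so both sides see identical starting state.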
Thursday 16th May 2019 10:10 GMT Quentin North
Tools
The key thing I remember from my mainframe days is that it is not just about the CICS COBOL apps, which Micro Focus can support; what about all the JCL, the batch scheduler loads, etc.? Also, key tools of IBM mainframes like RACF really do make the environment robust, so I do wonder whether this platform will be nearly as reliable or secure.
That all said, I remember when Baby/36 came along and allowed System/36 RPG II applications and OCL to run on networked PCs; it practically killed IBM's then-ageing midrange platform. Still, the successor AS/400 trundles on.
Friday 17th May 2019 20:33 GMT Michael Wojcik
Re: Tools
Micro Focus Enterprise Server has JCL support (JES2, JES3, and VSE variants), and has for years. Batch support includes REXX and TSO, and scheduler integration.
ES has a security mechanism which provides functionality similar to RACF, though since it's not tightly integrated into the OS it assumes your migrated mainframe applications aren't hostile.
LzLabs isn't our only competitor playing in the "migrate mainframe applications to the cloud" space. I don't know anything about their offerings beyond what's in the article, though. (I don't spend a lot of time looking at our competitors; I'm focused on improvements that customers actually ask us for, or that we identify internally. Other people research the competition.)
Thursday 16th May 2019 15:00 GMT cschneid
Interesting. One of the advantages of CICS is its resource management: an application can update a DB2 table, a VSAM file, and an IMS segment, then send an MQ message, only to encounter a problem and abend, and all those updates never happened. LzLabs claims to be able to do the same.
There is much talk of load modules, but no mention of program objects, which is the format of any COBOL application recompiled with IBM Enterprise COBOL v5+. That may not matter, as LzLabs seemingly has an emulation layer. I say seemingly because their product data sheets are not available to the hoi polloi.
Customers are, however, still stuck with one vendor, just as they were with their IBM Z. Also, I didn't see a mention of cost comparisons. I presume LzLabs is cheaper, at least for the honeymoon period, taking into account TCO and not just TCOWICAFE (Total Cost Of What I Can Account For Easily).
I wonder about SMF, which is useful for post-event analysis.
It seems like an awful lot of effort is being put into mitigation of a perceived problem: lack of mainframe skills. I think it's probably cheaper to just train the new staff, but that would make them skilled labor instead of fungible resources.
Friday 17th May 2019 20:33 GMT Michael Wojcik
One of the advantages of CICS is its resource management, where an application can update a DB2 table, a VSAM file, an IMS segment, and then send an MQ message only to encounter a problem, abend, and all those updates never happened.
Coordinating multiple resource managers in a transaction is not unique to CICS. It's pretty common, in fact.
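The "all those updates never happened" behaviour both comments describe is two-phase commit across resource managers. A minimal sketch (class and manager names invented for illustration): the coordinator asks every participant to prepare, and only commits if all vote yes; a single no vote rolls everything back.

```python
# Two-phase commit in miniature: one coordinator, many resource
# managers (a DB2 table, a VSAM file, an MQ queue...).

class ResourceManager:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "idle"

    def prepare(self):
        # Phase 1: vote yes (durably ready to commit) or no.
        self.state = "prepared" if self.can_commit else "abort-voted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled-back"

def two_phase_commit(managers):
    if all(rm.prepare() for rm in managers):   # phase 1: voting
        for rm in managers:
            rm.commit()                        # phase 2: commit everywhere
        return True
    for rm in managers:
        rm.rollback()                          # any no vote aborts all
    return False

rms = [ResourceManager("db2"), ResourceManager("vsam"),
       ResourceManager("mq", can_commit=False)]
print(two_phase_commit(rms))        # False: MQ voted no
print([rm.state for rm in rms])     # every manager rolled back
```

Real coordinators (CICS, Tuxedo, XA-capable transaction managers, MS DTC) add durable logging and recovery on top, which is where the hard engineering lives.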
Thursday 16th May 2019 15:00 GMT James Anderson
Hercules
For decades now you have been able to run mainframe software on x86 (386 and up) using the open source Hercules emulator.
The main problem is that IBM won't license the software to run on that kit, or if it does, it charges eye-watering mainframe prices.
So it looks like they interpret the machine code but trap the CICS- and DB2-type calls and emulate them with Postgres.
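The trap-and-redirect approach guessed at above can be sketched in a few lines (everything here is hypothetical, a toy instruction set rather than anything LzLabs has documented): interpret ordinary instructions, but when the program issues a subsystem call, service it natively instead of emulating the mainframe subsystem.

```python
# Sketch of rehosting by interception: ordinary opcodes are
# interpreted, but subsystem calls (CICS, DB2...) are trapped and
# handed to a native implementation.

def native_db2_select(args):
    # In a real rehosting product this would be, e.g., a SQL query
    # against Postgres rather than an emulated DB2.
    return f"rows-for:{args}"

TRAPS = {"DB2_SELECT": native_db2_select}

def interpret(program):
    acc, out = 0, []
    for op, arg in program:
        if op == "ADD":
            acc += arg                    # plain instruction: interpret it
        elif op in TRAPS:
            out.append(TRAPS[op](arg))    # API call: trap, don't emulate
        elif op == "STORE":
            out.append(acc)
    return out

print(interpret([("ADD", 2), ("ADD", 3), ("STORE", None),
                 ("DB2_SELECT", "CUSTOMER")]))
# [5, 'rows-for:CUSTOMER']
```

The attraction is that only the (finite, documented) API surface needs faithful native implementations; the application's own logic runs unchanged.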
Saturday 18th May 2019 20:11 GMT x-IBMer
Re: Reliability?
To be truthful, you have the exact same issues with your legacy system: you still need an Internet upstream for customers to actually access the system, and you still need a properly managed datacentre. Swisscom manage datacentres and are a major telco, so you'd hope they know something about Internet connections.
Thursday 16th May 2019 17:08 GMT Anonymous Coward
So many questions, so few answers
For those not up to their armpits day to day in matters mainframe (as some of us are)... some perspective.
2,500 MIPS* is puny. In the current generation (z14), a one-engine box is over 1,800 MIPS, and a two-engine box is 3,400 and change. The largest z14, a 170 engine monster, is over 140,000 MIPS.
So, to begin with, this is a very very small mainframe environment. One wonders how complex the workload is. Things you can do (easily or not) with small and simple workloads may be impossible with large and complex workloads.
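The scale comparison above is quick to check (figures approximate, taken from the comment itself):

```python
# Rough scale check using the engine ratings quoted above.
small_env  = 2500     # Swisscom's stated workload, MIPS
one_engine = 1800     # one-engine z14, MIPS (approx.)
two_engine = 3400     # two-engine z14, MIPS (approx.)
max_z14    = 140000   # largest (170-engine) z14, MIPS (approx.)

print(small_env / max_z14)      # ~0.018: under 2% of a maxed-out z14
print(small_env <= two_engine)  # True: fits comfortably in a two-engine box
```

In other words, the migrated environment would rattle around inside even a small current-generation machine, which is the commenter's point.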
Curious as to how they are emulating SVCs, CICS APIs, MQ APIs, IMS, DB2, and the plethora of crucial third-party software (e.g. job schedulers, console automation, print management) that most mainframe shops rely on to process the workload.
Also curious as to how they manage the resource management/conflict controls built into the mainframe environment, within a single z/OS instance and across instances through SYSPLEX technologies.
Also curious as to how they intend to manage running critical workloads inside hardware platforms with less internal redundancy than a zXX box.
Also curious how they intend to support the same level of I/O throughput without the physical and logical capabilities of the mainframe platform.
For context: I toil in operations in a shop with four z13s with a total (active) rating north of 30,000 MIPS.
* IBM doesn't use the term "MIPS" anymore; its 'equivalent' is PCI, as seen in its published LSPR tables.
https://www-01.ibm.com/servers/resourcelink/lib03060.nsf/pages/lsprITRzOSv2r2?OpenDocument#z14
Friday 17th May 2019 13:29 GMT YetAnotherJoeBlow
Re: So many questions, so few answers
When I last used CICS, I had a lot of respect for its abilities, in a truly love-hate relationship. So this company services the CICS calls AND all of the other APIs as well? Yes, I have a lot of questions too! Had I read this article anywhere else, I would have called BS.
Saturday 18th May 2019 20:11 GMT x-IBMer
Re: So many questions, so few answers
And none of the points you make prevent larger workloads from being migrated in the same way. A particular focus many of us mainframers have is on the traditional Reliability, Availability and Serviceability (RAS) strengths used to market the mainframe and justify its enormous costs. However, it has long been the case that properly configured x86-based servers can also meet these RAS needs. The same applies to the, again traditionally touted, mainframe I/O rates.
Friday 17th May 2019 15:04 GMT Anonymous Coward
Not sure what they mean by "... find a way to drag the ancient mainframe architecture, kicking and screaming, into the 21st century." The underlying hardware uses the same memory modules as x86 servers and the same PCIe bus, just more of them, designed so that no failure of a single component will bring the whole system down. In fact, the I/O modules on a modern mainframe are small computers running their own little OS, probably a stripped-down Linux kernel.
Last time I checked, and I still work on IBM mainframes, all the operating systems that run on a mainframe let you write in programming languages other than COBOL. You can run several OSes on IBM's mainframes: z/OS, z/TPF, z/VSE, z/VM, and Linux. I'm not sure about z/TPF, but the other four all support Java, and all of them support C/C++.
The modern mainframe hardware is designed for resiliency. It has a redundant array of independent memory (RAIM; think RAID, but for memory), spare CPUs, and multiple I/O paths to the same device. In fact, other than RAIM, mainframes have had spare CPUs and multiple I/O paths for decades. As far as I am aware, there is no SPOF on a Z system (you can't say that for a pizza-box x86). It's not the mainframe of the '60s, but it can still run code that was written in the '60s.
There are just so many hardware features that zSystems have that x86 systems don't.
The hardware platform does not change the fragility of a program or application system. If it is fragile on a mainframe, the same code is fragile on any other hardware platform. Any application that is written and maintained for decades will become more and more complex, and thus more and more fragile, whether it runs on mainframe, x86, or SPARC hardware.
It's interesting that it took multiple x86 servers to replace a small mainframe when a single z14 can run thousands of virtual Linux systems.
Saturday 18th May 2019 20:12 GMT x-IBMer
The problem has never been how great the mainframe hardware and integrated software are; it has always been the cost. We had to teach our new manager at IBM to answer the question he constantly got from customers about why it was so expensive with "expensive compared to what?", which is the correct position to take when analysing which hardware/software combination solves your business problems at the most cost-effective point. While no one denies the mainframe is a great computing platform, the question is whether an alternative platform is 'good enough', especially if the costs are significantly lower.
If you dig a little into the PR, I think you'll find that either 4 or 8 virtualised x86 cores were enough to rehost the 2,500 MIPS; that's a pretty good ratio considering how cheap Intel is compared with IBM Z cores.
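The ratio claimed above works out as follows (a quick check using only the figures in the comment):

```python
# MIPS rehosted per virtual x86 core, for the 4- and 8-core readings
# of the PR figure quoted above.
mips = 2500
for cores in (4, 8):
    print(cores, "cores ->", mips / cores, "MIPS per core")
# 4 cores -> 625.0 MIPS per core
# 8 cores -> 312.5 MIPS per core
```

Either reading puts several hundred mainframe MIPS on a single commodity core, which is the commenter's cost argument in a nutshell.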