IBM looks to boost sales the same way it has for 65 years – yes, it's a new mainframe: The z15

IBM this month officially unveiled the newest addition to its Z-series mainframe lineup, the first in roughly two years. Big Blue's z15 family of big iron apparently features improved data security controls and better cloud integration, along with the usual array of hardware upgrades you would expect from the first major update to the …

  1. FuzzyWuzzys

    Like so many, I cut my teeth on an IBM mainframe many, many moons ago in the late 80s. So long as it's got the obligatory COBOL "included in the box", it'll still sell!

    1. RancidRodent

      "In The Box"?

      COBOL has never been included "in the box" - nor has ANY OS - it's an orderable feature. z13 gave us a massive jump in COBOL performance (with COBOL 5 and 6), and there's a similar leap with z15, as the new compiler generates architecture-optimised code. The old COBOL compilers always generated bog-standard 370 assembler that would run on a 30-year-old machine; now you can specify the oldest machine you want to support with the ARCH level parameter, and the result is code that can be anything up to twice as fast as generic 370 code (PL/I has worked like this for years). z15 also adds to the list of Java instructions supported in hardware, further boosting the already impressive Java performance on Z, including extensive SIMD support for popular mathematics classes.
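
      As a sketch of what that looks like in practice - the CBL/PROCESS statement is how Enterprise COBOL takes compiler options, but the program name is made up, and the exact mapping of ARCH levels to machine generations is my assumption, so check the compiler documentation for your release:

```cobol
       CBL ARCH(13),OPT(2)
       IDENTIFICATION DIVISION.
       PROGRAM-ID. PAYCALC.
      *    A higher ARCH level lets the compiler use newer hardware
      *    instructions (here assumed to target z15); the resulting
      *    module will not run on older machines. Lowering the ARCH
      *    level widens the range of supported machines at the cost
      *    of the newer, faster instructions.
```

      That trade-off is exactly why the old compilers' generic 370 code ran everywhere but left performance on the table.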

      The really useful (but undersold) feature of z is the ability for network traffic on the same "box" to bypass the network and communicate at memory speeds. This means your (thousands and thousands of) Linux machines running on z can "talk" to DB2 on z/OS at memory speed - no network latency - giving direct access to your core business data from "modern" applications. This isn't just SMC-D (which z also supports); it's baked into the network stack. z/OS 2.4 has a Kubernetes docker/container stack built in, so again, your hybrid cloud apps can talk directly to your core business data at source without the usual latency. And of course this data can be compressed and encrypted on the fly with practically no loss of performance, as all these hardware features are baked into z15.
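
      The "memory speed" point - and the objection further down the thread that the software stack still adds latency of its own - is easy to demonstrate on any OS with ordinary loopback sockets. A minimal sketch in plain Python (nothing z-specific; it just shows that traffic which never touches a physical NIC still pays a small, measurable cost inside the network stack):

```python
import socket
import threading
import time

def echo_server(sock: socket.socket) -> None:
    """Accept one connection, echo one message back, then exit."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(64)
        conn.sendall(data)

def measure_loopback_rtt() -> float:
    """Round-trip one small message over the loopback interface.

    Loopback traffic never leaves the host: the kernel copies it
    between socket buffers in memory, so only the stack's own
    queueing and copy overhead remains. Actual numbers depend
    entirely on the platform.
    """
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))          # ephemeral port
    server.listen(1)
    port = server.getsockname()[1]

    t = threading.Thread(target=echo_server, args=(server,))
    t.start()

    with socket.create_connection(("127.0.0.1", port)) as client:
        start = time.perf_counter()
        client.sendall(b"ping")
        client.recv(64)
        rtt = time.perf_counter() - start

    t.join()
    server.close()
    return rtt

if __name__ == "__main__":
    print(f"loopback round trip: {measure_loopback_rtt() * 1e6:.1f} microseconds")
```

      On a typical machine the round trip is a few tens of microseconds - far below physical-network latency, but not zero, which is really what both sides of the argument below are saying.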

      1. hammarbtyp

        Re: "In The Box"?

        "no network latency"

        In the words of Scotty "Ya cannae break the laws of physics"

        Greatly reduced network latency is probably what you meant

        1. IT Hack

          Re: "In The Box"?

          Rules of teh fiziks

          Not sure the normal network components are there at the bus level within mainframe architecture.

        2. Stephen Beynon

          Re: "In The Box"?

          If the data is not going onto the network then by definition there is no network latency. There will be memory latency for sure, but presumably that will be vastly lower than normal network latency.

          1. Lusty

            Re: "In The Box"?

            Not true. The in memory networking will still hit the network stack of the VM, it just won't ever touch the physical network stack of the host. This means that you will still have a network FIFO queue on the vNIC and therefore latency will exist even if very low.

            Intel based virtualisation has the exact same technology, and they also make the same spurious claims.

            1. Blane Bramble

              Re: "In The Box"?

              Ahh, but that isn't latency introduced by the network (those wires and switches and routers and stuff).

              Hence, no "network" latency.

          2. Anonymous Coward

            Re: "In The Box"?

            This is similar to Kubernetes rather than VM communication, at least for TCP/IP.

            In VMs, you usually have a TCP/IP network stack in each VM and a virtual network on the hypervisor.

            In Kubernetes, you effectively run a Berkeley Packet Filter (BPF) firewall between the different containers within a pod, greatly reducing latency because the BPF kernel code can be safely called directly from containers, avoiding the intermediate layers of the hypervisor and container networking stacks.

            In both cases, you have different levels of host-to-host communication latency but "zero physical network latency". The latency differences between the approaches can be significant assuming the majority of the communication is within a pod on a single host.

    2. aqk

      COBOL "In the box"?

      You must have retired before the IBM software unbundling occurred - somewhere back in the early 1970s. At least in North America - in Europe it was a tad later.

      Ask your systems programmer - the guy who installed this COBOL and Fortran stuff for you. If he's still alive.

      Somehow, I still am.

  2. James 51

    But, the Cloud!!!

    1. Anonymous Coward

      Haven't you heard?

      I recently saw an IBM systems group presentation, after IBM announced its Hybrid Multi Cloud strategy, explaining that the ultimate hybrid multi-cloud platform is Z.

      1. Lusty

        Re: Haven't you heard?

        I don't think hybrid cloud means what they think it means. In 2019, if your request to "the cloud" requires an engineer with a screwdriver and a change control approval to complete, then it's not really a cloud - it's just hosting and colo.

        I say "in 2019", but really the year makes no difference, because that's always been the case. What I mean is that this sort of shit should have ended a decade ago, and everyone should know what a cloud is by now.

        1. cschneid

          Re: Haven't you heard?

          I think "hybrid cloud" means whatever the marketing arm of the currently speaking vendor says it means.

        2. eldakka

          Re: Haven't you heard?

          it's not really a cloud - it's just hosting and colo.

          Cloud is just a fancy marketing term for hosting and colo.

    2. seven of five

      to quote Alice, from Dilbert

      Someone: Do you even know what the cloud is?

      Alice: The cloud is where you'll soon be playing your harp if you don't shut up.

    3. Ian Michael Gumby

      @James 51 ... Put down that pipe...

      I know your post was supposed to be in jest...

      But if you follow the link, the disk latency is 18 microseconds. That's twice as fast as the nearest competing technology.

      Mainframes have been capable of running Linux for years. So imagine a bunch of Linux LPARs, each able to run Docker containers. Now you have a cloud in your data center that is going to be faster than anything you can get from AWS, Google, or Microsoft.

      Just something to think about.

      Would love to run the numbers to see how the costs break down and whether it could be competitive in terms of price.

      1. Jellied Eel Silver badge

        Re: @James 51 ... Put down that pipe...

        Would love to run the numbers to see how the costs break down and whether it could be competitive in terms of price.

        I did this years ago for a pre-'cloud' virtual hosting platform using AS/400s vs piles of Sun servers. I think IBM has always undersold itself as a cloud provider, given its long history of flexible resource management on its platforms. The downside in that project was IBM's licence & support costs, but they were close to Sun's. The upside would have been pretty simple automation of provisioning & MACDs, and better resource utilisation... especially in terms of physical space, heat & power.

        But it was IBM, and the Internet was built on Sun.

        1. Ian Michael Gumby

          @Jellied Eel Re: @James 51 ... Put down that pipe...

          AS/400s and Sun Servers doing virtual hosting is not the same thing.

          Mainframes are in a different class of computing and require a bit more to stand up and maintain.

          But the performance is fairly good when you consider low latencies at scale. So you end up paying a premium.

          The irony... is there anyone around old enough to remember Timenet? As a kid, I and a couple of friends would dial in and see who/what we could connect with. ;-)

          Now we are repeating ourselves with the 'cloud'.

          1. Jellied Eel Silver badge

            Re: @Jellied Eel @James 51 ... Put down that pipe...

            AS/400s and Sun Servers doing virtual hosting is not the same thing.

            Mainframes are in a different class of computing and require a bit more to stand up and maintain.

            Depends... I cut my teeth with IBM (ok, an Amdahl 5990) with 3174 cluster controllers networking remote offices to the apps running in the big iron. So very much a 'cloud' business. And being shared compute boxen, it explains why IBM developed stuff like JCL, RACF etc to allocate, manage and bill resources. Which translated into a fairly easy life as a shared web server, ie if nobody is looking at the site, it's not using much in the way of resources. Or we could sell packages based on usage + storage etc and be pretty confident the system could manage all that fairly. The alternative was renting users S/M/L Suns that ate up rack space, power and cooling.

            The biggest challenge trying to do stuff like that often came down to licences, and the definition of 'users' - ie vendors wanting licences restricted to a single contract rather than being sort-of resold on a multi-tenanted box - and it was especially challenging when vendors like IBM were inflexible about licence changes.

            (And yup, I remember those days.. Plus running a Blue Board BBS :p)

    4. Anonymous Coward

      And blockchain. Don't forget blockchain!

      1. Ken 16 Silver badge

        I didn't forget blockchain, I just forgot what it's supposed to be useful for

  3. NohSpam

    A user may not know they're on a Z-hosted cloud, but if they specify qualities of service that can only* be achieved by Z, then I guess that's where they'll probably end up - where Z is an available cloud platform.

    *QoS claims may be made by other cloud providers but, ... Z - robust, scaleable, fast & secure, smaller maintenance footprint

    1. Anonymous Coward

      Scaleable - only really counts if you need scale-up. The rest of the industry moved away from scale-up architectures a while back because they are unsustainable, complex, and prone to failure.

      Fast - only straight-line speed for a single process, really. For everything else there's scale-out, and cheap-and-wide wins on speed every time for big jobs.

      Ask any UK bank about robustness in this kind of architecture. I feel like the concept is robust, but the real-world implementation - monolithic, complex systems only supportable through outsourcing to large consultancy outfits - is begging for the massive failure scenarios we've seen play out over recent years.

      Security is enhanced slightly, I guess, since so few people bother to learn about the systems. Obscurity isn't really security though; it just looks that way until someone decides to own you.

      Smaller maintenance footprint? LOLZ

      1. Jim Mitchell

        "Ask any UK bank about robustness in this kind of architecture. I feel like the concept is robust but the real world implementation of a monolithic, complex systems only supportable through outsourcing to large consultancy outfits is begging for the massive failure scenarios we've seen play out over recent years."

        As I recall, the recent UK banking failures were due to people and processes - ie, they would have happened whether the bank was using IBM mainframes or Amigas.

        1. Anonymous Coward

          @Jim, quite right. The difference being that P&P on common architectures includes people being comfortable changing things and familiar with the system. On the mainframe, systems go decades without maintenance or change, which leads to FUBAR if anything goes slightly differently, and nobody knows how to sort it without handing IBM half a billion and hoping they fix it.

      2. Ian Michael Gumby

        @AC ... Huh?

        Sorry mate, but you do realize that while you're in the cloud, your VMs are really containers on very large machines. So the cloud provider scales up so that they can deliver you containers, which you can then scale out.

        But at the same time, for on-prem there's a concept of server consolidation. As you scale out your clusters, you start to realize that you can replace a rack of older servers with smaller, more energy-efficient new servers, so you're scaling up individual nodes in your cluster and reducing your footprint in the data center (reducing costs).

        So yes, looking at building a Linux cluster on the Z makes a lot of sense if done right.

        Only you'd never know, because it's all magically abstracted from you.

  4. Anonymous Coward

    Selectric Income

    Don't care for most of the IBM menu, but if you like mainframe... hmm hmm good.

  5. cschneid

    "super-expensive mainframes"

    Just how "super-expensive" are these new mainframes? I mean, doing a TCO (not TCOWICAFE (Total Cost Of What I Can Account For Easily)) comparison with commodity hardware, software, and support contracts.

    1. Anonymous Coward

      Re: "super-expensive mainframes"

      A mainframe is like an aircraft carrier.

      Highly proprietary. Everything that touches it is burdened by the cost of proprietary protocols, interfaces and applications.

      It needs a large ecosystem of "stuff" to operate, hence it is always surrounded by the peripherals and servers that make it operational and managed.

      1. RancidRodent

        Re: "super-expensive mainframes"

        In most datacentres I've worked in over the last 15 years, you have the mainframe running the core business, with perhaps 5-20 people running it and a relatively small power and heat footprint - then on x86 you have megawatts of servers with 1000 people supporting them, providing the flashy front end that breaks every third week. "Proprietary protocols" - what are you on about? The mainframe supports just about every API/interface and protocol you can shake a stick at.

    2. StargateSg7

      Re: "super-expensive mainframes"

      I remember when an IBM z13 mainframe started at about $1.5 million US dollars once you counted in the support contract, which gets renewed every year at about $50,000 to $75,000 per year (at least for us!).

      Just recently, an old used IBM z13 mainframe was put on sale for about $100,000 US... I'm kinda tempted to put it in my basement, but POWERING the thing is a pain! Even though the z13 is old (2015), you can put 10 terabytes of RAM into it, and it can STILL run way over 100,000+ full-graphic website processes, even by 2019 standards. We NEVER got it that high because our incoming connections in the mid-2010s maxed out at 20 gigabits or so.

      I've always liked the big iron for concurrent workloads... those IBMs are GREAT for that.


  6. Christian Berger

    There's a nice introductory talk on that architecture

  7. LeahroyNake

    It's a bit late

    For the on-the-ground engineers that they have 'let go' over the last few years. Thankfully I'm not one of them, but the ones I know are the usual subjects for forced retirement. The sentiment that I get from them is similar to ex-HP Inc employees: the grass is greener, you don't get to keep the laptop, but at least the mortgage is paid off.

    COBOL, I know some people..

    1. RancidRodent

      Re: It's a bit late

      Yes, Rometty has been a disaster for IBM - but luckily for them, all the other big IT companies have got rid of all their experienced staff too!
