Canonical: OpenStack is dead, long live OpenStack

In a slightly curious blog post, Ubuntu recently dismissed the idea that OpenStack was no longer relevant and had been "abandoned" – making it the second outfit in the past six months to state how confident it remains in the IaaS platform. The post, titled "OpenStack is dead?", echoed a similar statement from Docker Enterprise …

  1. VoiceOfTruth

    Too complicated is not the whole problem

    -> Part of the issue is that OpenStack is complicated

    We looked at OpenStack a few years ago. Indeed it is complicated. I'm in favour of learning new technologies so that we don't become bored or stale. But OpenStack is just so complicated that we would need to hire at least three dedicated OpenStack people just to run it (to have 24-hour-a-day availability). You can't get away with one or two members of staff who are OpenStack trained – they will want a holiday from time to time. And good luck getting somebody who knows UNIX to start digging away at a problem with OpenStack if they haven't spent a lot of time already dealing with it. Your average UNIX guy is going to say 'what a great big steaming pile that is'. If we hired dedicated OpenStack people we would be afraid they would leave. And then we would be back to square one.

    1. Nate Amsden

      Re: Too complicated is not the whole problem

      The same can be said for Kubernetes. Too complicated for most orgs. Though that hasn't stopped the hype around that tech (yet, anyway). I felt Hadoop was similar back when it was at its peak hype. I actually had a VP of my group at the time suggest that HDFS could be our VMware storage – we didn't need a SAN, just run the VMs from HDFS.

      The company built a 100+ node Hadoop cluster back in 2010 after I left (using hardware from a vendor I was against). I was told it took them over a year just to get the hardware stable, after suffering a ~30% failure rate in the cluster over an extended period, which resulted in ~50% of the hardware being offline due to quorum requirements(?). A new VP came in and decided on different hardware. They still struggled with writing correct jobs, I was told – several people complaining about why it was so slow. It turned out that in some cases the jobs were written in a way that prevented them from being run on more than one node. But at least they had the data, probably 15TB of new data a day.

      One of the folks I knew at the time was at a company which deployed Hadoop as well, but they had something like 500GB of data total. WTF, why are you deploying that? He said they wanted it.

      Some forces at my current org wanted Kubernetes. Not because it was a good tool for us but because it was cool. VMs were old school; they wanted the fancy auto scale up and scale down. I knew it wouldn't work the way they expected. They spent at least 3 years I think on the project, even got some services to production. All of it was shut down last year when the main person working on the project left.

      Had tons of problems, one of which they spent 6+ months trying to resolve (it ended up being an MTU problem on the containers that were built). Auto scaling didn't work as advertised (perhaps due to lack of performance testing, something I warned about many times but was ignored). Lots of Kubernetes alerts saying oh hey, I'm low on CPU, or I'm low on memory, I can't start new containers. Look at the host and it has TONS of CPU and memory – in some cases there was 10GB+ of available memory. But because of bullshit like this bug, open since 2017 (https://github.com/kubernetes/kubernetes/issues/43916), the systems complained regularly.

      Also had a problem with Datadog monitoring where it would consume large amounts of disk I/O (upwards of 10k+ IOPS). That again took months to track down; they eventually found the cause – it was running out of memory in the container (not sure why that would cause the I/O, as there was no swap on the host) – but increasing memory on the container fixed it. Datadog could not suggest to us how much memory was needed for monitoring X metrics, so we just had to monitor it and keep increasing memory over time.
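      For what it's worth, the "low on memory" alerts on a host with 10GB+ free usually trace back to the scheduler counting declared requests rather than actual usage: a node "fills up" once the sum of requests hits its allocatable capacity, however idle it really is. A minimal sketch of a pod spec – the names and numbers are hypothetical, not from the setup described above:

```yaml
# Hypothetical pod spec. The scheduler reserves what is *requested*,
# not what is actually used: if every pod requests 4Gi but only uses
# 1Gi, a 32Gi node refuses new pods after ~8 of them, even with most
# of its memory genuinely free.
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:1.0     # hypothetical image
    resources:
      requests:
        cpu: "2"               # counted against node allocatable at schedule time
        memory: 4Gi
      limits:
        cpu: "2"
        memory: 4Gi
```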

      The complexity of the system grew even more when they wanted to try to do upgrades without downtime. The people behind the internal project eventually acknowledged what I had been saying since before they even started - it's WAY too complicated for an org our size, and offers features we do not need. So they gave up.

      The container solution I deployed for our previous app stack (which was LAMP) was LXC on bare metal hardware, back in 2014. Took me probably 6 weeks going from not even knowing LXC existed to being fully in production running our most critical e-commerce app. It ran for 5 years pretty much flawlessly, saved a ton of $$ and really accelerated our application. I proposed the same solution, even if only as an interim, for our newer Ruby app stack, but they didn't want it. Wasn't cool enough for them. I said fine, you can build your Kubernetes shit and when it's ready just switch over. I can be ready with LXC for this app in a couple of weeks and we have the hardware already. But nope, they wanted to stick to VMs until Kubernetes was ready. And of course it never got ready.
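      Part of why LXC is so much lighter-weight: a container is little more than a rootfs plus a config file. A sketch of an LXC 1.x-era (circa 2014) container config – the container name, bridge, paths and limits here are illustrative, not from the setup described above:

```
# /var/lib/lxc/web01/config  -- hypothetical container
lxc.utsname = web01
lxc.rootfs = /var/lib/lxc/web01/rootfs

# Attach a veth pair to an existing host bridge (name illustrative)
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up

# Optional cgroup limits -- without them the container just shares the host
lxc.cgroup.memory.limit_in_bytes = 8G
lxc.cgroup.cpuset.cpus = 0-7
```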

      1. VoiceOfTruth

        Re: Too complicated is not the whole problem

        -> something like 500GB of data total. WTF why are you deploying that

        Exactly. The wrong tool for the job, but because OpenStack/Kubernetes/etc is fashionable, let's find a use for it.

        -> took again months to track down

        Yep.

        Even trying to get OpenStack to run properly takes a lot of time (= money). I don't doubt that in some circumstances OpenStack is useful. But to roll it out, you need a good use case, and a fair number of people on hand who know a great deal about it. If you run into a problem, there won't be a eureka moment. It will be 'is that how it is supposed to be?'

        1. Nate Amsden

          Re: Too complicated is not the whole problem

          I had high hopes for OpenStack back when VMware had their brief trip down the "vRAM tax" road (looks like that was in 2012). I was convinced that vSphere 4.1 would be my last version of VMware and I'd jump to OpenStack, or more likely to plain KVM (as a long-time Linux veteran, and 90%+ of our VMs were/are Linux).

          But I learned a bit more as time went on, and VMware backtracked on their vRAM tax (my org was never affected, as we didn't deploy the versions that had the tax), and I decided making that technology jump didn't make sense for us anymore.

    2. Anonymous Coward
      Anonymous Coward

      Re: Too complicated is not the whole problem

      They did their best to make it as complicated as possible. From silly fashionable naming to bad tools and UIs, to configuration files where the same settings must be replicated over and over. The learning curve is very steep, and the documentation doesn't help much.
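      The repeated-settings complaint is easy to illustrate: each OpenStack service reads its own config file, and the same connection blocks have to appear near-verbatim in all of them. A sketch with placeholder values, not a working deployment:

```ini
# /etc/nova/nova.conf -- and the same blocks again, near-verbatim,
# in neutron.conf, glance-api.conf, cinder.conf, ...
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[keystone_authtoken]
auth_url = http://controller:5000
project_name = service
username = nova        # typically the only line that differs per service
password = NOVA_PASS
```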

  2. Anonymous Coward
    Big Brother

    Control

    "It's interesting to look at where the customers Canonical cites in its blog post are situated: Russia, Pakistan, and Saudi Arabia."

    Using OpenStack certainly gives you more control, but it's significant that those countries that are most interested in it are countries that are interested in controlling their users rather than just their databases.

    1. sreynolds

      Re: Control

      I thought a user had some idea and/or knowledge that they were part of the system - victims is probably more appropriate.

    2. Ken G Silver badge
      Facepalm

      Re: Control

      That aged well, didn't it?

  3. Anonymous Coward
    Anonymous Coward

    By coincidence, I've just been listening to Robin Ince's podcast 'Laws That Aren't Laws'. Ep 4 covers Betteridge's Law of Headlines, and Betteridge himself said that while it applies to print media ('you are supposed to be informing the public'), it isn't so clear cut when applied to things like blogs, as they are more interactive and can foster debate about a subject.

  4. F. Frederick Skitty Silver badge

    "It sounds much friendlier to suggest they go serverless."

    Please think twice about doing that. Something like Amazon's Lambda feature should only be used for very, very specific cases. If you go in big time for an event-driven architecture with lambdas as your default way of doing things, expect it to be slow, buggy and expensive.
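    The "expensive" part tends to show up at sustained load, where per-invocation billing overtakes a flat-rate server; for spiky, rare traffic the economics flip. A back-of-envelope sketch in Python – all the rates here are illustrative placeholders, not current AWS pricing:

```python
# Rough monthly cost comparison: per-invocation (Lambda-style) billing
# vs one always-on instance. All prices are illustrative placeholders,
# NOT current AWS rates -- check the real pricing pages before deciding.

def lambda_monthly_cost(req_per_sec, avg_ms, mem_gb,
                        price_per_million_req=0.20,
                        price_per_gb_second=0.0000166667):
    """Approximate monthly cost of a per-invocation billing model."""
    seconds_per_month = 30 * 24 * 3600
    requests = req_per_sec * seconds_per_month
    gb_seconds = requests * (avg_ms / 1000.0) * mem_gb
    return (requests / 1e6) * price_per_million_req + gb_seconds * price_per_gb_second

def server_monthly_cost(hourly_rate=0.10):
    """Flat monthly cost of one always-on instance at an assumed rate."""
    return hourly_rate * 30 * 24

steady = lambda_monthly_cost(req_per_sec=100, avg_ms=200, mem_gb=0.5)
spiky = lambda_monthly_cost(req_per_sec=0.1, avg_ms=200, mem_gb=0.5)
flat = server_monthly_cost()
print(f"steady 100 rps: ~${steady:,.0f}/mo vs ~${flat:.0f}/mo always-on")
print(f"spiky 0.1 rps:  ~${spiky:.2f}/mo vs ~${flat:.0f}/mo always-on")
```

    With these (made-up) rates, sustained traffic costs several times the always-on box, while rare invocations cost pennies – which is why "very, very specific cases" is about right.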

    1. Ken G Silver badge
      Trollface

      slow, buggy and expensive?

      We can sell you services to address two of those issues...

  5. Anonymous Coward
    Anonymous Coward

    "turn it into your own little private company cloud"

    "Clouds" aren't "private", and you can only make a company infrastructure "more cloudy" by ratcheting up the cost of operating the same exact platform over time, and paying a premium over that for the privilege... oh, and for complete "cloudiness" for your "private cloud" you should host it in somebody else's data center that you pay too much for.

  6. Doctor Syntax Silver badge

    "When multiple companies strenuously deny something, it tends to cause more suspicion than it allays"

    Never believe anything until it's been officially denied. – Yes Minister
