I dunno, I'm going out on a limb here
Perhaps we could start to entertain the thought that maybe DevOps just isn't a good idea?
You know, if people can't make it work after a decade.
On the 10th anniversary of its first survey, the multi-vendor State of Devops report concludes that DevOps is still "rarely done well at scale," and that "almost everyone is using the cloud, but most people are using it poorly." The report is opinionated and there is no vendor pitch, perhaps because it is sponsored by a …
I often ask myself if developer productivity has improved. I'm not sure it has. Everyone is now obsessed with using containers everywhere. In the past I was waiting for my code to build and deploy to a web server. Now I'm waiting for code builds, plus Docker image rebuilds.
"Well-run companies set clear goals, we know which team does what, people communicate, they have autonomy. They tend to do really well at DevOps."
There are a lot of poorly-run companies (and IT groups) by those metrics. What's remarkable is that they still manage to stay in business.
Actually, I've been at companies where devops works great; devops isn't the issue. The issue is that most companies just renamed their existing IT staff to devops and said 'we do devops'. The whole point of devops is that the software writers hold some responsibility for the operation of the software, and most companies that "do devops" still have totally separate groups doing those things.
I'd say I'm surprised that managers think nomenclature is the important part, but I'm not.
In companies that don't screw that up, the problem is usually that they throw extra responsibility at the teams without giving them extra resourcing or empowering them to make decisions related to their new responsibilities.
It's actually kind of amazing the sort of smooth running and uptime you can get when you do all those things correctly though.
The biggest problem I've seen with it is that devs who've never done ops suddenly think they're ops.
The places that do it well have people experienced on both sides of the picture, who actually know ops as well as knowing dev. That, however, is an expensive skillset. Most places simply slap DevOps on developers and tell Ops to keep out of things (or worse, whittle down ops because now it's all DevOps) and reap the 'reward'.
Sorry, but I thought that the quality of applications was usually determined by USERS...
So... USERS need to say whether application changes every five minutes are GOOD or BAD from a user perspective.
It's NOT ABOUT TECHNOLOGY... it's about BUSINESS VALUE.
I'm confused... please tell me how "agile" or "devops" is even relevant to a typical business user. Just saying!
You seem to be critiquing continuous deployment practices, which are different from either devops or agile. If CD is done properly (dark launches, feature flags, small incrementals under the hood, etc) users won't feel the difference much except in time to fix for bugs.
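To make the "feature flags and dark launches" point concrete, here is a minimal sketch of a percentage-rollout feature flag. Everything in it (the `FLAGS` store, `is_enabled`, `new_checkout`) is hypothetical and illustrative, not a real library; real systems use a flag service, but the bucketing idea is the same:

```python
# Hypothetical in-process flag store; real systems fetch this from a flag service.
FLAGS = {
    "new_checkout": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag_name: str, user_id: int) -> bool:
    """Deterministically bucket users so a dark launch hits a stable cohort."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Modulo bucketing: the same user always lands in the same bucket,
    # so ramping 10% -> 50% -> 100% only ever adds users, never flip-flops.
    return (user_id % 100) < flag["rollout_percent"]

def checkout(user_id: int) -> str:
    if is_enabled("new_checkout", user_id):
        return "new flow"  # shipped dark; 90% of users never see it
    return "old flow"
```

This is why users "won't feel the difference": the new code is deployed continuously but exposed gradually, and turning the flag off is the rollback.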
That works for many fields, but not all.
Clinical apps, for example: you really don't want them changing under the hood without full change control processes in place to let people know something's changed. Fixes have the unhappy habit of breaking something else not tested for, and the last thing you want, with a patient on the bed mid-treatment, is a new problem or a change in behaviour from what the decision-making process expects.
And yes, I have seen that approach advocated in a clinical environment. And yes, they did get the eyebrow raised treatment.
While I get where you're coming from I think I disagree a bit.
On the one hand I don't see much reason why you'd want to do CD to a perfectly working MRI or something, but on the other hand I don't see a reason it couldn't be done responsibly if you were willing to set up the right gates. Typically that would mean your CD promotion pipeline deploys through many layers of staging, automated testing, manually signed-off testing environments, etc. before hitting live. There would be a lot of tooling to develop and a lot of effort to maintain, and I'm not sure it would really give you a lot of benefits in that context, but in principle it could be done.
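The gated promotion idea above can be sketched in a few lines. This is a toy model, not any real CD tool's API; the stage names and gate functions are made up for illustration. The key property is that a manual sign-off gate sits between the automated stages and production, so nothing promotes automatically past it:

```python
# Stages promote strictly in order; a build stops at the first failed gate.
STAGES = ["staging", "integration-test", "signed-off-uat", "production"]

def promote(artifact: str, gates: dict) -> str:
    """Return the furthest stage the artifact reached, or None if it failed immediately."""
    reached = None
    for stage in STAGES:
        gate = gates.get(stage, lambda a: True)  # no gate registered = pass
        if not gate(artifact):
            break
        reached = stage
    return reached

# Example: automated gates pass, but a human has not signed off UAT,
# so the build never reaches production.
gates = {
    "staging": lambda a: True,           # automated smoke tests passed
    "integration-test": lambda a: True,  # automated suite passed
    "signed-off-uat": lambda a: False,   # manual sign-off withheld
}
print(promote("build-123", gates))
```

For something like a clinical system, the "signed-off" gate is where the change control paperwork lives; CD just automates everything around it.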
I'm curious as to your point here. Why is it impossible to QA changes just because the person who made them will also later deploy them? It seems like your QA process should be entirely unaffected (and in my experience that's pretty true) by moving to devops.
If you're not doing QA it's because your managers bought into the stupid war on QA that was going on a while back where the theory was that automated testing would make QA obsolete (pro-tip: that idea doesn't even make sense), not because of devops.
It's all about signing your own homework, which is pointless. Also, QA needs to be done but mostly isn't.
Projects are generally the same: no validation, no planning, no real testing, no real management.
DevOps can work, but it is only as good as the weakest link, and there are a lot of links where it can go wrong at speed, and where it is badly implemented through a lack of fundamental understanding of how it should be done.
In my experience, the main problem with DevOps is that it was conceived of by programmers who went to top-tier engineering schools and thus have a reasonably deep background in programming theory and practice and a great deal of comfort with reducing projects to a series of discrete tasks. Unfortunately, there are a lot of other people in the industry who learned organically and came from other backgrounds and thus are lacking the same innate expertise and comfort with the DevOps approach. This problem will probably sort itself out with time; of course, there's a good chance that some new hotness will come along to replace DevOps entirely.
The other side of the coin, of course, is that programmers don't really like dealing with the operational side of the house, where things break for no obvious good reason, and some poor sysadmin/SRE has to spend hours chasing a fault only to find that it's a bug in a third-party driver or library that's no longer supported. In the ideal world, these problems get sorted out by a continuous feedback loop which leads to the removal of faults over time, but that assumes the organization is willing to spend the time and money to remove the faults.
I think the key to DevOps failure is twofold. Competing backgrounds and cultures, as you point out, is one.
The other is that, like many well crafted theories, it doesn't scale well. A few decades ago, before the cloud, I trained and worked in Information Engineering. It was an elegant way to describe the data connections within a company. But when you tried to make it work within an enterprise it was overwhelmed by the number of connections.
DevOps has a similar problem. In the real world there are just too many things to keep track of.
"In the real world there are just too many things to keep track of."
For a dev? Yes, I'd say that's true.
If you're going to implement DevOps, you've got to replace their normal chairs with electric chairs. This way you can signal them when another one of their updates has failed spectacularly without having to rely on their proper logging configuration.
"In my experience, the main problem with DevOps is that it was conceived of by programmers who went to top-tier engineering schools and thus have a reasonably deep background in programming theory and practice and a great deal of comfort with reducing projects to a series of discrete tasks."
That's not devops, that's agile. (emphasis on reducing projects to a series of discrete tasks).
Devops, I think, came more from programmers who were tired of being "held back" by operations folks (such as myself, perhaps, being in ops since 2003, and internal IT before that). They wanted more control over things end to end. If they know what they are doing, that can be effective.
More often than not (90%+) they don't know what they are doing from an ops standpoint, and so the result you get is shit (shit composed of massive complexity, tribal knowledge and huge costs associated with cloud technologies).
I literally don't even need two full hands to count the number of developers I've worked with over the past 21 years who had a really good grasp of ops. All of those who knew ops were excellent and worked really well with them. Many other developers know they don't know operations and establish a trust relationship with the operations team, deferring to their expertise on those topics; they were great too, with lots of mutual respect there. Then there are the devs who think they know it all when they don't, and just make trouble. Certainly there are ops folks out there in similar categories, though in my experience (limited, I suppose, to smaller companies) the issue is much more on the dev end of things. If you have ops people who don't know what they are doing, that can be a big issue too. I've run into several system and network admins over the years who are clueless at times and make it worse by not admitting to it (just like the dev end of things with ops-related stuff). I guess the point is you need someone(s) with good operations experience/knowledge (depending on your scale, of course), and they are super rare.
At the same time, the number of ops people who have a really good grasp of dev stuff is low as well. But I typically don't see the ops folks getting their hands dirty on the dev end of things (or even trying to). If we're approached for an opinion or are in some meeting on design or something we can provide input, but ops folks don't generally try to dictate application design; that's not our thing. I remember the first development architecture meeting I attended as an ops person back in 2003. I practically fell asleep and wondered why they had hired me; there were so many terms I had never even heard of (enterprise Java application). Super complex app stack, the most complex I've ever dealt with even to today. I ended up being the company-wide expert at that stack; I knew everything and everyone knew it. Quite a double-edged sword: I burned out hard there, but had a good time and learned a lot too.
Devops isn't for everyone (as it is defined most recently, not for me either). Too bad so much marketing bullshit is behind it, and other things like public cloud, that many people feel compelled to try to adopt it, often with poor results. Same goes for agile as well.
Speaking of cloud usage, both the company I am at now and the previous company started out in public cloud. There was no "lift and shift", as in they had nothing to lift from. Day 1 was in public cloud at both places (app/DB/server/etc. designs were set before I started in both cases). I/We did lift and shift the current company out of public cloud (including waiting many hours for mysqldumps from RDS MySQL instances) a decade ago and as a result have been saving easily over $1M/year every year since.
The previous company's (now defunct) board didn't want to move out despite $1.6M savings in the first year alone (the move had support from all levels of the company, but the CEO and CTO didn't want to fight the board). Over the years several executives have tried to push for public cloud again because it sounds cool, but they couldn't make the math work even remotely.
It is also my experience that devs in general are terrible at dealing with ops. They cannot comprehend, or probably don't care, that you cannot just treat an OS and hardware as throwaway items without consequences. Or that changing it in favour of 'your current favourite thing' can massively impact other 'things' sharing the resource, or even break the OS and/or hardware itself. They're not even slightly interested in doing configuration or fixes at 3am on a Sunday morning. Many think they're entitled to their very own instance of a VM to do what they like. They'll get in there and tweak everything they can to the point it looks nothing like where the application will eventually be deployed, and then wonder why it fails at implementation or causes wider problems. They'll usually blame it on ops in this case.
Most ops people I have known over the last 40 years are not really interested in trying to force change on devs, other than reminding them of common-sense things like efficient use of storage, for example. And generally ops are always accommodating devs as much as possible, juggling the competing team requirements while being kicked from all sides. They will have some red lines though, particularly in production, and this is the thing devs cannot abide; they'll flail about trying anything to be the 'special exception'. But ops know that exception would break something, and they know they will be blamed for it. Ops are stuck with this common piece of the puzzle, the OSes and hardware that ALL the applications run on. They cannot just say 'I've finished with this application' and move on like devs. They support all the users day in, day out, and have a much wider view of how all the workloads affect operations.
I mostly agree with your points here. Two things I'd add:
1) devops doesn't mean no dedicated operations folks, it just means that if you need operational experts they tightly consult with or embed in your dev teams. If done right devops can be mutually beneficial and both sides can learn a bunch of new skills along the way, including getting rid of one of the things that annoys ops folks the most in my experience, which is that devs throw things over the wall and wash their hands.
2) a lot of the key reasons for devops also boil down to your other point about ops folks not being great at writing their own software (most ops folks I've met can program and will knock simple apps out of the park, but few and far between can really handle developing complex software), which means that when you want to start automating everything end to end, the ops people also start needing dev resources.
Done well devops is more about recognizing that the software lifecycle is disjointed and merging the two disjointed segments. As I noted in an earlier comment, most of the time "devops" means that a company took their existing operations engineers and renamed them to devops engineers and kept the 'throw the software over the wall' mentality intact, which is a huge problem.
"More often then not they don't know what they are doing."
That's most of the people I've worked with in IT. They can bodge together something that sort of works for just long enough until they move on to fuck something up elsewhere. They leave behind an unmaintainable mess that (often conscientious) people then have to struggle to maintain and extend, but management only want new features and can't comprehend the concept of technical debt.
Sadly, many of the bodgers are also good at bullshit. So they'll sell themselves as good at "Agile" or "DevOps" and give otherwise decent ideas a bad name. I've seen agile and devops done well, and they are great, but they require competency. They are *not* a silver bullet for systems already beyond reasonable repair. All too often I've seen managers suddenly declare "we're agile" or "we're doing devops" without actually giving the people doing the work the training needed, or acknowledging the decades of crap that went before them that hinder applying these principles.
Frankly, I've had about enough of this shit industry. No, that's not true. I had enough of it years ago, but I'm too old to go and do something else that still has a degree of professionalism. So I'll continue to chip away at the mountain of shit code until either ill health stops me or my so-far-suppressed urge to kill myself can no longer be contained*.
[*] said urge being a balance of despondency at not contributing anything really significant to society and my disgust at the state of the wider world
They're neither top tier nor capable of delivering large projects, which is why they're pitching another crappy, vague methodology instead of their development skills.
That methodology is ONLY pitched to failed managers. Successful managers deliver products and don't want or need some vague twaddle from an outside salesman.
DevOps consultants are the 'clipart artists' of management. If you don't know how to manage a large project, here's a vague road map of what to say and do.
And top of the list? Well, failure is not a devops failure, it's the team's failure to not use devops correctly!
'Highly evolved means frequent deployments, very short lead time for application changes ("less than an hour"), quick response to failures ("less than an hour")'
If the application I was using before lunch might not be the same as the application I am using after lunch then I'm not going to want to use that application.
A section in the report on "overcoming the burden of legacy" has more detail on this, saying "invest in your legacy environments so they are no longer a constant inhibitor of progress."
The best that can be said of that section is... what a load of utter bollocks! [Please excuse my French]
Bypassing and ignoring legacy environments/dead wood is the sensible answer for novel strong growth and progress uninhibited by undergrowths infested with weeds.
That truth, however, makes for an uncomfortable read, although one does very well to realise that to deny and ignore it renders one defenceless, vulnerable prey to developments in, and developers of, novel progressive operations free of blighting weeds.
Biting the hand that feeds IT © 1998–2021