So Dev Ops fixes everything, huh?
This article does a pretty good job of explaining how bugs can be a major problem. And then it comes along with the following line:
"In a carefully architected DevOps process for a web application, [...] the cost of fixing a bug found late may not be too bad."
Let's discuss this. A dev ops process has no good definition. I've read the dev ops articles here, and they have essentially put the dev ops™ label on every known good coding, management, or systems concept under the sun. Unit testing? A primary precept of dev ops™. Ensuring security? Meetings with managers where the developers are actually listened to? Having firm documentation about policies for development and usage of the code? All dev ops™ concepts. This all assumes a utopian ideal of code development and management style, anyway. The problem is that none of these things are actually connected; articles about good policy simply have dev ops plastered on them. So there is no clear way to identify what exactly about dev ops makes these bugs so easy and painless to solve.
Or is there? Let's fill in that gap in the quote.
"where a code change can be made, tested automatically and deployed into production rapidly"
So that's a no. Dev ops articles frequently mention agile as a development style, and the quote above clearly describes a system that works similarly to agile, in the sense that code is supposed to change often and get into production quickly. That offers no inherent benefit where bugs are concerned. Bugs will still happen. If and when agile is done wrong, bugs will happen *more* often, because managers come to think that agile coders should always be moving on to some new functionality rather than repairing things. Agile does at least mean that bugs should be patched more quickly, which has some logic behind it. However, it has no specific mechanism for making bugs less dangerous.
Let's talk about the "tested automatically" part, too. You can't test everything automatically. Unit tests are great; I expect competent devs to write them and to run every change through them. But unit tests do not catch every bug. There may be tests that nobody thought of, or that someone thought of but nobody wrote. Worse, there may be a bug that you either can't test for or won't notice until the pieces are put together. Consider the Heartbleed bug discussed in the article: in isolation, the flawed function doesn't look like much. Unit tests feeding it invalid data could have caught it, true, but there are many kinds of invalid data, and only one of them triggers the bug. Only when the code is combined with something like a webserver does the bug become so damaging.
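To make that concrete, here is a toy simulation of a Heartbleed-style bug, sketched in Python rather than OpenSSL's actual C code. The `heartbeat` function and buffer here are hypothetical illustrations, not the real implementation: the handler trusts the client's claimed payload length, and the one unit test anyone bothered to write (an honest request) passes cleanly.

```python
BUF_SIZE = 64

# One reused buffer; stale bytes from earlier requests linger in it.
# Pretend the 'S' bytes are someone else's secrets.
buf = bytearray(b"S" * BUF_SIZE)

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    """Echo the payload back. BUG: trusts claimed_len with no check
    that claimed_len <= len(payload), Heartbleed-style."""
    buf[:len(payload)] = payload       # write the real payload
    return bytes(buf[:claimed_len])    # read back however much was claimed

# The unit test everyone writes: an honest request. It passes,
# so the automated suite stays green and the bug stays invisible.
assert heartbeat(b"ping", 4) == b"ping"

# The malicious request nobody wrote a test for: claim 20 bytes
# while sending 4, and 16 stale "secret" bytes leak out.
leak = heartbeat(b"ping", 20)
assert leak == b"ping" + b"S" * 16
```

The fix is one line (reject requests where the claimed length exceeds the actual payload), but the only invalid input that exposes it is a mismatched length field, which is exactly the case an automated suite never probes unless a human thinks to add it.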
That's not something a unit test, written by one person and never looked at again because "the automatic system will handle testing," is going to notice for you. That's where you want devs writing unit tests and manually running the test suite, looking at the output and thinking "I wonder what would happen in this case, but there is no test for that. Let's see." You also want people doing larger real-world testing on larger components. An automatic system cannot possibly try every kind of ordinary input to a large program and properly interpret the results, but a QA department can.
By making testing simply a speed bump on the road to production, rather than a required turn, you make it a lot easier for inadequately tested code to get through. "Write fast and fix things when you find them" won't work for rocket launches, either.