One consolation in this.
At least it only affected those stupid enough to develop with it. Just awaiting the next cock-up that pushes crap out generally.
NPM – the biz behind the Node.js package management software used to wrangle JavaScript code and various related frameworks – on Thursday undid a code update less than 24 hours after it was issued because the software was messing with Linux file permissions. The release of npm 5.7.0 on Wednesday – under the company's pre- …
OK. I know it's popular to rag on node/npm hereabouts.
I know I struggle enough with JS testing and failure hardening that running a backend on it would not be my choice.
But, if you have to write complex web frontend behavior, you're stuck with JS there. Old JS: no import or require, no modules, no TypeScript or CoffeeScript. <script src=https://...> tags :(
Serving up old-style JS browser client code from a saner codebase written in ES2015/TypeScript/CoffeeScript needs a transpiler. Most of those are node/npm modules, which gets node to act essentially as a compiler.
I.e. use of npm does not equate to writing server code in JS.
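For the avoidance of doubt, a minimal sketch of npm as a pure build tool (the file and path names here are invented for illustration):

    npm install --save-dev typescript                  # the toolchain lives in node_modules, not on a server
    npx tsc src/main.ts --target ES5 --outDir public   # emit old-style browser JS as public/main.js
    # The page then loads the output the old way:
    #   <script src="https://example.com/main.js"></script>

No JS ever runs server-side in that setup; node is just the compiler host.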
"But, if you have to write complex web frontend behavior"
then you're stuck with an idiotic/clueless design that should be scrapped for something that does the bulk of the work server-side instead, and without JavaScript.
But yeah, that would require more serious developers with *REAL* skills... instead of pretend "developers" who "program" in JavaScript (read: slap together several bloatware packages into a chimera-monster and call it 'programming').
DevOps is certainly part of it, but my guess is that the modern "security" mantra has just as much to do with it. People are bombarded (correctly, in context) with exhortations to keep everything up to date and always apply the latest patches, so it becomes second nature to pull the trigger on any update as soon as you hear about it. Failure to do so (Equistrutsup) can be career-limiting. No PHB on a sacrificial goat hunt after a security breach is going to be deflected by hearing that a patch was still in internal beta.
"What is this word "Equistrutsup"? Your comment seems to be the only Google result for it. Conglaturation!"
I assumed it was a portmanteau word using "Equifax" and "struts" (as in Apache Struts) and "f*ck up". Google can fill you in on the rest.
"People are bombarded (correctly, in context) with exhortations to keep everything up to date and always apply the latest patches..."
I have strong reservations about this mantra too. What really bothers me is that it seems to encourage the normality and acceptance of ignorance; every now and then I'll find a problem and need to search through support fora to gain further insight, to see whether anyone else has run into it, whether they solved it, and how. Regardless of the type of problem, in nearly every thread there will be someone who simply recommends "updating all your drivers and applying all the latest updates".
It's as though they believe there's some sort of patch fairy who simply magics away bugs.
(There's also an astonishingly large number of people who seem to think that "a full wipe and reinstall" is an acceptable solution!)
I'm afraid it has to be said that this approach to problem solving seems to be more prevalent amongst the Windows community.
It has become the latest "hey, I know how to work computers too" phrase.
They always get grouchy when you explain that it was the update that broke it in the first place, too. Then they look horrified at the suggestion of a rollback to the WORKING previous version.
It's the licensing model, and MS isn't the only one at fault. Because setting up prod incurs eye-watering fees per core or seat, a lot of shops both large and small skimp on their test envs. Over time they don't look much like prod, or get borrowed (stolen) by "special" pet projects of some exec or another. Got spoiled running free-as-in-beer OSS for many years, and was shocked to see this so prevalent.
"It's the licensing model, and MS isn't the only one at fault. Because setting up prod incurs eye-watering fees per core or seat, a lot of shops both large and small skimp on their test envs."
We have at least one customer I personally know of who has paid for prod, test, and dev environments (with us, you have to; it is basically fully comp), yet while the dev and test environments have the latest release, they do nothing, so nothing is being tested. I have told the customer multiple times to consider setting up the environments properly, and warned her that she will have a problem at some point. She has no time. All she has to do is copy over from prod to dev, adapt some settings (which can easily be done during the copy) and make her changes there, but she prefers making changes on prod directly. When she asked me to investigate an issue, I did not want to touch her prod. I said, let's reproduce it in test; she said we cannot. I asked what about dev; she said dev was an empty env.
You cannot help stupid.
Anon, for obvious reasons ...
If you automate a process that lacks proper testing, it's not going to suddenly gain proper testing. If you automate a process and fail to include part of it (like testing), you're going to have a bad time. If you hear some popular term in the industry & decree that you have implemented it, your company is going to have a bad time.
So yeah, if someone is flinging the term "DevOps" around instead of doing proper engineering, that will be a bad thing. Same thing with any other term.
I'm pretty sure my understanding of DevOps isn't that it means "no testing" or indeed "no test platform".
I thought DevOps was basically an automated integration of the release process from development to deployment. Admittedly, when you boil it down to that, it rather loses its cutting-edge sexiness.
DevOps more or less started out as automated deployment processes, especially CI/CD, but it is in fact a bit more. When proper software engineering is brought to bear, and the basic tooling is in place, new and better ways of doing things evolve.
For instance, suppose the devs are pushing out an app which is linked to a version of a library that now has a CVE on it. Pre-DevOps, the process would be for the security team to flag the issue and send it back to the devs. Which might spark a series of meetings to argue about what should take priority. If you automate that process (somehow) by having the pipeline flag the issue, you've not gained much.
But suppose you have confidence in your tests. Your build pipeline can simply grab the latest patch release of the library & build against that. If the tests pass, out you go. If not, the failed test triggers a ticket & maybe you have meetings. But if the devs want to deploy with an older version of the library, the onus is on them to prove that it's the right thing to do.
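A hedged sketch of what such a pipeline stage might look like, assuming an npm-based build; "somelib" and the open-ticket hook are placeholders, not real tooling:

    npm update somelib      # pull the newest release the semver range allows (patch-only if pinned with ~)
    if npm test; then
        npm run deploy      # tests pass: ship with the patched library
    else
        open-ticket "somelib patch broke the build"   # hypothetical ticketing hook
    fi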
What other kind of user would need sudo?
...and that's what proper QA is about. Root has no reason to use sudo so that's exactly why you have QA specialists who think up all the stupid, illogical and documentation defying things a user might try and test them to ensure no unanticipated code paths get triggered. QA isn't about checking that something works - it's about trying to out-think the developer and break his code.
Yes, proper QA is about thinking about what kind of idiotic things might happen. So to that extent, npm should accept 100% blame. HOWEVER, who is already root & then runs sudo to root?
Bueller? Bueller?
Unless I'm really missing something, there is an ID10T admin involved, and this ID10T also gets 100% of the blame.
Least privilege philosophy. Ubuntu baked it in. Expensive security products enforce it. There are a lot of enterprises where no one is allowed root login, until they have to because something like this happens. Yeah, the paradigm is f*ed up, but look at the stock price on those publishers hawking that snake oil!
Yeah, and then not only you but all the users working on that server would have the root password.
su is only good if you are the only user on the server. And with su you cannot limit the commands that can be executed with escalated privileges.
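That's exactly what sudoers is for. A minimal sketch (the account and service names are made up):

    # /etc/sudoers.d/deploy -- edit with visudo
    # The "deploy" account may restart one service as root, and nothing else.
    # Unlike su, no root password is shared and every invocation gets logged.
    deploy  ALL = (root) /usr/bin/systemctl restart myapp.service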
Erm, are you by chance a developer?
Not 100% the same thing, but when I patch the network kit (anything from humble Layer 2 switches up to ASR9Ks), my general rule of thumb is: if the code is less than 30 days old, I just don't use it. Let someone else be the beta tester.
Of course exceptions apply: the recent Cisco 'perfect 10' ASA vulnerability meant having to install freshly baked code and take the gamble of potentially unstable code versus a nasty exploit.
That's the Chef model (at least the old one), and past a certain point, it sucks. If I'm using containers, I can have one set of bits I use in all environments. There is no "update", only "tear down & replace" at the server level. Of course, you use canaries and the like as you update prod to the version currently in preprod.
Aren't you referring to "uninstall" and "install"? Aren't those already two separate but supported actions of basically any installer?
Does the installer have to have special logic to support that action, when whatever is calling the installer could easily do two steps instead of just one?
Not a NPM user, but with apt-get and whatnot, uninstalling "unused" dependencies is optional, right? So "installing" to "tear down and replace" seems like a kluge.
At G, it was about fifty keystrokes to tear down the applications on thousands of boxes and replace them with new containers running the latest version. This is done AFTER the image has been built. In prod, you don't update libraries, packages, or, frankly, anything but the set of containers that is running.
If the underlying OS (or Borg) is to be updated, all of the containers get kicked off. I presume that again, a fixed image is getting rolled out, but I was not working at that level.
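Outside Borg, the same tear-down-and-replace idea with plain Docker might look like this (the image and container names are invented for illustration):

    docker rm -f myapp                                             # tear down the old container entirely
    docker run -d --name myapp registry.example.com/myapp:v2.3.1   # replace it with the freshly built image
    # Nothing inside a running container ever gets "updated"; the whole unit is swapped.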
To be fair to those who had to reformat their Linux systems courtesy of this bug, I remember well from my previous job how incredibly hectic the pace is with JS dev. It's at the low-paid end of the market, so there's no real time (or budget) to invest in careful testing and development.
Add to this the sheer volume of dependencies NPM downloads, the incompatibilities between NPM versions (e.g. the modules) and the commonly seen attitude of 'it worked on their system, so it should work on mine'.
As a result, JS dev all too often results in a frantic mashing of keys, repeatedly removing the node_modules folder, changing NPM versions, repeating 'npm install' until hopefully things Simply Work Again (tm).
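For anyone who has not lived it, the ritual goes roughly like this (the npm version is picked at random until something sticks):

    rm -rf node_modules package-lock.json   # nuke the dependency tree
    npm install -g npm@5.6.0                # try a different npm version while you're at it
    npm install                             # re-download the world
    # ...repeat until things Simply Work Again (tm)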
I cannot say that I would care to ever return to that world.
Usually those are the ones which seem trendy or are discussed during training. That's why you had lots of ultra-crappy Windows desktop software in the 1990s, and later lots of crappy Java software.
Today, early programmers typically mess around making websites or mobile apps.