Safe & snug? Not when connected to teh Tubes, no.
But a healthy dose of measured paranoia seems to have managed to keep my connected life as boring as possible all these decades. Ta for asking.
The ransomware problems reported by The Reg over the past few weeks are enough to make you, er, wanna cry. Yet all that's happened is that known issues with Windows machines – desktop and server – have now come to everyone's attention and the bandwidth out of Microsoft's Windows Update servers has likely increased a bit relative …
Perhaps, for security reasons, people hide the server type tag on a Linux distro as part of PCI compliance, so you have no idea what server a website is running.
Did anyone bother to count the Linux kernel in Android phones? Something like 432 million smartphones in 2016.
Seriously? That doesn't smell right at all.
Check the source, and the data set.
That 12% figure comes from Spiceworks, who provide server monitoring software. Server monitoring software which can only be installed on Windows.
So it's far more likely that what this particular statistic is actually indicating is that Windows-centric companies use something other than Windows on 12% of their servers.
That number is just as ridiculous as the web statistics companies' claim that 90%+ of web servers run Linux. The methodology is bunk in both cases: SpiceWorks has no way of knowing what servers non-clients are using, just as the web statistics companies only detect the OS on edge nodes - so if you use a Linux-based load balancer (as almost everyone does, regardless of app server), they think you're running Linux.
You're right to be skeptical of statistics like this, because they're, in general, totally unreliable. No one has enough data to compile such wide-ranging statistics.
It is a reasonably good bet that the Five Eyes and similar signals intelligence agencies elsewhere have done the research and have a good idea of real usage, as well as the usage among their respective target populations, which might be significantly different. For a number of reasons, however, they won't be publishing anything about it.
I expect that Google and other search portal operators also would be able to report such information pretty accurately.
Thanks for the feedback - you're probably right that Spiceworks is biased in favor of Windows-centric orgs (it does offer Linux monitoring tools, though). It's something we'll keep in mind.
We've tidied up the section on Linux/Windows web server stats: there Unix-ish OSes rule the roost.
C.
"Thanks for the feedback - you're probably right that Spiceworks is biased in favor of Windows-centric orgs "
Forbes says 75% of company servers run Windows, which sounds about right to me, if not a bit low. Clearly the vast majority of company servers run Windows anyway.
It's BS, which is why it smells bad: http://www.zdnet.com/article/linux-foundation-finds-enterprise-linux-growing-at-windows-expense/
One of the limiting factors on Windows servers outside of Azure & AWS is licensing costs. Nobody with their head screwed on is buying Windows servers; they either rent from Azure or AWS, or they're wasting money by not engaging brain.
I think the last time I had a major problem upgrading a CentOS server was with an early release of CentOS 6.0 or 6.1. After that, updates within a major version drop in quite nicely.
The same can't be said for stuff that is not in the main OS release.
It seems that I have just finished testing a new release of WordPress when another one is released. The problem is that WP, like many other bits of software, releases a complete new version each time. The update process is aimed (quite rightly) at those who use WP via a hosted system, where you do it via some control panel (or worse). That doesn't work very well with my hosting my own WP server on an Intel NUC that sits on a shelf in my home office. So I have to update it manually, test it all, and then switch the HTTP server over to the new version. I just get that done and another PITA bit of work comes along. Rinse and repeat for thousands of other bits of software on systems all over the world, and the scale of updating these essential bits of software is an OMG moment.
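For the curious, the manual dance looks roughly like this - a sketch only, with made-up paths (wp-live, wp-staging) and the symlink switch being my own habit:

    # fetch and unpack the new release alongside the live install
    mkdir -p /var/www/wp-staging
    wget https://wordpress.org/latest.tar.gz
    tar -xzf latest.tar.gz -C /var/www/wp-staging
    # carry over config and content (themes, plugins, uploads)
    cp /var/www/wp-live/wp-config.php /var/www/wp-staging/wordpress/
    cp -a /var/www/wp-live/wp-content/. /var/www/wp-staging/wordpress/wp-content/
    # test against a staging vhost, then point the docroot at the new tree
    ln -sfn /var/www/wp-staging/wordpress /var/www/wp-current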
I'm lucky in that I have a test server (an old EE-Box) that I can use to make sure that the update works, AND that I have only one internet-facing server to deal with.
In a past job, I had many more to deal with and it was a right PITA. The OS wasn't the issue, it was everything else that caused us a great deal of angst and late nights.
Posting AC as my server gets enough hacking attempts as it is. Added another 500+ IP addys to the firewall only last week.
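(For anyone else drowning in blocklists: ipset keeps it to one firewall rule however long the list gets - a sketch, assuming iptables rather than nftables, and banned-ips.txt is a file of my own invention:)

    # one set, one rule, any number of addresses
    ipset create blocklist hash:ip
    while read ip; do ipset add blocklist "$ip"; done < banned-ips.txt
    iptables -I INPUT -m set --match-set blocklist src -j DROP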
Yeah, well, if you're using Wordpress it's hardly worth bothering to patch the OS :-)
That's a joke for the humour-impaired. But with more than a grain of truth, as someone who was once responsible for a Wordpress-driven site that got defaced. Turned out that a theme had bundled a plugin which had a vuln in it that had been patched 2+ years earlier, but the theme author hadn't updated the plugin. Of course, when someone found that unpatched plugin in said theme.....
Overall, I wouldn't trust Wordpress with anything important. Especially e-commerce, where you rapidly enter a maze of twisting plugins.
My previous employer used the "never touch a running system" approach to their customers' machines.
They were still distributing new servers and VMs with a 2000-vintage version of SLES on them! Why? Because they didn't want the bother of updating their applications to run on more modern kernels or system libraries. Security? Pah, it's Linux!
They only switched to a new distro (CentOS) when the hardware would no longer boot the ancient SLES / they couldn't get any RAID controllers with drivers that worked on it.
Same experience here. The last time I saw any kind of issue with upgrades was the transition from RHEL 6.0 to RHEL 6.1; in my case it was related to a blunder in LVM assuming certain defaults. You could work around it easily, though, and it got fixed within 24h.
This may be shocking to Windows people, but upgrading a Linux server, if you know what you are doing (that is, if you're experienced), is usually completely painless and very, very quick.
The problems in Linux come with commercial software from third parties, some of which insist on using abnormally large amounts of shell scripts with lots and lots of assumptions (and no failure checking whatsoever), seem to have odd libraries with strange dependencies, and have support personnel who think the Linux shell is a more complicated version of MS-DOS.
That is why some people keep running their RHEL 5.x boxes happily for years and years: for fear of screwing up these applications.
One way people keep these stupid turds running: they buy new hardware running RHEL 7.x, virtualise the old RHEL 5.x server, stick it in a VM, and access it via proxy software running on the physical RHEL 7.x side.
If you think about it, it's like a brute-force container. I have seen it done to work around applications that can't be upgraded and depend on old versions of OpenSSL: as the proxy runs on RHEL 7.x, you offload the SSL to the RHEL 7.x side and voila, new cipher/protocol support on an old application.
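As a minimal sketch of that offload idea - a real deployment would use nginx or HAProxy, and the addresses and cert paths here are invented:

    # on the RHEL 7.x host: terminate modern TLS, pass plaintext to the legacy VM
    socat OPENSSL-LISTEN:443,reuseaddr,fork,verify=0,cert=/etc/pki/tls/certs/front.pem,key=/etc/pki/tls/private/front.key \
        TCP:192.168.122.10:8080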
"The problems in Linux come with commercial software from third parties, some of which insist on using abnormally large amounts of shell scripts with lots and lots of assumptions (and no failure checking whatsoever), seem to have odd libraries with strange dependencies, and have support personnel who think the Linux shell is a more complicated version of MS-DOS."
Might those products be made by Windows programmers dabbling in reproducing their mistaken ways in a brand new environment?
Yes, that's the problem with Linux - you get into trouble whenever you install anything useful on it.
Linux relies too much on the "local compilation" model and the availability of source code. Not all companies are willing to give their source away and shut down their business immediately after.
It's worsened by distros like Debian that usually ship last-century code.
Because these issues are magnified by desktop applications, you get the single-digit percentage of Linux desktop systems.
"Linux relies too much on the "local compilation" model and availability of source code."
*groan*
no. commercial packages can easily be distributed either with local copies of all shared libs, or by statically linking everything [avoiding the problem], or by compiling separately for different distros if shared libs MUST be used for some reason.
I think Oracle mastered this kind of thing a long time ago, as one example.
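For illustration, both approaches in one place (myapp is hypothetical):

    # option 1: static linking - no shared-lib problem at all
    gcc -static -o myapp myapp.c
    # option 2: ship private copies of the libs next to the binary
    gcc -o myapp myapp.c -Wl,-rpath,'$ORIGIN/libs'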
/me runs into the 'Linux Binary Compatibility' thing on occasion, being on FreeBSD. Usually one of the 'CentOS' compatibility ports gets it done.
I think that when it comes to patching servers Debian has the best strategy. They backport security fixes so that holes get closed without affecting any of the functionality of the software in question.
This provides the smallest chance for problems. Because of this, I know companies who have auto-update enabled for their production servers running Debian.
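(On Debian that's typically the unattended-upgrades package - a minimal sketch:)

    apt-get install unattended-upgrades
    # turns on the daily automatic security-update run
    dpkg-reconfigure -plow unattended-upgrades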
So do I, and they were impacted by the recent... I want to say dovecot/postfix problem on Debian, which affected their entire platform's ability to send emails.
That said, that's only the second time they've had any problem with that process in three years that I am aware of - every other security update they've run has gone through without a glitch.
No update process is entirely without risk, which is why so many orgs have patch paranoia - god knows, I work at one, and thanks to a custom software stack which the entire business runs on, and which is almost pure technical debt, it's a fucking nightmare to keep on top of the security of servers that aren't fully supported any more and weren't implemented well in the first place (no snapshotting capability, no staging environment, flat network so no test VLANs, etc.)
But hiring a dev to pull the stack apart and re-implement it in modern platform/environment? Why would we need to do that? It's not broken!
*bangs head against desk*
I'm currently running Debian Jessie on one of these; it's headless and has gone through multiple dist-upgrades during its life. It started off as Etch and has never broken during any update. So I totally agree that Debian's patching methodology is rock solid: it's stable and it works.
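For reference, the release hop itself is pleasingly dull - a sketch, and obviously take a backup first:

    sed -i 's/jessie/stretch/g' /etc/apt/sources.list
    apt-get update
    apt-get dist-upgrade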
"They backport security fixes so that holes get closed without affecting any of the functionality of the software in question."
All decent OS vendors do that. RedHat do the same (Red Hat Enterprise, CentOS), as do Suse and, I suspect, other Linux/Unix distros. Microsoft seem to as well ('seem' - this is what I read, I don't use any MS product).
Where they vary is how quickly they backport fixes and how far back they do it - ie how long something is supported for.
Desktop/mobile OSes vs servers - it's the difference between stealing car stereos and robbing a bank. Harder, but vastly more rewarding.* And Linux and Unix are pretty popular for servers.
* Ransomware has changed the balance somewhat - potentially $300 a time for fairly easy pickings!
Sorry to shout, but it is annoying that people keep quoting this useless link thinking it means something. CVE reporting is voluntary, and every company has a different process by which they determine whether to file a CVE for a security bug at all, and whether to file a CVE for each individual issue, one per affected subsystem, or a single CVE that covers tons of unrelated stuff that happened to be fixed in the same patch set. Different companies also ship different amounts of stuff as part of the "OS".
Anyone looking at that list who has half a brain can tell easily how useless it is - notice that Windows 10 has more CVEs than Windows 8.1, which has more than Windows 7. Does anyone really believe Windows is getting LESS secure?
Quite.
Also, comparing the numbers for Debian (which includes over 51,000 packages) with those for Windows is no basis for comparison. Of those 51,000+ there is a very long tail of packages that get installed only in very small niches, so a CVE for a package that is only installed on compute cluster nodes never exposed directly to the Internet is in no way equivalent to a CVE reported against Windows, which has _much_ less variation in the set of software installed across the herd.
The top-50 list pointed at includes a large element of double-counting: Ubuntu and many other Debian derivatives will often inherit any flaws in Debian, being so closely derived from it.
I note that 2016 was chosen rather than 2017 - could it be that the wedge of Microsoft products in positions 6-11 in the 2017 list is not so easy to spin as somehow good?
http://www.cvedetails.com/top-50-products.php?year=2017
Saying embedded systems and systems with "sysadmins" who don't patch is somehow a Linux problem is akin to saying that it's Ford's fault so many of their cars break down due to people using crap oil from supermarkets.
What this actually is is a money problem. "We've done the R&D, we have a product, flog it. Updates? WGAF? We've got their money."
One more reason I run LEDE. I'm not relying on anyone else for my security.
Quite possibly, patrickstar. I don't, though. I usually blame the user, since it's ultimately the user's responsibility not to click unknown links, open unexpected attachments and generally act like a bellend. Granted, there are times when it's the admin's fault and there are times when it really is Microsoft's fault, given that most of the services are embedded into the OS. That said, it's still the admin's job to disable any that aren't needed.
Ultimately, whose fault it is matters less than having the right information to mitigate holes. Playing the blame game only gets in the way, which brings us back to my original point.
"I usually blame the user, since it's ultimately the user's responsibility not to click unknown links, open unexpected attachments and generally act like a bellend."
Personally, in this case I blame the admin. Windows comes with tools bundled (free of charge, FFS!) to disable scripting, file downloads, you name it. You can even make Windows run anything vaguely executable only from a whitelist, by path or by hash, which makes it much more tolerant of users, well, using it.
On a separate note, most of the exceedingly loud "*nix is better than everything else" people haven't worked in IT, aren't working in IT and aren't likely to in any capacity above 1st line support.
You can tell because they fail to understand basics like "The business requires $software-A. This runs on $OS-B." End result, company deploys $OS-B to run $software-A. Instead of accepting this, they advocate designing and creating a new bit of software to run on their preferred OS instead of actually doing what the management who own the company have decided to do, presumably thinking that IT has more control over the company than the CEO does and doesn't need to worry about timescales or budgets.
Well, that's a limited view, but let's consider for a moment why you might need multiple tiers of support: the software and the training don't meet the business case.
I'll agree that there are some pretty high-profile F-ups with migrations to Linux, but there are always high-profile F-ups with existing system migrations.
I'm one of those "linux people" that isn't "working in IT and aren't likely to in any capacity above 1st line support." except I am. I've had people working on government IT fly in to ask my advice, so I don't give AF about your autistic screeching against non-windows advocates.
If we're going to talk about large companies with legacy, non-Windows is the main case. Ford uses mainframes running COBOL that predate Windows; Facebook uses Linux; Netflix uses Linux across its open-source projects. Windows last appeared on the TOP500 supercomputer list in 2015, at position 436.
You are likely basing your judgements on anecdotal experience from within a Windows-oriented business that might dabble in Linux, treating its IT like a candy shop. If your CEO is educationally subnormal enough to believe that spending a minimum of 5k per server and 0.2k per seat is a good idea (that's just Windows Pro and Office at volume), then I worry for the future of your business, because software does augment business massively.
You seem to be doing the same. In my employer's business, our sector-specific software is about 2K per seat; there is a fair choice of suppliers, but all depend on Windows, so the Windows cost is a small part. Most people probably think the only software we need is Word to produce documents, and most of our employees "earning" the money aren't too capable of using Word either.
Anon because you don't talk about your employer.
"If you're paying for a Windows Server license, why would you only use it for a short period of time?"
If you have the Datacenter edition of Windows Server, you can run any number of Windows-based VMs on it without paying for a separate licence for each VM. Bringing up a Windows VM in that situation for a short time is not technically "free", since there's a cost to the original covering licence, but there's no incremental cost for another machine for a short time - so in that sense, the extra machine is free.
"If someone wanted to spin up a server to test something quickly, that's when they'd use Linux."
Not if you're testing Windows software...
Windows is of course the natural entry point because it has the biggest click-first-think-afterwards user base. It is also an inherently less secure user environment with less sandboxing, APIs which reach deeper into the OS and file management, etc. etc.
Linux has its enemies and its vulnerabilities. It has rough parity with Windows in the web server space; each Windows box tends to host more sites, though many of those may be inactive. You can prove what you like from those statistics.
Nevertheless, more Linux servers have rootkits installed than its user base realizes. And you can include the poorly-quantified but generally vast IoT in that.
So for ransomware go Windows; for a DDoS bot go Linux.
Now, about that vulnerability reporting-patching thing. Open Source products such as Linux and most of its apps get heavily scrutinised and many holes reported and, usually, promptly patched. Closed source products such as Windows and most of its apps get little scrutiny, reported vulnerabilities are often kept secret and unpatched for periods of years. Note how hole discovery tails off as a Linux product matures, while the same cannot be said of Windows products. Given the same crap code for v1.0, the Linux code base is much better at securing itself over time. Those CVE stats record only the surface activity, not the underlying strength. Rejoice in the activity of the Linux community.
Of course, any given device is only as secure as its OS installer makes it, and here the Linux-grabbing cheapskate gadget makers unravel all that good work. You can never fix human nature and stop dumbos clicking unsolicited email attachments, but what the IoT desperately needs is security standards and government legislation to enforce them.
The point is, someone noticed it.
We now know the NSA was sitting on a trove of Windows vulnerabilities it had found or gotten info on. Who could notice? How could we have known about them if nobody had hacked the NSA's servers and found out?
Yes, hackers would have found some things, one at a time, but hackers don't appear to share their discoveries. They hoard them, like the NSA, to be the rare one who does know.
The Linux community and FOSS in general share their vulnerability discoveries, so many more people can intervene once a problem is found and try to fix it. The fact that one vuln took 9 years to discover does not undermine the soundness of this process.
"Actually, it was noticed 11 years earlier. See the commits messages here:
https://github.com/dirtycow/dirtycow.github.io/wiki/VulnerabilityDetails
Why do people insist on shooting off their mouths about things they clearly don't understand?"
Thanks for pointing out that "Dirty Cow" is actually a WORSE situation than anybody thought - Linus Torvalds knew about it for 11 years, but did nothing to correct the issue. And you are right - I clearly didn't understand that it was so bad.
Hell, even M$ try to fix things when they discover them........
I was trying to make the point that Linux can be just as vulnerable. And not only that, but it was something that wasn't realised for years. But, as we know now, Linus Torvalds knew about the issue 11 years ago and did very little.
Yes, Linux is more secure than Windows. But assuming that you are and always will be secure on Linux (especially because I only used one example) is also a logical fallacy.
It's the #1 desktop OS, so it's the biggest target. Although there have been some interesting ransomware attacks for Mac OS too. Very few people use Linux as a desktop OS, and those who do are, on average, fairly technically adept. These attacks require the user to download a fishy file to start off, so Windows is the primary target.
...someone who uses Windows a lot, and then applies the perspective of a Windows user to the *nix world.
It doesn't correlate.
Also some inaccuracies: "And CentOS is currently at release 7, but version 5 (whose latest minor version release was in 2014) is still supported until 2024."
From Centos faq:
"Full Updates (including hardware updates): Currently to Q4, 2012
Updates ( including minor hardware updates): Up to Q1 of 2014
Maintenance Updates Q1, 2011 - Mar 31st, 2017 "
Yes, everyone should patch. But no, patching *nix and the different boxes is not like patching a Windows farm. And chances are, if your organisation is running Windows XP unpatched, then the organisation isn't being run by *nix sysadmins, because it's rare for someone of that ilk to be unaware of their duty to their flock. And the approach/methodology is different: you don't wait for the Mighty Source to feed you; you have several sources that can indicate you need to patch, as well as your particular distro maintainer. From what I can see you are saying: because Linux has a lot of long-term software, you should patch it. Like XP.
If that is the case, I struggle to keep a straight face. Systems should be patched. So if you are really writing an article, on El Reg, stating: make sure you patch systems, then why not write an article for the Daily Mail stating: Make sure you feel moral outrage.
I don't know what the purpose of this article was. Unless it was to just write an article. In which case, you win.
@m0rt - The current issue aside, the basic problem with security is not everyone patches their boxes and too many have bad surfing and email habits. If you fall into one or more of those groups, the OS might buy you more or less time before you get nailed by a nasty. For those who do patch, have good surfing habits, and are careful with emails, you are less likely to be harmed no matter the OS.
"We talk about, say, RHEL 5 or CentOS 7, but each of these versions has sub-versions and they do fall out of support over time.... Now, there's a difference between applying patches for, say, version 6.2 and updating from 6.2 to 6.3: in-version patches will generally not affect applications, but minor version upgrades have a higher risk. "
I don't know about RHEL and derivatives, but for Debian, regular patching brings it up to the current version number, e.g.:
cat /etc/debian_version
7.11
and yet it started out at 7.0.
Not many Linux users have stopped patching a Linux distribution because the vendor started throwing dysfunctionality (like Skype borking, WGA bricking and driver breaking) and spyware into the updates.
OK, KDE's become a problem. But, unlike with the Win8/10 UI, we can uninstall it and use other window managers.
What "market" is he talking about? Not everyone buys their software through commercial sales channels. The figure of 12% for non-MS servers looks bogus. Did they use OS-fingerprinting to arrive at that figure? If not I'd be interested to find out what methodology they actually used.
It's only including servers their scanners can see and get a response from. Anything behind a firewall, or with the services that respond to those requests switched off, won't show.
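(For anyone wondering what active OS fingerprinting looks like - needs root, and example.com stands in for a host you're actually allowed to scan:)

    sudo nmap -O --osscan-guess example.com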
However there are a hell of a lot of SMEs that are Windows shops.
So this is one of:
- there's more Windows boxes out there than we Enterprise IT types imagine, and the figures are correct
- *nix etc. admins are more likely to harden their web-facing boxes, so we can't see them in these stats
- most *nix is behind enterprise firewalls, so it doesn't appear in these stats
"there's more Windows boxes out there than we Enterprise IT types imagine, and the figures are correct"
Having worked in some fairly large enterprises, I have typically seen from 100% Windows to about 60-70% Windows. That's including ~1,500+ server estates with ~1k Windows and ~400 Linux - the rest were either big strange beasts (mainframe and similar) or VMware hosts.
Perhaps you (as a Linux admin type), see more Linux servers in your roles because companies with a large Linux footprint recruit more Linux admin types, and vice versa for Windows.
"Having worked in some fairly large enterprises, I have typically seen from 100% Windows to about 60-70% Windows. That's including ~1,500+ server estates with ~1k Windows and ~400 Linux - the rest were either big strange beasts (mainframe and similar) or VMware hosts."
Financial sector here. Large banks, and I mean large enough to be known globally, have all their stuff running on UNIX. At a certain bank where I worked, the majority of servers were Sun hardware running Solaris and IBM blades running Linux, with a dozen Windows servers used as domain controllers. And of course, the core systems ran on IBM mainframes.
But really, the ratio of UNIX-to-Windows was something like 300 to 10, and I'm probably being generous to Microsoft.
I've been using FreeBSD for years already, so... since the article talks about feeling safe & smug and all, I suppose that does somewhat apply to me. Of course, let's be realistic: no one is fully safe. Even with the ransomware issues, everyone seems to ignore the fact that the viruses didn't "just" take over whole networks - someone let them in.
But one thing I do worry about... exploits which can spread usually focus on one specific vulnerability and then exploit it en masse. Windows is the easiest target because (generally speaking) the main structure of every Windows computer is more or less the same.
So here we are on Linux, where a majority of systems have adopted systemd, which more or less enforces the same kind of standards, right from boot. And as we could read a few weeks back, even systemd is not without flaws. Worse yet: because of its nature, some of those flaws can even be exploited remotely.
With that in mind I can't help wonder how long it'll take for a worm to specifically attack systemd.
July 4th. My Day.
I've just done a full install of Windows 10 1607 from the downloaded MS 1607 ISO. In fairness (speaking like an MS apologist), most people may not bother to get fully updated there and then, post-install, but even so. I didn't even make the post-install step to 1703. Here goes...
I opted to update offline (just because I hate the annoying double-selection delay in choosing the online update options, then re-choosing what to keep, aka the spinning throbber* "getting things ready" section). I then installed updates post-installation via Windows Update online. The HP laptop used was no slouch.
Initial installation was fairly quick, but (not even) a year of Windows 10 1607 patches took 5+ hours (I'd say 7 once I got everything right) to fully make their mark, post-installation, even via an FTTC broadband connection.
FFS Microsoft, 5+ hours to post-install updates from the standard 1607 ISO? And that is just one laptop.
Linux Mint 18.2 installs in 10 minutes, all done, fully updated (and it also, nicely, seems to start downloading in the background while you are selecting your username, password etc). Yep, it's a clever, clean installation routine.
Why are we all still using Microsoft bloatware?
Linux Mint 18.2 Cinnamon even (without being biased here) has the nicer, cleaner, simpler interface than Win10, with the latest update.
Microsoft seems more and more about restricting what technology can do for people, protecting its interests, using the money it has to protect Microsoft's incumbent OS.
People in positions of influence (who can make that change) really need to start pushing the Linux message home - enough of the Microsoft all-things-to-all-men, scatter-brain, see-what-sticks approach. By that, I'm actually saying bluntly:
"No More Microsoft Desktop Upgrades For the NHS, Let's Go Linux".
I think it's finally turning against Microsoft. Microsoft need to back off with the constant pointless changes.
* Yes, throbber is its official name, for the spinning process wheel.
I installed Windows 10 in 20 minutes last weekend. Everything was recognized, all ready to go. Not sure what your problem was. You did grab the latest version off the Internet first, right? That has all the patches already in it. Just like Mint 18.2 did.
I mean, I could install Mint 17 and get the same experience you got with Windows 10, but that wouldn't be a fair comparison.
Windows 10 1703 is still in pilot, or just hitting production (it certainly hasn't been rolled out fully), according to Microsoft's own Windows 10 update timeline. Therefore Windows 10 1607 'is' the current production release.
> Linux Mint 18.2 installs in 10 minutes, all done, fully updated
Unless you are stuck using a satellite Internet connection, in which case it takes about 2 days. apt really doesn't like high latency connections!
Added a proxy pointing at the 0.75Mb BT connection and it only took 3 hours...
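(For anyone in the same boat, pointing apt at a local proxy is a one-liner - the proxy address here is made up:)

    echo 'Acquire::http::Proxy "http://10.0.0.1:3128/";' > /etc/apt/apt.conf.d/02proxy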
That the transmission vector was a flaw in the SMBv1 protocol.
In my world, we don't install Samba *anything* (even cifs support) on a Linux host unless, well, we might need it, like to maybe talk to Windows. (I believe in keeping that attack surface as small as possible... wonder why.)
Since it isn't there unless it's needed, we had no issues... we just updated where it was installed.
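(Checking takes seconds - on RHEL-ish and Debian-ish boxes respectively:)

    rpm -qa | grep -i samba    # RHEL/CentOS
    dpkg -l | grep -i samba    # Debian/Ubuntu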
I'll note that you can get extended lifecycle support for RHEL. It ain't cheap, but you can get it.
"I'll note that you can get extended lifecycle support for RHEL. It ain't cheap, but you can get it."
Moreover, you can get any level of support you like from people *other* than RH; there is a competitive market there.
I do very little of that now, but I do still support a box that runs something that was once based on RHEL4 - the owner wants it that way, and with good reason, IMO. It doesn't look much like RHEL4 any more, but the applications don't know that...
Vic.
"That the transmission vector was a flaw in the SMBv1 protocol."
More specifically: the implementation of the handling of a particular request in SMBv1. "EternalBlue targeted an implementation mistake in the ancient version of the Server Message Block (SMBv1) message handler in the Windows kernel – enabled by default on any OS from XP to Windows Server 2016. ".
Samba is an entirely different codebase, and I have not heard of Linux servers being affected.
The Linux server stats are way out.
https://en.wikipedia.org/wiki/Usage_share_of_operating_systems#Public_servers_on_the_Internet
What goes on behind company firewalls is a different matter, but 12% sounds highly dubious.
I'm not sure what the point of the article is apart from use a long term support version of a distro and keep your software up-to-date.
Someone mentioned it's hard on Linux because of the amount of local compiling you need to do. Perhaps that was an issue in 1999, but the only thing I've had to compile on a server in *years* is OSSEC - an intrusion detection system. And you compile that so that you can trust the binaries.
Live kernel patching is also available (for a price) which means you don't even need to reboot your server to apply all the latest patches.
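(Ubuntu's canonical-livepatch is one example - a sketch, with the token coming from your Canonical account:)

    sudo snap install canonical-livepatch
    sudo canonical-livepatch enable <token>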
As someone mentioned, for Debian/Ubuntu systems, apt-get update; apt-get upgrade will sort you out. Use apt-get dist-upgrade if you want to upgrade held packages, but be careful with that because it'll restart services.
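(A dry run first is cheap insurance - the -s flag just simulates:)

    apt-get update
    apt-get -s upgrade    # show what would change, touch nothing
    apt-get upgrade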
You'll also want to subscribe to the security advisory mailing lists for your chosen distribution. They'll let you know when vulnerabilities are patched and available to upgrade from the repositories.
After twenty-five years of supporting Windows for my clients I'm just starting to make some inroads into changing their minds about its future. What will be the catalyst will be the pricing of Windows as a service and the increased support costs of maintaining all those massive piles of spaghetti we call systems.
Netcraft's Web server survey shows that Apache (almost always run on UNIX machines) had 45.8% of the market vs. Microsoft IIS which had 7.7% of the market in June 2016. Yes, I know not every server serves up Web pages, but a lot of them do, which makes this 12% figure highly suspect in my books.
It doesn't surprise me that people fail to update point release distros. They're updated too frequently, even the LTR versions and I've never had a completely painless update from one major version to the next.
I think the rolling release model is much better in terms of security.
Attackers pick the most effective targets, that is, the ones with the biggest payoff for the amount of effort invested. Right now it's Windows because persuading Grandma to fork over $300 is easier than the other options. It doesn't make Windows *inherently* worse just because of that.
What makes Windows awkward is the sheer longevity of unpatched systems. Pre-SP3 XP systems in the millions for example. Yikes.
But it's not helpful when us Linux users get all smug, because we have serious issues of our own. It is simply not true that all the eyeballs on Linux are evenly distributed across a distro: the kernel has a few thousand, but samba, quagga, cups? Maybe a few hundred. If Linux were widely deployed enough to be an attractive target, you can bet attackers would be exploiting holes in this stuff all day long. OpenSSL is recent and disturbing proof.
None of this is to knock Linux, or Windows, but pointing the finger and laughing at the Other OS is not helpful. And there is something to be said for a unified OS + userland under very tight control of a small number of security-minded people. (Oh, hello Theo. Is it OpenBSD time?)
Why does the author add up all the Windows versions but not the macOS versions?
The line about 10% being non-Windows, listing 2% Linux and 3.6% OS X, seems really screwed up until you realize Linux is 2.36% and all listed versions of macOS add up to 6.10%. Did the author really believe there is a mystery OS with almost 5% of the market that isn't listed?
I run Mint Linux, and v18.2 has just come out. It's based on the most recent Ubuntu LTS version.
There were a couple of big changes, but with the update process I had the choice whether or not to take them. Do I switch the window manager? Do I make the jump from the 4.4 kernel to the 4.8 kernel?
Or I could install from scratch.
I used the update process, and then made the optional switches. It was all under my control.
I like being in control.
Servers are a red herring, I think. Ransomware is mostly, I assume, an attack on desktop Windows machines, or servers running remote desktop. The initial upload of the recent attack was probably via a compromised Windows server at a Ukrainian Windows software developer, but the overwhelming majority of infections would be on desktop machines infected by other desktop machines. And the most common vector of initial infection is via desktop machines too, typically via an Outlook attachment or a download with some kind of Windows executable payload.
Keeping a fleet of Windows desktops updated is hard. How else can you explain that the most commonly infected Windows desktop is Windows 7, which has pretty good update capabilities? I don't know how, but I think we can say for sure that it is badly broken. I'm going to dump on Microsoft here: it is a very complex OS stack, full of legacy code and truly incredible security holes (plain-text passwords bouncing across the LAN; remote execution tools which basically default to high privileges), with an update mechanism which apparently doesn't work very well for the small business and consumer user base it aims to serve. Linux, and to a lesser extent Mac, desktop users have several reasons to feel superior: better-quality software, security issues taken seriously for the last 30 years rather than the last five, incomparably better update mechanisms, not to mention much more genetic diversity in the case of Linux. With such a varied mix of kernels and distribution configs, finding an exploit that can infect a critical mass of machines is really hard, I think.
Nearly all patches for Windows servers require reboots - a very different situation from modern Linux - and reboots on remote desktop servers are inconvenient, particularly for businesses which don't have dedicated admin staff. But we're stuck with Windows, so it's a smart choice to avoid it yourself if possible.
"about 12 per cent of servers run non-Windows OSs"
You missed the most important part...
"In the Spiceworks network (which are mostly on-premises), about 12 per cent of servers run non-Windows OSs."
"on-premises" as in desktops PC acting as servers !!!
"According to netmarketshare.com the non-Windows market share "
Specifically, _what market_?
Usually "the market" is defined by the number of licences sold, and of course only Apple and Microsoft sell licences in bulk. Linux is sold as a support contract, and that's a corporate edition.
No wonder there are only 10% of those. Of course, all the home users have been dismissed, but that's how the bean counters operate: they don't pay, so they don't exist.