Should just kick out all the legacy baggage and go into the brave new world.
If any old baggage or legacy stuff needs to be run on ARM chips, use virtualization.
Like XP Mode back in the day.
Microsoft now runs a bunch of Windows servers on ARM processors. Apparently, these ARM chips are quite good at their jobs and Microsoft might try converting entire categories of workloads over. All around the world the tech press has speculated on whether or not Windows on ARM will be showing up in on-premises datacenters. In …
Lol at dreams about SQL on Linux in some way meaning the demise of Windows Server, even though Windows Server sales are still climbing!
We already know it's a cut-down version to keep certain devs happy. Production workloads that need any enterprise features are going to need to run on Windows Server.
"Or MySQL, Oracle, DB2 etc. on your choice of one *nix or another...."
I meant Microsoft SQL Server workloads obviously.
NB: MySQL and Oracle run just fine on Windows Server. Can't say I've seen an install of DB2 in living memory to know. And Oracle, at least, is much easier to install and initially configure on Windows Server than on *nix...
I use SQL every day at work. I don't really care whether the server is running Windows or Linux, just whether I can connect and run the queries I need and tune them.
To the extent that Windows is essentially finished, especially for RDBMS servers, this is the beginning of the end. The fact that SQL Server is one of the "big dogs" of the database world, and that it ran exclusively on Windows, was a big USP for Windows Server. It isn't any more.
You don't need Windows domain servers - AD in the cloud will do. You don't need Windows Server to run a DHCP server, a DNS server, a mail server (Office 365, remember), and now you don't need it for on-premises SQL (you didn't really need it for cloud anyway, did you?)
With the advent of Visual Studio Code for Linux, you quite possibly don't need the development stack I use every day, either.
You can only swim against the tide for so long. The day of Microsoft dominance is ending. They are slowly, but surely, going back to being just one among many, and the demise of Windows Server is simply another nail in that coffin. I suspect the Windows desktop will be around a bit longer, but macOS, Chrome OS and desktop Android machines need to start selling in bigger numbers before that finally falls off the perch.
"Or MySQL, Oracle, DB2 etc. on your choice of one *nix or another...."
If your customer's program uses MS SQL Server then that's what you need. If the Linux version lacks features that the customer's program uses then you're stuck with Windows Server.
I'm interested in the Linux MS SQL Server for replacing SQL Server Express which gets installed for free when you install a program which requires a database.
Or... converted to PostgreSQL / (commercial) EnterpriseDB for better performance, greater reliability and significantly better security.
An informal survey of enterprise Linux developers and companies has shown that only a few who are "contracted" for some Windows development are using SQL Server on Linux.
Furthermore, SQL Server does not run natively on Linux - it actually runs horribly, and is a total dud.
Let me boil the writer's argument down to one sentence: running Windows on ARM means that all servers will run Linux on ARM with non-Microsoft SERVER applications, because Microsoft made such a mess of CLIENT applications with Windows 8 for ARM?
MS are in a mess over all sorts of things, but I think they have a clearer view of what's going on than this author.
Windows Server on ARM is currently a proof of concept for Microsoft. There has been no indication that it will ever see general release, and Microsoft's strategy is clearly to get as many customers as possible to stop running their own servers and use Azure instead. x86 support for things like Exchange is already available, but Linux is where the growth is: it's already popular on Azure and even better suited to ARM than Windows.
I think what Trev is saying is that Microsoft's adoption will drive open standardisation that unintentionally benefits its competitors, who will further benefit from Microsoft's litany of recent failures.
Microsoft has always been hideously inept at nearly everything, but managed to maintain its monopoly by the circular dependency of proprietary standards, and legacy software locked into those standards. However, it wasn't prepared for this mass migration to new architectures, and now it's playing catchup, and losing, because it has little choice but to abandon its own legacy, and thus break the chain of dependency that kept an entire industry tied to Microsoft for decades.
Microsoft is still a big player, but it will become increasingly irrelevant over the coming years, and is inevitably destined for obscurity, if not extinction.
Personally I won't miss it, in fact I suspect most people won't even notice, as we head into an ever more abstracted Cloud environment.
Like Apple, Microsoft has the image of selling stuff that works. Even when it doesn't, and they are just emulating others. It's all about the image, and what the decision makers (aka sheep) follow.
Just look at the uptake for Azure. As a techie you don't trust it and you understand the complexities they must overcome before they can be AWS, but because your desktops and servers run Windows, it must be the direction to go, according to the decision makers.
Like a bad rash, Microsoft will be back.
I'm always confused by this kind of argument...as if people live on a completely separate planet to the one I live on. Microsoft positioned itself some time ago as the convincing number 2 in cloud computing, and has sustained increasing cloud market share over several years (i.e., they are gradually playing catch-up with AWS market share, though there is still some considerable way to go). Anyone who has had dealings with the company over the last few years (all of this current decade) knows that Microsoft long since adopted a rigorous cloud-first policy internally. The issue for them is not how to maintain x86 legacy apps in the cloud - something that can be done technically, but gets little explicit attention from them. Azure, at the heart of Microsoft's ambitions, is very clearly not about maintaining some old x86 and desktop legacy! I don't get how anyone could seriously think that is the case.
My world is taken up with designing and implementing a modern microservice architecture for a continent-wide industry sector application on what is arguably the world's most advanced generally-available hyper-scale/high-availability technology platform, Azure Service Fabric, which is significantly more advanced technically than, say, Kubernetes (to which Microsoft contribute) or Docker Swarm (which Microsoft is committed to supporting). We are using the Windows version, but they ported Service Fabric to Linux because, very simply, all that counts in the cloud is consumption of CPU cycles and storage. x86 legacy? I think not. Service Fabric is in its fifth-generation and the foundation on which so much of Azure is built.
We all know the world has moved on since the PC-centric days of the 1990s. Microsoft has lost several battles since those days, to be sure. But the underlying assumption in the article and several comments here is that they have failed to grasp that the world has moved on. This is clearly untrue. It has been very clearly untrue for a very long time. It doesn't begin to describe their strategies and practices in recent years. It really is high time people moved on in their thinking about market players like Microsoft. No point being stuck in the 1990s, now is there?
Yeah, the author sounds like a kid - or a kid 20 years back, going on and on about how Java was going to cure platform dependence. Besides, when you think about powerhouse data centers, do you think JavaScript ("web native")? No, you think about embedding Adobe Flash script in a JVM!
That is not how I read his argument. I thought his argument was basically that if the shift to ARM hardware is important enough for Microsoft to make the jump, then they are already losing the advantage of legacy application lock in, so they won't have any means to force themselves into a dominant position on ARM hardware. He believes that they will have trouble competing on a more level playing field.
Microsoft's choices regarding Windows 10 have ruined the brand name, and there's no need to carry any baggage or expectations forward to this new platform.
I guess that Microsoft will now join Apple (and no doubt a few others) in NOT inviting representatives of this site to Press junkets. How dare anyone criticise Windows 10, the Windows to end all other Windows for everyone on the planet? /s /s /s
The acceptance of ARM as a server platform rests not with MS but with all the application vendors out there. If they don't port the likes of SAP, WebSphere, Oracle DB and the multitude of other applications that businesses use to an ARM platform, then it won't matter a jot whether that platform runs Linux or Windows or something else.
Intel should be very worried about x86 - its day might be gone. At least (AFAIK) they have an ARM licence, but I sense that inside Intel there is a huge feeling of NIH, déjà vu and 'if only' all mixed up together.
Now if only some of this lovely ARM server H/W was available to us mere mortals, off the shelf and at prices comparable to i5/i7 systems. That may be some time coming though.
"now only if some of this lovely ARM Server H/W was available to us mere mortals at prices that were compatible to i5/i7 systems and off the shelf. That may be sometime coming though."
Quite. I'd love to get my hands on few Cavium ThunderX2 boxes/boards to replace some aging x86 stuff with. At the moment the only realistic options for reducing power consumption are either crappy Atom (for some very light non-memory intensive workloads) or low power i5/i7 (or perhaps Naples, depending what AMD comes up with). Given a choice I'd opt for the Cavium over the others, given the goodies in the SoC.
This is huge. SBSA is the real threat to Intel.
It is huge.
It's also something Microsoft could have defined back in 2008, when they first acquired their own ARM foundry licence. Had they done that (they even demonstrated Win 7 + Office on ARM), then instead of doing their level best to Utterly Ruin Windows by trying to be cool and down with the mobile kids by pushing Metro, Windows 8, etc. (something they continue to do to this day, plus they've added snooping into 10), we'd now be used to ARM servers and desktops, MS would still be top of the mountain, and we'd probably be happier with Windows too. Instead we're seriously wondering about not bothering with anything MS at all.
Cock up.
This is definitely bad news for Intel, and by extension all current users of X86. That includes the supercomputer guys. Anyone who actually needs all that compute offered by Intel's biggest chips will be finding their lives becoming expensive. ARM is fine for what it's intended for, but a fire-breathing high performance general purpose CPU suitable for weather forecasts it is not.
Interestingly, Fujitsu/RIKEN are contemplating ARM plus their own specialist extensions to make their next supercomputer. Expensive.
ARM is fine for what it's intended for, but a fire-breathing high performance general purpose CPU suitable for weather forecasts it is not.
HPC teams already use heterogeneous hardware mixing x86 with GPUs, because x86 hardly shines at parallel vector work. While x86 delivers great single-thread performance, that isn't necessarily the most important part of HPC. ARM chips already come with optional hardware acceleration packages; throw in FPGAs and GPUs and, at the right price*, the HPC crowd will be drooling.
* If you look at some of the biggest HPC installs, it's obvious that purchase price is not that important. Increasingly, it's important to have something that doesn't need its own dedicated power station.
I do not get this part "Anyone who actually needs all that compute offered by Intel's biggest chips will be finding their lives becoming expensive. ARM is fine for what it's intended for, but a fire-breathing high performance general purpose CPU suitable for weather forecasts it is not."
It is enough for ARM to be competitive on generic MT-friendly loads. On the other front you have competition from OpenPOWER, led by IBM. These two taken together, with Linux as a platform, are enough to provide some competition in the datacenters. Not much of it, but some is surely better than nothing. And who knows, Microsoft could port Windows to POWER architecture if they wanted to, that won't be very difficult (I know, I've seen the sources).
HPC teams already use heterogenous hardware mixing x86 with GPUs because x86 hardly shines at parallel vector work.
Well, it depends on the workload. Xeon Phi is quite a big beast, and well suited to some workloads. As ever, it depends.
GPUs are problematic for some workloads. Their downfall is latency: they're (still) all about loading up some data, doing a lot of maths very quickly, and then unloading the results. For some problems this is less than ideal. A machine like RIKEN's K is very impressive because they did so much to reduce data-sharing latency in it, which gave it an unparalleled peak:mean performance ratio.
ARM chips already come with optional hardware acceleration packages, throw in FPGAs and GPUs and, at the right price*, the HPC crowd will be drooling.
Drooling, but facing a massive code rewrite!
If you look at some the biggest HPC installs it's obvious that purchase price is not that important.
That's mostly because the chips they use are the same (more or less) as gamers / server farms use.
It costs Intel around $6 billion to do a step in their design, and it's about the same for everyone else doing circuits that complex and fast (be it GPU, Ethernet switch, whatever). If Intel stops bothering, or if NVidia give up because we're playing games on phones instead of PCs or consoles, the HPC community would have to bear the cost themselves. The cost is enormous.
The only reason NVidia engaged with the HPC community in the first place was a reduction in PC sales.
ARM is fine for what it's intended for, but a fire-breathing high performance general purpose CPU suitable for weather forecasts it is not.
Yet. The Mont Blanc Project ("European Approach Towards Energy Efficient High Performance" - thank goodness they didn't try to bludgeon that into an acronym) is addressing just this issue, using the Cavium ThunderX2™ mentioned in the article. This press release is a bit more readable than the project's website as a whole.
"This is huge. SBSA is the real threat to Intel."
I will agree at least to that extent. For years, ARM has been a fixed-hardware architecture, hobbled for the most part by those vertically-integrated black boxes. Endorsing and encouraging the use of a general enumerated bus opens the way for ARM systems to be more general-purpose, since ARM CPUs no longer have to, as the article notes, be bound to fixed hardware profiles.
"For years, ARM has been a fixed-hardware architecture: hobbled for the most part by those vertically-integrated black boxes."
Certainly that's part of it. But there's more to this than vertically integrated black boxes.
Who remembers the mess that was WinCE and its derivatives in the PDA (and allegedly in the set top and other embedded) market? PocketPC and HandheldPC2000 and so on.
Some of these designs were not "vertically integrated black boxes", some of these were meant to be able to run an OS and built-in apps plus whatever else the customer might fancy.
How well did they work out for MS and the MS-dependent hardware vendors who bought the Kool-Aid?
Was it ARM (or MIPS or ...) that hampered those things, or was it more about Windows CE's all-round uselessness, which just served to emphasise that neither Intel nor MS could do without each other?
Weeeellll... let's give it a couple more years, shall we?
I've still got a Jornada 720 somewhere, see e.g. http://www.hpcfactor.com/reviews/hardware/hp/jornada720/
Decent (albeit not Psion-class) hardware for its time, crippled by Windows CE-derived HPC2000. In comparison, just think what Psion could have done at that stage, if they'd had a chance to compete fairly against MS.
Yes, it's been problematic swapping an EXISTING install to a different vendor's ARM system, in comparison with basic BIOS, then ACPI, then EFI based boot on commodity x86-64 motherboards or even laptops.
This also is the route not just for servers but for ARM-based convertible tablets (with keyboards) or ultrabook-style ARM machines running iOS to replace macOS, or Linux to replace Windows. At present it's trivial to download and install Linux from a USB stick on an x86-64 based laptop, but it needs rather more planning and customisation to install Linux instead of iOS / Windows / Android / Chrome on an ARM-based tablet, or Linux-based OpenWRT on a router.
I look forward to the day of being able to customise a Linux distro for TVs instead of the garbage inflicted on users called "Android TV".
Great article.
I look forward to the day of being able to customise a Linux distro for TVs instead of the garbage inflicted on users called "Android TV".
This sounds very much like wishful thinking to me: in the consumer space Android has pretty much seen off Linux, because most consumers don't really relish the idea of customising the software on their TV. They want the easiest access to their favourite shows, which will always come with some kind of DRM.
"in the consumer space Android has pretty much beaten Linux off because most consumers don't really relish the idea of customising the software on their TV. "
So how do you explain the fact that consumers really relish the idea of customising their Android (based version of Linux) software on their phones?
So how do you explain the fact that consumers really relish the idea of customising their Android (based version of Linux) software on their phones?
Because they don't 'customise' it; consumers merely do a little tailoring (change settings), furnishing (add apps) and decorating (change the wallpaper) to personalise their device...
'Customise' implies doing more, such as taking a Nook Simple Touch eReader and turning it into an e-Ink Android device - which doesn't mean that yesterday's customizations don't and can't become tomorrow's stock apps and features.
"So how do you explain the fact that consumers really relish the idea of customising their Android (based version of Linux) software on their phones?"
Dragging icons about, adding a couple of widgets and changing the wallpaper isn't really that much of a customisation. And I bet most people don't use widgets.
"Yes, it's been problematic swapping an EXISTING install to to a different vendor's ARM system"
Different vendor's ARM system? Make that [same] vendor's [allegedly slightly] different ARM system. The Raspberry Pi 2B went from v1.1 to v1.2, swapping the CPU (but not the rest of the SoC) from 32-bit to 64-bit, with the consequence that images which will boot on the old 32-bit model 2 and images which will boot on the 64-bit model 3 both fail.
As I've spent the weekend discovering.
@AC
You're missing the point about SBSA here. Yes, you can retarget to a different hardware platform if there is already a port for that platform; if your platform is not supported, then you are out of luck. With SBSA, however, you no longer need to target a platform: you just target SBSA and it will work on any target that supports SBSA, provided suitable drivers are available.
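As a rough illustration of what that generic targeting looks like from the OS side: on Linux, a platform described via ACPI tables (the SBSA/SBBR route) and one described via a flattened device tree show up under different standard sysfs paths. This is only a sketch; the function takes a sysfs root argument purely so the check can be tried against a fake tree rather than the real /sys.

```shell
#!/bin/sh
# How was this Linux box described to the kernel: ACPI tables
# (the generic, SBSA/SBBR-style route) or a flattened device tree
# (the traditional board-specific route)?
# /sys/firmware/acpi/tables and /sys/firmware/devicetree/base are the
# standard sysfs locations for each.

firmware_style() {
    sysfs="${1:-/sys}"
    if [ -d "$sysfs/firmware/acpi/tables" ]; then
        echo "ACPI: generic platform description, one OS image fits all"
    elif [ -d "$sysfs/firmware/devicetree/base" ]; then
        echo "device tree: board-specific description baked into the image"
    else
        echo "no firmware description found under $sysfs/firmware"
    fi
}

firmware_style "$@"
```

On a pre-SBSA ARM board you'd expect the device-tree branch; on an SBSA/SBBR server (or a typical x86 box) the ACPI one.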
I completely agree with the importance of SBSA for Linux, but think the death of Windows as predicted in the article is nonsense. Windows on ARM is a server-class product; the sysadmins and developers working with it will understand that it doesn't run x86 binaries.
MS' real problem is licensing. MS need to find a way of making ARM licensing as fair as x86 licensing without impacting revenue. This is difficult because ARM isn't as capable core-for-core as x86 but it might be capable-enough for many uses. Do they over-charge for ARM or lose revenue by scaling down the cost for ARM? Or do they make a single-user license free and bump up the CALs?
Windows may not be the future, but there is still a stack-load of profit in it. I think MS have miscalculated. If people are going to rewrite apps for the cloud, I think they are more likely to go AWS. What MS should have done is put effort into useful server stuff. Work with the vendors so that they can interrogate the power supplies in the servers and routers so data-centre power management becomes easy. They should have made some decent load-balancing - perhaps worked with Intel to build hardware load-balancing into NICs. They could have shifted their server pricing model to opex, rather than promoting cloud, which will eventually eat their lunch. They should have done "Automation for Small Business" (on-premise) where latency and accommodating legacy applications is key. Their server products focused on the large enterprise at the expense of making things "cloud-easy." A tweak to their licensing model away from per-core and per-cpu and they could have sacrificed performance for ease-of-use and made all those SaaS threats go away.
If SQL Server for Linux actually becomes a thing, it will morph into Postgres for Linux. First for the less critical applications (which will pre-package it) and then for the more important stuff.
I smell the whiff of burning platforms. It is still quite a long way off, but it is there.
Instead, they were faffing around with Vista, Windows 8 and 10.
It's those I fear most... The average application quality is far worse than it was before the "web native" learnt how to use a keyboard. Most applications are nowadays built upon layers and layers of half-baked frameworks, and the "web native" is always attracted, like a moth, to the latest shiny one. More and more "applications" are really ugly, uncomfortable to use, slow, and ill-designed. But of course they contain all the latest buzzwords. IoT is a perfect example of where "web natives" will drive us all.
MS killed itself in the server area with its licensing policies - were CALs really needed after you bought an expensive server licence, plus the client OS ones as well? Linux adoption was often driven by purely financial reasons - it was cheaper or free - despite the fact that replicating many of the Windows features (i.e. AD) requires cobbling together many disparate pieces and praying they work together. Of course, if your business is just publishing thousands of cheap web hosting servers, that was not an issue.
But Linux is often a step backwards - while its kernel is good, a lot of the stuff built upon it is still truly ugly, developed by teams lacking the proper resources and addressing only their own subset of features - the only good thing usually being the commercial applications ported from Unix, which was another reason to replace those expensive Unix licences with cheap Linux.
So yes, in the long run the cheapest win. You also get what you pay for...
"""You also get what you pay for...""
You're implying open source is free as in free beer; this is not the case, not by a long shot. Maybe for a ma-and-pa shop, if they have someone competent, but certainly not for most medium/large businesses.
The difference is that open source will not trip you up: you can use it whatever way you see fit, and it doesn't impose stupid limits like what you can or can't virtualize.
Nothing is really free... While you need competent staff to manage open source, you also need competent staff to manage purchased software.
If you're trying to run any kind of software without having someone sufficiently competent managing it you're going to have problems. This notion that you can buy off the shelf software and not hire appropriate people to configure and manage it is part of the reason there are so many stability and security problems these days.
Legacy x86 Windows applications have been a millstone around the neck of the entire industry for ages now ....We've been bringing up an entire generation of "web native" developers who are all about writing applications without the crutch of platforms past....
To summarise - script kiddies rule, move over grandads - OK, it's all yours, son. Enjoy!
The upcoming availability of SQL server on Linux is all the proof we need that the game is over and, in the data centre at least, Microsoft didn't win.
Does the author truly believe that statement? Microsoft's SQL on Linux play has nothing to do with giving up on Windows; it's about targeting the other DB players, in particular Oracle, whose licensing fees make Microsoft SQL look cheap.
Windows server still owns the Enterprise. It might lag in public websites, but most businesses have the majority of their workloads on Windows, and will either keep them on windows or convert them to PaaS or SaaS workloads. MS using Linux to run a switch, or using Win on ARM to run a particular workload at the backend of Azure, is about them using the best tool in every situation. Not about giving up on Windows.
Seeing a lot of Win->Lin transitions as well: there was a time when businesses would easily accept a biggish app based on MySQL (or Postgres) and Java (or PHP), "as long as it runs on Windows", so that "our Admins can do the day-to-day stuff".
These Admins are now gone (e.g. selling Buzzwords to even less adept Tie-wearers), resulting in the next upgrade cycle bringing a new OS.
"Since Linux was cheaper - even with third party support for the OS it was so much cheaper than Windows."
It's not - even when you don't allow for the lower TCO of Windows Server. Go look at the support / license costs for enterprise Redhat or Suse. It's more than the total Windows license + support costs.
"The price increases for MS SQL licence per core etc., was prohibitive - and this accelerated Linux adoption where possible."
LOL, have you seen the equivalent costs for a similar *nix focused product like Oracle? If you mean replace it with a freeware database like MySQL, then that can run just as well on Windows too...
"we're about 75% Windows."
one of the advantages of using Linux on ARM would be the reduction in wasted electricity on that 75% of your boxen running Windows. Just sayin'. Inefficiencies like ".Not" and Micro-shaft's operating system "overall" have hidden costs. And don't forget the constant UPDATES (and phone-home spyware) eating your bandwidth.
Last I checked, even a Windows 7 system wasn't immune to the CPU-intensive background re-indexing of ".Not" garbage, following even the most insignificant of windows updates.
"From my previous experience, of approx 25,000 servers, Windows was on 25% at most, and being removed and replaced by Linux since it is cheaper."
Not my experience. Windows Server use is still growing as its TCO is lower, and Linux mostly only replaces midrange *nix systems or is used in digital / web / HPC niche uses.
I'd hope that Linux on ARM was taking over. I've a vested interest in remaining in work for a few years more. Also, working with Windows gets more and more painful as time goes by. However, Microsoft has a long history of pulling the sort of thing that IBM used to do, where FUD is used to convince the management layer that "No one gets fired for buying ~~IBM~~ Microsoft." PHBs like that sort of talk, and I've encountered several projects which started off sensible but were then diverted down the MS route after a sales person had some words with the PHB and spread the FUD about "an OS designed in a teenager's bedroom".
""No one gets fired for buying IBM<<<Microsoft."
The FUD of which you speak might have worked reliably at one time, e.g. prior to Windows Vista.
If it still works after Vista and Windows 8 and Windows 10 and Office Ribbon and [you name it] (stuff which the average PHB or his kids will be aware of), the quality of business management must be even worse than I thought. Though I do happily admit that there are a lot of certified Microsoft dependent IT staff around whose worst personal nightmare is having to move out of their comfort zones.
"Linux and BSD has been running merrily on ARM since the last century."
And making little headway into the data centre where the hardware has remained dominated by Intel.
To save you reading the article again let me explain. The hardware has remained dominated by Intel because there was a standard configuration built around the Intel processors so that there was a generic platform for OS vendors to target. All that time ARM devices were wrapped up in a host of different platforms so the builds required customisation. That was not what DC operators wanted.
> dominated by Intel because there was a standard configuration
Perhaps this is part of it, but I think the more salient point is that not many attempts have been made to make and offer "server-class" ARM systems. The PC desktop/server world has been constantly evolving its Industry-Standard Architecture, bringing in new types of memory, buses and peripheral interconnect the whole time it has existed. By contrast, ARM systems tend to favour the system-on-a-chip approach, with features that are much more suited to embedded applications than to being at the centre of a peripheral-focused/interconnected ecosystem. So you tend to see soldered-on RAM instead of pluggable DIMMs, vendor-specific eMMC storage (there are no standards surrounding how to interface with this class of flash memory) and no sign at all of standard PCI or SATA buses, unless bolted on as an afterthought (a daughter card going over USB, say).
For years, this has been fine. Nobody (apart from uninformed end users who, e.g., expected their Windows ARM tablets to be a drop-in replacement for their x86 equivalent) expects the ARM systems that they buy to have a PC-like ISA, apart from some obvious consumer-level interfaces like VGA/HDMI/USB. Also, all these ARM vendors have been working in their own niches with little incentive to rally round some kind of ISA for more "internal" components (equivalents to DIMM, PCI, SATA, RAID controllers, discrete GPUs, etc.). It's only recently that ARM chips/SoCs have begun to be viewed as potential competitors in the PC-like/data centre role, thanks to constant improvements in single-core performance, plenty of cores, and the step up to 64 bits. And, of course, energy efficiency compared to x86 legacy systems.
I'm sure that ARM in the data centre is definitely only a case of "when", not "if". I think that the author is right that this SBSA initiative will be a huge step forward for getting vendors to rally around and produce more PC-like architectures, but I think that it's only part of the way. You need standardisation not only at the SoC level (having a standard build configuration so you know how to address all the MMIO registers and such), but also at the level of having standards for physical/electrical interconnects for pluggable DIMMs and PCI-like peripherals.
Upvoted for a very pertinent point.
Thank you for reminding me of the early days of PCs when everyone looked at the IBM PC and the 8088 and laughed and built a technically superior PC with an 8086.
Each one was different.
Then eventually it sank in that any option card would have to be re-engineered for each hardware platform and each application would have to be ported to each hardware platform.
IBM was still the golden-haired child and the safe choice, so everyone developed for the IBM PC, and then coined it as other manufacturers paid them for endless ports to endless platforms.
Then along came Compaq and built PC compatibles and it was game over for any other approach.
Until there is a standard target for ARM platforms they won't replace Intel.
Just look at the demand for third party OS builds for phones. There shouldn't be any need for this Cyanogen stuff. A symptom of what is wrong with the Android market place.
PCs run new software on old hardware. Sometimes decades old.
Until you can mix and match hardware suppliers and old and new kit, and run the same software on it all, ARM isn't ready to challenge Intel for the big stuff.
You're missing the point. Yes, Linux and BSD can run on a lot of ARM hardware, but there are no standards the boot firmware has to conform to across all the different SoCs. So, unlike with x86 hardware, you can't have an ARM bootable USB stick that's just, for example, Debian for ARM64. You have to get an image for the particular hardware you are targeting and flash it to an SD card (or perhaps a USB stick), or sometimes directly through a USB connection to the device. Once it's on there, you can install standard software packages, but the boot process is not guaranteed to be the same between any two different devices. SBSA is supposed to change that, so that the same image of whatever operating system you wish to use will work on any of a number of ARM server SoCs/devices.
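As a rough illustration of the two worlds described above (a sketch, assuming a Linux shell; `/proc/device-tree` is the standard kernel path for a flattened device tree), you can check whether a running system describes its hardware via a board-specific device tree or via runtime-discoverable firmware tables:

```shell
# Device-tree boards bake the hardware map into the boot image, so each board
# needs its own image. ACPI/UEFI machines (x86 PCs, SBSA servers) describe
# their hardware to the OS at runtime, so a generic installer can boot.
if [ -d /proc/device-tree ]; then
    echo "device-tree platform: board-specific image likely required"
else
    echo "ACPI platform: a generic installer image should boot"
fi
```

On a typical x86 PC this prints the ACPI message; on, say, a Raspberry Pi it prints the device-tree one.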
"Legacy x86 Windows applications have been a millstone around the neck of the entire industry for ages now and its long past time they were relegated to a niche and left to quietly slip away into the night"
Or as we like to call these 'legacy' apps the lifeblood of our business.
The same dismissal of the mainframe as being irrelevant legacy tech also made me laugh.
This is clearly an article written by a newbie programmer who has bought into the cloudy, linuxy, ARMy hype, as if every tech that went before it was made by dinosaurs.
I've got news for you - apps built on previous tech are not irrelevant just because some shiny new stuff turns up.
"a newbie programmer bought into the cloudy, linuxy, ARMy hype"
I don't know whether to laugh or cry.
I'd agree, "the cloud" is overrated, but for widespread sharing and other web-related things it's convenient and (ok, somewhat) fault-tolerant (example, github, google docs). Linux _IS_ _THE_ _MOST_ _POPULAR_ _OS_ out there for sheer number of "things" using it, from servers to routers to phones (and Raspberry Pi). That can ONLY be ignored at your own career peril.
ARM is just another platform. There are tasks it's well suited to, and those that x86 or x64 are well suited for. It just depends on what you need and how well the system performs.
I'm glad it's standardizing and entering the server realm. Competition with x86/x64 and against Micro-shaft's dominance will ONLY help us end-users and customers, in the long run.
And as a software [and some hardware] developer, I'll keep a close watch on it and upgrade my skills when necessary.
...that Windows Desktop on ARM is already gearing up... and it runs the "normal" version of Windows.
I presume the server version, for at least the short term, will be very similar.
https://www.qualcomm.com/news/snapdragon/2016/12/07/windows-10-powered-snapdragon
Normally I agree with Trevor's conclusions, but I'm not so sure this time. The problem is not only that so many enterprises run so much legacy code that depends on Windows (on x86, natch) but that so many developers are specialized on that platform. Even if you can easily port x86 applications now to ARM, the Windows dependency is still there (any argument that business-critical code can be run in an emulator will be met with justified derision). The latest generation of "cloud-ready" developers may be using Linux, but there's still a ton of software that runs only on Windows, and there is a huge number of professional developers and sysadmins for whom Windows is their main area of expertise. Compared to retooling staff, retooling servers is relatively simple.
We'll see . . . I agree that most of the interesting work these days is being done on Linux, but (cue the downvotes) there are many ways in which Windows is still much easier to deploy and manage, and the reliability and security of Windows have improved dramatically in the past decade.
"... and the reliability and security of Windows have improved dramatically in the past decade."
That's a bit like saying that the reliability and safety of motor vehicles has improved dramatically in the last century.
Well of course -- but in both cases it wasn't a challenging target, and the BIG problem is the drivers.
"Windows is still much easier to deploy and manage, and the reliability and security of Windows have improved dramatically in the past decade."
This depends entirely on whether or not you're comfortably blindly trusting Microsoft and whether or not you believe in actually having control of your operating system. Microsoft is steadily moving away from administrators being able to control everything and towards just having to trust Microsoft because Microsoft knows best.
See: cumulative updates, as one example.
I have some lovely stories of Windows updates breaking things at fortune 2000 companies and, because of cumulative updates, not being able to subsequently update systems. Administrators fighting with Microsoft support for quite some time to get them to acknowledge there was a problem, hotfixes being slow and then the next cumulative update breaking things all over.
A couple of reasonably large orgs I know of have called a halt to the idea of "Windows by default" and are now requiring justification for why Windows should be used instead of SaaS or LAMP.
Windows is the easiest for people who have built their careers on Windows. But there are now enough people out there who have built their careers on other technology stacks that they're simply not afraid of looking elsewhere. And it's starting to show, in enterprises and even in governments deployments.
Windows is a hell of a lot more secure than it was. But it has gone backwards on manageability, and that's hurting Microsoft in a big way.
"This depends entirely on whether or not you're comfortably blindly trusting Microsoft and whether or not you believe in actually having control of your operating system."
What about in terms of support? Unless you take the route of a Red Hat, what happens when there's a kernel exploit, you need to update the kernel, but that update runs the risk of breaking things, putting you in a dilemma (stay vulnerable or break your bread and butter)? And as someone else noted, TCO differences between a supported enterprise Linux and a Windows Server aren't as clear cut.
I don't need Red Hat support for all my instances, just my dev and test instances. As long as I know that everything with a given config set works, I can use those same config chains on CentOS. Desired state configuration is amazing.
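The desired-state idea can be sketched in a few lines of shell (a toy example; the file path and setting are made up for illustration): you declare the state you want, and applying it repeatedly is harmless because it only acts when the state is missing.

```shell
# Idempotent "desired state" sketch: ensure a config line exists.
# Re-running this never duplicates the line, so the same "config chain"
# can be applied to dev, test, and production identically.
conf=/tmp/example.conf
line="max_connections=100"
touch "$conf"
grep -qxF "$line" "$conf" || echo "$line" >> "$conf"
```

Real tools (Ansible, Puppet, PowerShell DSC) generalise this convergence pattern to whole systems.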
And it's beyond TCO. There are multiple companies willing to provide Linux (or BSD) support. You can choose whom you trust. You can have one of your devs submit patches directly to an offending project. You don't have to trust one company that - let's face it - has done everything they can to destroy trust.
This would be a different conversation if Microsoft gave a bent fuck about trust and acted in a responsible and honourable manner towards its partners, customers, or even staff. It doesn't. Thus it can't be trusted.
In the open source community there are always alternatives, from hiring freelancers to whack a particular mole to working with companies to solve your problems - and most open source companies have professionals ready and willing to work in a professional and trustworthy manner.
Microsoft dug their own grave. Let the bastards rot in it.
Updating the kernel is not a great risk on most (non-rolling) Linux distributions, since they usually have a patched version of the same kernel that you are already on available. Of course, the kernel is one thing, and other parts of the system are different. However, distributions like Debian stable, CentOS, and Scientific Linux are so stable that there is very little risk in applying all patches as soon as they are released. It's always a good idea to have test servers (this means with Red Hat, Suse, and Windows as well), but there is much less chance of you having an issue with a patch in those distributions (Debian stable, CentOS, Scientific Linux, and Red Hat Enterprise Linux) than in Windows Server.
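To make the "patched version of the same kernel" point concrete: enterprise distributions ship security fixes as new builds of the same base release, so the version string changes only after the hyphen (the version strings below are illustrative examples, not taken from a real advisory).

```shell
old="3.10.0-1160.76.1.el7"   # hypothetical currently installed kernel
new="3.10.0-1160.83.1.el7"   # hypothetical security-patched build
# Strip everything from the first hyphen to get the base kernel version.
echo "old base: ${old%%-*}"
echo "new base: ${new%%-*}"
```

Both print the same `3.10.0` base, which is why applying such updates carries far less risk than moving to a genuinely new kernel series.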
This post has been deleted by its author
I've been hearing this same old story for years: how Linux is going to take over from Microsoft Windows. Until Linux gets a nice GUI interface, forget it. I honestly don't understand the hate for Microsoft. I honestly don't mind using Linux, but developers & SAs, please stop telling me how good the Linux OS is compared to Windows. It's getting old.
It's not the UI that's the real issue. It's the application support. And by that, I mean mainstream, first-string applications like one would buy at a store. This is especially true of games (one of the few genres where you really need a genuine PC in your room to get the best experience, due to their performance demands). Put it this way: it's saying something that a company as big as Valve, running one of the strongest online gaming networks in Steam, and well aware of the creeping threat Microsoft poses, STILL can't convince developers like Bethesda to rally behind Linux and free themselves from dependency on a single OS. Meanwhile, the other gaming network companies like Blizzard and EA treat Linux like it's an afterthought. Last I checked, neither WoW nor Overwatch can be played on Linux (not even with WINE). And that's just for starters.
WoW can be played on WINE. I haven't tried it since WINE 2.0, but with WINE 1.9.x and Legion, it worked without a performance penalty when using OpenGL. Unfortunately, Blizzard's implementation of OpenGL is buggy, and since it's not a supported API, they're not improving it (by their own admission). Artifacts and visual glitches abound, and it's the same running WoW in WINE and in Windows with OpenGL (so it's not WINE causing the artifacts). It's playable, but it's not an ideal situation.
WINE can still run WoW using Direct3d without the visual artifacts, but there is a significant performance penalty incurred translating the D3d calls to OGL. I had to reduce the visual detail significantly compared to my Windows installation to get a similar frame rate at the same resolution (1080p). Again, it is playable, but not ideal. I'd certainly do that before going to Windows 10, but not everyone despises 10 more than they appreciate or desire performance.
I think this is fairly representative of the Linux gaming problem in general. It's the APIs... if Windows games ever used an API that was also native to Linux, WINE would run just about all of them without a problem, and with performance that is pretty close to that of Windows. Devs don't really have to release Linux versions of games to make Linux gaming viable... they just need to release them with an API that's available natively in Linux (which would be part of releasing a completely Linux-native game too). Gamers, and particularly those who use Linux, will figure out how to get it to work once the APIs are in place. It could then begin to snowball, and perhaps get to where Linux-native versions of games are economically feasible.
"WoW can be played on WINE. I haven't tried it since WINE 2.0, but with WINE 1.9.x and Legion, it worked without a performance penalty when using OpenGL. Unfortunately, Blizzard's implementation of OpenGL is buggy, and since it's not a supported API, they're not improving it (by their own admission)."
Last I checked, Battle.net also BANS you for using WoW on WINE. So it's really a non-option. And let's not get started on their new ubergame, Overwatch: all the WINE reports on it have been "Garbage", so it doesn't look like Blizzard even cares. And like I said, Valve has little to show for all its efforts, and they're a company that KNOWS about Microsoft's plans. What do you do when no one seems to care that the sky really is falling?
Someone did create a nice UI for Linux. It subsequently claimed the endpoint crown, completely crushing Windows.
It's called Android.
Yes, Windows dominates a specific chunk of the endpoint market - desktops - and leads in another chunk of the endpoint market - notebooks - however, both those segments are in overall decline. Not merely as a percentage of total endpoints deployed (which has been catastrophic from that standpoint), but in terms of total units shipped per year. Not one quarter's decline, or a year: desktop and notebook sales have been falling for over five years.
So go right ahead and cling to your Windows über alles fantasy. Actual facts don't back you up. As for the rest, most ITDMs I've talked to in the past 18 months have been reining in purchases of new Windows applications and moving towards either SaaS or in-house LAMP applications for new projects.
Like mainframes, Windows will take a very long time to completely die... but its days of maintaining datacenter market share - let alone growth - are over. It's about time.
@Trevor, or just look at it the other way and try to answer a very simple question: why is Microsoft investing so heavily in Azure that they are even porting Windows to ARM (again) to run a new platform there? Because that's where the growth is - and a large portion (last year's number was 30%) of that growth comes from customers running Linux on Azure.
The growth is on Azure because Microsoft have sacrificed everything in order to force the growth to be there. See here.
Microsoft went to the cloud because of two things: a) subscription revenue and b) lock-in. Windows had reached the place where Office had been for some time: sure, there was some lock-in, but there was no room for growth, and all the features that needed to be there already were.
With Azure, Microsoft could create a whole new form of lock-in, charge even more, and do it all as recurring revenue. Win, win, win for MS. Lose for customers.
Windows is legacy. It will shrink and then stabilize, and sit at that level with no appreciable growth for decades. Just like mainframes.
If a few application vendors would just port their stuff to Linux and make it available at comparable cost to their Windows versions then I could ditch Windows completely. I reckon I wouldn't be the only one who'd do that either. At the moment I compromise by using VMs but it would be nice to be able to dispense with those on the desktop machine.
Now... you will hate me for this, but... why can't I reliably, or otherwise transparently, save and recover a "record" from memory to disk? Why do I have to go through some arbitrator - wearing no clothes other than the latest same-old-same-old fashion - and its overheads?
That may change, and change soon. We'll have to wait and see. I'm definitely out there pushing that stone up the hill as the tech is cutting edge, and too damned expensive due to that, right now. Lack of imagination is the other block.
So you update your Linux kernel and now you have to update (recompile) all your software. How is this a good thing? Or you don't update your kernel, and you find your software is no longer supported on that *aged* Linux server because it's two years old and the software vendors don't want the cost of supporting such an old kernel.
Linux is great in some scenarios. It's great if you do a limited number of things and you can remember all the command-line incantations for them. It is probably true that hard-core administrators do remember all the incantations. But for the rest of us, those with other responsibilities as well, Windows Server is great because, in addition to the command line, we can use visual interfaces, so we do not have to be word-perfect on every single command.
I know hard-core Linux types don't get it. When you know something, or some task, very well, it's hard to remember how difficult you used to find it, and then there is a tendency to think everyone must be, or could be, as fluent as you. I know I have had that experience (sadly not with the Linux command line yet).
Updating your kernel will basically never affect any of your existing software. It will rarely make it so any new software will run (perhaps there could be some software that relied on a particular new kernel feature). Kernel updates are much more likely to affect hardware compatibility than software compatibility, and that's most likely to mean a piece of hardware that didn't work will start working.
Updating your C library has the potential to affect your software, but between the software developers and the maintainers of glibc, they do a good job with backward compatibility. I've never had a program stop working after a kernel update or a glibc update. Even programs that you manually installed on your distribution will almost always continue to work after a distribution upgrade.
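A quick way to see the two layers distinguished above (standard Linux commands; the exact output varies by system): the running kernel and the glibc version are reported independently, and a kernel update changes only the first.

```shell
# The kernel version userspace is currently running on:
uname -r
# The glibc version existing binaries are linked against (unchanged by a
# kernel update; glibc's own updates preserve old symbol versions):
ldd --version | head -n1
```

This separation is why distribution kernel patches almost never break installed software.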
This subject needs a reality check.
We run Windows servers of various types and will do so for as long as we need to support legacy software. There is no way on God's good earth we are going to change a stack of perfectly good software that works day in and day out without problem, just to get down with the freebie kids.
It'll be a waste of time and money.
The problem was that ARM systems were not built with modularity in mind. Because they were made for power-sipping, fixed hardware maps on SoCs were in vogue there. But beyond embedded and portable applications, you need modularity, because configurations can change: an SSD may crap out and need switching, same with a DIMM; or perhaps GPU tech moves up and you want to upgrade. For the REAL PC world, you need to be able to mix and match, and to do that you need a more general hardware design: something like an enumerated bus, which SBSA is a key step towards providing.