OK...
...what the heck is this going to mean for licensing? Are we going to pay per processor tick as opposed to the number of cores, or something?
Servers are about to devolve into bespoke collections of compute and storage, says Gartner veep and distinguished analyst Andrew Butler. Speaking at last week's Infrastructure Operations & Data Centre Summit in Sydney, Butler said the days of buying a server to handle a specific workload are nearly gone. Instead, you'll soon shop …
That's not where Google makes the money (always worth keeping in mind when examining a service provider IMHO).
No, they will probably want explicit, contractually enforced access to your IP, and to the personal details of all your employees, just in case. And good luck to the CTO trying to get any sort of SLA or KPI on the path between your company and the Google spy factory.
I am very glad to keep a server park going - with the IP we hold, even a hint of going cloudy would make us fail audit, not to mention that we'd be buried in lawsuits for exporting data to a US-owned organisation, which causes all sorts of legal headaches.
No thanks. Nice try, though.
It means not going with suppliers with stupid licensing terms!
Thinking of Oracle, MS, etc, etc. Some who are tied in to stuff have no choice; others may still choose to place their gonads in Larry's money-extracting vice because of some aspect Oracle do really well. But most, I think, will be looking at software without tie-in or usurious license terms. Even if they have to adapt or write something to do so.
.... if that specific workload is all that I need and core to my business?
I understand "cloud" providers need more flexible hardware, but a lot of businesses have very specific needs instead.
I can't see, for example, HFT users shopping for a "software defined server" instead of getting hardware designed to shave the last picosecond off their trading algorithms.
Because many servers need raw performance EVEN MORE than they need to be easy on power. And in terms of raw performance, ARM doesn't even come close to Intel architecture, and there's a physical tradeoff involved here, much like how auto engines are designed. You can make a gas sipper, but you can't expect such an engine to rocket off the line. Similarly, ARM chips may not use a lot of power for their performance, but they're lacking at the top end.
All CPU architectures out there, when running flat out (as a well spec-ed server farm generally does), have very little advantage over each other. The rumors of Power being, err, more powerful (pun intended), ARM consuming less power, etc. end up being very exaggerated.
ARM's "less power" comes into play when it is idle. For the same flat-out MIPS, Intel will use many times more power at idle. That is a mode of operation you will see in consumer equipment, media boxes, home servers, etc. You will not see it in most datacenters nowadays. VMs, cloud and Big Data have changed the way they are run. 10 years ago, servers were run at a fraction of their capacity to ensure they had enough spare grunt to handle incoming requests. Now they are run in an almost mainframe fashion, with a low-priority VM always crunching a batch-processing job in the background.
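That "always crunching something in the background" pattern needs nothing exotic; here's a minimal sketch at the OS level, assuming a Linux/POSIX host and a made-up crunch() workload (real setups would use VM or cgroup scheduler weights rather than plain process niceness):

```python
import os
import time

def crunch(n):
    # Hypothetical batch workload: just burn some CPU on a chunk of work.
    return sum(i * i for i in range(n))

def main():
    # Raise our niceness to 19, the lowest scheduling priority, so any
    # latency-sensitive work on the same box wins the CPU whenever it asks.
    os.nice(19)
    while True:
        crunch(1_000_000)
        time.sleep(0.01)  # brief yield so we never look like foreground load

if __name__ == "__main__":
    main()
```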
Why not ARM powered servers?
Because ARM didn't sponsor the report or shindig?
GFLOPs/Watt is now pretty close for most CPU designs. Intel is ahead in the process area and has better single-threaded numbers. Where ARM may yet shine will be in custom hardware extensions (encryption) and high-density, low-load areas.
> ComposTable Infrastructure....
CompostAble Infrastructure....
I've not long come back from a conference where they were talking about all this stuff, and everyone I spoke to in the audience immediately looked at "Compostable" and thought: does no one in the marketing department do any gardening, know anyone with a garden, or even bother to see how people are bound to read this word?
Boxes looked nice though, OpenStack in HW.
I'm not really sure how this is wildly different from the way things are now?
To me it sounds like someone who doesn't know what they're talking about describing the way servers (and even desktop PCs to some degree) are built.
If you're buying, say, a 1U rack-mount server, there are a whole variety of configurations: your front-end drive bay module and backplane can be 8x 2.5" drives or 4x 3.5", the backplane they connect to can be SATA-only or SATA/SAS, and you can have a PCIe SAS RAID card or just use onboard.
The description holds even more true for blade centers, where each bay can hold either a traditional server blade with CPU, RAM and local storage, or a disk storage blade that attaches to other servers, as well as other options like GPUs and tape drives.
In either case you pick the parts that suit what you're building for, and in theory you could get new parts and change them around later if the workload changes.
However, in reality you don't, because the majority of people buying servers in any scenario other than being a cloud host have a specific workload in mind when they're buying, and a specific budget tied to that workload.
I think they mean that in the future you'll be able to hot-plug HDDs and PSUs without opening the server case and have them dynamically added to the server's configuration, so you can use them without rebooting!
Oh, wait. We've been doing that for ages. :/
Maybe Gartner will be right with a prediction for once. Even if it is predicting that "in the future, we'll have functionality that we already have had for decades".
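For anyone who hasn't seen it, the "use the new disk without rebooting" bit really is trivial on Linux; a minimal sketch, assuming the usual sysfs SCSI-host rescan interface (exact paths depend on distro and controller, and it needs root):

```python
import glob

def rescan_scsi_hosts():
    # Ask every SCSI host adapter to rescan its bus, so a hot-plugged drive
    # shows up as a new /dev/sdX without a reboot.
    for scan_path in glob.glob("/sys/class/scsi_host/host*/scan"):
        with open(scan_path, "w") as f:
            f.write("- - -\n")  # wildcard channel / target / LUN

if __name__ == "__main__":
    rescan_scsi_hosts()
```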
They go 'meh' and get on with earning a living.
All this embiggening of Cloud, SaaS, DevOps is largely irrelevant to them.
They'll give you a blank stare when you mention 'Big Data'.
The likes of Gartner live on a different planet to 99.99999% of the rest of us.
Still, it makes for a smile on a Monday morning.
'"There is no inflection point coming that will increase demand for non-x86 and Unix," he said. Organisations migrating from those platforms, he said, will see Linux as their natural destination.'
One of the nice things about inflection points is that they arrive without the Gartners of this world noticing until they hit them in the face.
A lot of people running Linux at the moment do so because it's a cheap Unix-like OS onto which commercial vendors have migrated products such as RDBMSs. And a good proportion of those users are not over-fond of it becoming less Unix-like with the incorporation of systemd (yes, I know there are systemd fans who welcome a more Windows-like approach). There's also a proportion of those users who realise that there are now alternatives to those commercial products which have less vicious licensing terms and which are available on other platforms.
Some combination of these factors could quite easily form an inflection point whereby there's a migration to BSDs. At which point there'll be a whole new slew of reports from the Gartners of this world assuring us that there's a migration to BSDs, that chroot jails are the new containers and explaining the reasons for it being a natural destination.
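And for the "chroot jails are the new containers" report, a minimal sketch of what that would boil down to, assuming a hypothetical pre-populated root filesystem at /srv/jail and root privileges (a plain chroot, unlike a BSD jail or container, isolates only the filesystem view):

```python
import os

def enter_jail(new_root="/srv/jail"):
    # Hypothetical jail root: it needs its own /bin, /lib, etc. copied in first.
    os.chroot(new_root)  # requires root privileges
    os.chdir("/")        # make sure the working directory is inside the jail
    # From here on this process only sees the filesystem under new_root.

if __name__ == "__main__":
    enter_jail()
    os.execv("/bin/sh", ["/bin/sh"])  # drop into a shell inside the jail
```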
Might be an idea for cloudy hosts running huge datacenters. You could cram the racks closer together if you didn't need to fit humans in between them to replace faulty hardware. A 50% increase in servers/m2 might justify the cost of a robot capable of pulling a failed server out and carrying it up to the service bay. Then all you need is a human engineer to come in every week to turn the pile of broken servers into a pile of working servers and put them back in the 'spares' rack for the robot to pick up again.
"You could cram the racks closer together if you didn't need to fit humans in between them to replace faulty hardware."
Now there's an idea. Ever seen document archive shelving like the stuff in the picture at http://www.mobileshelving.org.uk/ ? The racks of shelves run along a track which is a bit longer than the space the shelves need when they're closed up. You just roll them apart when you need to access a given shelf. To do this with servers you'd need to be able to provide enough wiggle room in the cabling. I don't know if anybody's tried this with servers - posted here as prior art just in case there's an attempt to patent it later.
Servers are about to devolve into bespoke collections of compute and storage, says Gartner veep and distinguished analyst Andrew Butler.
As a Gartner analyst, does he distinguish himself by occasionally (and it only has to be very occasionally) not talking about (a) what everyone knows or (b) alternatively bollox?
A lot of this "customizable" server acquisition is already available in a different, more flexible and less convoluted form: servers with Docker or BSD "application-specific" containers, for which separate memory, storage capacity, even IP addresses and networking configurations can be allocated. Even the networking component is now custom-configurable in the form of Software Defined Networking (SDN).
Since several application and vertical software vendors provide these "containerized" solutions, it is difficult to see the need for a hardware-only approach.
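As a rough illustration of that per-container allocation, a minimal sketch using the Docker SDK for Python; the image, network name and limits below are just placeholders:

```python
import docker

client = docker.from_env()

# A user-defined bridge network standing in for the SDN piece; containers
# attached to it get their own IP addresses on its subnet.
client.networks.create("app-net", driver="bridge")

# An "application-specific" container with its own memory and CPU allocation.
container = client.containers.run(
    "nginx:alpine",          # placeholder image
    detach=True,
    name="app-frontend",
    mem_limit="256m",        # dedicated memory cap
    nano_cpus=500_000_000,   # roughly half a CPU core
    network="app-net",       # attach to the software-defined network
)

print(container.name, container.status)
```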
Sounds like bollox to me. Why would we want to customise hardware? We already partition and customise our servers in software using VMs, containers and SDN.
Servers should just be one big slab of dumb hardware hooked up to equally dumb networking. Preferably 4 CPU sockets with 200+ cores and a gazillion GB of memory.