Wow
Some have been PXE booting diskless servers for decades now.
Microsoft appears to be working on a new version of Windows Server aimed at web-scale operations. News of the product comes via Neowin, which noticed a deck of slides titled “Nano Server: The Future of Windows Starts Now” posted to a Russian blog. The name on the title slide is “Refaat Issa” whose biography at Microsoft's …
>Some have been PXE booting diskless servers for decades now.
And? How is that remotely relevant? Did you even read the article?
This is modularising Windows and allowing you to ditch the crap you don't need and don't want. That is something altogether more substantial than where the boot image comes from.
If they can do it properly so that you can cut it back quite dramatically (e.g. run a few undemanding services in 128MB memory and a 1GB disk image) then that would be quite welcome. I suspect what we will actually end up with is a dependency graph that goes "You need feature X. That needs features A, B, C, D, E, F, G, H, I, J, K, and L installing as prerequisites." If it goes too far along those lines it loses its usefulness and becomes more marketing fluff, so it'll be interesting to see how it actually works in practice.
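That dependency-explosion worry is easy to sketch: requesting one feature pulls in the transitive closure of everything it depends on. The graph below is entirely invented for illustration (Microsoft has not published Nano Server's actual dependency data), but it shows how "I just want X" can balloon into a dozen installs:

```python
# Hypothetical feature-dependency graph -- names are illustrative only,
# not real Windows Server features.
feature_deps = {
    "X": ["A", "B", "C"],
    "A": ["D", "E"],
    "B": ["F", "G", "H"],
    "C": ["I", "J"],
    "I": ["K", "L"],
}

def install_closure(feature, deps):
    """Return every feature pulled in (transitively) by requesting one."""
    needed, stack = set(), [feature]
    while stack:
        f = stack.pop()
        if f not in needed:
            needed.add(f)
            stack.extend(deps.get(f, []))
    return needed

print(sorted(install_closure("X", feature_deps)))
# -> ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'X']
```

Whether Nano stays useful depends on how shallow Microsoft keeps that closure in practice.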
>If they can do it properly so that you can cut it back quite dramatically (e.g. run a few undemanding services in 128MB memory and a 1GB disk image) then that would be quite welcome.
<jokeAlert>Does Nano have a footprint of only 9 GB? That would already be pretty impressive for Windows.</jokeAlert>
>I suspect what we will actually end up with is a dependency graph that goes "You need feature X. That needs features A, B, C, D, E, F, G, H, I, J, K, and L installing as prerequisites." If it goes too far along those lines it loses its usefulness and becomes more marketing fluff, so it'll be interesting to see how it actually works in practice.
Indeed, that is the whole point ... especially when you consider that a lot of enterprise apps do not run on Server Core. Will it be worse on Windows Nano? I guess so ... Compare that to truly modular OSes (cf icon), where the database server's OS takes up mere megabytes and where you can install "X, along with A, B, C, D, E, F, G, H, I, J, K, and L" with a one-liner ... sweet dreams.
@Hans 1 - I suspect that it will be used to run very specific applications, not as a general purpose replacement for regular Windows Server. The main reason to have it will be to reduce RAM usage when running in a VM.
My own suspicion is that Microsoft will use it for certain Azure services, plus offer it to customers for things like e-mail systems and databases in VMs.
The big question is whether the on-premises version will give a discount on those licenses, charge full price, or only license it for use with a very few chosen third party applications. I suspect it will be the third option. No doubt feature set and license terms will be carefully chosen to avoid undercutting their existing server revenue stream.
"you might need an older distro designed for smaller hardware"
Nope, I have a dev server here running CentOS 7 x64 in 256MB RAM (no GUI / minimal install). I hit it with ApacheBench a while back and IIRC it managed ~100 concurrent connections (with 95% served) before it started choking badly. That was serving a PHP page that did a simple DB query - to a separate DB server. That is concurrent as in 100 active connections at any one time, which would need a serious employee/user count to generate in the real world.
Most of our live web servers at work (1500 employees) don't get that much action; the amount of resources we throw at Windows VMs is sickening given what Linux can run in.
I asked for a VM the other day to set up a test Linux web server, and the admin's jaw hit the floor when I said I only wanted a 10GB disk, 512MB RAM and a couple of cores. It's weird how Windows sysadmins have taken the Windows requirements as normal and think Linux is impressive; personally I see it the other way. Linux is how it should be, wth are Microsoft doing?
Hopefully this is a step in the right direction.
"Can it serve hundreds of http: connections in a live environment?"
ApacheBench fired against the web interface of a Pi running XBMC (so yes, it's running a 3D accelerated UI at the same time as this test).
Short version: over 1000 requests, it handled 50 concurrent connections without error, while 75 caused errors (on the last 12 requests), so I have no doubt a well configured Pi could handle 75-100 concurrent connections. Even so, 3 Pis in a 2x WWW and 1x NLB setup would easily achieve 100 concurrent connections, and still in <1GB RAM and <£100.
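As a quick sanity check of that 2x WWW + 1x NLB arithmetic, here is a sketch assuming the balancer spreads connections evenly (real load balancers are rarely perfectly even, so treat this as best-case):

```python
# Back-of-envelope sketch (illustrative numbers, not measured results):
# with a balancer spreading connections evenly across the web nodes,
# each node's worst-case share is the ceiling of total / nodes.
def per_node_load(concurrent, web_nodes):
    """Worst-case concurrent connections per node under even distribution."""
    return -(-concurrent // web_nodes)  # ceiling division

print(per_node_load(100, 2))  # 2x WWW behind 1x NLB -> 50 per Pi
```

At 100 total concurrent connections each web Pi sees about 50, which is exactly the level the benchmark above handled with zero errors.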
As you can probably tell, I think Pis are awesome.
# ab -n 1000 -c 50 http://xbmc-002/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Benchmarking xbmc-002 (be patient)
Finished 1000 requests
Time taken for tests: 3.236 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Requests per second: 309.00 [#/sec] (mean)
Time per request: 161.812 [ms] (mean)
Time per request: 3.236 [ms] (mean, across all concurrent requests)
Transfer rate: 42.25 [Kbytes/sec] received
Percentage of the requests served within a certain time (ms)
50% 68
66% 72
75% 76
80% 80
90% 95
95% 300
98% 1073
99% 1082
100% 3003 (longest request)
Even if they get Windows Server back down to the footprint of the NT core, any services it runs are going to be quite a bit fatter than their Unix equivalents -- and that assumes they can strip out the GUI overhead and re-architect for greater stability (hmmm... maybe move video drivers out of ring 0?).
What MS needs to do is design an entirely new OS for the purpose.
Or just re-brand FreeBSD.