NT has never been a monolithic system. It has always been a hybrid microkernel design. Only really old UNIX legacy based things like Linux are monolithic these days.
Windows Server 2012: Smarter, stronger, frustrating
Microsoft has released Windows Server 2012, based on the same core code as Windows 8. Yes, it has the same Start screen in place of the Start menu, but that is of little importance, particularly since Microsoft is pushing the idea of installing the Server Core edition – which has no Graphical User Interface. If you do install a …
-
-
Wednesday 5th September 2012 11:03 GMT Anonymous Coward
@richto
"Only really old UNIX legacy based things like Linux are monolithic these days."
You mean those old unix legacy things that MS has been desperately playing catch-up with ever since it released NT? When - *gasp* - Windows went 32-bit protected mode & multi-user. Not simultaneous multi-user, mind; that had to wait. Along with proper remote login. And networked graphics. And then, after years of being told the GUI is all you need, they finally catch the clue train and come up with PowerShell. An oxymoron when you compare it to the unix shells, but better than nothing. Now we have TA DAA! - Server Core! - Wow! An OS that can be run without a GUI - I assume - remotely. Now where have I seen that before... Naturally it won't be via ssh - that would be too easy and standard for MS. No doubt it will be some overcomplicated roll-your-own solution, probably involving some GUI-for-idiots on the client.
-
-
Wednesday 5th September 2012 15:36 GMT Anonymous Coward
Re: @richto
"So looking at your list of things that UNIX and Linux had that Windows Server 2012 now has too, it seems you think that Server 2012 has now caught up with UNIX and Linux?"
When Windows gets a decent single-root file system, can load and unload drivers on the fly without a reboot, allows you to control EVERYTHING on the command line with full process daisy-chaining and job control (mainly backgrounding jobs), has full remote command line access (i.e. not requiring a dozen different single-task GUI clients), dumps that idiotic registry, implements a sane version of sudo, setuid bits in the filesystem and fork(), and allows different desktops running under different users simultaneously on separate screens (X Windows on various versions of unix has managed that since the early 90s), then yes, it will have caught up.
-
-
-
Wednesday 5th September 2012 13:16 GMT Anonymous Coward
Monolithic kernel isn't necessarily bad
Just as much as a microkernel isn't necessarily good. Even dear "AST" would admit this today, despite having told Mr Torvalds that he wouldn't receive many marks for a monolithic kernel submitted as an assignment. :-)
They're different ways of tackling the same problem, and there are advantages to both. Performance is one disadvantage of the microkernel model; it took Microsoft quite some time to get their "layered" kernel right. The earlier versions of Windows NT weren't exactly high performers, and Windows NT 4 lumbered along a bit... Windows 2000 was better. Then they started piling on the rubbish in Windows XP and Vista. I observe some of this rubbish is noticeable by its absence in Windows 8.
Portability is one of the strengths of a microkernel. It's therefore ironic that Windows NT, being largely microkernel-based, runs on so few platforms, compared to Linux which is, as you rightly point out, monolithic. Windows NT did run on more, but I suppose they decided it wasn't worth pursuing the others. Does make you wonder what it'd look like had they decided to keep an ARM port of Windows NT going, though.
Where Windows is considered "monolithic" is more to do with the fact that the userland and front-end seem to be conjoined at the hip with the back-end kernel. I can take a Ubuntu Linux desktop and completely strip away the GUI environment, leaving only the command line. Indeed, I did this very act today.
Try that with Windows XP, or 7, or Server 2008. No dice: the GUI and kernel are inseparably linked. Same with MacOS X, although MacOS X without its GUI is essentially Darwin, so it's probably doable, just not obvious. Windows has been that way since NT was first released. Consumer Windows has been like this since Windows 95.
The fact that Microsoft are recognising this as a limitation of their platform, however, and are now taking steps to remedy it, I can say is a good thing. Now clean up the POSIX layer a bit, and we might even have a decent VMS-like Unix clone that will make running applications designed for Linux a lot easier.
-
Wednesday 5th September 2012 21:38 GMT Don Mitchell
The NT microkernel architecture was rejected very early. It was going to support Win16, Win32 and even POSIX, but no such system was ever released. NT and even DOS/Windows 9X were still much more modular than UNIX. Microsoft used DLLs, COM interfaces and device-driver interfaces (DDI) long before UNIX had them. Even more complex technology was developed later to support extensible GUI interfaces (linking and embedding, Visual Basic and such stuff). Microsoft did this to allow them to update small parts of the OS instead of shipping a whole OS release to their customers every time.
I remember installing device drivers on my Sun workstation in the late 1980s, and you had to edit the interrupt vector tables and recompile the kernel, because at that point in time UNIX was still just a giant monolithic C program. NT was much more advanced when it came out in 1989 -- it supported threads and concurrency much better than UNIX, and it had async I/O and events and lightweight coroutines (fibers), I/O completion ports, etc. These features were more or less added to Linux much later on (I've heard async I/O in Linux is pretty dodgy).
-
-
-
Thursday 6th September 2012 08:51 GMT Anonymous Coward
Re: the unix /dev directory
"It is a way of *naming* devices. Beyond the ability to open and close handles to the device (and thereby read and write data) it offer *nothing* in the way of interface standardisation."
The device handle IS the interface. Go read up on ioctl() and fcntl(); I don't have time to have a discussion with an idiot.
-
-
-
Thursday 6th September 2012 09:05 GMT Anonymous Coward
"The NT microkernel architecture was rejected very early. It was going to support Win16, Win32 and even POSIX, but no such system was ever released."
That's not what defines a microkernel. You're describing a system call interception layer, like those that many Unix and Unix-like systems have had since the early 1990s. For example, FreeBSD uses this to provide the ability to run Linux binaries, and NetBSD uses a similar technique to maintain backwards compatibility. I do recall a POSIX compatibility toolkit for NT, but it wasn't usable and simply allowed MS to sell software to the US government.
"NT and even DOS/Windows 9X were still much more modular than UNIX. Microsoft used DLLs, COM interfaces and device-driver interfaces (DDI) long before UNIX had them."
DLLs are just shared libraries (albeit done in a very awkward way). Unix had shared libraries well before MS copied the idea. COM is vaguely similar to the concepts that Unix-derived microkernels such as Mach implemented years before, but MS did it in a way that wasn't clean enough and resulted in many security vulnerabilities.
"Even more complex technology was developed later to support extensible GUI interfaces (linking and embedding, Visual Basic and such stuff). Microsoft did this to allow them to update small parts of the OS instead of shipping a whole OS release to their customers every time."
VB has nothing to do with modularity - it's just a programming language and framework, similar to Tcl/Tk in the same timeframe. As for OLE, I believe ToolTalk on Unix systems that used CDE predates it.
"I remember installing device drivers on my Sun workstation in the late 1980s, and you had to edit the interrupt vector tables and recompile the kernel, because at that point in time UNIX was still just a giant monolithic C program. NT was much more advanced when it came out in 1989 -- it supported threads and concurrency much better than UNIX, and it had async I/O and events and lightweight coroutines (fibers), I/O completion ports, etc. These features were more or less added to Linux much later on (I've heard async I/O in Linux is pretty dodgy)."
So you're comparing SunOS to early versions of NT? On the one hand, those early versions didn't offer many of the features you describe, and on the other hand, they lacked many features of Unix - some of which Windows is only now gaining in the 2012 release, 23 years later.
-
-
-
Wednesday 5th September 2012 11:04 GMT h4rm0ny
The de-duplication is awesome. Run 64 Linux VMs on it and have all the redundant OS code in each of those VMs exist only once on the disk, massively reducing storage space. Deduplication services already exist, but this is integrated and standard and based on conversations with others, it looks like it's significantly superior.
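For the curious, here is a minimal sketch of driving the new dedup from PowerShell, assuming the FS-Data-Deduplication role service is installed (the volume letter is made up):

```powershell
# Opt a data volume into dedup, run a first optimisation pass,
# then check the reported savings (Server 2012 Deduplication module).
Import-Module Deduplication
Enable-DedupVolume -Volume "E:"
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupStatus -Volume "E:" | Format-List
```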
Also love how all the GUI elements are now wrappers for PowerShell, and are off in the preferred install. Makes it more like the power you get with the Linux CLI. Very impressive.
-
-
-
Wednesday 5th September 2012 13:50 GMT wayneme
Re: Standard licence...
(I am the Windows Server PM for Microsoft UK)
Correct on the licensing rights mentioned above for Standard; for Datacenter, customers have an unlimited number of VMs within their usage rights.
There are a couple of key things which have been missed from this article:
1) Windows Server 2012 goes beyond just machine virtualization: it addresses virtualization across compute, storage and networking, with features such as Shared-Nothing Live Migration and Hyper-V Network Virtualization.
2) Windows Server 2012 delivers simplicity in managing a server estate, with a large investment in PowerShell: 2,000+ new cmdlets plus IntelliSense (a quick way to explore them is sketched below).
3) A platform to enable multi-tenancy, high-density websites (CPU throttling for websites) and hybrid applications. System Center + Windows Azure complete the story by giving customers a complete private, public and hybrid experience.
4) A simplified, rich VDI experience for users and IT managers, plus dynamic access control to enable compliance within the organization.
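As promised in point 2, a hedged sketch of poking around that new cmdlet surface (module and cmdlet names here are examples; the exact set depends on which roles are installed):

```powershell
# Count and browse the cmdlets a given module brings in.
Get-Command -Module Hyper-V | Measure-Object
# Discover by verb/noun instead of reading docs end to end.
Get-Command -Verb Get -Noun NetAdapter*
# Every cmdlet ships with worked examples.
Get-Help Get-NetAdapter -Examples
```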
the GUI is pretty :) but for many IT pros and folk using servers every day, it is the power under the hood that really makes Windows Server 2012 special....
Try Windows Server for yourselves at www.microsoft.com/windowsserver
-
-
-
-
Wednesday 5th September 2012 13:01 GMT h4rm0ny
Re: Does Windows Server Core 2012 really not have a GUI?
There are actually three modes: a pure GUI-less instance, a version that has a desktop but no real GUI per se, and then the full GUI thing with all the tools et al. It's my understanding that the first is the preference if you're managing multiple servers - you just leave it without a GUI and manage it remotely using PowerShell scripts or the Server Manager tool, which will handle all your instances.
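For anyone who hasn't tried it, remote management here means WinRM rather than ssh; a minimal sketch (server names are made up):

```powershell
# Interactive remote shell on a single Core box.
Enter-PSSession -ComputerName CORE01
# Fan the same command out to several servers at once.
Invoke-Command -ComputerName CORE01, CORE02 -ScriptBlock {
    Get-Service | Where-Object Status -eq 'Stopped'
}
```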
They are different modes rather than different installs. So if you want to put the GUI on one of your instances temporarily, you can do that, and it will be removed when you switch back to the GUI-less mode. I don't think it really makes a difference with a single server, but when you have a lot, it's significant. Just as I don't install KDE on all my Linux servers because I manage them remotely, why have a GUI on Server 2012 in the same scenario?
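Switching an instance between those modes is itself a PowerShell one-liner; a sketch using the Server 2012 feature names as I understand them:

```powershell
# Full GUI -> Core: drop the shell and the management infrastructure.
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart
# Core -> full GUI again later, if you need the tools locally.
Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart
```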
-
Thursday 6th September 2012 09:47 GMT Anonymous Coward
Re: Does Windows Server Core 2012 really not have a GUI?
As someone else mentioned, there are three interface modes:
1) full GUI, 'nough said
2) minimal server interface. This will give you a PowerShell window, a task manager window, and best of all, the Server Manager. This is great if your system is up and running and you just need to check on things regularly.
3) Core. Just PowerShell.
Best thing about this new setup is that you can completely remove the features you don't want; e.g. if you're not using IIS on a particular server, you can delete the installer files. It allows you to trim the OS down to the bare essentials. Use PowerShell Web Access to perform actions on the server (there's even a nifty new PowerShell menu which will let you control the Server Manager features through the console).
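As a sketch of the trimming described above (the role name is just an example):

```powershell
# Uninstall a role AND delete its payload from disk; -Remove is the
# switch that deletes the installer files mentioned above.
Uninstall-WindowsFeature -Name Web-Server -Remove
# Features whose binaries are gone report an InstallState of 'Removed'.
Get-WindowsFeature | Where-Object InstallState -eq 'Removed'
```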
Frankly, this is the server they should have built years ago. I've never been a fan of Microsoft, but even I have to admit this one is pretty great.
-
-
-
Thursday 6th September 2012 00:46 GMT P. Lee
> You've been able to mount disk partitions as directories/folders in Windows since Jesus was a lad.
But have they sorted out the mess that causes with checking disk sizes?
50 GB: c:\
1000 GB: c:\mybigdisk
I want to install a new program in c:\mybigdisk and the installer checks the disk size of c: and doesn't allow it.
I haven't tried it in a while, so it may have been fixed - answers on a postcard.
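For what it's worth, the per-volume numbers are queryable even when an installer gets it wrong; a sketch:

```powershell
# List every volume with its real free space -- folder mount points
# such as c:\mybigdisk get their own row here, unlike a naive check
# against the c: drive root.
Get-WmiObject Win32_Volume |
    Select-Object Name,
        @{ n = 'FreeGB'; e = { [math]::Round($_.FreeSpace / 1GB, 1) } }
```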
That said, I'm really annoyed at KDE and the way it holds volume mounts in the GUI layer rather than updating the underlying OS. At least OSX bungs everything in /Volumes so the whole OS has access, and not just the GUI.
-
Wednesday 5th September 2012 15:10 GMT wanderson
Windows Server 2012 functionality?
Earlier this year it was reported that Microsoft was attempting to update or replace its aging and limited NTFS file system with most, if not all, of the Oracle-owned Zettabyte File System (ZFS), under license.
Nowhere in this or any other article written about Microsoft's new Windows Server 2012 has this issue been addressed. More stories about the perpetual "point & click" administration interface, which is nothing new on Windows, and "updated" Hyper-V virtualization that in September 2012 remains years behind VMware and Red Hat KVM virtualization, are not worth reading for those looking for substantive information on Microsoft's "great, improved, new OS".
Continuing Gee-Whiz! propaganda about anything Microsoft is a waste of time for The Register and for technologists looking for true innovation. Everything said about this new Windows, i.e. its "technical features", has been available from Linux and *BSD for several years, providing significant performance, reliability and security advantages over Windows by every metric.
-
Thursday 6th September 2012 13:10 GMT Anonymous Coward
Re: Windows Server 2012 functionality?
Seriously? I find that very hard to believe indeed.
NTFS is a mature filesystem, and new features arrive in it with every version of Windows; it's hardly limited, or "aging" in any sense that the age of a product is bad. What makes you think it is limited?
I don't really care about "my OS is better than your OS". What does it matter, as long as you've got an appropriate tool for the job?
-
-
-
Thursday 6th September 2012 08:43 GMT Ken Hagan
Re: Stupid question
Last I heard, Windows still has a registry and Linux still has an /etc directory tree full of poky little text files all in different formats. Also last I heard, *neither* is a single point of failure, since both Windows' implementation of registry hives and Linux's filesystems are generally pretty damn robust. Both, however, can be easily trashed by end-users who don't know what they are doing.
I believe that "best practice" on Linux is to run some sort of version control software on the /etc tree. This at least allows for reverting bad changes and documenting the reasons for good ones. I don't think Windows has anything equivalent. Linux therefore has an architectural edge in my book, but this is pretty marginal.
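For the record, you can approximate that on Windows with nothing but stock cmdlets; a rough sketch (the key and file names are illustrative, and this is nowhere near real version control):

```powershell
# Dump a registry subtree to text and diff it against the last snapshot.
$key = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion'
$now = (Get-ItemProperty $key | Out-String) -split "`r?`n"
if (Test-Path snapshot.txt) {
    Compare-Object (Get-Content snapshot.txt) $now   # show what changed
}
$now | Set-Content snapshot.txt                      # roll the snapshot
```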
-
-
Wednesday 5th September 2012 15:59 GMT Anonymous Coward
Where are the admin tools?!
I smell a little fail here.
While I applaud the course Microsoft is taking, basically stripping the GUI (or what's left of it anyway) from the server environment, I get the feeling they're ignoring an important part of the process.
Say I grab Server 2012 and put it on my network, which currently contains Win7 and WinXP clients and some older Win2k3 servers. So how exactly am I going to administer this server, considering that there are new tools and scripts to be used? (Put differently, and more on-topic: new roles to keep in mind, new specific admin features to use (which should be addressed within the MSC admin scripts), etc.)
Usually you'd get yourself 'RSAT' (Remote Server Administration Tools), a very fine collection of tools which you can use to administer specific aspects of your server remotely. But guess what? When it comes to Server 2012 there is an RSAT version which specifically addresses this, but it's only available for the customer preview of Windows 8.
The Windows 7 version of RSAT still sits at version 1.0 SP1, and is thus only able to administer servers ranging from Win2k3 up to the current Server 2008 release.
Wouldn't it have made a /little/ more sense to release both the server and the admin tools at the same time, especially considering that the default installation behaviour is core mode?
Oh, and don't take my word for it. Simply search the MS download center for RSAT yourself (link to MS download center).
-
Wednesday 5th September 2012 17:26 GMT wayneme
Re: Where are the admin tools?!
(MSFT - Windows Server PM)
hey there,
Just to clarify: you get the choice of how you wish to deploy Windows Server; you can choose between a Server Core installation and Server with a GUI.
Here is an article which helps describe the options: http://technet.microsoft.com/en-us/library/hh831786.aspx
http://technet.microsoft.com/en-US/windowsserver
There is another great resource for FREE training at www.microsoftvirtualacademy.com; feel free to also register for the tech launch event on the 25th Sept: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032523367&Culture=en-GB&community=0
-
Thursday 6th September 2012 09:26 GMT KroSha
Re: Where are the admin tools?!
Not what he asked. "We" want to be able to install as GUI-less Core, but still be able to run RSAT from "our" desktops. Y'know, remotely? Isn't that the point of Remote Tools: that they plug straight into the command line, without us using the command line?
Minor FAIL.
-
Monday 10th September 2012 12:11 GMT NogginTheNog
Re: Where are the admin tools?!
Doh! You shouldn't be running admin tools from your desktop as the same user you use to get your email and surf the web.
Why not build a Server 2012 VM (locally or remotely) and administer from that? Or a Windows 8 one, come to think of it?
Agreed, this is a bit of a biatch, but MS do have form for this: didn't they do something similar with the previous Server 2008/R2 releases, making the admin tools only work properly on Vista or Win7?
-
-
-
-
This post has been deleted by its author
-
-
Wednesday 5th September 2012 21:40 GMT David Dawson
Re: I hate per core licences
Well, more like if you were charged some kind of registration fee for being able to drive cars on the road, and if you had a more powerful/bigger car you paid more, even though you don't particularly take up more space or go over the road more.
Yes, imagine if the government levied different levels of tax on more powerful cars! The tragedy....
-
Thursday 6th September 2012 01:23 GMT Jean-Luc
Re: I hate per core licences
> That's like Shell charging me £1.50 a litre on an Astra but £6 a litre if I drive a DB9.
Shell is happy enough with the extra litres that sweet DB9 will be guzzling. Heck, it might even give ya a volume discount.
Plus, your local garage will correct Shell's oversight and not forget to overcharge you.
-
-
-
-
-
Thursday 6th September 2012 08:52 GMT wayneme
Re: @Prof Denzil Dexter
(MSFT - Windows Server PM)
Windows Server STD and DC are the same in terms of capabilities - what differentiates the two editions is the virtualization rights. High-density virt = DC, low-density virt = STD.
A single STD license covers 2 procs and 2 VMs. You are able to stack multiple STD licenses if you want to build out small virtualized estates.
If you are building out high density virt environments then DC is the right edition as this gives rights for unlimited number of VMs.
In the case of a 4-proc box where you want to do high-density virt, 2 DC licenses are required, one to cover each pair of procs.
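A back-of-the-envelope sketch of the Standard stacking arithmetic described above, as I read it (my interpretation, not official licensing guidance):

```powershell
# One Standard licence covers 2 procs and 2 VMs; each stacked set of
# licences must again cover all the procs in the box.
function Get-StdLicenceCount([int]$Procs, [int]$VMs) {
    [math]::Ceiling($Procs / 2) * [math]::Ceiling($VMs / 2)
}
Get-StdLicenceCount -Procs 2 -VMs 8    # -> 4 licences
Get-StdLicenceCount -Procs 4 -VMs 2    # -> 2 licences
```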
-
-
-
-
Wednesday 5th September 2012 22:43 GMT John Sanders
Some thoughts
"2) Windows Server 2012 delivers simplicity in managing a server estate, with a large investment in PowerShell: 2,000+ new cmdlets plus IntelliSense"
That is not simple. Simple is being able to join, mix and match tools that were already there, not having to learn a new language with 2,000+ commands.
Also, in my opinion PowerShell is a strange mix of all that is bad about bash/Perl/PHP and nothing that is good about Windows. The old VBScript syntax was good; what was shitty IMHO was the backend.
Now Windows sysadmins have to ditch yet another technology and embrace a new one, which will get ditched again in one or two releases.
Also, what makes the Windows platform really good is the sheer vastness of 3rd-party tools that you can use to deal with the many shortcomings of Windows. Take away the GUI and you are taking away half of the reason why people run Windows.
SMBs that cannot afford a SAN have been building them on the cheap using a regular RAID server and any, yes I repeat ANY, Linux distro via iSCSI, for peanuts.
Hyper-V is at best a mediocre product. I will give them the benefit of the doubt, though, thanks to the improvement of the Linux integration components, which IMHO is the best improvement yet; albeit every other vendor has had perfect Linux support for how long now? Since 2001!!!!
I will admit (The MS PM can be proud of this one) that the huge clock in the login screen of Winserver 2012 is very handy.
-
Wednesday 5th September 2012 22:46 GMT Anonymous Coward
Haters gonna hate
Kudos to MSFT for this release - their server OSes were always so much nicer than their client ones. To all those who say "yah boo sucks, UNIX had this 30 years ago", I say: you are absolutely right. And you know what? No one gives a damn.
I prefer administering UNIX boxes myself too, but Microsoft deserve credit for offering GUI-less, scriptable access, so please let's not throw the baby out with the bathwater.
Now all they have to do is implement bash and a decent SSH server and I will be one happy bunny. Yeah, it'll never be UNIX, it'll never give me the feeling I get when I log into a strange UNIX box and know that I'm home amongst friends ("Bourne shell? Check. GNU C compiler? Check. Perl? Check. <Happy sigh> Houston, the Eagle has landed"), but it's soooo much better than it was.
-
Thursday 6th September 2012 04:04 GMT E 2
Microsoft new Windows release strategy:
1. Rename all the APIs and SDKs.
2. Rename all the control panels.
3. On the server end: everything that was a control panel is now a plugin; everything that was a plugin is now a control panel.
4. Make all the management windows fill a percentage of screen area in proportion to bribes received from the various LCD panel makers.
5. Submit a suite of minor changes to an unimportant network protocol to the IETF as a proposed standard.
6. Roll out the Embrace-Extend-Extinguish response to (5) from the last round.
7. Obsolete all the previous MS certifications, and raise the price on new ones (gotta know where M$ moved all the buttons!)
@KingZongo ...
Yes, M$ makes a decent app server OS. But it insists on re-obscuring & re-obfuscating reliable old stuff that Windows has done well for 20 years on every new release. Why? I can only conclude, after 20 years in the biz, that M$ believes its bread is buttered by a constant stream of new sysadmins, and by old ones that must pay for the MS cert piece of paper.
Whereas once you know NFS or POSIX threads, well, you know NFS or POSIX threads. People who've been around a while know a snow job when they see it. MS does a snow job every release; it gets tired and obvious after a while, and that pisses people off. That's not hate, brother, that's anger. There's a difference.
-
Thursday 6th September 2012 04:08 GMT E 2
Also @ King Zongo
My Windows sysadmin friends tell me that with every new release of Windows, of Exchange, of SQL Server, their PowerShell scripts break because the new release exports new & different COM/COM+/DCOM interfaces.
If UNIX (Linux, *BSD, what have you) broke bash scripts with every major kernel release, well, I suppose you would see a lot of hate leveled at *NIX. But such breakage in *NIX-land has not happened...
-
Thursday 6th September 2012 07:18 GMT Brett Weaver
Unix vs Windows
Simple really.
So simple I may have been misled by the 5 servers I've tried this on... If I have, I apologise to Microsoft.
1... Write 8000 1 MB files to a DVD (like a backup)
2... Put it into a Unix box and read every file name
3... Put it into a Windows box and wait 15 minutes (at least) to regain control of the box (the sketch below times this step)
As far as I am concerned, a fatal Windows flaw which still exists in Windows 8...
But if I have done something wrong, please correct me.
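In case anyone wants to reproduce the Windows half with numbers attached, a sketch (the drive letter is assumed; worth running with AV on and off, given the reply below):

```powershell
# Time a full enumeration of the disc's directory tree.
Measure-Command {
    Get-ChildItem D:\ -Recurse -Force | Out-Null
} | Select-Object TotalSeconds
```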
-
Thursday 6th September 2012 11:25 GMT RICHTO
Re: Unix vs Windows
At an educated guess, you are running AV software, which is the problem... It is near instant to do this on a standard system.
If these are archive files, then that will add to the problem with some AV clients, as they will want to unpack them....
Either exclude the file type from scanning or install a better AV product....
-
-
Thursday 6th September 2012 11:28 GMT Anonymous Coward
Parity-based RAID schemes...
...are horrendous.
"parity striping, which is more efficient"
They are more efficient in terms of raw space utilisation, yes.
However, their write performance is terrible, as a parity calculation (and the reads to feed it) is required for every write. This would have been worth a mention.
You should look up the read-modify-write cycle and block alignment if you're not clued up on such things.
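To put rough numbers on that read-modify-write cycle (textbook figures, not a benchmark; the per-disk IOPS value is an assumption):

```powershell
# A small random write costs RAID 5 four disk I/Os (read data, read
# parity, write data, write parity) but RAID 10 only two (both mirrors).
$diskIops = 150; $disks = 8
"RAID 5  write IOPS: {0}" -f ($diskIops * $disks / 4)   # -> 300
"RAID 10 write IOPS: {0}" -f ($diskIops * $disks / 2)   # -> 600
```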
Please don't recommend the use of parity-based RAID. Disk space is cheap enough to use RAID 10 where performance or resilience is important, and it is far superior.
-
-
This post has been deleted by its author
-
Friday 7th September 2012 10:56 GMT Anonymous Coward
Re: Parity-based RAID schemes...
I was going to write a nice reasoned reply, despite your 'discussion' skills leaving a lot to be desired.
I might even have offered references to back up my arguments or some credentials that would help to support my position.
Then I realised you are just a low-level troll and are talking rubbish so I won't waste my time.
-