who uses safari on W7 anyway?
Safari Web browser...no danger of this bug causing any real damage then!
An unpatched critical flaw in 64-bit Windows 7 leaves computers vulnerable to a full 'blue screen of death' system crash. The memory corruption bug in x64 Win 7 could also allow malicious kernel-level code to be injected into machines, security alert biz Secunia warns. Fortunately the 32-bit version of Windows 7 is immune to …
I do quite a bit because :
- I don't trust IE
- I strongly dislike what Firefox has become
- I still like Seamonkey but it leaks memory like hell
- I like Chrome but not enough to let Google know every move I make (so Srware Iron is my friend too)
- In my experience, Safari is rock solid, keeps a reasonable memory footprint, very seldom renders sites wrong, and still has a UI classic enough to fit my old-timer's taste
This post has been deleted by its author
...your last para! Oddly, in my experience (& I have to use the 'big five' browsers daily, across an installed base of several hundred PCs), Safari blows. Hard. I'm an old-skool dev, weaned on assembly language & C, but I appreciate good UI design - which Safari just doesn't have.
Incidentally, no MS lover here, but IE9 is shockingly good. Really, really good. So the motivation to go hunting to install a 3rd-party browser on new machines just isn't there any more, for me. And it's simply not enough of a religious issue to get steamed up over, any more. IE is no longer the whipping boy of browsers.
Personally I think Safari for Windows is a piece of crap, not least because it inflicts a pale imitation of the OS X look & feel onto Windows. But I suppose some people might use it, for example Mac users who are running it at work or whatever.
Still, it sounds like an edge case. I assume that if it's something to do with the height of an IFRAME, Safari is blindly trusting the content to be sane and then doing something stupid, such as making an allocation that exhausts all physical memory, consumes all system resources, or something similar.
"Edge cases" are exactly what an OS is supposed to be able to deal with.
The user or an incompetent programmer can always do something stupid. When this happens, it should not bring the whole system down. It's no longer 1984.
The system is there to be a gatekeeper, to manage resources, and to sensibly deal with problems, including inept and malicious users.
The deeply depressing thing about windows has always been that it trained users to expect a level of quality far below what was the norm before windows existed.
It is no longer 1984, but I used computers in 1984, and the general-purpose OSes available at that time were generally very solid and would not let a user program crash the OS. I have run computers on a variety of operating systems of that age for up to ten years, with the only reboots following power or hardware failures. Can anyone say that about Windows? It was only when Windows became widespread that people began to accept that computer software was inevitably unreliable. Windows has become much, much more reliable than it was historically, but incidents like this continue to reinforce the impression that it was extremely poorly designed (if designed at all), with reliability and security being low priorities if they were considered at all.
I'm open to the possibility that you were running an OS on some serious iron (for 1984) back then, and not an Apple/IBM clone/microcomputer, but except for IBM mainframes and things like that, the OSes of the time pretty much surrendered all control to any program that was run. Depending on the platform, free access to memory and registers wasn't that uncommon; heck, some (microcomputers) unloaded the OS when a program loaded, and you had to restart the machine to get back into the OS. In the meantime your code had direct access to whatever hardware it fancied.
U sure your specs haven't gotten a little bit rosy since then? ;)
"except for IBM mainframes and things like that the OS's of the time pretty much surrendered all control to any program that were run"
Except for UNIX (including Xenix). And Multics. And VMS, TOPS-10, TOPS-20, MCP, Pick, and GCOS. And those are just some of the non-IBM-mainframe OSes of the time (that sprang to mind). Many ran on minis and workstations; Xenix ran on PC-class machines.
For that matter, 1984 was only three years before OS/2 1.0, and only four before Windows/386.
Lumping all general-purpose non-PC computing into "mainframes and things like that" is a bit like saying "except for cars and things like that, motor vehicles all have two wheels".
One or two new hardware devices have been added to the mix since 1984...
Most of the bluescreens that occur are triggered by third-party device drivers. Windows (unless something has changed recently) is the OS that "enjoys" the most third-party support. The number of developers qualified to write good device drivers can probably be counted on one hand. Fortunately the development kits come with good, fleshed-out examples that often require only minor modifications to support simpler hardware devices.
That said, it is of course extremely serious if a usermode app running as a non-admin user is able to trigger a BSOD. I do not recall something like that happening to me in 17 years of Windows NT/2000/XP/Vista/7 usage. I have of course experienced many BSODs due to badly written device drivers though.
There have been a number of Windows user-mode BSODs. There was an entertaining one a few years back where just printing a short sequence to the screen of a "command prompt" (shell) window would crash the OS (due to a bug in CSRSS, if memory serves). I verified that one myself, in part just to see how easy it would be to exploit it.
For that matter, many of the escalation vulnerabilities in Windows (and there have been plenty of those over the years) can crash the system when fuzzed sufficiently. I suspect you can do it using Tavis Ormandy's #GP Trap Handler exploit, for example, with an unprivileged user-mode program.
In other words, this needn't have anything to do with drivers; nor is it the fault of Safari. (Impressive how many commentators can't understand that simple fact.) It's a bug in the OS, of a sort that's been seen before and will be seen again. It's noteworthy, but not unique.
And this happens to other OSes too. I remember a bug in the Pyramid flavor of UNIX that was triggered by an erroneous pipe() system call and caused a kernel panic.
...and that would be difficult.
This could be a Safari thing, in that it is installed to run as administrator, allowing it to do silly things like trash the OS; or, more likely, being Apple, it has installed a whole bunch of other crap you don't need and were not told about that is running as privileged services in the background.
I installed iTunes on a Vista laptop a few years back, then uninstalled it; after which I was constantly getting issues with the machine because Apple had deleted system DLLs!
I decided it was easier and safer to re-install the OS than to try to find everything iTunes had screwed up.
" "and don't trust Google with their privacy perhaps?" because Apple's hands are clean..."
I'm more inclined to trust Apple because I am their customer. If they don't deliver what I want, they lose business. With Google, I'm the product: its aim is to please the people who pay them, and that often means intruding as far as they can into my life.
testing their code so that Safari users get the same experience as Firefox users, IE users, Chrome users, Opera users.
I have 5 browsers installed on this box. I only browse the Internet with Firefox, the other 4 only ever run code from my dev server or local files.
Now if all browsers were standards compliant and all supported the same feature set, I would only need one.
Farcical amount of time on browser quirks?
I know that one, trying to point out to a (publically funded/charitable) customer that they are wasting over £600k a year on browser support.
The response "well corporates like us use xp/IE6 and no way IT are going to let an update change the browser"
God, I don't even want to keep thinking about how wrong this policy is on every level.
I don't care if you ARE an IT admin; you're failing to perform your duties, deliberately causing your employer to waste money, and exposing them to the oldest collection of security flaws, the very ones they pay you to mitigate.
And if you happen to be the boss, then more shame on YOU, because you're paying this worthless "£$%wit good money and letting THEM tell YOU how your business should operate, blindly trusting them to keep your company safe while they shun the advice of EVERY major player, including the people who wrote the software in the first place and every worthwhile security consultant ever.
Mine's the one with the blinding fit of rage spilling out the pocket.
I do have some sympathy for him. He was sucking on Ken "Unix is snake oil" Olsen's kool-aid teat during his formative years, so it seems likely that the Olsen kool-aid was still in his bloodstream when he went off to Microsoft to re-invent UNIX poorly^W^W^Wwrite NT.
Also I am pretty sure that Dave had nothing to do with putting "Win32" on top of the kernel, which seems to be where most of these fuck ups come from.
At the end of the day Dave Cutler et al could have saved everyone the bother by implementing POSIX properly in the first place, like Linus did.
I'm impressed that you know Dave Cutler's name, given your lack of knowledge about operating systems or history. When NT was written in 1989, it had dozens of modern OS features that UNIX either did not have or had only in a very confused state. Cutler's impact on UNIX is what is not properly appreciated. I was at Bell Labs when the UC Berkeley folks ported BSD to the VAX, and it was pretty clear to us that BSD was UNIX + VMS (virtual memory, and a file system that was not journaled but at least did not horribly suck like UNIX V7's).
Linux is basically UNIX + NT. So many ideas in modern UNIX come from Microsoft: the use of dynamically linked libraries, device driver interfaces, asynchronous file I/O, a journaling file system, etc. All things done by MS operating systems before UNIX. UNIX still lacks the systems-engineering design that Cutler and Microsoft brought to operating systems, the modularization and formal interface (e.g. COM) structure. Using BSD in the late 1980s, we had to edit tables and recompile the whole kernel to install a new device driver, since drivers were simply subroutines in a monolithic program.
Readers who are interested should take a look at Hart's book on Win32 Programming, to see what a kernel design should really look like.
Yep, SCO OpenServer 5.0.4 required a re-link of the kernel to change the IP address. Maybe not as drastic as recompiling the entire kernel, but still hardly something you can do on the fly.
No idea how DHCP worked on that OS. I have a feeling it didn't, and nor did any Internet link with a dynamic IP address.
>>So many ideas in modern UNIX come from Microsoft - the use of dynamic linked libraries, device driver interfaces, asynchronous file I/O, journaling file system, etc
Actually, dynamic linking was implemented long before VMS was ported onto MS soil, even before Unix: namely, in Multics. (BTW, has DLL hell been fixed already?)
And JFS (the journaled fs) was the first one...
As a matter of fact, is MS suing anyone for infringement of these (patented) ideas? No, so it is safe to disagree with you. Even if MS has had any technological influence on Unix, POSIX and Linux, the counter-influence prevails. MS is also known to persevere in resisting outside influence... though not forever. PowerShell was introduced in 2006, decades after csh, ksh, bsh, bash, etc. Now Win8 is promised to be available headless, without a GUI.
>>the modularization and formal interface (e.g. COM) structure
This is the funniest part. Where is this modularity? Windows is modular on paper only; it cannot be tweaked the same way as other systems.
Re: "So many ideas in modern UNIX come from Microsoft - the use of dynamic linked libraries, device driver interfaces, asynchronous file I/O, journaling file system, etc."
All those ideas predate Windows and some even Unix. Dynamic linking came from the MULTICS project, like almost every other idea in "modern" operating systems.
Solaris had dynamic linking very early on (not sure if before or after NT), but in any case the Solaris implementation, which is what Linux copies, is superior to Windows DLLs in that creating and using dynamic libraries is practically identical to using static libraries. In Windows you had to jump through hoops. (Export modules? Strange non-standard C extensions? What are the sharing semantics of global variables in DLLs across different Windows versions?)
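The "hoops" in question can be shown in a few lines of C. On an ELF system an ordinary function in a shared object is exported by default; historically, a Windows DLL needed an explicit `__declspec(dllexport)` annotation (or a `.def` file) for the same effect. A minimal sketch (the `API` macro name is my own convention, not a standard one):

```c
/* One source file, buildable as a shared library on either platform.
   On ELF (Linux/Solaris): nothing special is needed; the symbol is
   visible by default. On Windows: the export must be declared. */
#if defined(_WIN32)
  #define API __declspec(dllexport)
#else
  #define API   /* ELF shared object: plain function, exported as-is */
#endif

API int add(int a, int b)
{
    return a + b;
}
```

Usage is identical to a static library from the caller's point of view on ELF; on Windows, the consuming code historically also needed the matching `__declspec(dllimport)` side of the macro for best results.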
I agree that older Unix like that 1980's BSD was inflexible in that recompiling or at least relinking the kernel was often needed for minor changes. It was simply showing its age. NT could avoid much of this as a new design, and so could Linux. It does not mean that Linux copied Microsoft, they were both just taking advantage of "new" (actually by that time well-known) techniques.
Just about all the kernel features of NT were copies of what VMS had been doing since 1978, the only big difference being that VMS didn't go multithreaded until Version 7, on Alphas (not on VAX hardware).
If anyone wants to read how a truly secure kernel/OS should be implemented, look around for a copy of the (Open)VMS "Internals and Data Structures"; it covers everything from the boot sequence onwards.
For those who think not doing multi-threading is bad: they need to realise that probably less than 1% of the VAXes made had more than one CPU, so there was no benefit to be had, and the OS used ASTs (Asynchronous System Traps) that made writing software where one process could handle hundreds of devices, timers, etc. all at once so simple you wouldn't believe (plus it's much more efficient than threading on one CPU).
PS: The "Open" seen with VMS is silent.
"Just about al the kernel features of NT where copies of what VMS had been doing since 1978"
They're not, you know, they're really not. But the number of people who realise where they do originate is negligible. I have the privilege of having seen inside where many came from.
Other Cutler-inspired projects included a PDP11 OS, and a distributed realtime environment for VAXes called VAXELN. VAXELN had threads and the like before threads were even heard of, *and they were useful* even in the one-processor case.
VAXELN was an embedded environment where you could think about designing the application rather than driving the hardware (and the network, and all the other non-productive stuff that other RT kernels used to require, and which some still do).
*That's* as much where the NT kernel concepts come from as from VMS.
Sadly there's very little written about VAXELN, but I do believe Custer's book mentions it.
Dave Cutler is often given far too much credit for VMS, as he was just part of a large team of designers (both hardware and software) that created VMS and the VAX hardware together; and that is why VMS is probably the most stable and feature-rich OS ever written.
Unix didn't go 32-bit with virtual memory until about a decade after VMS shipped.
On VMS, the C RTL/POSIX libraries were nothing but a tiny user-mode layer, which VMS developers avoided if they wanted to do anything clever, complicated or efficient.
Re AC @ 13:11: "Unix didn't go 32bit with virtual memory until about a decade after VMS was shipped."
Honestly, would it kill you to do five minutes of research?
VAX-11/VMS was announced in 1977, with the VAX-11/780. 3BSD UNIX had paging virtual memory; it was released in 1979. (UNIX V7 had swapping, but not paging, in the same year; it spawned UNIX/32V, and then 3BSD, before the year was out.)
1979 - 1977 = 2. 2 < 10.
VMS is a perfectly good OS - though I suggest you don't try that "most stable and feature rich OS ever written" line in a room full of TOPS-10 or TOPS-20 fans. And personally I think OS/400 / System i gives it a run for its money too, on the stability/features front, even if VMS is probably more fun to work in.
It's just as well that no UNIX-inspired OS in the world has ever had a kernel vulnerability exposed to userspace, otherwise your comment would make no sense.
But you're right, the "everything is a file, except there's ioctl in case it isn't a file, plus random synchronization primitives which aren't files" model is obviously way better than the "everything is a polymorphic HANDLE" model, unless you're some kind of incognizant fool!
I am certainly no Windows fan, but I have met and spoken with Dave Cutler, and had an interesting conversation with him back in the NT 3.x days. He's a bloody genius.
Yes it really sucks that Windows has its own APIs and does not do POSIX, but that was a command decision from above. Dave certainly had the nous to make POSIX happen.
Back in the NT3.x days, MS was pushing NT for servers as a simpler to use and cheaper alternative to the 386 *nix offerings of the day. To woo over customers they offered POSIX compatibility (that almost worked) and a pushable streams module interface. These were needed to allow companies to port their products to NT.
The POSIX implementation was really crap though. Performance was horrid. But once companies, like the one I was working with at the time, had got sucked in and committed to NT based products it was too late. You had to port your apps to the Windows API to make them run properly. Even still, we needed a 100MHz 486 to perform a job that had previously needed a 25MHz 386SX.
> Could be a duff graphics driver told by an app to [ ... ]
No, it couldn't, because apps don't speak directly to graphics drivers; they aren't allowed to. They speak to the win32k subsystem of the OS, which translates the graphics APIs they invoke into various calls to drivers, and it is that win32k subsystem (like every kernel-mode subsystem or driver) that is completely responsible for validating the input parameters it is passed and ensuring that it doesn't submit any bad requests to drivers. In other words, a user-mode app attempting to allocate a hugely oversized bitmap or canvas or whatever should not cause the win32k subsystem to generate an insanely huge allocation request to a device driver. (And the device driver should reject it also, and in fact probably does; we have no evidence, just your supposition that the crash is happening in a driver rather than the core kernel.)
It is not up to the OS to make size checks before asking the driver to perform allocations. That is the driver's job. If the driver can't handle the allocation then the driver should return an error.
The rationale behind this design is that only the driver really knows how big an allocation it can make. The OS should not know. If it did, it would have to know the maximum allocation size for every different graphics card, which is clearly not a good idea.
Yes, you are correct that this is merely a supposition. That's why (s)he wrote 'maybe'.
that an app could take out Linux? Or OSX? Or could take out Android?
It's all very well getting angry about it, but if you can do it maliciously then you CAN do it by accident. OSes should look to minimise this sort of thing, but it won't protect you from stupid coders. Safari is more to blame than the OS, and it's a bit sad that the article takes the easy route and places most of the blame on MS, rather than at least on an equal footing.
"Windows should not allow an app - even if malfunctioning - to take out the machine; that's the ultimate fail here."
The fault wouldn't have been noticed if there wasn't a flaw in both W7 and the browser.
The browser is naughty for not handling the exception properly (as you might still be able to trigger some code injection trick via that door).
But, as you say, the operating system didn't handle the exception either and should not have allowed the take down by a single program, even if it was trying to do so.
Smacked bottoms all round.
"I didn't know that Opera costs money to use on Linux or Windows."
It doesn't. It did many years ago, though.
This is just another case of someone living in the woods with no running water, electricity and mail who is assuming a situation never ever changes. Pretty much similar to most 'arguments' of Linux fanatics against Windows which pretty much originated from the Win95 days.
"Why in God's name would you use Safari on Windows?"
I do Web development. Ergo, I use Opera, Firefox, and Safari on Windows and Mac, and IE on Windows as well.
Don't even get me started on various browser inconsistencies and rendering problems that make it necessary to do that. Point is, I do.
I can't speak for anyone else, but whenever I hear "Why do you use ___" what I tend to hear is "I personally don't use ____ and therefore I don't think anyone else should either." Which is silly, when you consider how many of these folks also say "and I hate anyone telling me what software I should use."
response to this. As others have already stated, the issue isn't the browser or even how the error is triggered; it's the fact that ANY application can cause a system failure like this. If Safari can do it by accident, how long before a virus writer makes use of the same exploit to cripple your machine, if they haven't already? It's a proof of concept; that doesn't mean it can't happen any other way.
Sticking your fingers in your ears and saying "la la la, I don't use Safari" isn't going to make the issue go away.
I agree that it's a flaw in Windows. It seems the days of applications being able to crash the kernel aren't completely behind us yet. (Though imagine this being a headline ten years ago: everybody would shrug and say "it happens all the time". This being on a news site is a sign of how far we've all come.)
But does anyone still write viruses designed to crash your machine? I thought they were a relic of the old days when people wrote viruses just to be bitter or annoy you, instead of for profit by creating botnets and boosting your bank details. Aside from trying to destroy some poor Iranian centrifuges, I can't remember the last time I saw a virus doing actual sabotage. Do they still do the rounds?
"If they can crash your machine there is a very good chance that with carefully constructed code, they can gain elevated permissions - without blue screening the computer."
Right - or after crashing it. There are exploits that require rebooting, so a mechanism for letting an unprivileged process trigger a reboot (particularly an unsafe one like this) is useful. And, of course, sometimes a DoS is all the attacker wants.
Really, I don't know why so many people feel compelled to comment on stories like this when it's clear that they haven't studied even the most basic aspects of how common security vulnerabilities work these days. This sort of story always brings out the ignorant armchair pedants.
The answer is in the article, but perhaps only if you recognise the reference to win32k.sys. You can think of it as the kernel-side component of the user interface. I think I'm correct in saying that all Windows apps necessarily use this part of the kernel. Safari just happens to be tickling a bug in there that no-one else has found yet.
Of course, now that everyone is looking, they'll probably figure it out in a day or so. Early Christmas present for the black hats. Cancelled Christmas holidays for Microsoft's kernel team.
Not sure if Safari is installing any driver-level update or usage processes, but I'm seeing a consistent pattern on Win7 64-bit where iTunes is installed (with its associated update processes): system performance periodically tends towards dead-snail-like. Naturally, Task Manager is still reporting 99% idle CPU and <10% HDD access, of course. You have to love the fact that Task Manager doesn't count some application levels as using the CPU; while I can understand the problems that may occur in tracking driver-level application services, it's frustrating as hell. It may not be a specific iTunes/Apple problem, but on systems where this software is *not* installed, these particular problems do not seem to show up.
Some applications implement their update processes as driver level services to ensure that they work around UAC problems and can update a system without bothering the user. Unfortunately at this level of process the OS protections are somewhat reduced - often required for real device drivers but taken advantage of by update processes.
I don't have iTunes installed, but the other Apple software that I've seen uses the Windows Scheduler to carry out updates, so no processes hanging around, until they're needed.
That's not to say that the update processes triggered by the scheduler don't kill your system, but at least they don't run all the time, unlike other companies' updates.
See also: "Windows should not allow an app - even if malfunctioning - to take out the machine; that's the ultimate fail here."
Both spot on, and therefore worth repeating. Frequently.
Here's how it happened.
When Cutler started NT, it wasn't quite a microkernel as such, but various chunks were kept separate (different address spaces, different privileges, different access rights...) and communication between them was managed via the local equivalent of an RPC. It was unlikely that any individual application running without Admin rights could crash the whole system.
There was a performance impact for this robustness though, such that UI-intensive applications (and in particular, benchmarks used by mass-market comics) ran slower on NT than they did on W98 on the same hardware. W98 not having the same protection, it should be obvious that it might run faster. It should also be apparent to those with clue that productivity and performance aren't the same either.
But this performance difference was unacceptable to Bill, so, over time, NT has lost a lot of its robustness as stuff that has no need (and no right) to be in the kernel has moved into the kernel, for "efficiency" reasons. These unwanted and unnecessary migrations have inevitably increased the vulnerability of the system as a whole to incompetent programming in any particular bit.
How "efficient" is a crashed machine, Bill-friends?
Recommended reading: Helen Custer, "Inside Windows NT", Microsoft Press. Explains the way it started, doesn't explain where it all went wrong (how could it, when it's an MS Press book).
I recommend you re-read Custer. Yes, the current contents of win32k.sys *used* to be in user space. However, and it's a whacking great however, if they crashed, the session manager (which was waiting on the process handles) was *designed* to go down in flames and take the rest of the system with it.
The rationale was that no-one ever used a Windows box "headless", so if you lost the UI then you'd be power-cycling the machine anyway, so Windows might as well save you the bother of flipping the big red switch.
Twenty years ago, you couldn't write a useful Win32 application that didn't engage with the UI. If the UI sub-system crashed, every useful process on the system was inevitably unrecoverable, having had part of its "state" wiped out. You, as OS designer, have two options:
1) Fail fast. Trigger an immediate BSOD and thereby maximise the chances that the resulting core dump will contain the information needed to diagnose the problem and minimise the chance that data corruption spreads to other processes (since they'll all now be getting garbage from their calls to the UI subsystem).
2) Pretend nothing has happened. Restart the UI subsystem and assume (against all common sense) that Windows applications are all written so as to be able to detect this and somehow recover from the loss. Maybe log an event so that the sysadmin knows that all files modified after this date might be corrupt.
Windows ain't UNIX, not even now. It certainly wasn't in 1990.
As Ken says, and as I said earlier, those design decisions were made twenty years or so ago. But x86 has moved on a little in that twenty years, and Ken forgot to mention something:
"option 3: avoid the failure and the risk of corruption. Revisit those design decisions now that performance is less of an issue, and compatibility with DOS/Win16 apps (and even WinXP apps) is also less of an issue".
So, dear reader, what progress have Microsoft made in those twenty years, in terms of robustness?
Even the common or garden antivirus claims to be able to run a risky program in a protected sandbox these days. Isn't that something the OS should be doing for us?
"Windows ain't UNIX, not even now"
Indeed not, despite the number of The Great and The Good, people who do know their stuff, who are now on the payroll at Microsoft Research. Very strange.
Here's a link to some pictures to refresh your memories of the progress Microsoft have made (by leaving the OS architecture+design in the hands of presentation layer people):
This is going to be a fun winter with kernel exploits becoming the new (old) attack vector.
This says something about the Windows user of the 2010s, not using admin access by default, and making the "evil doers" resort to this. The average Windows user is growing up. Finally.
Might be a problem. It's not the way I'd do it, but I've had many more X server crashes than BSODs in my time and for interactive end-users there's not much difference between them.
If apps are going to share a desktop (keyboard, mouse, screens) then their usage of those needs to be mediated *somewhere* and if you can persuade the mediator to go tits up then everyone with a UI loses it. Most apps aren't written to recover gracefully from that. :)
At least win32k.sys is all Microsoft's code. If you've got third party drivers, then there's code running at kernel level that the OS designers have never seen, let alone tested. That's not great either. In fact, the Linux crowd consider it so bad that they mark the entire kernel as tainted and won't spend much (any?) time investigating crash reports from such a beast.
I'm as much a fan of linux as the next guy, but X still runs as root. They are edging towards fixing this, but there is still some way to go. And even then, the DRM (no, not that sort!) components will be running in kernel space. Oh, and for legacy cards you will probably always have to run X as root.
The thing is, modern graphics cards need some fancy memory management to work well, and that sort of management can only be done efficiently (or at all) when the code is running with kernel level privileges.
This all seems familiar.
I remember, about 12 years ago, a small bit of HTML that put 4 IFRAMEs in a page that referred to itself. It managed to take down a Windows 98 machine, a SunOS machine and a Solaris machine; the only machine I tested it on that didn't die was an HP-UX machine running Netscape. (The Netscape process died.)
Warning: lots of words.
"if you lost the UI then you'd be power-cycling the machine anyway" (says Bill. And Ken.)
What happens to the X server on a Linux/UNIX box when you log off (or otherwise clean up) a direct-connected interactive session?
My understanding is the X server (the bit that drives the hardware) shuts down and restarts, if it wants to. The X client stuff (session manager, window manager, apps, whatever) has already gone away as part of the logoff/cleanup process. All resources freed, and if necessary all hardware re-initialised (mouse included, sometimes).
All done with no reboot necessary in the vast majority of circumstances.
Now compare with Windows.
What stops the Windows session manager from doing something similar in principle to Linux/UNIX where possible? I.e. at logoff, or after a serious application crash brings down the whole UI, just free all allocated resources, re-initialise the hardware, and restart the UI from the top?
I think the answer is idiot design decisions going back as far as the days of the 486/66. In particular, the unnecessary co-mingling of UI data and code with kernel data and code stops it: design decisions made to get maximum performance out of limited hardware, and long since rendered obsolete by the performance of modern x86 (even Celeron and Atom). But the design hasn't been updated to match modern performance, so the system integrity holes are still there.
Evidence to the contrary is of course welcome (I'm basing this on XP, ignoring Vista, and assuming Windows 7 and the next server version of Windows haven't changed this much).
There should be no important data routinely at risk in the case of a crash of a non-privileged application on a properly designed OS. Windows NT started well, but today's Windows is not a properly designed OS.
And WNT is not VMS++. Never was, probably never will be.
@silent_count: thanks for the thanks. Spread the word.
"My understanding is the X server (the bit that drives the hardware) shuts down and restarts, if it wants to. The X client stuff (session manager, window manager, apps, whatever) has already gone away as part of the logoff/cleanup process. All resources freed, and if necessary all hardware re-initialised (mouse included, sometimes)."
Yeah. That's not the situation we're facing here. What happens on your X system if the X server crashes whilst all the client apps are still running? Does the X server automatically restart and the clients spot this and restore all their UI state?
"I think the answer is idiot design decisions going back as far as the days of the 486/66"
Ah, no. The "idiot" design decisions go back to Windows 1.0 on the 8086. Back then, the hardware didn't support pre-emptive multitasking, so the whole system was driven by posting messages around. Back then, 64K wasn't enough for every window to have its own v-table (like X) and so messages were used instead. Every version is as backwards compatible as possible with the previous few versions, because the customer's investment in closed source apps precludes breaking changes.
"There should be no important data routinely at risk in the case of a crash of a non-privileged application on a properly designed OS."
Microsoft would agree. Win32k.sys is not supposed to crash. (It's also "not non-privileged", btw.)
"Windows NT started well, but today's Windows is not a properly designed OS."
I'm not sure NT started anything like as well as some people now like to believe. It is better designed now than it was then. We have IO scheduling as well as CPU scheduling. We have process integrity levels as well as DACLs. We have session 0 isolation.
And if you read the forums from twenty years ago, NT3.1 had its fair share of BSODs too. It looked *very good* compared to DOS-based Windows, but the implementation wasn't perfect. Today's BSOD is news. The same thing twenty years ago might not have been.
Personally I don't need to read the forums from twenty years ago, I was there. NT3.1 was almost as new to system and device vendors and driver writers as it was to punters. That's one reason there were so many BSODs (on x86 anyway).
Of course there have been improvements in NT. It is good to see NT finally dropping some of its Gates-imposed DOS-era baggage as time goes by. But maybe the NTVDM wasn't such a bad idea after all; maybe all the Win32 stuff should have lived in a Win32 sandbox too (maybe Vista's "Session 0 isolation" is a step in the right direction). And there's now less excuse for the "efficiency" improvements which led to so much UI code and data being mixed with kernel code and data.
"We have IO scheduling as well as CPU scheduling. "
Excellent. How long has Linux had IO scheduling? (Some PDP11 OSes nominally had IO prioritisation too. Nothing new under the Sun.)
"Microsoft would agree. Win32k.sys is not supposed to crash. (It's also "not non-privileged", btw.)"
The application in this picture isn't privileged and shouldn't be able to cause a crash like this. Somewhere in this picture (probably in Win32K.sys) is some privileged code which isn't written right and consequently a non-privileged application can bring down the whole system. Whose fault is that?
"What happens on your X system if the X server crashes whilst all the client apps are still running? Does the X server automatically restart and the clients spot this and restore all their UI state?"
I don't know, it never happens :)
More seriously, the apps do the same as they would if they lost a network connection between X app and X device. Because the apps *by design* have no critical kernel info in them, the system carries on, and once the X server is successfully restarted you log in again and continue. Without a reboot.
Does Safari implement CSS shaders or WebGL? If it does not, and if there is indeed no kernel-level software installed with Safari, then it would indeed be a Windows bug. Hopefully it will be fixed soon.
However, if WebGL or CSS shaders are involved, that would mean the graphics card drivers are involved as well.
Let's wait until more of this is known.
The obvious solution is for MS to copy Apple; that will be the inevitable result of the blackhats' work (including the blackhats pretending to be whitehats).
MS copies Apple, only lets approved programs run on its machines, only lets approved programming tools be used, and levies users heavy fees (via vendors) for keeping them safe.
Is that really what the blackhats want? A world like Steve Jobs envisioned?
The other solution is one in which those who produce hacking tools are treated like those who conspire to commit burglary by law enforcement.
It's a question of security vs risk. If you know what you are doing, or if you are being targeted specifically, then it is better to go for a secure system. If you don't know how to run a secure system and no-one is after you, then the least risky system would be better.
Safari (and OSX) offer obscurity which can offer lower risk rather than increased security. What actually provides lower risk depends on your circumstances.
That said, the issue here is with an OS component not checking its parameters. An app coding error should lead to an app crash, not an OS crash. This isn't to say that Linux and OSX don't have similar issues, but "everyone else has problems" is not an excuse. Never rely on the client (in this case application) to do your validation for you.
If UI code was put in the kernel (for speed possibly) that's a poor design decision - much worse than a simple bounds check failure.
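The "never rely on the client to do your validation" point can be sketched in a few lines. This is a minimal illustration of the principle, not Windows code; the function name and the MAX_HEIGHT cap are invented for the example:

```python
# Hypothetical privileged-side check: validate every parameter arriving
# from an unprivileged caller before acting on it. An out-of-range value
# produces a clean error for that caller, never a crash of the host.
MAX_HEIGHT = 1 << 24  # invented sanity cap for the example

def set_frame_height(height):
    """Accept a frame height from untrusted code, rejecting insane values."""
    if not isinstance(height, int) or not (0 <= height <= MAX_HEIGHT):
        raise ValueError(f"rejected untrusted height: {height!r}")
    return height  # only now is it safe to use for allocation downstream
```

The failing app gets an error it can handle (or die from, alone); the system carries on.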
No, the problem doesn't lie with the Win32 libraries. Win32k.sys is just a name. It's a kernel mode component, so it is the same bitness as the rest of the OS. Changing your browser wouldn't help, and unless you look at *very* large web pages, there's no benefit to using a 64-bit browser on Windows.
Does some strange things on Firefox 8.0 (Gentoo Linux AMD64). Konqueror renders a page with a very tall iframe (as you would expect). I haven't bothered to reboot into MacOS X to try Safari, and I don't have any Windows 7 machines.
I'd be interested to know if any Chrome or Konqueror users on Windows 7 AMD64 strike problems with that page, as that would put the blame squarely at Webkit itself, rather than Safari.
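For anyone wanting to test other browsers, the trigger described in the article is simply an iframe with an enormous height attribute. This is a hypothetical sketch of such a page; the specific height value is a placeholder, not the number from the actual proof of concept:

```python
# Hypothetical PoC-style markup: an iframe declaring an absurdly large
# height. A correct browser/OS stack should clamp or reject the value;
# the reported bug is that Win7 x64 blue-screens instead.
HEIGHT = 10**12  # placeholder value, not the published trigger

poc = (
    "<html><body>"
    f'<iframe height="{HEIGHT}" src="about:blank"></iframe>'
    "</body></html>"
)
```

Loading the same page in Chrome or Konqueror on Win7 x64, as suggested above, would show whether the fault is in WebKit generally or in Safari's use of it.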
(Closest we have is one that came with Vista, but that had an exorcism with a Windows XP Pro CD, as we needed to run software that didn't run on Vista.)
"So from the comments; when Adobe software takes down OSX it's not Adobe's fault, it's Apples?"
Yes. Third party code should not be able to take out the monitor/kernel/whatever. On any system. Anywhere.
That said, might I point out that Adobe's code is crap?
Two wrongs don't make a right ...
Given that world+dog have interpreted Steve's statement that he gave adobe all the time in the world to get flash working on iOS* and they couldn't do it as "I refuse to let flash on iOS devices", I think you're right.
* I think it was a year after the launch at that point, so adobe had at least 18 months
"browsing a few pages with netscape would start swapping and eventually cause a kernel panic on the Sun or Sgi machines"
Run out of swapspace/pagefile, at a guess. University sysadmins expecting well-behaved applications not needing a semi-infinite amount of virtual memory?
What a properly designed OS might do is have quotas on user accounts and/or processes, in order to prevent a single unprivileged user causing a denial of service on everybody else.
There'd be a slight performance penalty for the privilege and resource usage checking, and a sysadmin overhead in setting the quotas appropriately for the user, but done right you'd end up with a more robust system which, in the case of a luser wanting a semi-infinite amount of virtual memory, would just tell the luser "computer says no", while the rest of the system carries on as before. How brilliant is that?
If only there was such a properly designed OS available.
I wonder if HP have one (not WebOS. not PH-UX. What's the other one called?)
I assume the HP OS you're referring to is MPE. Sure, MPE is nice too.
However, UNIX and its derivatives have also supported restricting resource allocation for many years, via the setrlimit() system call and the ulimit command (actually a shell built-in). If sysadmins choose not to set hard limits for their users, that's administrator error, not the OS's fault. I suppose you could knock the OS for not making it the default, of course.
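The setrlimit() mechanism mentioned above is easy to demonstrate. This sketch uses Python's `resource` module (a thin wrapper over the setrlimit() system call) on a UNIX-like system; the 2 GiB cap is an arbitrary example figure:

```python
import resource

# Cap this process's virtual address space. Once the cap is in place,
# a runaway allocation fails with MemoryError inside this one process
# instead of pushing the whole machine into swap: "computer says no".
CAP = 2 * 1024 ** 3  # 2 GiB, arbitrary example
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (CAP, hard))

try:
    hog = bytearray(4 * 1024 ** 3)  # try to grab 4 GiB, well over the cap
    exhausted = False
except MemoryError:
    exhausted = True  # this process pays the price; the system carries on
```

The shell equivalent is `ulimit -v`, and pam_limits / limits.conf let an administrator set this per user, which is exactly the quota scheme described above.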
And the OS should handle resource exhaustion more gracefully; many UNIX flavours do. (AIX, for example, has since at least version 3 had fairly intelligent handling of virtual memory exhaustion: it starts by killing off non-privileged processes using the most memory, and does so by raising SIGDANGER first, so programs can be written to handle the condition sensibly, before it gets out the big stick of SIGTERM followed by SIGKILL.)
100 people write some software, 1 million will try to destroy it, because then they can make a mint selling software to fix it. Something makes me think that if we were just inventing the wheel today, 1 million people would be designing roads to break it and offering wheel protectors to stop it breaking. Humanity is such a disappointment.
Biting the hand that feeds IT © 1998–2021