Just check ALL buffers and sacrifice a little performance, OK? Goddamn, this has to end. lol
Researchers have identified a kernel-level vulnerability in Windows that allows attackers to gain escalated privileges and may also allow them to remotely execute malicious code. All versions of the Microsoft OS are affected, including the heavily fortified Windows 7. The buffer overflow, which was originally reported here, …
Not necessarily recycled code. It could have been freshly minted code. It did, however, have to conform to the same spec (or else it wouldn't be compatible with existing Windows apps) and was therefore susceptible to the same design errors, particularly if the new version was written by someone familiar with the old version.
Parts of the Windows API were devised in the 80s for a machine with less than 640k of memory and no protection. (CreateDIBPalette isn't quite that old, but close.) The former point encouraged "packing" structures and "re-using" fields for different purposes depending on the values of other fields. The latter point meant that programmers had to be trusted anyway. If you'd insisted on a tighter spec then the resulting product wouldn't have fitted on the target platform and wouldn't actually have been any more secure as a result. That's a cost with no benefit. Cutting corners with the Windows API in the 1980s was a perfectly rational thing to do.
Fast forward twenty years and Microsoft probably don't *have* a mathematically rigorous spec for the Windows API. If they did they'd probably find that it was self-inconsistent and provably insecure. The twist, however, is that the closed source ecosystem means that after you've found a problem you may find you can't fix it without breaking existing apps and pissing off your customers even more. Closed source ecosystems are intrinsically less secure than open ones because sometimes you aren't allowed to fix them.
Which brings us back to the "all new" Vista kernel you mentioned, since one of the big criticisms leveled against it was the fact that MS took the plunge and redesigned all the kernel interfaces, with the result that zillions of hardware devices were no longer supported. Those hardware vendors that were still in business eventually issued new drivers for their more recent offerings, but that still left a lot of hardware unsupported. (And as we've read this week, XP's market share is still larger than Vista and 7 put together. Coincidence?)
This is such a regular occurrence that I'm astounded that people are still surprised by this kind of news.
Let's face it, the "all new" Vista kernel was more of an "XP kernel with some old junk removed"; likewise the Win7 kernel is just the same again with even more derelict and forgotten code removed or tweaked. You don't have to work for M$ to know that; it's pretty much common knowledge.
I also teach students to code. High-performance languages such as C and C++ are highly susceptible to this kind of error. I, and just about every coder I have ever taught to use such languages, have written code with buffer and heap overflow possibilities, probably many times over. Most of the code you create won't ever be used in a hostile environment, until that use context creeps up on you and these security bugs really start to matter.
You will minimise occurrence by improving programmer education and by maximising code peer review, in some cases helped using automated code analysis tools. Even very experienced coders with deadlines to meet and insufficient time for peer review will create buffer overflows.
A good defence is likely to include opening up the source code to all interested parties. This doesn't defend against such bugs in open source code which isn't being inspected by many interested eyeballs, but it does defend open source code which is being openly inspected. Even then, some of the eyeballs finding these bugs will be more interested in covert criminal or intelligence-agency use of them than in reporting them and providing fixes upstream.
The security case for closed source is worse than this. Those with access to closed source code who are not the mainstream developers are more likely to have a covert interest than an interest in reporting problems to other users and developers. Software development shops are rarely leak-free, and programmers with criminal intent are not deterred by closed source intellectual property restrictions. Also, governments won't purchase Windows unless their intelligence agencies have access to the source code.
"You will minimise occurrence by improving programmer education and by maximising code peer review, in some cases helped using automated code analysis tools. "
It might help if you also point out that the *fastest* piece of software is the one that ignores whatever is input and finishes immediately, i.e. it does nothing useful, but boy, does it do it *very* quickly.
Response speed (which is *one* kind of performance; there are others) is *rarely* critical in the real world (much of Windows appears to be interpreted through the Common Language Runtime).
However, *when* it is, it's usually linked to other issues like security and reliability. Control systems spring to mind, for everything from machine tools to boilers, both of which (unlike cars and avionics) don't AFAIK have specific development standards to work to, but could kill someone if their software was written by a halfwit. So does low (or relatively low) level wrapper code. (IIRC, in Windows the DIB prefix means "Device Independent Bitmap".)
So if you're training people to develop for the lower levels of mass-market products (the Windows OS) which *will* be attacked, or for deeply embedded systems (which *might* be attacked and would probably hurt or kill someone if they go wrong) where "performance" *is* an issue, and you're *not* instilling in them a *deep* interest in things like testability, sanity-checking input data and even (dare I even breathe it) verifiability of their code, their future employers should get ready for a whole lot of fail.
Even very experienced coders with deadlines to meet and insufficient time for peer review will create buffer overflows.
No doubt. They might find that writing a tool (well, in principle I'd guess some C macros) that spits out outlines of functions in either a full-parameter-checking or a take-everything-at-face-value version would be a worthwhile investment.
Mine's the jacket with "Premature optimization is the root of all evil. D. E. Knuth" on it.
I have another slogan for you: "Assumption is the mother of all fuckups".
Sanity checking takes time. Yet as a beginning coder I found myself writing code to check everything, everywhere. That's reasonable, though at some point I stopped silently folding invalid input into some default value; better to just give up and teach the programmer to make sure the inputs are correct. That way you can, carefully, lift most of the sanity checking and speed up the code. It also shows very clearly where you must do a lot of sanity checking: right where your code receives inputs it cannot afford to assume anything about. It also encourages you to perform every check exactly once.
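The check-once-at-the-boundary idea can be sketched in C (the function names and the size limit here are invented for illustration, not any real API): the public entry point validates everything, and the internal worker just documents its precondition with a debug-only assert instead of re-checking.

```c
#include <assert.h>
#include <stddef.h>

/* Internal worker: trusts its caller. A debug-only assert documents the
   precondition instead of paying for a runtime check on every call. */
static int sum_internal(const int *vals, size_t n)
{
    assert(vals != NULL);   /* precondition, verified once at the boundary */
    int total = 0;
    for (size_t i = 0; i < n; i++)
        total += vals[i];
    return total;
}

/* Public entry point: the one place untrusted input crosses the border,
   so the full validation happens here, exactly once. */
static int sum_api(const int *vals, size_t n, int *out)
{
    if (vals == NULL || out == NULL || n > 1000000)
        return -1;          /* reject bad input at the boundary */
    *out = sum_internal(vals, n);
    return 0;
}
```

Internal callers of `sum_internal` can then skip the checks entirely, which is exactly the speed-up being described.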
It's also why I believe that encapsulation, bordering off areas of responsibility, creating well-defined interfaces between parts, is the most useful thing that OO gave us. The rest, like hierarchical inheritance or even multiple inheritance, and polymorphism, is window dressing. Useful window dressing, not always but often enough, but window dressing. And yes, "I believe". That's highly opinionated personal opinion right there.
Anyway. As a programmer you're free to assume whatever you want, as long as you've thoroughly checked and confirmed your assumptions. That ought to be basic practice for everyone from architect down to "cheap Indian" implementor. I trust our instructors teach that nowadays?
To hear the informed opinion of yourself and the posters to which you replied!
I'm a Software Engineer as well and can appreciate the nightmare it CAN be to ensure all POTENTIAL vulnerabilities are captured and dealt with accordingly.
Refreshing as well because usually these sorts of stories start with some Linux zealot banging on about how it could never happen on their platform of choice or a Fanboi giving it the same. Well, news for you chaps. It can and does!
"Assumption is the mother of all fuckups".
Agreed. Especially the ones about the interfaces between different levels of code modules written by different programmers and the fact that no user or developer wants to do Bad Stuff (TM).
"That way you can, carefully, lift most of the sanity checking and speed up the code. It also shows very clearly where you must do a lot of sanity checking: Right where your code receives inputs it cannot afford to assume anything about. It also encourages to perform every check exactly once."
That would be *appropriate* optimization based on data collection of a *running* system and analysis of the results.
Note The function referred to is part of the Windows API. It's in the manual and publicly accessible. It is *definitely* a part of Windows that *will* receive input from *almost* anywhere. It is likely to be a wrapper for a bunch of device specific stuff but called rarely enough (AFAIK "Device Independent" bitmaps are not the *performance* option for anything) that checking its parameters should not hit performance (premature optimization again).
My gut feeling is that it should be feasible to code a lot of the sanity check code automatically from a spec of the functions definition. It would seem the sort of thing macro processors were written for. Note that working through an API manual and feeding them with pretty near anything *but* the valid ones as a way to break the OS (ideally to a state useful to bad guys) has been SOP since the late 1960s.
If it can be called from user space it's fair game.
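The spec-to-checks idea above might look something like this in C (a hedged sketch: the API function, error codes and limits are all made up for illustration). If the manual entry states the parameter constraints, the corresponding checks are boilerplate enough that a macro can front-load them uniformly:

```c
#include <assert.h>
#include <stddef.h>

/* One macro carries the boilerplate "check or bail out" pattern. */
#define REQUIRE(cond, err) do { if (!(cond)) return (err); } while (0)

enum { API_OK = 0, API_EBADPTR = -1, API_EBADLEN = -2 };

/* Imaginary API entry point. Suppose the manual says buf must be non-NULL
   and len must be 1..4096; those constraints translate mechanically into
   two REQUIRE lines at the top of the function. */
static int api_write(const char *buf, size_t len)
{
    REQUIRE(buf != NULL, API_EBADPTR);
    REQUIRE(len >= 1 && len <= 4096, API_EBADLEN);
    /* ... the real work would go here ... */
    return API_OK;
}
```

A generator working through the API manual could emit the REQUIRE lines for every documented function, which is roughly the macro-processor job being suggested.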
Personally I like to work toward optimisation wherever I can. Note that this is different from getting down with the code and tweaking it so it'll run faster, which is where the cost is. There is a big difference between doing that and planning ahead for possible later optimisation. And, of course, just as better algorithms trump bit-twiddling any day, so does better architecture trump better algorithms. The best optimisation possible is not making sure that all the work is done in the most efficient manner, but twisting things so that the work ceases to be necessary.
Knuth basically says "don't waste your time", and I like to think big about not wasting mine. The fact that the optimisation (in the previous example, lifting input checking on non-API subroutines) is now possible is the salient point. Do that in a timely manner, and the now-superfluous sanity-checking code didn't need to be written in the first place. What do you mean, lose time optimising?
I probably should've clarified. So here I do.
As to feeding invalid input into APIs, that should be SOP. Apparently it isn't; it's quite amazing what you can still break with even simple "fuzzers". Just about everything will break down eventually. Not too long ago someone figured out how to tickle TCP stacks (remotely) such that the system ran out of *timers*. OSes don't much like that, no. It's also quite hard to prevent that sort of thing through simple input filtering. But as we've seen, even that regularly doesn't happen in the right places. By certain reports, though, it happens lots and lots in all the wrong places. Common sense has it that shotgunning sanity checks is still effort well spent. I think it's just as much a waste of time, though justifiable if finding the right places turns out to be too hard. It'd still have me ask hard questions like: Why? So you don't actually know what the data flow in your code is like? Oh?
That's why I like the encapsulation part of OO programming: It gives me language tools that help provide guarantees about the structures I'm manipulating and provide barriers against meddling from the outside after the input checking has been done. That and the (same) tools that let me clearly define boundaries and interfaces between program parts.
Are you honestly saying Linux doesn't get this kind of vulnerability? If so, I'm glad you're not in charge of my systems. I suggest you subscribe to your distribution's security list and start reviewing it.
Granted, an issue as generally exploitable as this comes up rarely, but "local attacker can get root if you're using n" issues come up all the time. I've got a "PAM vulnerability" from 7/7/10 (Ubuntu) in my inbox right now, for instance.
As a Mac user I have some sympathy for MS, genuinely I do. This is code that is very complicated: hundreds of thousands of lines of spaghetti code. Quite a bit is probably so old that most of the devs who wrote it have long since moved on, and it would need loads of manpower to sort out. Couple that with the marketers and PHBs bearing down on the devs to get the latest, greatest code out the door, and these poor devs, who I am sure would love to fix it properly, are simply not given the time to do the job properly.
I am not advocating they do an OS X, dump the current line and start from scratch, but sooner or later most companies reach a point with their lead product where they simply have to cut their losses and start again. MS need to sit down and have a serious think about how much time and effort is going into fixing and fending off these bad-press stories. Then again, I suppose so long as the license revenue keeps flowing, it far outweighs the cost of a few hundred developers' time to fix a few small niggles every few weeks.
Time is money, and as they say money talks and BS...you know the rest.
MS are mostly a technology acquisition company now, they buy stuff they can't write themselves, has been true since the Frontpage and I think even MS office days...
They do some development, sure, but rewrite a new OS? Doubt they could do it. Not without hiring some programmers ;o)
.NET could be the tool to allow for this! (No - don't laugh!)
As a layer of abstraction between applications and the OS, in theory the OS could be replaced for something newer/more secure, but still keep application compatibility at the .NET API level.
I grant you, it is a big IF, but technically possible. It would require all major apps (including MS Office) to be re-written against .NET, but this could be done today, before the OS is actually switched out.
I guess it all depends on how much longer the fat desktop paradigm remains fashionable. With cloud computing and better browser apps (HTML5 et al) looming, who needs Windows?
I, for one, don't like the idea of putting my data into the hands of a company that either wants to sell my privacy to the highest bidder or just doesn't give a damn about my stuff OR puts my stuff beyond an unreliable net connection. But I do believe this time will come - it's pretty inevitable.
So maybe the onerous task to abstract the OS via .NET is pointless - Windows is a dead OS walking?
Forget I ever said anything.
MS already tried to do this... it's called .net. It's secure, easy to manage, fast and extremely efficient. At least, that's what the marketing BS MS released said, anyway.
Back in the real world, it's DLL hell overload, bodged APIs layered on top of the old existing APIs, and is so inefficient it's comical. The first versions missed out half of what real developers actually required so struggling developers had to lever in place so many bodges and kludges to call normal APIs it was unfunny. There are now around 5 different versions of .net to download, install and maintain on every system and that's before MS start the shenanigans of certain versions only working on certain underlying OSes... Apparently all this is good.
MS has that manpower, and tells customers they are designing from the ground up.
MS took the time to write Vista and produced a camel.
The main selling point for 7, apart from security and stability, is that you can run it in XP Mode.
Remember, this is the OS you pay for. Why isn't it way better than all those free OSes?
Preventing buffer overflows isn't exactly that difficult. You know how big your buffer is, so you only accept as many bytes as will fit. If somebody throws a huge mess of bytes at you, you just take what will fit and send the rest to the bit bucket. Problem eliminated.
Of course, finding every, single, last place you've used a buffer and correcting the code in something as big as Windows is going to be a long, hard, difficult job; no question. However, there's no reason in the world to add new code with a potential overflow issue.
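The take-what-fits rule is only a few lines of C (a sketch; `bounded_copy` is a made-up name for illustration, not any real API):

```c
#include <assert.h>
#include <string.h>

/* Copy at most dst_size-1 bytes of src into dst and always NUL-terminate.
   Anything that won't fit goes to the bit bucket, so the destination
   buffer can never overflow. Returns the number of bytes kept. */
static size_t bounded_copy(char *dst, size_t dst_size,
                           const char *src, size_t src_len)
{
    if (dst_size == 0)
        return 0;                       /* nowhere to write anything */
    size_t n = (src_len < dst_size - 1) ? src_len : dst_size - 1;
    memcpy(dst, src, n);
    dst[n] = '\0';
    return n;
}
```

The hard part, as the post says, is not writing this function but making sure every buffer in a multi-million-line code base actually goes through something like it.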
To see how common buffer overflow issues are, follow some Linux security advisories. E.g. there are only 4 mentions of buffer overflow on http://www.debian.org/security/ currently (showing 11 July on), sitting under "several vulnerabilities" items such as:
It was discovered that a buffer overflow in libpng allows remote attackers to execute arbitrary code via a PNG image that triggers an additional data row.
that Microsoft don't sit down and have serious thinks? Or that they don't have plenty of software engineers who do nothing but review and fix code? Considering the cost of a Windows license in the Western hemisphere, compared to the average wage of a software engineer in, for example, India, I think that's highly likely.
It's fantastic that there are so many people out there who either genuinely want to fix exploitable code, or who want to bash Microsoft so much, that they find these weaknesses for us and Microsoft.
...........the inevitable, tiresome comments about Windows and MS.
Like Pavlov's dogs you respond with saddening predictability.
Headline: "vuln affects all Windows versions". Woof woof, a chance to repeat what's been said countless times before. Is Linux perfect? Is OS X? Is BSD? A "vuln" in Windows, sorry, Windoze, has the potential to be more damaging because of usage numbers. But that's the way things are. I agree that there should be more balance, and that people should be using Linux more. But they're not. If they were, we'd be reading headlines about "vulns" in Linux.
How many of you MS, sorry, M$, bashers actually know how to write "code"? You talk the talk. As if, were you given the chance, you would be able to write a completely secure operating system.
Please, either be original or do us a favour, give it a rest. Instead of preaching to the converted, try to convince the un-converted or rather the un-aware.
C null terminated strings and similar stupidity.
C, C++ and all related languages suffer from this. In Modula-2 you can use inline compiler directives to turn RUN TIME array bounds checking on or off. Since 1983 or earlier.
Software-engineering-wise, PC programming is still living in the 1970s. The only useful thing (and it's misused) was C++ "objects", which are really just an automatic way of hiding a pointer to an instance of a struct where some members are pointers to functions. 1987.
Windows 7 is the latest version of NT 3.1 (1993), based on 1985's IBM/MS OS/2 (MS had an OS/2 version of their own in 1989 with LAN Manager added, hence NT's first version is 3.1).
So OS programmers in MS (and Solaris, Linux, Mac OS X) have been knowingly writing insecure unreliable software even though "technology" existed since early 1980s to avoid this well known problem even then.
In 1986 you didn't just get an application error: an array bounds (buffer overflow) error did everything from rebooting the PC to erasing the disk.
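For comparison, here is a C sketch of what a Modula-2-style runtime bounds check buys you. In real Modula-2 the compiler injects the check and raises a runtime error; C has nothing built in, so this hypothetical accessor (the name is invented) reports the violation to the caller instead of silently reading past the end of the array:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Every element access goes through an accessor that verifies the index
   first. An out-of-range index is caught and reported, rather than
   quietly turning into memory corruption as a raw arr[i] would. */
static bool checked_get(const int *arr, size_t len, size_t i, int *out)
{
    if (i >= len)
        return false;   /* bounds violation caught, not silent corruption */
    *out = arr[i];
    return true;
}
```

The cost is a compare-and-branch per access, which is exactly the "technology existed since the early 1980s" trade-off the post is complaining nobody takes.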
1) Writing a new version of Windows from scratch is just the dumbest thing they could do. You don't drop a code base of 100k+ lines (Windows is probably several million lines) because all those fixes you put in over time will be lost.
2) When was the last time anyone here wrote a giant software project running for several years, that doesn't get bugs reported even in old features... and that's without world+dog specifically looking for such things. I bet any code ANY of us have written has a higher number of bugs per line than Windows... but our software doesn't get relentlessly hammered.
3) I've always wondered: there _must_ be similar holes in Mac OS X and *nix variants. It's simply impossible there are not. And in fact, don't they have hacking contests on different platforms? What happens about them? Do *nix people have an equivalent of Windows Update, or are they reliant on updating the OS to a new version, or what? Genuine question...
Sure, this article is several years old.
I'm just taking the numbers from it, but you can check the article yourself.
Linux had an average of 0.17 bugs per 1000 lines of code;
commercial software has about 20-30 bugs per 1000 lines.
When that was written, XP had around 40 million lines of code. There are issues with the study, in that they weren't able to look at the source code for XP,
but are you seriously going to claim that M$ are going to have a *much* lower bug count than Linux...
and talking about hacking contests, I point you towards
Vista and OS X were beaten; Ubuntu wasn't.
Most Linux distros update the entire installed software base as often as you like (check once a day or once a week). The updates can be automatic or user-authorized (root password needed on most). Because only kernel updates need reboots in most cases, this can all happen without the user being aware, until for example a new version of Firefox is started and announces the fact.
Apt based packaging from central repositories makes updates easy.
This is why I love Ubuntu.
Actually the default is to run this for you each week as "Update Manager", so you don't get swamped with updates each day.
Security updates are pushed out immediately.
"1) Writing a new version of Windows from scratch is just the dumbest thing they could do."
I don't know really. Sometimes the 'dumbest' solution is the best.
Granted, it will take people who really understand what the hell is going on behind the scenes, and smart people at that, and it would be 'somewhat' labour-intensive, but sometimes when a code base becomes too ugly, it may just as well be time to nuke the entire thing from orbit, as it's the only way to be sure...
I use windows (I am platform agnostic somewhat). This is what I would like to see.
I am no MS lover, but from what I can see the finders of this bug have gone public as soon as they found the problem. They should have reported it to MS and gone public some time later - a month would be OK. That would allow MS to fix & get the patch out. What they have done is to make it easier for crackers to attack end user systems.
Having said that: it does seem that part of the problem is that MS has too much running at kernel level, things that do not need to be there. Thus problems in code have greater consequences than they ought to. This is a big design error in MS systems.
Oh: it is NOT a remotely exploitable problem as El Reg suggests.
This is a right-across-the-board piece of stupidity that affects all mainstream OSs - No exceptions.
I recognised buffer overflows as an important issue in the early 1980s, doing assembler programming on a BBC Model B, so how is it that major software devs STILL write unsafe code? If you don't know for an absolute certainty that the underlying code has length checks, do them in your own code. Don't be a lazy bastard!
To M$: if this is legacy code, then it will have been written without the overflow checking, but then what's it doing in the latest and greatest products?
More than likely the PHBs decided "screw this, we haven't got time to write a decent secure .dll, shove the old code in so we can get the product out of the door".
But there again, even in ye olden days of times past (i.e. the 1990s), how much extra processing would it have taken to go:
Query: what size is your data?
If the data is bigger than claimed then
Println "Oi, your data is bigger than you say it is, fek off"
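That olden-days check really is only a handful of C (names invented for illustration): compare the size the sender *claims* against what actually arrived and against the receiving buffer's capacity, and refuse the lot on any mismatch instead of copying blindly.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Reject the payload outright if the claimed size doesn't match the
   actual size, or if it simply won't fit in the buffer we have. */
static bool accept_payload(size_t claimed, size_t actual, size_t capacity)
{
    if (actual != claimed || actual > capacity) {
        fprintf(stderr, "Oi, your data is bigger than you say it is\n");
        return false;
    }
    return true;
}
```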
But then I'm using linux so I'm happ.... bugs in that too? aawww s**t
Every OS has flaws but windows is a bundle of flaws masquerading as an OS.
Checking and quick patching are the key points.
GNU/Debian/Ubuntu/GNOME is spot on in this regard.
With open code, lots of people are looking, checking and fixing.
Big names like Google and IBM rely on it.
Apt based packaging from central repositories makes updates easy.
Windows is a nasty mess with previous compromises between security and marketing coming back to haunt them.
A first step for anyone running windows is to do the easy obvious stuff, get a Linux based router and run windows behind it, for a start.
Run Firefox or Chrome, not IE, and install Security Essentials. Don't use dodgy copies of XP.
If you are short of cash, use Ubuntu, or at a push download the 120-day copy of MS Server 2008 and turn it into a workstation; you can bump it to 240 days legally.
But seriously unless you have corporate IT keeping an eye on security for you, just take the time to learn Ubuntu.
Using windows is like wearing a big target on your back, all the viruses are targeting YOU!
Get an Xbox for games.
Running Ubuntu... use Virtualbox and run a copy of XP in that for any apps you need.
Run "sudo stop qemu-kvm" before you start it and you are good to go.
No company can keep *all* their developers active *all* the time.
Logic says there can't be *that* many *patterns* of code that have the profile that will give this failure.
Here's the point. Pick up the bugs (and this *is* a bug) in the *source* code not run loads of tricky (but ultimately ineffective, given they have found this for how many versions?) program exercisers, stress testers etc.
Hint. This is not *just* a bug in a function. It's a bug in your development *process*.
Speaking of which how long ago was that root and branch code review of Windows?
AC says: "As a Mac user ... I am not advocating they do an OSX, dump the current line and start from scratch,"
Well, OS X is simply recycled UNIX from NeXTSTEP with a shiny new UI added. Nothing to be proud of, and after 10 years it still has < 5% market share. It really has an inferior security model; it's just less attacked.
Certainly not from scratch. It suffers from the same bloat as Windows, and the same cruft and stupidity going back to 1976 that Linux has.
Windows 7 is the latest version of a 17-year-old OS based on work started in the 1980s.
OS X is a 10-year-old OS based on work started in the 1970s.
I'll go now. Mine's the one with Knuth's "Fundamental Algorithms" in the pocket.
@Mage: Unix certainly has roots in the 70's and some of the Mac OS X underpinnings therefore go back that far. But don't criticize code *just* because it's old. The big advantage of old open-source code (Mac OS X's Unix underpinnings are all open-source) is that it's open, and has been extensively inspected and tested.
I don't criticize Apple for not rewriting the kernel every couple of revisions - that would be silly.
But Microsoft DID put a big, sparkly "all new" sticker on Vista. Now we find that, well, maybe they fudged the truth. Or possibly, to one poster's point, it *was* rewritten but compatibility issues meant that the bug remained. Humbug. If they rewrote the code and retained the unchecked-buffer-overflow behaviour then they deserve all the wrath they're getting, and more.
As to .NET being the saviour, MS has very, very infrequently cut off backward-compatibility. If they continue this with their next OS-cycle, whatever it is, then they're just carrying the same bag of fail.
Apple cut it off after decent transition periods - and handles the transitions very well IMHO. But 68K is no more, PPC is gone, OS 9 is dead and they only have to support Leopard and Snow Leopard.
Yet H.M. Gov't still prefers IE6. God help 'em for few others will.
>win32k.sys isn't a necessary part of kernel per se - it's a lot of the functionality
>that in *NIXes is provided by desktop environments, but moved to kernel mode
>in NT 3.5 (?) times to improve performance.
If it's in kernel-space then, as far as bugness goes, it's part of the kernel.
This is typical MS behaviour - trade security for speed.
There is a whole lot of stuff in Windows kernel space that shouldn't be there. They put it there to make it faster (so why is it still slow?) at the expense of security.
Bad trade, gentlemen. Bad trade.
The thing is that in theory they had a better case than the simplistic model of "unix": They tried to actually make use of the four security rings model x86 got from multics. Turns out that the way they did it was a bit too detrimental to performance for everyone's tastes. There's a lesson here.
What the lesson is? I don't know. Maybe that yes, context switches are awfully expensive, though a microkernel like QNX still manages to do quite well. Or maybe that one shouldn't make graphics performance that integral to overall system performance? As I say, "real servers are headless".
The fact that micros~1 finds itself so often making such onerous tradeoffs itself is another sign that something is fundamentally fishy with what they're doing. It's not just bad management or poor programming; it's also a bad approach to engineering.
What Microsoft need to do is stop this crap piecemeal approach and instead learn from the Windows 7 kernel dependency scope/break project: roll that code QA/fix effort across all the Windows sub-systems, with static analysis checks, and add bounds/null checks to all C/C++ function calls (e.g. via compiler macros), to isolate issues and reveal hidden ones which must be dealt with.
You can already fix loads of stupid bugs in Java using static-analysis QA tools like FindBugs and PMD, and other input-fuzzing tools; surely something similar can be done with Windows C/C++ code, with debugging symbols.
Windows is too damned monolithic, with too much code running too close to the kernel and with not enough isolation, with the excuse of performance, when it doesn't need to, especially in multi-core CPUs.
Microsoft also need to fix their crap and insecure .Net framework: its ridiculously bloated number of files and registry entries, the tiresome pain of fixing OS-bundled versions, and its frankly appalling application error reporting. I keep seeing stupid application errors with even worse error reporting than Java, and no easy way to fix them; e.g. with Java I just reinstall the JVM/JDK, or fix a tiny number of registry entries, but not so with crappy .Net!
I know a lot of this is just sh!t talking for fun, but now that Android is sending Linux into the mainstream you might want to tone it down. If you really believe that any Linux distro does not have these kinds of errors, then I don't think IT is the field for you. Do a search for yourself. Even wget had a massive exploit they just found last week. Check out the Risky Business podcast if you really want a well-rounded view of things. It's very serious shit that affects everyone. The iPhone was exploited just last week in a way that allows complete remote root control, bypassing the sandbox of the device! Germany's government even issued a warning. Ha, good luck if you think switching OSes is all you have to do to stop this stuff.
Maybe the MS fanbois can simply admit Windows is much less secure *in the real world* than Linux, BSD and MacOS X ? Zero virus/botnet problems with the latter group seem to indicate exactly that.
Microsoft screws up *conceptually* all the time. For example, their process permissions concept would be very good if it also worked with the FAT filesystem (which is "just" used for USB memory sticks). All the time they compromise on security if the slightest inconvenience can be expected for their lUsers.
A friend of mine is a windows expert who maintains internet cafes, including SHELL32.DLL hacks. He recently discovered the glorious IE8 cannot defend against Flash Exploits. IE8 is supposed to be "heavily fortified" and to contain the Flash player crap and all of its exploits. Instead, a process called "Skypenames2.exe" shows up when you look at sites like viporn.com. And that process will do a royal screwup of your Windows XP machine, which no virusscanner can detect right now.
Of course, this only works if you run in Admin mode - which most lUsers and internet cafes do. Because they are lazy suckers, certainly.
MS is botching it because they are lying to their customers when they claim IE8 can "secure" a system. Browsing with a locked-down, normal user account can do that - but apparently the much-hyped IE8 sandbox cannot.
All file security is system-local, whether NTFS or FAT. Lesson learned: ENCRYPT SENSITIVE DATA. Complaining that security doesn't apply to USB while completely failing to understand how file security works in the first place is petulant - you'd just as quickly complain if it did apply to FAT and you could just place the USB stick in another PC to read it.
Also, I don't really get the whole assumption that non-admin is safer than admin mode, and I'm an IT security pro. True, instead of hijacking the system for all users, it's only hijacked for one user. But where exactly is the defense against encrypting or destroying all of that user's important files and documents, especially the files on the fileserver they have access to? How does that stop their credit card data being watched for and transmitted to Russia? What's really more important? The system is easy to reinstall, and spam is easy to detect and block, compared to a user's data or identity being stolen. No, the only defense is to keep any and all exploits out, user or admin mode, and to keep up-to-date backups in case of failure.
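The point about user-mode damage is easy to make concrete. A minimal sketch (my own toy illustration, not from the article or any real malware) of why no admin rights are needed to steal or destroy a user's own data:

```python
# Toy illustration: code running with only a normal user's rights can
# still read and overwrite everything that user owns -- admin privileges
# are irrelevant to this kind of damage.
import os
import tempfile

home = tempfile.mkdtemp()             # stand-in for the user's documents folder
doc = os.path.join(home, "notes.txt")
with open(doc, "w") as f:
    f.write("card number: 1234-5678")

# "Malware" running as the same user reads the file, "exfiltrates" it,
# then overwrites it -- without ever touching admin-protected paths.
with open(doc) as f:
    stolen = f.read()
with open(doc, "w") as f:
    f.write("ENCRYPTED")

print(stolen)              # the data was readable
print(open(doc).read())    # and has now been destroyed
```

Nothing here requires elevation; the process only ever touches files its own user owns, which is exactly the commenter's point about backups and keeping exploits out in the first place.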
...you first get yourself a proper education and then check back in three years' time.
FAT / Windows process rights: Google Chrome tries to remove virtually all privileges from an HTML rendering process or a Flash plugin process. This works for NTFS, but not for FAT, because of an MS implementation quirk. Consequently, your hard disk is safe, but a virus may suck data from your USB stick.
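A toy model of the quirk described above (my simplification, not real Windows code): NTFS files carry a security descriptor that a sandbox's restricted token can be checked against, while FAT stores no per-file ACL, so the access check has nothing to deny against and degenerates to "allow".

```python
# Toy model: why a deprivileged sandbox process is blocked on NTFS but
# not on a FAT volume. (Hypothetical simplification for illustration.)
def access_allowed(filesystem: str, token_is_restricted: bool) -> bool:
    if filesystem == "NTFS":
        # The file's ACL can explicitly deny the restricted sandbox token.
        return not token_is_restricted
    # FAT: no per-file ACL to consult, so every token gets through.
    return True

print(access_allowed("NTFS", token_is_restricted=True))  # sandbox holds
print(access_allowed("FAT", token_is_restricted=True))   # USB stick exposed
```

The model just restates the commenter's claim in code: the sandbox mechanism is sound, but it can only be as strong as the metadata the filesystem gives it to check against.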
You are talking about the security of a system when the attacker has physical access, which virus writers normally do not have. And even if you encrypt your filesystem, it must be accessible in unencrypted form at runtime to be useful, which potentially means a virus can read it.
And YES, running as a non-admin or non-root user when reading documents is a very basic security measure, because then only that user's environment can be affected. Of course files residing on the hard drive can still be read, changed or deleted. A system like SELinux or AppArmor can remove even that threat to a large degree - for a normal user, of course.
BUT the virus cannot fiddle with the operating system, nor can it change IExplore.exe, Firefox.exe or any system-installed DLLs. It CANNOT INSTALL A ROOTKIT. Maybe you can remember this phrase for your future life as a "system administrator".
Running as a *normal* user, browsing http://sweetgirlswithbigboobs.ru, clearing history, logging off, logging on again and doing https://bankofamerica.com will give you a fresh browser instance where you can have high confidence it is not owned by some Russkie Criminal. If you don't understand this, maybe you should apply with McDonald's.
That viruses can indeed destroy documents is true. IE8 was supposed to sandbox that can of worms called Flash, but apparently the sandbox and UAC are more a kind of Box With Many Ratholes.
The only proper security feature Windows contains is the privilege reduction of a normal user account. UAC and the IE8 sandbox are apparently not stopping criminals.
Stepping around the traditional OS flamewar, and looking at the details of the vulnerability...
The guy who made this public is doing it purely for the "fame", to show the world how cool he is. This type of behavior has been around for years (hackers trying to out-do each other, crackers breaking security mechanisms because they see them as a challenge, etc.).
This article is sensationalist, however - by the guy's own admission he couldn't inject any code, so the impact is on system stability (a crashed/hung OS):
"Anyway, it’s really funny for me to read that people say it’s exploitable, I am waiting to see an exploit, in the code execution sense.
This is not trivial since every fourth byte that is copied is the value 4.
And the memory block gets allocated per call, very hard to have any assumptions on it.
It’s very hard to exploit it for code execution, on the edge of impossible.
That’s why I felt safe about releasing it publicly :)"
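The "every fourth byte is 4" constraint is easier to see with a toy sketch (hypothetical code, not the actual kernel routine - which byte of each group gets fixed is my guess; the regular interval is the point). Whatever payload an attacker smuggles into the copied buffer gets a constant byte stamped into it at fixed intervals, mangling any shellcode:

```python
# Hypothetical illustration of the constraint the researcher describes:
# the vulnerable copy forces every fourth byte to the value 4, so an
# attacker-supplied payload cannot survive the copy intact.
def constrained_copy(payload: bytes) -> bytes:
    out = bytearray(payload)
    for i in range(3, len(out), 4):  # one byte in every group of four...
        out[i] = 4                   # ...is clobbered with a constant
    return bytes(out)

shellcode = bytes(range(0x90, 0x90 + 16))  # pretend attacker payload
copied = constrained_copy(shellcode)
print(copied == shellcode)                  # False: payload mangled in transit
```

Combined with the per-call allocation he mentions (no stable assumptions about where the block lands), this is why he judged reliable code execution to be "on the edge of impossible" even though the crash itself is trivial to trigger.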
This isn't being publicized for the "greater good" - he couldn't figure out a way to exploit the vulnerability, so he set a challenge for others... before even giving MSFT a chance to review and address it.
The only people that lose out here are the end users, if an exploit is possible and gets into the wild.
By the late 1970s, many of us had learned how - and how not - to design and implement secure, robust operating systems, programming languages, compilers, etc., and the systems we built 30 years ago were far better in those respects than any of the MS, Unix, Linux, OS X, or related systems that currently dominate our industry. From experience gained with systems like Multics, from production and experimental languages, from advances in compiler technology, and from efforts like the "Orange Book", we had learned the skills needed to build very good systems.
Systems based on this technology dominated the minicomputer space in the 1970s and 1980s, with systems from Prime, Tandem, Data General, IBM, and Digital Equipment Corporation (DEC) becoming market leaders. The DEC VMS operating system on the VAX computers, later ported to the Alpha and the Itanium, is, I think, the best of these.
Unfortunately, Unix and the C language were free to universities and low-cost to workstation vendors, and became trendy among the next decade of students and their equally naive professors. Rather than building real systems, nearly all the profs had spent their academic careers in arcane fights over the theoretical advantages of control structures (e.g. GOTO-less programming) and such. Most of them were swept along with the Unix/C advocates because academic budgets were really tight during the 1980s, and "Free" and "Open" are powerful marketing terms.
The IBM PC and Microsoft were disruptors in the business markets, initially as stand-alone systems with almost no focus on networks, security, or robustness. Later, Windows NT was marketed as a low-cost, almost VMS operating system, with much made about several of the DEC VMS developers going to the WNT team at MS. But it was more hype than reality.
So here we are, all facing a nearly-impossible-to-fix mess that sucks up resources and our time, with no end in sight.
I think there might be a way for us to clean up a lot of this mess without completely starting over. It's not likely to happen, but the rapidly increasing risk along our present course might leave few alternatives.
Use the VMS/OpenVMS operating system and its file systems as the base. VMS already includes user-level compatibility with everything from Unix systems and much of what MS Windows compatibility requires. Like Unix and Linux, VMS has native support for X Windows and everything related. VMS has very good compilers, including ones that take executable files from other systems as their input.
Initially developed at DEC, the OpenVMS products and developers were acquired first by Compaq and in turn by HP. They are usually marketed to existing high-end, high-reliability customers as large servers, but all the software is there to support OpenVMS on workstations, though adding support for a few more MS Windows APIs and libraries would make things easier (while still enforcing security). A port of VMS to current 64-bit AMD/Intel processors would give us, at least, the possibility of re-securing our world.
A bit like virtual machines, but easier to manage and immensely more secure.
Biting the hand that feeds IT © 1998–2021