Well done, MS!
Another example of a slipshod attitude towards security. Who in their right mind would design an OS that would perform such dangerous tasks?
Yet another reason to use a 'nix platform.
Hackers have developed malware that spreads via USB sticks using a previously unknown security weakness involving Windows' handling of shortcut files. Malware targeting the security weakness in the handling of .lnk shortcut files has been spotted in the wild by Belarus-based security firm VirusBlokAda. The malware uses rootkit …
it is related to the behaviour of Explorer. It does do some crazy things. Remember the one where it would hang if you clicked on the directory above a directory that contained large zip files, because the developers decided it would be a good thing to look at the zip in case you later wanted to list that directory? Except if you did list it, you had to wait a second time while it read it again, in case it might have changed since the last time.
The CD: every time you look at the top level (and sometimes at other random levels) it decides to wait before it shows you anything while it spins up the CD, which it knows is unchanged and read-only and whose contents it has already listed, but let's just check again.
It still does things like this, which become very apparent when you operate over a slow link. I'd bet Explorer is doing something "clever" when it sees the link. Too clever by half.
In my opinion, the explorer developers were clueless as to the real world.
and 'nix doesn't have security holes of course.
There's obviously no reason why there are hundreds of regular security patches when you download package updates on most distributions.
And no reason why my web server logs are crammed full of hack attempts targeting PHP packages on Linux.
I love 'nix stuff as much as anything else, but I'm not blind enough to believe that only MS is vulnerable.
Anyway, who in their right mind would design an OS where a clueless admin can go 'rm -rf /'
(sure you can do likewise with just the delete key in Windows, but the point is you can do dangerous things on non-MS operating systems).
You can create files called '-f' and '*' under *nix. We used to do that at Polytechnic. It was one of the several ways we used to 'educate' people who forgot to log out. Create those two little beauties in their home directory then wait :)
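For anyone who never met the trick: it works because the shell expands `*` before `rm` ever sees its arguments, and the expansion sorts `*` and `-f` ahead of ordinary filenames. A little sketch (Python standing in for the shell; the filenames besides `-f` and `*` are made up, and nothing is actually deleted):

```python
import os
import tempfile

# Recreate the booby-trapped home directory in a scratch area.
d = tempfile.mkdtemp()
for name in ("-f", "*", "notes.txt", "thesis.txt"):
    with open(os.path.join(d, name), "w"):
        pass

# What the shell hands to rm when the victim types `rm *`:
# glob expansion replaces `*` with the sorted directory contents,
# and '*' (0x2A) and '-' (0x2D) sort before the letters.
argv = ["rm"] + sorted(os.listdir(d))
print(argv)  # ['rm', '*', '-f', 'notes.txt', 'thesis.txt']
```

So `rm` sees `-f` as a force flag rather than a filename, and cheerfully removes the lot without prompting. That's why the paranoid type `rm ./*` or `rm -- *`.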
My personal favourite though (as it was less harmful) was just emailing someone text with control characters in it. IIRC ^S logged off the BBC Micro console software and dropped them back into BASIC. Another one changed the screen mode. Mode 2 was always good for a laugh :)
I really don't understand this. LNKs were added to offer a similar functionality to softlinks in Un*x. However, they don't work properly, because not everything recognises that it is actually a link to a different file.
Indeed, some programs do open the original file, but along the way they notice that the file pointed to has an extension they don't recognise, so they open the file, just wrongly.
And this is despite the fact that NTFS supports linking in the FS!! I'd say that if you are using anything from NT onwards, then the main FS should only be NTFS, and you get the links for free.
Instead we have these weird files that sometimes work like a FS link, and the rest of the time either don't work, or just infect your computer...
..because removable media isn't formatted with NTFS.
Thumb drives usually use FAT which doesn't support soft links. DVDs and CDs use ISO9660. This might possibly support soft links (it's been a while since I had the unpleasant experience of reading the spec) but I doubt anyone would risk it. ISO9660 is such a pain that most people only implement the basics.
I believe NTFS links are only hard links. They're pretty nasty, as there's little to tell you that something is hard-linked.
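To be fair, the link count at least gives the game away if you ask for it. A quick POSIX-flavoured sketch (filenames invented; NTFS exposes the same count through its own APIs):

```python
import os
import tempfile

d = tempfile.mkdtemp()
original = os.path.join(d, "report.txt")
alias = os.path.join(d, "copy-of-report.txt")

with open(original, "w") as f:
    f.write("same bytes, two names\n")

os.link(original, alias)  # create a hard link

# Both names share one inode; the link count is the only giveaway.
st = os.stat(original)
print(st.st_nlink)                           # 2
print(os.path.samestat(st, os.stat(alias)))  # True
```

Anything with a count above 1 has another name somewhere, though finding *which* other name means walking the whole volume, which is the nasty part.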
LNKs are (or were) in some ways safer than a soft link, because they are just a UI shortcut, not a FS shortcut. So you can't accidentally have an application go off traversing via a link that wasn't intended to be anything other than a shortcut for a user to click on. That some apps parse them as soft links is their fault. They should be treated as desktop shortcuts and nothing else.
When the internet took off and XP came along, MS should have pushed a lot harder at locking down the O/S, they didn't. Just like Larry Ellison with Oracle back in the 80's, numbers, numbers, numbers was all that mattered. Didn't matter if Oracle database couldn't hold data for toffee! Getting names on the books and securing the numbers was all that mattered.
With the saturation that MS have in the home and business desktop markets, they have no reason to make more than a token effort to secure their flagship products. The market won't dump them, they have it sewn up, so why should they worry about something that will bother a small percentage of users.
I am almost positive there are developers in MS crying out to get things fixed, but the marketing droids want bums on seats, and if that means cutting back on developers and shipping products that aren't completely tested, so be it.
Linux might be a little rough around the edges and need a little more work in some places, but at least the developers have passion to try to get things right, the marketers don't get anywhere near as much of a say in Linuxland.
You can also say that to the folks who say "I told you so" when it all goes pear-shaped, as it inevitably will, sooner or later.
Conficker and friends don't need systems connected to the Internet, they just need (eg) a USB stick, an unclean subcontractor/visitor with (in)appropriate access, and incompetent IT staff and procedures. Fortunately those three never coincide, so there's no need to worry about the kind of scenario proposed in the article, is there?
What, they do coincide, and not just in hospitals? (The private sector are just better at keeping their outbreaks under cover; it took weeks for my high-tech employers to get rid of Conficker.)
I suppose it could be a hoax... anyway, I think it was Windows 95 or 98 where they announced on the same day: "Your operating system is no longer supported from today, and there is a type of malicious JPEG that will destroy your computer if it is ever on a web page you visit. We have known about this for some time."
That's been the case for a very very very long time.
The original NT design had lots of separate kernel modules and very little code ran in kernel mode unless it actually *had* to. This meant that there was very little code which was capable of compromising the whole system, and thus the system as a whole was relatively stable and secure. It also meant that there was a bit of a performance hit every time code went from user mode (an editor or whatever) to kernel mode (eg to do some actual IO).
The performance hit meant that in the early days of NT3 and NT4, apps were *slower* on NT than on W98 - W98 was always in "kernel mode", always capable of clobbering the whole system, and often did. So NT was typically more productive (because it wasn't subject to address space limits and wasn't falling over) but any individual benchmark would be slower on NT.
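The mode-switch cost is still easy to see today. A rough sketch (Python; the absolute numbers are machine-dependent, and `os.getppid` is chosen because the C library never caches it, so every call genuinely crosses into the kernel):

```python
import os
import timeit

def user_mode_noop():
    # Stays entirely in user mode: no trap into the kernel.
    return 42

N = 200_000
user_time = timeit.timeit(user_mode_noop, number=N)
kernel_time = timeit.timeit(os.getppid, number=N)  # one syscall per call

print(f"user-mode calls:     {user_time:.4f}s for {N} calls")
print(f"kernel round-trips:  {kernel_time:.4f}s for {N} calls")
```

Each kernel round-trip carries the trap/return overhead on top of the work itself, which is exactly the cost NT3/NT4 paid on every transition and W98 didn't.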
The marketing men didn't like this, and nor did Bill.
Bill said "make NT faster than 98". So lots of stuff that could and should have been user mode got shifted into kernel mode so there weren't so many changes from user to kernel and back again. And all that unnecessarily exposed kernel mode can compromise a whole system.
When high definition content started coming along, Bill's mates in the content industry attempted to get MS to restore some of the security of the user/kernel split, so that their extremely valuable high definition bits weren't as easily copied as they might have been without MS DRM and anti-tilt and the like. Unfortunately in many cases the performance effects were even worse than the 98->NT performance hit, and so Vista was the delight we came to know and love.
Whatever the naysayers may tell you, Linux does at least generally understand the difference between user mode and kernel mode, and generally makes the tradeoff in favour of stability/security rather than ultimate performance. For a lot of people that's a very sensible tradeoff.
One set of folks who may not like that tradeoff are of course l33t gamers; they just want everything to be as fast and as low-overhead as possible so they can get on with their fragging or whatever. They'd be better off leaving games to consoles though, and letting PCs be used for what PCs should be used for. No PC can serve two masters equally well (not with the same OS, anyway).
It was pretty sweet, the display module could crash, but the system would be merrily running away while you figured out some other method of dealing with it. But that doesn't play video games very well, so we got the NT4 corruption thing going and things went south from there...
These companies who offer control systems to critical and/or safety critical applications do guarantee them I presume? You know, they have audited the software and hardware, and have the full backing of all suppliers to cover them for the consequences of flaws in the system?
If not, who was the muppet that OK'd the choice?
"These companies who offer control systems to critical and/or safety critical applications do guarantee them I presume?"
Have you ever read a software contract/licence of any kind? Either for off the shelf or bespoke software? Mass market (Anti-Norton Virus etc) or niche (a SCADA package)?
It is the job of the software supplier's highly paid lawyers to ensure that software is supplied with no effective warranty of any kind, and with no possibility of supplier liability for damages if things do go wrong.
It is far more important to make sure the lawyers do their job right than it is to make sure the designers/coders/testers/reviewers etc do their job right, which is why the lawyers are generally paid far more than the people who actually produce the revenue-generating product.
Also, in a related way, the PHBs in this picture are generally far more interested in shiny new stuff ("we've done Ruby on Rails, what's next?", "we've done agile and scrums, what's next?") than they are in actually improving the product or service by simple stepwise refinement.
I guess there is a minuscule chance that the corporate lawyers may get it wrong occasionally, and that liability may then arise, but can anyone think of any examples that have gone public? I can't, but I'm only half awake today. Sky vs HP/EDS in recent months seems to ring a bell of some kind.
It sounds like the flaw here is in Explorer then, not actually with shortcut files. Someone found a way to get Explorer to execute code via a shortcut much like the shell scrap thingy about 10 years ago. Truly, these idiots at MS will never learn. The more they try to have Explorer pop things up, display little info panes, or display stuff in the status bar the more opportunities they create for this kind of stuff. Any mitigation options? Does running in Classic mode help without all the stupid panes?
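For context, the reason Explorer parses these files at all is that a .lnk carries its icon and display metadata in a fixed binary header, which Explorer reads just to draw the folder view. A rough sketch of sanity-checking that header before trusting anything else in the file (field layout per the published Shell Link binary format; the sample bytes here are hand-built, not a real shortcut):

```python
import struct

# First 20 bytes of every valid .lnk: HeaderSize (always 0x4C),
# then the fixed LinkCLSID 00021401-0000-0000-C000-000000000046
# in its packed little-endian GUID form.
LNK_HEADER_SIZE = 0x4C
LNK_CLSID = bytes.fromhex("0114020000000000c000000000000046")

def looks_like_lnk(blob: bytes) -> bool:
    """Check the fixed header fields before parsing anything further."""
    if len(blob) < 20:
        return False
    (size,) = struct.unpack_from("<I", blob, 0)
    return size == LNK_HEADER_SIZE and blob[4:20] == LNK_CLSID

# Hand-built minimal header: size + CLSID + zero padding to 0x4C bytes.
fake = struct.pack("<I", LNK_HEADER_SIZE) + LNK_CLSID
fake += b"\x00" * (LNK_HEADER_SIZE - len(fake))

print(looks_like_lnk(fake))       # True
print(looks_like_lnk(b"MZ\x90"))  # False
```

Of course the actual hole was further in, in how the icon location inside the file was resolved, so a magic-number check alone wouldn't have saved Explorer; it just shows how much parsing happens before you ever click anything.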
In a semi-related matter, my Explorer leaks file handles like mad, undoubtedly due to the half assed way MS implemented shell extensions, compounded by the idiotic way that every piece of software you install sees a need to install some useless shell extension just to prove that their zit faced developer learned a new trick. Then, not only is his company's software unstable, but my whole system becomes unstable, making it harder to identify the source of the crappy software.
I don't understand why the system doesn't just cache code changes to the kernel for review/approval; if the changes aren't approved, they're discarded on the next reboot. Would make things so much easier...
Once the OS is set up, the admin should be able to "lock" the system, no changes to the kernel are accepted from that point forward, this would solve so many problems...