Redmond's lack of a historical database...
...does not mean they cannot be served with an order to monitor an address, or allow real-time access to an IP's behaviour.
*dons extra-lux tinfoil hat* (built-in earmuffs)
Microsoft has moved to quell fears that Windows 8 is building up a detailed record of all applications stored on client machines via its SmartScreen application. An analysis by security researcher Nadim Kobeissi noticed a potential privacy violation in Windows 8's SmartScreen system, which checks applications that the user …
If they get caught doing something, they're still going to deny it.
Whatever they do, they will deny until they have their face shoved into the proof that they did it, then they will try and make everyone believe that it was "a mistake" or "a feature in beta" or whatever else passes for an excuse in la-la-land.
"Currently the SmartScreen system does use application information stored at Redmond to validate local apps, hence the information is collected. But Kobeissi points out that the need for this could be eliminated if such data was stored locally on the client end and updated regularly."
So what happens if you install something while you are offline?
If this is part of Windows, and everyone has it, I don't think it will take long for the bad guys to find a way around it.
"So what happens if you install something while you are offline?"
You get a message telling you that your system is unable to verify the program with Microsoft, and that it therefore can't tell you whether the program is safe or not, and asking whether you actually want to proceed. It's similar to the message you get if the program you are installing isn't known to Microsoft, with the exception that it explains the failure is due to being offline, so that you can go online if you choose and check it.
"You get a message telling you that your system is unable to verity the program with Microsoft"
Ah, so another message the user will blindly click yes to.
Kobeissi's point still stands - if this is stored on the client end then it wouldn't NEED to ask you to go online to verify, it'd be able to do it there and then. And if you're offline, the likelihood of you installing some software which wasn't yet on your cached list would probably be fairly low.
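Kobeissi's suggestion of a client-side cache can be sketched in a few lines. This is purely illustrative: SmartScreen's actual data format and hash choice aren't public in this thread, so the cache contents and function names below are made up.

```python
import hashlib

# Hypothetical locally cached reputation data, refreshed while online.
# The single entry is the well-known SHA-256 digest of an empty file,
# used here as a contrived "known bad" stand-in.
CACHED_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_known_bad(installer_bytes: bytes) -> bool:
    """Check an installer against the local cache -- the lookup needs no
    network round trip, so no record of the install leaves the machine."""
    digest = hashlib.sha256(installer_bytes).hexdigest()
    return digest in CACHED_BAD_HASHES

print(is_known_bad(b""))       # True: matches the contrived entry
print(is_known_bad(b"hello"))  # False: not in the cached list
```

The trade-off, raised further down the thread, is the size and freshness of that cache versus the privacy of never phoning home.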
I'm surprised there wasn't (or I didn't hear about) such an uproar over MS IE SmartScreen. As I understand it, Firefox does the same thing, but caches a list of bad websites locally.
"Ah, so another message the user will blindly click yes to"
Er, what would you like, a message telling you that if you want to install something you first need to go online and get Microsoft's permission?
"if this is stored on the client end then it wouldn't NEED to ask you to go online to verify, it'd be able to do it there and then"
Yes. I really want my machine frequently downloading a multi-gigabyte database on the off-chance I might want to install something offline. Not to mention that the entire point of a reputation service is to recognise commonly installed software, so it'd still have to go online to let the service know that what you've just installed has become a little more commonly installed.
And quite why you don't hear exactly the same complaints about Linux repositories, which are equally in a position to monitor everything you've got installed, is a whole other matter...
This post has been deleted by a moderator
This simply should not be allowed to happen. What it means is that if a business tool which competes with MS is installed on your machine, MS can selectively load down "security patches" to cause it to fail.
They have done this before, when only one product competed (Netscape), but now they can treat competition like a virus.
The people at MS demonstrably do not have the moral or intellectual integrity to be allowed this data and should be stopped from obtaining it.
In the meantime, another good argument for avoiding .NET development.
"An irrefutable advantage of Linux is that Linux respects your privacy and does not sell information about you. Now before some block head chimes in with "but it might do, you did not read the source code", yes that's a possibility but if such a trick WAS found, then the geeks would quickly latch onto it and there would be a scandal, so the risk is low."
apt-get sends far more information about the software you're installing than Microsoft's Smartscreen does. And nobody gets to see the source code of what runs on RedHat/Ubuntu/Debian etc's back end or what they do with any datamining they do of information collected there.
Food for thought.
Do you really have the balls to claim that Debian, the Debian GNU/Linux distribution, the Mecca of open source, spies on its users?
Heard of the thing called open source / GPL? Every single thing, including the freaking apt mirrors, is out in the open.
You also clearly don't know why security.debian.org is treated as a very different domain, with GPG signing etc.
Just leave Debian out of it next time and maybe someone will buy your argument.
This post has been deleted by its author
It seems that any software which purports to protect the user feels obliged to feed back what the user is up to in order to provide the best protection. Didn't Microsoft Security Essentials require this sort of feedback in order to use it?
I noticed a few things about Avast recently:
1. When certain applications are loaded, it sends out a POST request.
2. Whenever a scan has completed, it sends some cookies (as far as I can tell) to Google Analytics.
I guess it is all legit, but it seems you really need to look at the Ts & Cs closely these days.
"which checks applications that the user wants to install against a database of known dodgy code"
Not according to that link to Microsoft's description:
"application downloads without established reputation result in a notification (see below) warning them that the file may be a risk to their computer."
So it doesn't warn about apps on a known bad list, it warns about apps not on a known good list.
This effectively means that only Microsoft approved applications and applications from large companies can install without a warning message.
If you're an individual programmer or a small company just starting out, everyone who installs your program will get the warning.
And by the time you manage to get on the list, you'll have issued an updated version and have to start again.
Just how complicated, time consuming and expensive is it going to be to get onto that list?
Could it by any chance turn out to be something that big companies can manage easily but small companies and individuals can't afford?
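The distinction this comment draws is worth making concrete: a blocklist warns only about known-bad apps, while a reputation allowlist warns about anything not yet known-good, which inevitably catches brand-new software from small developers. A minimal sketch, with entirely made-up reputation data (the real service keys on file hashes and signatures, not names):

```python
# Contrived reputation data for illustration only.
KNOWN_BAD = {"fakeav.exe"}
KNOWN_GOOD = {"firefox-setup.exe", "gimp-setup.exe"}

def blocklist_warns(app: str) -> bool:
    # Warn only when an app has an established bad reputation.
    return app in KNOWN_BAD

def allowlist_warns(app: str) -> bool:
    # Warn whenever an app lacks an established good reputation --
    # including a harmless new release nobody has installed yet.
    return app not in KNOWN_GOOD

new_app = "my-indie-tool.exe"    # brand new, no reputation either way
print(blocklist_warns(new_app))  # False: a blocklist stays silent
print(allowlist_warns(new_app))  # True: an allowlist flags it
```

That "flags the unknown" behaviour is exactly why new releases from small vendors trigger the warning until a reputation builds up.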
"Just how complicated, time consuming and expensive is it going to be to get onto that list?"
"Could it by any chance turn out to be something that big companies can manage easily but small companies and individuals can't afford?"
Yup, though I suspect there'll also be an element of: if you're in the Metro^H^H^H^H^H Windows8UI store, we'll make things cheaper and easier, as you won't be subject to this check.
"Just how complicated, time consuming and expensive is it going to be to get onto that list?"
Not particularly difficult or expensive. Signing your code helps, because you can build a cumulative reputation across multiple apps, though it isn't required; I've downloaded plenty of small, unsigned apps that passed SmartScreen in IE previously.
And, to be quite clear, if the app passes a SmartScreen check, it suppresses prompts that previous versions of Windows gave you; if a reputation hasn't yet been established, you just see the same prompts that have always been there. Obviously if an app has a bad reputation the experience is different, but that's exactly what is wanted.
If you want to digitally sign your code, a signature from an authority such as VeriSign will cost around US$895 per year. There might be others out there that are cheaper, and if you're talking about selling through the Windows Store, then you don't have to do this: MS will certify your code, but charge you a commission for selling through their store. So it depends on what business model you want to use. It's not complicated at any rate; anyone capable of the complexity of writing a saleable program will manage the code-signing process. US$895 is a lot for an amateur to pay just so that people don't get warnings, but it's a minor cost for even a small company. It depends what situation you are in and how worthwhile it is to get rid of that warning. But the warnings are a good thing. People should think about what they're doing when they install software.
"This effectively means that only Microsoft approved applications and applications from large companies can install without a warning message."
Definitely doesn't require a "large" company. It is an issue for the lone programmer working in their own time on small projects, but these might typically be able to live with their small userbase having to click a warning message. Note that once you are able to sign the code, you can sign new versions too. Your post suggests that this is a long process, but it isn't an issue.
If all you need to do is sign the code then the fake AV guys can afford a key. Are you saying their scam software will show up as good?
I would not be surprised if there is a fee and some running around to get on the good list. Or you can just sell it on the MS app store...
"But the warnings are a good thing."
No, meaningless warnings are a bad thing; they will just add to the average user's tendency to click yes to anything without reading it. If your computer is always crying wolf, no one will notice a real wolf when it shows up.
"If all you need to do is sign the code then the fake AV guys can afford a key. Are you saying their scam software will show up as good?"
Well firstly, if someone has to register and pay to get their malware signed, and typically you would expect a company to be doing this, then that is already a step toward catching people. Secondly, you forget that the moment it is identified as malware the key can be revoked.
"No meaningless warnings are a bad thing"
Are they meaningless? In this day and age, any commercial software should be signed. As has been pointed out below, the cost to do this is pretty low, apparently <£100. Which addresses the below comment:
"If your computer is always crying wolf, no one will notice a real wolf when it shows up."
It won't, because how much software would the overwhelming majority of users be installing that wasn't signed? If you stick Adobe PDF Reader or the GIMP on there, you would expect it to be signed. So the computer will be very far from "always" crying wolf. Secondly, you are arguing in favour of a system whereby no cry is raised at a real wolf, which does not seem safer to me than a system with the occasional false positive.
"Well firstly, if someone has to register and pay to get their malware signed, and typically you would expect a company to be doing this, then that is already a step toward catching people."
The fake AV guys already have credit card processing, toll free phone numbers, websites, shell companies. A key is hardly a big deal.
"you are arguing in favour of a system whereby no cry is raised at a real wolf"
No I'm not, that's what MSE is for. It will say This is a Wolf.
This stupid system will not say This is a Wolf, it will say this might be a Wolf, the same thing it said with all the existing software you bought for your old computer and are now installing on your new Win8 box.
Signing the code isn't sufficient to pass SmartScreen checks. The only difference with signed code is that multiple versions of an executable, or multiple executables by the same vendor, can 'share' reputation. So, for example, every new version of Firefox can keep the reputation built up by previous versions.
If malware is signed then at best it has to go through the same level of checks to pass Smartscreen checking as completely unsigned code. On the other hand, if it gets picked up as malware, then every application they've signed with that same key gets onto the potential malware blacklist. So signing is of no benefit at all to a malware author.
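The reputation-sharing and revocation mechanism described in these two posts can be sketched roughly as follows. The data model and names are illustrative only, not SmartScreen's actual implementation:

```python
# Reputation attaches to the signing key, so flagging one signed binary
# as malware taints everything else signed with the same key.
signed_by = {
    "app-v1.exe": "vendor-key-A",
    "app-v2.exe": "vendor-key-A",  # same signer as v1
    "other.exe":  "vendor-key-B",
}

revoked_keys = set()

def flag_as_malware(app: str) -> None:
    """Revoke the key behind a binary identified as malware."""
    revoked_keys.add(signed_by[app])

def is_blacklisted(app: str) -> bool:
    return signed_by[app] in revoked_keys

flag_as_malware("app-v1.exe")
print(is_blacklisted("app-v2.exe"))  # True: shares the revoked key
print(is_blacklisted("other.exe"))   # False: different signer
```

This is why signing is a liability for a malware author: one detection takes down every binary under that certificate, while for a legitimate vendor the same mechanism lets new versions inherit earned reputation.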
So am I right in assuming that to get approval MS get to see all of your 'trade secrets' of your source code, quite possibly to copy (sorry, "influence") for new MS products, but you don't get to see theirs?
If you have to bare all, at least go open-source and maybe get community help in bug-fixes, etc.
"So am I right in assuming that to get approval MS get to see all of your 'trade secrets' of your source code, quite possibly to copy (sorry, "influence") for new MS products, but you don't get to see theirs?"
No. For a start, you can use a certificate from any company you like that is recognized - e.g. VeriSign, StartCom or others. There's quite a bit of choice, just as you can buy a SSL certificate for your domain from a range of providers. Secondly, source code is not signed, compiled binary code is signed. If you think about it, signing source code would mean that you could only verify that source code and that you would then have to carry out the entire compilation process during each install.
So just to reiterate, it's compiled code that you sign, not source code (unless you want to for some reason, but that's not what is being talked about here), and it can be signed with a certificate purchased from a range of providers, not just MS.
I'm impressed Microsoft turned the SSLv2 support off within hours of it being pointed out - though of course it would be better if they'd turned it off unprompted in the first place.
Code-signing can be a lot cheaper than that; I've used Startcom in the past, who charge a handling fee of something like $30 or $40 for the validation involved, and even Verisign were a lot cheaper than $895 - more like $99 for the year. (There's a hefty discount if you sign up via Microsoft's promotional link rather than directly.)
is this in any way worse than Android apps having access to all your contacts, emails, geo-locations all being served up for anyone who wants them???
At least MS allow you to turn the feature "off-ish".....
More than the chocolate factory allows!!!!!!
Me? I'm still with symbian......
"Me? I'm still with symbian......"
Clearly, as you obviously don't understand the Android permission system. Each of the three things requires a separate permission that an application must request, and by requesting those permissions, you are clearly told about it at installation time.
Whether the user chooses to read or ignore that is their business. This is the same choice they will have to make if/when this warning dialog appears (read or ignore it).
It's to frighten ordinary users off open source, every time they try to install the majority of OSS apps they will get a warning. Nothing else.
MS are like a headless chicken, fruit on one side, penguins on the other with declining market share, they have lost sight of how to make things people want, they have just relied on being the de-facto OS/software house.
Nothing new or exciting from MS in years, just more give us your money releases of the same old, but wearing a different frock to fool the masses.
"It's to frighten ordinary users off open source, every time they try to install the majority of OSS apps they will get a warning. Nothing else."
I'm sure that the Mozilla Foundation or Apache can spring for £100 to have their code signed. It doesn't even have to be signed by Microsoft. Signing your code is good practice. In what way would the OS flashing up a warning that you were about to install software that couldn't be verified be inaccurate?
"I'm sure that the Mozilla Foundation or Apache can spring for £100 to have their code signed."
I'm sure they can, but what about the smaller and less well funded projects? What about derivative releases of a bigger project?
Not that code-signing is necessarily a bad thing, but as others have said this system offers no real protection in that users will probably continue just to click 'yes'. Those who release the nasties will find a way to sign their code, and the cycle of catch-up will continue. In fact, from where I'm sitting, the only one who stands to gain is MS by pushing more of the smaller devs towards the Metro store.
"I'm sure they can, but what about the smaller and less well funded projects? What about derivative releases of a bigger project?"
Whilst I don't want to trivialise costs for anyone, a project has to become pretty small before getting your code signed becomes a relevant part of your costs. I looked up the cost with Startcom and they sell a certificate you can use to sign code for US$59. And just to be clear, you can sign as many programs and versions of programs as you like with that. Anyone releasing software that finds that significant will just have to accept the warnings, I would guess. It might be a shame, but digitally signing code is a good thing to have as an industry standard. So basically, my answer to your question about smaller and less well-funded projects, is that these will be fine too.
"but as others have said this system offers no real protection in that users will probably continue just to click 'yes'"
If signing software to be installed is the default (as it will become), then unsigned code really will stand out and will therefore more likely make users think.
"Those who release the nasties will find a way to sign their code, and the cycle of catch-up will continue."
Some will, but signatures will get revoked, and revoked fast. Repeatedly. Not only that, but when a piece of malware has been signed and registered to a company, you can quickly check all the other things that company has signed. You say "will find a way to sign their code", but if you're having to go through a whole new registration process as a new company / individual and pay a new fee every single time Kaspersky Labs or MS or whoever notice and report your latest malware, that rapidly becomes a real nuisance. Which would you rather?
" In fact, from where I'm sitting, the only one who stands to gain is MS by pushing more of the smaller devs towards the Metro store"
I think smaller devs will already gravitate toward the Windows Store. They want the advertising, the security, the streamlined way of getting paid and handling licences, and hopefully the reduced piracy. I don't think this will be a factor one way or another. I mean, if you feel that the Windows Store isn't the best fit for your business model, paying US$50 is unlikely to tip the scales toward it, imo.
"I looked up the cost with Startcom and they sell a certificate you can use to sign code for US$59. And just to be clear, you can sign as many programs and versions of programs as you like with that. Anyone releasing software that finds that significant will just have to accept the warnings, I would guess."
Some would probably argue that it's potentially an un-necessary additional cost, but as you say it comes down to a decision as to whether to pay the cost or accept the compromise of the warnings I guess.
You say "will find a way to sign their code", but if you're having to go through a whole new registration process as a new company / individual and require pay a new fee every single time Kapersky Labs or MS or whoever notice and report your latest malware, that rapidly becomes a real nuisance.
A nuisance yes, but considering the potential sums you can make from a good (I use that term loosely) piece of scareware, $59 a shot isn't bad. Of course, if things are picked up on quickly enough then the potential bounty per-go is reduced. That does rely, though, on the CA's being quick to revoke the relevant certs, can't say I've much faith in that but time will tell either way.
"I mean, if you feel that the Windows Store isn't the best fit for your business model, paying US$50 is unlikely to tip the scales toward it, imo."
For you or I, no probably not. For some? possibly, though it's not necessarily a bad thing by default.
To be honest, we're in the early stages of code-signing (in that there's a hell of a lot of unsigned code out there), so things probably will improve as more code gets signed. In the meantime though, I suspect it's just going to be viewed as another inconvenience by the average user, and that's the period where we run the risk of people 'learning' that you can just click 'Yes'.
"A nuisance yes, but considering the potential sums you can make from a good (I use that term loosely) piece of scareware, $59 a shot isn't bad."
Just to re-emphasize what I wrote, it's not $59 a shot for your malware. If you only made one version and only had to come up with a fake company or register with a fake passport once, then it's a nuisance, but not as bad. But a piece of malware typically goes through scores of iterations, both because the writer needs to change how it works to deal with changing servers, etc., and because they're engaged in an ongoing war with those who detect and identify instances of malware. Combine that with the fact that you will need a separate fake business or fake individual for each version (unless you want to see everything taken down the moment one thing signed under that certificate is detected), and that every instance means a new payment that is a potential lead in tracking you down... It's a significant thing. It's not "$59 a shot".
"But a piece of malware goes through scores of iterations typically. Both because the writer needs to change how it works to deal with changing servers, etc. and because they're engaged in an ongoing war with those who detect and identify instances of malware."
$59 a shot then. If you sign each iteration with a different key, then that's $59 a go. Admittedly that's ignoring the potential cost of faking a business, but it does seem that the malware guys have that in hand anyway. That's also assuming we can continue to trust the CA's to verify things correctly, otherwise it gets even easier.
If one iteration nets you $4000 (so 100 marks at $40 each) and you've paid $59, I'd say that's still a pretty effective business model.
I think you are taking a biased position on this, because you repeatedly dismiss good things that make the community more secure with reasons that boil down to: it doesn't solve everything 100%. That suggests to me that you are trying to dismiss this rather than evaluate it. Well no, it doesn't solve everything perfectly. But it certainly helps. In one small move, you have a situation where a new key must be obtained, paid for, and a fake business created for every minor variation of a piece of malware if they want to avoid triggering warnings to users. And it's a system that relies on a very minor cost to legitimate producers.
"If one iteration nets you $4000 (so 100 marks at $40 each) and you've paid $59, I'd say that's still a pretty effective business model."
Take a look at this: Malware Definitions. Notice how many pieces of malware come out in a day. Nearly all small variants and iterations on previous ones. Look down the list and see how many of these are trojan types. These are ones that require the user to install them (normally).
Why are you a priori against this when there are significant demonstrated benefits and the cost is so low? I mean, what's next, are you going to start posting about how Debian and RedHat should remove the hash signatures for all the packages you download and install on them?
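For comparison, the Debian/RedHat mechanism alluded to here works entirely client-side: the package manager checks a downloaded package against a digest published in a signed repository manifest. A stdlib-only sketch of that check, with contrived manifest contents:

```python
import hashlib

def verify_package(package_bytes: bytes, expected_sha256: str) -> bool:
    """Return True iff the package matches the digest taken from the
    (already signature-verified) repository manifest -- no per-install
    report goes back to the repository."""
    return hashlib.sha256(package_bytes).hexdigest() == expected_sha256

pkg = b"pretend .deb contents"
# Stand-in for the digest a signed Release/Packages file would publish.
manifest_digest = hashlib.sha256(pkg).hexdigest()

print(verify_package(pkg, manifest_digest))                # True
print(verify_package(b"tampered bytes", manifest_digest))  # False
```

The integrity check itself never needs to tell the mirror what was installed, which is the contrast being drawn with an online reputation lookup.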
"We can confirm that we are not building a historical database of program and user IP data," a spokesperson told El Reg
... "It wouldn't make any sense to. Since we're moving to the same closed app-store type architecture as Apple, we'll soon have a historical database of program and credit card data -- that we have to retain for pretty much ever, because you'd complain if you couldn't reinstall that fart app. Credit card data is directly linked to an individual, so what's the point of keeping the less-traceable IP address? Sheesh!"
"And why can't this trafic be stopped using Windows or a third party Firewall? Firewall work in outbound as well as inbound traffic."
Of course it can be blocked. Though as you can turn this feature off, it would be simpler to merely do so. Is it actually a major concern that your PC checks software you install against an online database, though?
It depends. Obviously, if the service can be disabled (and MS doesn't re-enable it every Patch Tuesday to save you from yourself), that's the best solution. As for filtering, it would work best if you blocked the service with a software firewall on the local machine. You probably couldn't filter it at the network edge if it's using SSL and common ports/IPs; for example, if this service was directed at the same IP block as Windows Update, you would block Windows Update as well. Being encrypted, you wouldn't be able to do any deep packet inspection for application-level filtering either. The point is that you'd have to find some way to uniquely identify the traffic in order to block it.
That's why malware using SSL to contact a randomized C&C server would be hard to detect with packet sniffing alone... well unless you have no other SSL traffic on your network which is rare.
Well let's try and work it out.
Essentially all computers come with Windows installed. To have Linux you either:
1) Build your own computer from parts
2) Delete Windows (which has been paid for) and install Linux
3) Try and find a retailer who isn't cowed and buy from them
4) Get a second-hand machine/hard-drive broken machine and fix and install (really a variant of 1 but useful for laptops)
Given that most people don't even know about alternative operating systems, and most others have to put up with whatever their employers/schools provide, I think 1% of desktops using Linux is actually a very creditable number.
"Try and find a retailer who isn't cowed and buy from them"
Not that difficult. How about Dell? They gladly sell you any of their business class computers without Windows. Or just ask your local HP Partner, as HP does the same. As do most other major vendors like Lenovo or Fujitsu. Simply because most companies buy their kit without OS license.
Saying that there are barely any ways to buy standard PCs or laptops without Windows shows that you didn't even bother to check the facts.
A touchstone of military strategy is that you plan based on what other countries can do, not what they say. You can be sure the US military knows what, for example, the Brits are capable of, and have contingency plans to deal with it.
The same must apply to any computer or software you deal with. Microsoft sells you software which can phone home with a notification of every app you install. All planning thereafter should be based on what they could do with this info in the worst case. Just like the gummint saying "Don't worry that we can wiretap you without a warrant. We'll be good, promise!"
They said they "eventually" delete IP addresses. "Eventually" can mean 1000 years. They do not deny your app list is sent to them, which is the heart of the problem. I'm quite sure the SSLv2 thing was an oversight, and I wouldn't hold that against anybody. I'm glad I don't use Windows though, knowing they plan to send lists of your software to themselves. None of Microsoft's business! Indeed, the right way to do this (if they bother at all) is to update a blacklist locally, NOT to send everything to Microsoft to compare against Microsoft's blacklist.
The monthly updated Windows Malicious Software Removal Tool, from Windows XP (as an option) onwards, has a licence that authorises Microsoft in perpetuity to delete files on your PC that in their opinion shouldn't be there. I assume that this includes Linux, or it will at their discretion. I mean, they'll delete Linux.
The licence for Windows 7 itself needs to be read very carefully with this in mind. But this time you don't have a choice of not using it.
"The Windows monthly updated Malicious Software tool, from Windows XP (as an option) forwards, has a licence that authorises Microsoft in perpetuity to delete files on your PC that in their opinion shouldn't be there. I assume that this includes Linux, or it will at their discretion. I mean they'll delete Linux."
You are the definition of Tin Foil Hat. You're actually suggesting that MS are going to classify Linux as malicious software and set Security Essentials to delete it? Have you any idea how paranoid you sound? I run Linux alongside Windows all the time. Many, many people do. We haven't had any problems. What possible evidence do you have that MS would do something so massively self-destructive as you propose?
"This way, if Microsoft catches an interesting program early on. It will use (copy) the concept right away and get its lawyers involved earlier on. I smell patent heaven."
You don't seem to know how the code-signing process works or what it is. If you buy a Cert from Verisign to sign a binary blob so that it can be authenticated when someone installs it, that is not the same as you sending your ideas to MS for approval.
I do remember this "technology" blocking a safe download of a patch from a very big name (can't remember which), saying it hadn't been downloaded many times. I was ready to hit VirusTotal and Kaspersky to analyse the file until I noticed the reason for the "security alert": the file isn't popular.
Yeah, but let's face facts here: the "big name" games companies can afford to sign their code so that it can be more easily verified (which, given the tendency of patches to get uploaded onto various third-party fan sites, would be a good thing). And given the number of those same companies who've shipped shoddy code patches in the past, it might actually be advantageous for end users to know that not many people have tried this buggy patch yet! ;-)
Biting the hand that feeds IT © 1998–2021