The mind boggles.
That an NSA hacker is running Windows on a sensitive machine is one thing, but that it's also virus-infected is beyond belief. You'd think that a professional would be on top of such things.
An NSA hacker has admitted taking home copies of classified software exploits – understood to be the cyber-weapons slurped from an agency worker's home Windows PC by Kaspersky Labs' antivirus. Nghia Hoang Pho, 67, pleaded guilty in a US district court in Baltimore on Friday to one count of willful retention of national defense …
but didn't Kaspersky give a false positive and proceed to upload his secret stuff?
Almost correct, except that the positive wasn't "false" - otherwise spot on. In the case of the NSA malware: that's still malware. In the case of the other malware: we'll never know whether his computer was infested with active malware or whether he just had samples he was studying.
Unfortunately 'software does its job and Russian spooks also do their job' is not an exciting enough headline.
I had seen a brief piece on the news this morning about it where a tech reporter was explaining their blog remarks that said Kaspersky was fine for UK users unless you were doing super high security stuff for one of the agencies of our own glorious government. Which would surely apply to any AV software.
But really this just seems to be going back to trying to argue over who is the most guilty for these exploits being used in the wild - it's all relative and I like the egalitarian approach of just throwing all of them into the same pit.
Use as wide a definition of 'all' as you see fit.
AV software might possibly be somewhat useful for random home users. It might also be useful to scan all incoming mail in a corporate setting and such.
It's definitely not useful in a high-security setting with an advanced threat model. Attackers in that case are much more likely to compromise you through the AV than be stopped (or even considerably hindered) by it.
>It's definitely not useful in a high-security setting with an advanced threat model.
I guess TAO doesn't count as an advanced threat then, since Kaspersky picked it up.
Seriously, if AV picks it up, the code is useless - don't be distracted by "Ooooh - magic source code". Maybe it's just the American AV that's rubbish and wouldn't quarantine it.
"didn't Kapersky give a false positive and proceed to upload his secret stuff?"
His secret stuff was malware, recognised as such and uploaded for analysis. Subject to the user's configuration, that's what AV packages do. It's how they stay up-to-date on the malware they're supposed to be detecting. It just happened to be NSA-written malware.
As it was included in a zip file the whole zip was uploaded and found to contain the source. Oops.
Not a false positive; apparently he copied onto his PC the latest version of some NSA hacking tools. These were indeed active malware - designed and built by the NSA for hacking.
They were picked up (it is said; who knows the exact truth in these matters) because these packages were similar to existing exploits that Kaspersky was aware of, and one of their algorithms flagged the files as probable malware and uploaded them to the Kaspersky servers to be further analysed.
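As an aside, the "looks similar to known malware, so upload it for analysis" heuristic described above can be caricatured in a few lines of Python. This is purely illustrative - real engines use emulation, fuzzy hashing and ML classifiers, and every name, threshold and sample string below is invented for the sketch:

```python
def ngrams(data: bytes, n: int = 4) -> set:
    """Byte n-grams of a sample: a crude stand-in for a real feature set."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity between two samples' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

def should_upload(sample: bytes, known_malware: list, threshold: float = 0.5) -> bool:
    """Flag a sample for cloud analysis if it resembles any known family."""
    return any(similarity(sample, k) >= threshold for k in known_malware)

# A new sample that differs from a known family only in one byte per period
# still shares most of its n-grams, so the heuristic flags it.
known = [b"EQUATIONGROUP_payload_stage1_xor_loop" * 4]
new_sample = b"EQUATIONGROUP_payload_stage2_xor_loop" * 4
print(should_upload(new_sample, known))
```

The point of the toy model is just that a *variant* of known malware gets caught by resemblance, not by exact signature match - which is consistent with the account that the NSA files looked like existing exploits Kaspersky already knew about.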
"Kaspersky Lab has denied any wrongdoing in the matter or illicit ties to Russian intelligence. The security vendor also pointed out Pho's machine was infected with loads of malware, meaning any miscreant could have stolen Uncle Sam's cyber-weapons."
So... Kaspersky is saying their product wasn't effective at keeping malware off Pho's PC?
"...concerned that AV software needs to upload random content from my computer to the cloud?"
Definitely not random content. Actually it is very specific content, i.e. the binary of the malware.
Content that is definitely not yours and that you don't want to be there. Unless you are a malware writer, like this guy.
Yup, this is true and basically renders the whole justice system invalid. But it's also obvious that the Police or the justice (hah!) department see no problem at all - it's to their advantage.
Pleas are always arranged outside of the court so a) the public never sees the evidence and b) there's no proof that the accused is actually guilty.
The basic idea is that when you don't have evidence, you pile on so many charges that the accused accepts a plea and no court is needed. Basically a police state where you are guilty because the Police say so and the prosecutor decides the sentence.
Ironically, it's the same system Russia has had since the Tsars. Also this "we'll confiscate your money if we feel like it" legislation - that's also borrowed from Russia.
No, he's definitely not a moron - they don't employ morons in the NSA. He's most likely a highly talented professional who was so focused on what he was doing that he forgot what everyone else was doing. I see this a lot - very bright people who have such a blinkered view of life that they (and me too sometimes) make stupid mistakes.
This was obvious in hindsight ... come to think of it, I'll put that on my tombstone.
Being very clever does not preclude you from being a moron, they are not diametrically opposed concepts.
What it does mean is that when a genius *does* balls something up, it tends towards the spectacular.
Genii are often only clever in certain specialized areas and can quite easily be as dumb as a post in others. My wife is often commenting that I'm the cleverest ****ing idiot she's ever met :)
No, he's definitely not a moron - they don't employ morons in the NSA. He's most likely a highly talented professional who was so focused on what he was doing that he forgot what everyone else was doing. I see this a lot - very bright people who have such a blinkered view of life that they (and me too sometimes) make stupid mistakes. /sarcasm
I wouldn't say he's being punished for being unlucky... more like for being stupid. Stupid to take work home and run it on an insecure personal computer. The guy was a moron to do that.
Considering the value of the prize, I'd say that any internet-connected personal computer is sufficiently insecure. I can understand him wanting to look at work after hours so he could experience karoshi at home. But why would he put it on an internet-connected computer? The only purpose I can imagine is to deploy the NSA malware in his possession. Say it ain't so, Pho.
I suggest that a fine and retirement is a more suitable treatment than the slammer. But if that's what NSA wanted, why did it become a court case? Surely this isn't good for the image of the NSA. Slightly good for the image of Kaspersky, but not the NSA.
I'm probably going to tick off a lot of people here but...
The guy was sixty-seven? Seriously? SIXTY SEVEN?
No wonder the NSA can't seem to get their act together - they are employing the Gentrified Squad. You know, the people that are literally past the retirement age.
- and, yes, I say that as an IT guy with FAR too many years under my belt. The difference seems to be that I recognize that the tools have moved past me so I now lead by experience instead of completely screwing up by staying in the trenches.
Unlike this guy. Now I understand why he tried using a keygen to crack MS Office - he's still trying to relive his glory years from the early '90s.
I don't know either, but if he's referring to me, then no doubt the old NSA guy is indeed likely smarter. And I can make snide comments questioning what's going on with a home machine used for work, but the bottom line is that I'd never qualify for that guy's job in the first place.
What is this "retirement age" you speak of? There is no such thing anymore.
I plan on working until I'm dead. Many reasons why, not the least of which is that I perceive "retirement" to be a BORING concept. Now, infinite money and "I can do what the hell I want" sounds great but it would probably involve computers, electronics, and me running the show. "Retirement", however, is NOTHING like that. The words "fixed income" make me wanna lose my lunch. And gummint austerity is double-nauseating.
> "Retirement", however, is NOTHING like that. The words "fixed income"
> make me wanna lose my lunch. And gummint austerity is double-nauseating.
Well, as a republican, it's what you cheered for. Far more important to give the money to the war machine, the 1%, and big pharma!
"No wonder the NSA can't seem to get their act together - they are employing the Gentrified Squad. You know, the people that are literally past the retirement age."
Ahem, I think you meant to write Geriatric Squad. I know, as you get older, word retention goes a bit loopy.
As maybe has your knowledge of how fast retirement age is receding into the future. If you are under 57, start worrying your coding skills won't last that long.
What's wrong with 67? According to the SSA he's just one year past his full retirement age. When you consider he gets an 8% bonus for each year until he's 70, it could be pretty smart if he lives long enough to enjoy the ~32% increase. I suppose prison is one way to put off applying for social security, and 6-8 years seems a bit harsh for someone who was essentially a workaholic.
Oh I don't know, I moved house once and left a shoe-box in the attic - when I went back to retrieve it I found that the new occupants had already found the contents and were tearing up the cards to use for roaches - it was good stuff they had, so I said the heck with it and bought an ounce, went home and forgot all about the problem.
The new owners took a few days to realize that I had an extra set of phone wires running in so I could do cross-LATA forwarded calls for my data streams. This was back in the days when you might pay a roaming fee when the other party was one mile away.
Oh, forgot to say that the new owners worked for one of those 3-letter agencies. SNIP!
"Well, all I can say is: I never had problems like these when I was working with FORTRAN77 and punchcards."
My professor always sent me notes on the back of a used punch card. Now if I had collected them all together, in order ... what are the chances they would have generated some of this:
>Linux, BSDs, Mainframes
The use of these platforms and others is only part of the solution.
You also need to look at the programmers' workbench toolset you are using, specifically the code repository and version control and build systems; if these are cloud-based...
Obviously a goal of malware development would be when your current version runs without the AV reporting back to base.
But that's more an "acceptance test" the malware is ready for use.
And you wouldn't actually let the AV report back to base in the first place.
Because that would be kind of stupid.
" ...by Russian authorities to steal top-secret NSA documents and tools in 2015."
The opinion of the editor is quite visible here and it's even wrong.
When virus protection does what it's _designed to do_ it's not stealing and I bet there's mention of that in the EULA, so totally legal too. Anyone claiming that downloading a sample of new malware (so called "NSA tools") is not what it should do, is an idiot.
So the article has replaced facts with the opinion of the editor. Not nice at all.
Also it's Kaspersky, not the Russian authorities. But since in the US/UK there's no difference, the same situation has to be present everywhere else too, right?
How opinionated can you get, eh?
"The opinion of the editor is quite visible here and it's even wrong."
Really? Just let's go back to the article and get the fuller version of what you quoted:
Pho is understood to be the Tailored Access Operations (TAO) programmer whose home computer was running Kaspersky Lab software that was allegedly used, one way or another, by Russian authorities to steal top-secret NSA documents and tools in 2015.
Do you notice that word there: "allegedly"? Maybe you also missed the reports of the USG making such allegations for some weeks now. This entire paragraph is just straight reportage.
Read the article again.
Also, reading some of the coverage from the time it happened would help.
There are two allegations here.
The first is the guy taking his sensitive work home.
There is no issue with Kaspersky detecting and taking the malware. It is, however, further alleged that the Russian spooks then either hacked in and stole the stuff in question from Kaspersky, or that it was simply handed over to them by Kaspersky.
The issue isn't the malware being uploaded, it's with things that potentially happened after that.
"What's a good replacement for Kaspersky?"
Do you mean apart from not using Windows in the first place?
A lot of people here would reckon that they have more to fear snooping from their own governments than from a foreign government so, unless their government is Russian, Kaspersky would be their AV of choice.
[quote]What's a good replacement for Kaspersky?
Serious recommendations only please.[/quote]
Apparently it doesn't matter if it's good or not. Let's not forget that even though the villain in this piece (Pho) had Kaspersky's product on his computer, it was still riddled with malware! (Yes I know he reportedly disabled it at one point to install some sort of key cracker for Office, but that doesn't seem to explain it all...)
Has everybody missed the fact that even though the villain in this piece (Pho) had Kaspersky's product on his computer, it was still riddled with malware?
The article says that the NSA code that Pho had illegally taken from work and copied onto his home PC was detected as malware and reported to Kaspersky. There's nothing to suggest that any of this code or any other malware was active on his PC.
What's a good replacement for Kaspersky?
Kaspersky of course.
Setting the jokes aside, it depends on your threat model and what data you work with. The general rule of thumb is that if you work with data that is NOT supposed to leave your computer, you do not use ANY AV. All commercial AV includes reporting and/or cloud components nowadays. It may end up exfiltrating some of your data by mistake.
@AC "What's a good replacement for Kaspersky? Serious recommendations only please."
No antivirus at all is your only option; the actions that Kaspersky are being badgered for are normal practice in the AV field and something that the user had to agree to during the AV installation.
The whole US bunkum intentionally ignores the reports that their guy had already had to turn Kaspersky off in order to download and run an MS Office keygen carrying a Chinese trojan, before Kaspersky uploaded anything. Reportedly, when he turned Kaspersky back on again, that was when the trojan and all the other malware were detected and the suspected new malware strains uploaded for analysis, as per SOP.
Now if I were Kaspersky I would just add the US cyber-warfare signatures to their database, along with every other country's spyware attempts, and let market forces do the rest. Sod the Republican Party's rhetoric: as soon as their politicians realise that all their dirty little secrets will be available to the world, they will be screaming to have it back again. In the meantime the rest of us have some protection from the spying of our own oppressive governments.
@AC - "Now if I was Kaspersky I would just add the US cyber warfare signatures to their database along with every other countries spyware attempts and let market forces do the rest."
That's SOP for AV companies. Look up the 'FBI Magic Lantern' controversy from 2001 - 2007.
Anyway, how do you know (barring the smoking gun of source code) that a malware sample was developed by an intelligence agency? Do they tell you, if you phone up and ask nicely?
I had to do this evaluation recently in case our company was required to change AV tools. Trend (the new version) had more comparable features than any other product that didn't require cloud portals to use them (enterprise versions). I am happy with KL, and trust them more than the NSA - who intentionally, without denying it at all, leave back doors everywhere they can at everyone's expense. I want an AV tool that detects those assholes.
It does look like comparisons of Internet security software is now going to have to include consideration of where a company's servers are located.
As it is beginning to look as if the US government is spreading 'allegations', i.e. FUD, about non-US security software from places US agencies have little access to or influence over.
Naturally, all of this is outside of any trade deal/arrangements and thus the companies affected have little redress.
@lordminty "So Kaspersky are under fire because they are Russian" there FTFY.
Russians are the default fallback for civil distraction from the local problem created by western Governments and have been since the great depression.
If you live in the west then you must have seen it: "Lost everything that you had to slave for, and your family is homeless and starving? Found out your government is corrupt and making more efforts to curtail your freedoms than your country's official enemies? Forget that, we all need to pull together - the Russians are coming." It has been like a broken record my whole life.
The current US government have a lot of things they need to distract their citizens from thinking about and the old "better dead than red" has always worked so well in the past.
I never used to notice the 'OMG The RUSSIANS ARE COMING!' scare tactics in the UK, but it's like a stuck record these days.
I'm less disappointed in TPTB as they are just doing what they do - they will do it (and more) until someone stops them. No, I'm disappointed in the general fuckwit in the street who is actively supporting their own enslavement. Even if you try and point it out to them they treat you like a mental case and think you're the enemy, somehow.
I don't know, it's almost like those things powerful people often say is true - such as 'People want to be controlled, as long as they get their fix of daily soaps and there's food on the table. They don't want to think about the BIG problems so they are quite happy for all their freedoms to be curtailed just so they can go down the pub on a Friday night and get pissed'.
I only wish there was somewhere to go for people who weren't either:
-Megalomaniacs hell bent on enslaving the human race, or
-Slaves who want the megalomaniacs to do their thinking for them
Seems like only the truly criminal have the right idea - after all the system supports them far more than it does their victims.
A conspiracy can be thought of, in a broad sense, as any data-set which is known by a finite number of people, and which these people intend to keep secret from others. The NSA can be thought of as a "conspiracy" keeping certain kinds of knowledge -- its attack-and-compromise codebase, in this case -- secret from others.
However, the more people involved in a conspiracy the more likely it is to fail. From a Plos One paper, lead researcher David Grimes, on the probability that a conspiracy will be exposed:
"The analysis here predicts that even with parameter estimates favourable to conspiratorial leanings that the conspiracies analysed tend rapidly towards collapse. ... For a conspiracy of even only a few thousand actors, intrinsic failure would arise within decades. For hundreds of thousands, such failure would be assured within less than half a decade."
The paper analyzes mostly single-event conspiracies, not the case of a large organization trying to keep a body of ever-changing knowledge secret. But I kinda think a general rule applies: it becomes harder to avoid leaks, whether intentional or accidental, as the number of those with inside knowledge grows. The number of people employed by NSA is classified; it's estimated at 100,000. Surely only a fraction have access to secrets like those revealed in this incident.
But it would seem to me that the upshot is: expect leaks. Plan for them; take it for granted that they will happen.
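For what it's worth, the Grimes model quoted above is simple enough to sketch. Below is a rough Python rendering of his single-event formula (assuming, for simplicity, a constant number of conspirators N; the per-person annual leak probability is the paper's best estimate as I recall it, so treat the exact figures as approximate):

```python
import math

# Grimes' best-estimate probability that any one insider exposes
# the secret in a given year (derived from real exposed conspiracies).
P_LEAK = 4.09e-6

def failure_probability(n_people: int, years: float, p: float = P_LEAK) -> float:
    """Probability the secret has leaked after `years`, per Grimes' model:
    L(t) = 1 - exp(-t * phi), where phi = 1 - (1 - p)^N."""
    phi = 1.0 - (1.0 - p) ** n_people
    return 1.0 - math.exp(-years * phi)

# A few thousand insiders: a leak takes decades to become likely.
print(failure_probability(5_000, 30))
# Hundreds of thousands of insiders: exposure is near-certain within years.
print(failure_probability(500_000, 5))
```

Plugging in an NSA-sized headcount shows why "expect leaks" is the only sane planning assumption: with anything like 100,000 people in on the secret, the model says it simply doesn't stay secret for long.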
The problem isn't that this guy worked from home. Nor that he wasn't very good at securing his computer. Nor was it that he was using infected pirated software.
The crime was he removed classified material from a SCIF. It doesn't matter that he didn't intend to sell it to a foreign government, or do something else with it.
This isn't a workplace rule, or a contract term. This is a federal law. Part of the reason for the armed guards at the exit is to remind everyone of the rules, and how seriously the rules are enforced.
This is what happens when you stockpile exploits rather than helping to plug those gaps. I know that would be counter-productive for an agency like the TAO, but if the overall remit of an agency is founded on the principle of protecting the population from security threats then they have failed so badly that they need to be disbanded and re-started with a better mind-set.
On the other hand, what is stopping an ethical hacker from using the NSA toolkit to create a malware whose only effect is to put up a big banner on the screen saying 'YOU ARE HACKED, AND THIS IS HOW I DID IT - PATCH YOUR SYSTEMS NOW OR PRESSURE YOUR SOFTWARE VENDOR TO PLUG THIS EXPLOIT IF A PATCH DOES NOT EXIST.'?
Of course, once the tools got out into the wild I don't know why the NSA didn't do just that - they could have repaired a lot of lost trust if they had, but I'm going to go out on a limb here and suggest that the thought never even occurred to them. People who don't trust others for a living are hardly interested in being trusted, so it shouldn't surprise them that they aren't trusted. I sometimes wonder if these people are so focussed on one aspect of the world that they simply fail to see the bigger picture, because if they DO see the bigger picture and act like this anyway, then they are just proving that they can't ever be trusted and that they are, in fact, the actual enemy... QED.
I'm not saying that I endorse all, most, or any of the activities of the TAO or similar spook groups, but I feel like I should point out some obvious things:
1. Reporting individual vulnerabilities does, on its own, little to actually improve security. You can be assured that for each vulnerability known to the NSA/TAO (or any other actor), there's at least one more that's unknown to them. So reporting the vulnerability they have would hurt their own abilities without necessarily hurting the abilities of their opponents.
2. If you want your country to have the ability to spy on other countries, today this means that they must be able to conduct hacking operations. Which means that they must be allowed to stockpile 0days, because having those is an important part of actually hacking stuff.
This, by the way, is not limited to purely offensive/"first strike" actions but also includes defensive things like counter-intelligence and retaliatory actions to discourage future incursions.
It also includes realistic simulations of attacks by foreign powers to test your own intrusion detection and incident response. To do this in a realistic scenario, you need 0days, because you can be damn sure that's what an actual attacker is going to use. (What would the alternative be - hold back on patching so you can use public vulnerabilities? Intentionally introduce vulnerabilities and use those?)
While I'm all for some sort of international utopia where all countries hold hands and dance under the rainbow, this is not how the world works for any foreseeable future. You simply can not be worried about actions taken by foreign powers, say, Russia or China, and simultaneously want the US to unilaterally "cyber-disarm" (to use a somewhat stupid term). It's not logically consistent.
What they CAN and SHOULD do is stop spying on everyone all the time as described in the Snowden revelations, but this is not very related to stockpiling 0days, or groups like TAO. 0days quickly stop being 0days if you use them for mass exploitation so there's an obvious built-in incentive to only use them for the most important targets.
@patrickstar - "Which means that they must be allowed to stockpile 0days"
Why? If you keep it secret, you are extending the time that your colleagues within your agency, and your fellow citizens outside, are vulnerable to it. Defensively, you have more than one opponent, so (assuming equal "effectiveness" of the research teams) you won't be the first to discover most exploits, and getting them patched damages your opponents' ability to attack you.
Don't aim for a 0day stockpile, aim for a 0day treadmill, you keep searching for new ones before yesterday's are patched. Operationally, you can't assume your 0day is really a 0day for your enemy - maybe they found it last week, so your deployment strategy should assume that many, or even most, of the exploits are already known.
0days are like strawberries, they go mouldy quickly.
Did you actually read what I wrote? Because I specifically explained why that line of reasoning is inaccurate.
Also, exploits aren't "discovered". Bugs are discovered. Exploits are developed to use those bugs.
Finding and exploiting bugs is significantly more work than developing patches. If you report bugs as they are found, in the typical case you will be weeks or months away from having a usable exploit by the time most of your targets have patched.
And, again, since apparently my original message wasn't clear enough: It's pretty rare that the same bug is discovered and successfully exploited independently by multiple actors. So reporting them can be expected to hurt your side much more than the opponents. And it doesn't help your defense much either, since you can be pretty darn sure your opponents have bugs you don't.
This is somewhat related to the "90's mindset" I have lambasted before - that if we just keep fixing bugs after the fact then eventually all bugs will be gone and our problems over. Sorry, but it doesn't work that way.
If your threat model includes a nation-state or similar, you have to assume that there are exploitable bugs you have no idea about in pretty much everything and design proper layered security around that. Your security posture doesn't improve much because of individual bugs getting patched.
Yes, I did read what you wrote, and I found your reasoning unpersuasive.
I do apologise for my sloppy use of exploits instead of bugs (maybe vulnerabilities would be even better).
I also think I expressed this wrongly: "Operationally, you can't assume your 0day is really a 0day for your enemy - maybe they found it last week, so your deployment strategy should assume that many, or even most, of the new vulnerabilities that you discover are already known." Sorry for my sloppy writing.
"Finding and exploiting bugs is significantly more work than developing patches." I suppose that depends on the bug. There are cases where malware exploiting a bug has appeared very soon after the patch was released. Which means that either it was quite simple to exploit, so the malware developer(s) thought it worthwhile to try to catch the slow patchers by reverse-engineering the patch to understand the bug and then develop the malware using it, or they'd already discovered it and were quietly using it on high-value targets, so it was no trouble to do a mass release when it was going to loose value anyway.
"It's pretty rare that the same bug is discovered and successfully exploited independently by multiple actors." Got any statistics for that? That seems like an overly-optimistic assumption. Be a pessimist: if you've found a bug, it's low-hanging fruit that almost anyone could find and someone probably already has.
At least we agree that we have to assume there are exploitable bugs we have no idea about, and we need proper layered security.
@ patrickstar and your support for hoarding rather than closing exploits.
Given that most users do not work as spies, by knowingly exposing them to attack you are failing in what should be your primary objective, i.e. the protection of your country's citizens.
The ends do not justify the means, especially when you take into consideration the economic impact of cyber crime, the ability for allied countries to lose their infrastructure to script kiddies, and the fact that users still have to buy their computers.
This is just another example of the "security" services making certain that only they have some semblance of security and that only in retaining their jobs when they should have been locked up instead.
I did start my initial post with a disclaimer saying I don't necessarily support what the security services do, you know...
But my point is - if you want the spooks of your country to have "cyber" capabilities, you need to allow the people doing that to have the tools needed. And those tools do include exploits for 0day vulnerabilities.
And the probability of hurting your own capabilities by disclosing something is exactly 100%, while the probability of hurting even one of your opponents is much less. Unilaterally disarming would be interesting, to say the least, but I'm not sure the people calling for it would be very happy with the result.
Plus the fact that fixing individual bugs often does very little to improve security for anyone, which is always worth hammering into people's heads. If your security is only as good as the "weakest link" (whether that's buggy software or stupid users), you should fire whoever is in charge of it and hire someone who can actually do the job instead.
1. No AVs are able to protect against 100% of malware, so things can get through
2. Apparently, the user paused Kaspersky for the initial malware infection, else Kaspersky would have blocked the infection in the first place - https://www.theregister.co.uk/2017/10/25/kaspersky_nsa_keygen_backdoor_office/
Why a security expert would trust using a keygen or cracked software on their live system is beyond me.
It doesn’t matter what OS he was running at home. It doesn’t matter what AV he was running and what it did with the malware.
The big question is why an agency like the NSA wasn’t running any form of data leak protection software or preventing anyone sticking a USB key into a company device to copy sensitive information onto it.
Hardly inspires confidence in their “security” abilities.
An anti-malware product detects malware. The sample included the entire ZIP envelope, which more than likely includes a whole bunch of malware tools and apparently source code. The ability to share with Kaspersky is a user-controlled setting that is pretty clear at installation and in the settings.
Somehow the FSB gets hold of the samples, with or without Kaspersky consent or knowledge (or via some other bungle by Pho).
Why is there now a panic about the product itself? Why is everyone apparently assuming that McAfee and Symantec are not just as bad? Or that the NSA are not potentially looking at snapshots of your entire Amazon/Azure server & database estates.
It's not like the Americans don't do their own wiretapping and eavesdropping, after all.
If I can't trust an anti malware tool provider from any country running at root or near root privilege where next?
This whole escapade is now a political front for other business, and likely distracting from some underlying issue or embarrassment (i.e. it was already based on Russian malware or something).
There's so much FUD flying around here (mainly spread by politicians who know nothing about technology) that it's hard to see straight. Here's what I see:
1. The NSA TAO group had some pretty nifty hacks in their toolchest. Good for them. Reasonable people can quibble about whether they are playing fair by stockpiling zero-days: it seems analogous to a humint controller stockpiling juicy tidbits about a potential source, but I get that people might be strongly opposed.
2. TAO were targeting US citizens without a warrant, regular or FISA. Very naughty. Snowden's point essentially. No one has really been called to account for this.
3. The NSA's opsec was so poor that employees, including TAO members, were able to take work home undetected. This has been going on for years (witness the other case recently with the guy with a shed full of NSA documents). My response is slack jawed. WTF?! Massive fail here.
Everything after this is a corollary:
4. One TAO operator loaded up his work on a home PC fitted with AV, and the AV smelt it. The fact that it was Kaspersky is not relevant here, nor would it matter if the user's PC had been infected by other malware. AVs sniff out malware. TAO code obviously reeked.
5. The AV uploaded it to the mother ship for analysis. In this case, the mother ship was in the mother land. Would it have mattered if the mother ship had been in Oxford, or Redmond? Some analyst would have written it up, pushed an update, and it would have stopped working anyway.
Barclays emailed the other day to say they'd no longer be offering free Kaspersky to new customers.
"The UK Government has been advised by the National Cyber Security Centre to remove any Russian products from all highly sensitive systems classified as secret or above.
We’ve made the precautionary decision to no longer offer Kaspersky software to new users, however there’s nothing to suggest that customers need to stop using Kaspersky."
Barclays showing their cyber security know-how is about on par with the average NSA TAO bod. I for one see the NCSC comments as a testament to how good it is - or, more likely, as driven by spite.
Talk about 'the biter bit'! These NSA tools are just state-sponsored malware, so like any good anti-virus toolkit the Kaspersky A/V code not only detected it but uploaded it to K. central for analysis (which - if you've read their documentation - is what it does whenever it finds any previously unknown virus).
You can see why the US government (and its pet, the UK) doesn't like Kaspersky A/V software. Kaspersky's signature feature seems to be detecting NSA exploits - they've been doing it for years - so it must be a bit of a drag spending all that time and effort developing the latest exploit only to have it sent off to Mother Russia for analysis. There is a school of thought that maybe the NSA should get the message and not only stop trying to poke holes in Windows but also help patch them as their contribution to National Security, but then what do I know? I'm only a US taxpayer... (no, not a Kremlin troll, just a long-suffering taxpayer fed up with the waste of my tax dollars...).