So I guess the workaround is to send all the time, or to send at random intervals. Both will negatively impact the resources needed and the performance.
Anyway, this is good, because it allows the Tor developers to strengthen their program.
The Tor project has urged calm after new research found 81 percent of users could be identified using Cisco's NetFlow tool. A research effort led by professor Sambuddah Chakravarty from the Indraprastha Institute of Information Technology in Delhi found that well-resourced attackers such as a nation-state could effectively …
Will not help you.
All you need is to throw a DPI box into the mix and do the traffic shaping in transit on the DPI instead of on the compromised server. Shape a flow down, shape a flow up, compute a correlation coefficient, done.
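The shaping-plus-correlation step above can be sketched in a few lines. This is a toy illustration, not any real tool: the shaping pattern, the noise model, and the per-interval throughput numbers are all made up for the example.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical throttle/release pattern the DPI imposes on the suspect flow
# (per-interval throughput targets, e.g. in kbit/s):
pattern = [100, 800, 100, 800, 100, 800, 100, 800]

# Throughput observed near the suspected client: the imposed pattern plus
# the client's own noise (fixed seed so the sketch is reproducible).
random.seed(1)
observed = [p + random.gauss(0, 50) for p in pattern]

# An unrelated flow that just happens to ramp up over the same window.
unrelated = [300, 310, 320, 330, 340, 350, 360, 370]

r_shaped = pearson(pattern, observed)    # close to 1: the shaping shows through
r_other = pearson(pattern, unrelated)    # small: no imposed pattern to find
print(r_shaped, r_other)
```

The point of the sketch: the correlation against the shaped flow survives a fair amount of client-side noise, while an unrelated flow scores near zero, so a threshold on the coefficient separates them cleanly.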
In fact, this can be taken even further. The natural Internet congestion, and the differing traffic flow rates that result from it, can yield the same results; you just need Bayesian stats instead of simple correlation. This is a classic big data problem: given a sufficient dataset you can nail pretty much any client if you can get a data sample near the source and near the entry. You do not need the data itself, all you need is basic TCP stats on it - window, RTT, etc.
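A naive-Bayes-flavoured version of the "basic TCP stats" matching can be sketched as follows. Everything here is invented for illustration: the client names, the RTT/window numbers, and the assumed Gaussian noise model are placeholders, not measurements.

```python
import math

# Hypothetical per-flow TCP summaries: no payload needed, just coarse stats
# (here: mean RTT in ms and mean advertised window in KB).
near_entry = {"rtt": 42.0, "window": 64.2}   # sample taken near the entry node

# Candidate flows sampled near possible sources.
candidates = {
    "client_a": {"rtt": 41.5, "window": 64.0},
    "client_b": {"rtt": 120.0, "window": 16.1},
    "client_c": {"rtt": 55.0, "window": 64.3},
}

# Assumed measurement noise (std dev) per statistic - part of the toy model.
noise = {"rtt": 2.0, "window": 0.5}

def log_likelihood(obs, cand):
    """Naive-Bayes style score: treat each statistic as an independent Gaussian."""
    ll = 0.0
    for k, sigma in noise.items():
        ll += (-0.5 * ((obs[k] - cand[k]) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)))
    return ll

best = max(candidates, key=lambda c: log_likelihood(near_entry, candidates[c]))
print(best)
```

With more statistics (window scaling, MSS, timestamp skew) the per-candidate scores separate faster; the scheme is the same, just more independent terms in the sum.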
"This is a classic big data problem, given a sufficient dataset you can nail pretty much any client if you can get a data sample near source and near entry."
Now, add in a nation state tossing tens of thousands of TOR nodes up and monitoring their traffic for analysis.
It's a well-known fact that with control over a sufficient portion of the pipes you can defeat any existing anonymizing system (TOR, Freenet etc). It's usually stated on the projects' webpages and/or READMEs etc. Last time I checked, TOR told you so at each startup.
Artificial lag, fake traffic (as you suggest) and aggregation make it harder, and they are used in most anonymizing schemes (all the ones I know of, actually), including TOR. But they are not sufficient.
Imagine your network traffic as bullets from a gun.
It's much easier for the enemy to locate your position if you are using tracer rounds and rapid fire (lots of traffic and identifiable cookies etc.) or even to use sound triangulation if you are not.
The individual sniper shot from a fresh location in the midst of a battle is much harder to track down.
Snipers often try to blend into their backgrounds. Would it therefore be an improvement to find a means of "piggybacking" on someone else's packets, i.e. blending in to their network environment?
I have no idea if this is possible, nor any idea if it is even remotely feasible.
Theoretically: connecting via a FON connection, spoofing an existing user's MAC/IP address, creating some outgoing traffic, and adding headers to the packets that would allow the returning packets to be filtered out and returned to you. The idea being that it looks like someone else was Tor browsing.
[Edit : Just read further down the thread and it appears that the basic idea is already being covered, notably using TAILS as the client OS for even further anonymisation.]
Prove it isn't.
It's a US Department of Defense (Navy, then DARPA) project in the first place, so the default assumption has to be that it doesn't protect you against USG. And the NSA is part of the DOD and its chief is ... an Admiral of the US Navy.
And: Yes I think most anonymity services and software packages are honeypots.
Probably they aren't *all* honeypots, but who can tell which aren't? Surely the question is not whether they are compromised by government, but by which government?
And even those which aren't pwned by the NSA (or another agency - probably more than one), are effectively honeypots to the NSA because they can de-anonymize any real-time traffic just based on their overview of network activity.
The other week there was a prime-time documentary on Tor and the dark net - we were told that Tor was safe.
As I have already written, French TV is heavily censored - so if they even go as far as mentioning tor it means the TV channel got clearance from intelligence officials. This means the French intelligence agencies can get anybody using the service if they want to.
The French are so predictable.
They will not be able to get you if you have a USB stick you use from a McD hotspot, for example, but that is something else. Switch off your mobile, and do not disclose identifying data by ordering stuff online (home address, cc...).
Oh, the pain, the horror! To be told that weak anonymizing protocols don't count for much! Tor should have a FAQ about how many ways its anonymity can be countered. It doesn't matter how many times the packet bounces around Tor's echo chamber, there are only so many entries and exits.
Tor is broken. Time for better protocols, where source and destination are anonymous, despite the fact that everything is in a big glass fishbowl!
"Ideal scenario will only be reached if (all) the governments and (all) the communication providers stop storing the mapping of last mile connection exits to the end users."
Yep, it would also be ideal if they gave us a few million quid as well, and maybe a flying unicorn...
There are some different things that could be done to mess with these algorithms, depending on the type of NetFlow being implemented. The vast majority of deployments are v5 and v9 with static fields being exported. The fields that are tracked are generally defined as key and non-key fields. Key fields are the unique identifiers used to create a new flow entry. Some of the fields that would extrapolate this data, like TTL, are non-key. Because TTL is non-key, it can change throughout the course of the flow without ever being flagged. This could be exploited by injecting junk packets into the flow that match the key fields but carry a low TTL, so they are lost inside of Tor when the TTL expires. This could be used to do the following:
-Obfuscate the start and end times of the flow e.g. Send extra packets with NULL flags and low TTL after a FIN is received
-Obfuscate packet and byte counters for a flow by injecting junk packets into the flow that you intend to expire inside of the Tor network.
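To see why this works, here is a toy model of NetFlow v5-style aggregation. The packet dicts, field names, and addresses are all invented for the sketch; the point is only that packets sharing the key fields get merged into one record, so low-TTL junk (which dies inside the network) still inflates the record's counters and end timestamp on the near side.

```python
from collections import defaultdict

def flow_key(pkt):
    """NetFlow v5-style key fields: the classic 5-tuple. TTL is non-key,
    so it never splits a flow into separate records."""
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

def build_flow_records(packets):
    """Aggregate packets into flow records the way a flow exporter would."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "first": None, "last": None})
    for p in packets:
        rec = flows[flow_key(p)]
        rec["packets"] += 1
        rec["bytes"] += p["size"]
        rec["first"] = p["ts"] if rec["first"] is None else rec["first"]
        rec["last"] = p["ts"]
    return dict(flows)

# A real flow of five packets, then three junk packets with identical key
# fields but ttl=2, meant to expire inside the Tor network.
real = [{"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 40000, "dport": 443,
         "proto": 6, "size": 1400, "ttl": 64, "ts": t} for t in range(5)]
junk = [{"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 40000, "dport": 443,
         "proto": 6, "size": 600, "ttl": 2, "ts": 5 + t} for t in range(3)]

records = build_flow_records(real + junk)
rec = records[("10.0.0.1", "10.0.0.2", 40000, 443, 6)]
print(rec["packets"], rec["bytes"], rec["last"])  # counters and end time now include the junk
```

The exporter near the client sees 8 packets and an extended end time, while the far side of Tor only ever saw the 5 real packets - exactly the mismatch that breaks the correlation.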
In newer "Flexible Netflow" implementations, you can certainly pick and choose which fields are key fields. So using TTL to accomplish this may be ultimately of little utility since the "junk" packets would be identified as an altogether different flow.
With "standard" NetFlow implementations and hosts behaving this way on both sides, we could introduce enough entropy in the flow data exports to make analysis too tough.
As long as your internet usage ranks up there with Grandma's, Tor rules. It's nice that it's an option, but its latency and bandwidth constraints alone make sure it will never be very mainstream. The first time Joe Q. Public tries Netflix through it, they will be calling their ISP bitching, which is too bad, because more users would probably make this attack harder.
More users wouldn't appreciably affect the false positive rate, because the metrics being used provide a pretty good indication of the flow that is occurring. Just the flow start and end timestamps, in conjunction with the byte and packet counters, can probably correctly identify IP address correlations no matter how high the volume. The only way to solve this problem is to make those items different on both sides of Tor. That is a very difficult proposition.
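The timestamp-plus-counter matching described above can be sketched as a simple tolerance check. The records, IP addresses, and tolerance values below are made up for illustration; real matching would tune the tolerances to observed clock skew and overhead.

```python
def matches(entry, exit_rec, time_tol=2.0, size_tol=0.05):
    """Match flow records whose start/end timestamps align within time_tol
    seconds and whose byte/packet counters agree within size_tol (fractional)."""
    return (abs(entry["start"] - exit_rec["start"]) <= time_tol
            and abs(entry["end"] - exit_rec["end"]) <= time_tol
            and abs(entry["bytes"] - exit_rec["bytes"]) <= size_tol * entry["bytes"]
            and abs(entry["pkts"] - exit_rec["pkts"]) <= size_tol * entry["pkts"])

# A flow record captured on the client side of Tor (all values hypothetical).
entry_side = {"ip": "203.0.113.7", "start": 100.0, "end": 161.5,
              "bytes": 480_000, "pkts": 400}

# Candidate records captured on the exit side during the same window.
exit_side = [
    {"ip": "198.51.100.1", "start": 100.8, "end": 162.0, "bytes": 470_000, "pkts": 395},
    {"ip": "198.51.100.2", "start": 100.5, "end": 130.0, "bytes": 480_000, "pkts": 400},
    {"ip": "198.51.100.3", "start": 400.0, "end": 461.0, "bytes": 481_000, "pkts": 402},
]

hits = [r["ip"] for r in exit_side if matches(entry_side, r)]
print(hits)  # only the record that agrees on all four metrics survives
```

Even with many simultaneous flows, the (start, end, bytes, packets) tuple is fine-grained enough that accidental collisions are rare - which is why sheer user volume doesn't help much, and why the junk-packet tricks above aim at corrupting exactly these fields.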
More entrance and exit nodes make it harder to install monitoring on all those points. If you already have access to the data, it's just as easy.