"it is possible for malicious JavaScript in one web browser tab to spy on other open tabs"
Not with NoScript it isn't.
Computer science boffins have demonstrated a side-channel attack technique that bypasses recently-introduced privacy defenses, and makes even the Tor browser subject to tracking. The result: it is possible for malicious JavaScript in one web browser tab to spy on other open tabs, and work out which websites you're visiting. …
actually, I run noscript, ONLY allow a very small number of domains, and if a web site is persistent and for some reason I _must_ use it, I do the following:
su - differentuser
export DISPLAY=localhost:0.0
firefox &
then the firefox settings for 'differentuser' are:
a) dump all history on exit
b) allow script anyway
c) don't keep login information in the browser's settings either
then paste the URL into the "other user" browser, and run as usual. expect longer delays [loading all of that scripty crap and no cache]. When done, exit the browser, kinda like flushing the toilet when you're done with "whatever".
(NOTE: you'll need to allow TCP for X11 and NOT be running windows for this to work; windows may alternatively let you use 'run as user' with firefox for a similar effect, but I haven't tried it, and I always recommend NOT running a web browser and surfing the web in windows, ESPECIALLY not as a user with admin privs)
secondary point: to allow TCP on an X server these days, you may need to set up your system for "multi-user" (i.e. don't boot into the GUI 'gdm' etc. and use 'startx') and have a '~/.xserverrc' file that looks like this:
exec Xorg -listen tcp
then make sure you block port 6000 at the firewall, so nobody else tries to connect to you. Also will need to execute "xhost +localhost" so that the 'export DISPLAY=' trick will work
“su - differentuser
export DISPLAY=localhost:0”
Yep, you’ll get far with Jo Public with that workaround. Easier still, break their internet experience for them by using NoScript with no exclusions.
Everyone who doesn’t frig around with IT, i.e. the 99%, has no option but to rely on their security software to do the job.
But the script in question can simply be attached to or be a part of one of those "needed to run the site" scripts. Hard to defeat "piggyback" scripts without breaking sites you frequent. Unless and until there's a mandate to make websites as simple as possible (over and against the objections of John Q. Public), this will continue. Indeed, JavaScript can simply be one means. The big big reveal is that it's possible to identify your action through broad things accessible to the average user (CPU and cache utilization) in a way that can nail down the browser perhaps even while other things are going on (meaning it can filter noise). Perhaps the next step will be to find a way to snoop that can't be shut off; then it's decision time for those who can't afford or logistically use more than one computer.
As I understand it, the authors dismiss the "other things [...] going on" because the browser uses a large proportion of resources, and the noise is "filtered" because "deep learning" (which would in most cases eliminate basic, predictable system activity). One of the problems, in my view, is that this "broad" approach is unlikely to work if there is significant unpredictable system activity going on at the same time (say, you're retrieving your e-mail via ClawsMail while loading the page).
Also, even if 'net browsing is the only thing going on, I wonder how well the technique works when tab number increases. I'm guessing "not well at all". My 2 primary uses for tabbed browsing are comics binge-reading, and wide-scope documentation. In both cases I often have 10+ tabs loading at the same time, good luck with that, cache-lurkers. (Well of course I don't allow JS to begin with because I like resource frugality -and not because I have shitty slow 'puters, as some may malignantly suggest- but that's beside the point)
"I wonder how well the technique works when tab number increases. I'm guessing "not well at all". My 2 primary uses for tabbed browsing are comics binge-reading, and wide-scope documentation. In both cases I often have 10+ tabs loading at the same time, good luck with that, cache-lurkers"
Yeah, I should probably do "secure" browsing while I'm visiting TVTropes. I'll have 15 or 20 tabs open by then (at minimum).
No, they were kipper ties, flannel shirts and flared corduroy trousers. When it is cold, tweed jackets with leather patches on the elbows.
A pipe and a perm are also usually a prerequisite, the latter at least among the more hirsute, otherwise a brushover is recommended.
This is literally a cute way to do something you could always do.
Load an object and compare load/compile/execute times to determine whether it had to be downloaded or was already cached (rough sketch after this comment).
Bucketing the cache per "requester site" would resolve this, but also impact performance.
The real impact here comes when you consider iterating through a social-media graph to find a particular person's profile using basic set theory, and finally iterating over the short list of people who've seen the recent posts from DUP Supporters, UK Parliament News and LGBT weekly.
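For anyone wondering what that load-time comparison might look like, here's a minimal sketch in browser JavaScript (the URL and the 5 ms threshold are purely illustrative, and this is the classic cache-sniffing trick rather than the paper's cache-occupancy technique):

// Time a fetch of a resource the target site is known to load;
// a near-instant response suggests it was served from the browser cache.
async function probablyCached(url) {
  const t0 = performance.now();
  // Default cache mode so a previously downloaded copy can be reused;
  // 'no-cors' lets the request complete for a cross-origin resource.
  await fetch(url, { mode: 'no-cors' });
  const elapsed = performance.now() - t0;
  // Guesswork threshold: a couple of milliseconds almost certainly never hit the network.
  return elapsed < 5;
}

probablyCached('https://example.com/site-specific-logo.png')
  .then(hit => console.log(hit ? 'probably already downloaded' : 'probably fetched fresh'));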
Well, possibly a little more.
But shirley there are a number of defences and browser fixes possible?
It would have an efficiency hit, but could the browser do a bit of random cache grabbing all the time, so the pattern is unpredictable (rough sketch after this comment)? There are enough delays in loading and rendering a page these days that there is ample spare time to play around.
And does this attack work cross-browser? Could you defend by using Chrome for the stuff you don't care about ('cos Google is snooping on everything anyway) and then, say, Firefox in private mode for the banking site?
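For what it's worth, the "random cache grabbing" idea above might look roughly like this as a page-level sketch (sizes and timings are made up, and a real mitigation would have to live inside the browser rather than in page script):

// Thrash a chunk of memory at irregular intervals so that any cache-occupancy
// trace taken by another tab picks up unpredictable noise.
const junk = new Uint8Array(8 * 1024 * 1024);
function thrash() {
  for (let i = 0; i < junk.length; i += 64) junk[i] ^= 1;   // touch each cache line
  setTimeout(thrash, Math.random() * 50);                    // irregular cadence, harder to filter out
}
thrash();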
They're fingerprinting the processor cache, so the script would "detect" a page loading in another browser (whether it could identify it rather depends on how the browsers load pages and how the fingerprint database was constructed).
The aim for this technique would not be to construct a map of every website you visit, as their "open world" setup shows. Rather, it would seek to determine if you are visiting a "sensitive" website, and as such are overdue for a friendly chat in the back of an unmarked van.
". Disabling JavaScript completely will kill off the attack, but also kill off a lot of websites, which rely on JS functionality to work"
problem is most sites which "rely on JS" use it for functionality that could be achieved without JS - either eye candy bells and whistles (plenty of which could be done via CSS) or JS calls to dynamically get content (instead of it all being served from the server), which basically puts more workload on the user's browser (and their bandwidth) and less workload on the server.
JS is too often the lazy option.
Instead of Chrome et al being focused on non-HTTPS warnings, it would be good if they warned about JS use (given Google are a big JS abuse miscreant, I'm not holding my breath).
@Bod : "Let's go back to static pages [...] Back to Web 1.0"
well, if you're using Tor, then it's for privacy reasons, so it is probably a very strange idea to browse with JS enabled AND have multiple tabs open visiting different and sensitive sites. For example I use a different browser for banking than for regular reading, and I use the banking browser only for that, with JS enabled since they need it.
"Let's go back to static pages where you have to keep clicking next page to scroll through thousands of items instead of dynamically loading them then."
For this particular example, I really, really wish that sites would go back to behaving that way. Dynamic loading is something that gets in my way on a daily basis.
"Back to Web 1.0"
preferable to the bandwidth wasting script intensive bell-whistle-new-shiny market-platform track-via-ads bright blue on blinding white 2D FLATTY "shit show" we're exposed to on a daily basis.
yeah, been here a LONG time. You can make things look good without cat video ads playing in every corner of the page.
At Tiggity, I just turn off JS entirely. If a site refuses to load because of it, then I do a search for "cache: $URL" to get a plain text version of the page & read the content anyway. A site can whinge about cookie policies & JS requirements all they want, I'm not agreeing to (nor allowing) either "requirement". If I can access my (new! WOOHOO!) bank's site without JS enabled then a simple news site can jolly-well cough up their content without it either.
At Bod, you talk of returning to HTML 1.0 as if it were a BAD thing. Given how much shit everything newer keeps introducing into the mix, which site builders then use to fuck with us, I'd say pruning said bits out with a chainsaw & extreme prejudice is a GOOD thing. So what if you have to click next to get more content, at least then you don't have to worry if the JS they're using to serve up such pages is about to deliver a virus instead.
I realize & accept that I'm a luddite in this regard, but then maybe my "luddite paranoia" has grounds given all the shit being flung at us in the hopes that some of it sticks. =-(
"problem is most sites which "rely on JS", use it for functionality that could be achieved without JS"
Yep. And well-designed sites will continue to work properly even if they can't run scripts in the browser -- they just drop the bells and whistles (which, half of the time, makes the web site faster and easier to use anyway).
My standard practice is that if a site doesn't work properly without Javascript, then I just don't use the site. There are very, very few websites which are actually indispensable.
"problem is most sites which "rely on JS", use it for functionality that could be achieved without JS"
The WORST ones send back an error from nginx - some CDN out there uses javascript to load their pages, and when the load/redirect (via script) fails, you see that 'FORBIDDEN' error from nginx.
It's a filter that KEEPS ME FROM USING THAT WEB SITE. I'll go elsewhere, and flame them every chance I can, for doing that. [if it's a web site rental, I'll ask the owner nicely to use a different service provider, with a nice easily understood explanation as to why]
/me considers a javascript in some of my pages that loads the "you are an idiot" flash, infinite instances of it. So if script is OFF, you are fine. If you enable it, "you are an idiot, ha ha-ha ha ha ha ha ha ha ha haaa!" with a rapidly growing number of instances filling your screen. It'd also be an 'idiot detector' for people who still have flash player enabled.
"JS is too often the lazy option."
Well, yes. But saying why it gets used does not really get us any closer to preventing its misuse.
Whatever the reason, the fact is it's very, very widely used, and you can't simply turn it off without breaking a (very) large part of the web we have now.
It would be more helpful to identify the specific JS functions that are used in this attack, and how they could be rewritten or redacted entirely to suppress it.
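As one crude example of what "redacting" such a function could mean, you could imagine clamping the timer a page sees, along these lines (a sketch only; real clamping is done inside the browser, not by page script, and the paper's point is that its technique copes with coarse timers anyway):

// Shadow performance.now() so scripts on this page only ever see whole-millisecond values.
const realNow = performance.now.bind(performance);
performance.now = () => Math.floor(realNow());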
"Whatever the reason, the fact is it's very, very widely used, and you can't simply turn it off without breaking a (very) large part of the web we have now."
Fine by me -- that's what I've been doing for years anyway. Sites that are so poorly designed that they can't run without Javascript are sites that are so poorly designed that they don't deserve my attention.
"It would be more helpful to identify the specific JS functions that are used in this attack, and how they could be rewritten or redacted entirely to suppress it."
But that would only plug this one specific hole without addressing the underlying problem that scripts have entirely too much access to your browser and computer to be considered generally safe.
"Fine by me -- that's what I've been doing for years anyway. Sites that are so poorly designed that they can't run without Javascript are sites that are so poorly designed that they don't deserve my attention."
Problem is, what if it's the ONLY way to access your money (because it's your bank, for which there are no local branches of ANY bank within a reasonable drive--and no, EVERY employer is forced to direct deposit for tax reasons--those who don't tend to get sniffed by la migra)?
It also relies on connection speed, browser brand and version, websites staying the same over time (the attacker needs to build a fingerprint database), and overall resource consumption. As such, it might achieve 70% accuracy in a lab setup with a limited set of fixed pages, constant and known connection speed, known browser and no other system activity, but I can't see it working in the real world.
Yet another reason why running random, unattributed, dunno-where-it-came-from, I'm-sure-it'll-be-OK-really code off the Internet is a bad idea.
One day this is going to reach a point where it's indefensible for companies like Google to persist with Javascript as a technology. Many, including myself, think we're already there, and have been for quite some time...
Interesting to think that the large tech companies are effectively one major browser breach away from unrecoverable reputational damage. That's a risk that they cannot wholly control - people use their browsers for more than just accessing (for example) Google services, and those other sites are potentially able to attack Google's services security without Google being able to control or even detect that.
Not that the old fashioned way was inherently secure - people got software nasties through dodgy shareware, USB sticks, all sorts of vectors. But at least with those you were knowingly installing that software, or plugging in that free USB stick, or connecting to that public network with file sharing enabled, etc. Nowadays just a little light web browsing to even perfectly standard websites can result in someone somewhere getting code running on your machine, and hijacking your data.
"One day this is going to reach a point where it's indefensible for companies like Google to persist with Javascript as a technology"
I truly hope you are right, since running arbitrary code provided by poorly controlled third parties is obviously a REALLY BAD idea from a security POV. But the fact that it's a bad idea doesn't seem to have much impact on Web Developers. (As an internet user, Web Developers are not my favorite people). There's also the fact that a lot of sites that deal with maps or text editing or such actually need scripting unless and until alternate approaches can be developed.
My guess is that if things get bad enough, there will eventually, in the face of much protesting, be a ban on running third party scripts. I have my doubts that'll work well enough to provide us users with adequate security. And it will cause a lot of short term problems. But it would certainly be a step in the right direction.
Time will tell.
I've been calling it "Safe Surfing" for a while. It includes things like:
a) don't use internet explorer or Edge or MS Outlook [aka virus outbreak]
b) don't be logged in with admin credentials for e-mail or web surfing
c) if possible, don't use windows to surf the web or read e-mail
d) run noscript or its equivalent
e) only (pre)view e-mail in plain text, NEVER with attachments inline
f) always save attachments to disk, then open with "the application" (not double-click) by running the application FIRST and then using 'file open', and have SCRIPTING TURNED OFF when you do it.
etc.
"We show that we can spy from one browser tab on another and even from one browser on other browsers running on the computer."
Not really. The technique can detect which sites may have been visited with reasonable accuracy. That's enough for some degree of profiling but it's a long way from spying. And it relies on the really paranoid using multiple tabs and keeping the browsers open.
If I was really paranoid I suspect I'd disable tabs and possibly even run each browser instance in a VM.
"If I was really paranoid I suspect I'd disable tabs and possilby even run each browser instance in a VM"
I guess that I'm half-paranoid. I am not really a user of tabs in the first place (not for security reasons, but because I find it better from a usability standpoint to run multiple instances of a browser rather than having a bunch of tabs in a single instance), but if I have to use a site that doesn't function without allowing Javascript, I always do that from inside a VM.
"Hypervisor attacks ARE a thing, you know?"
Oh, yes, I know very well -- which is why I cringe whenever I hear people advocating VMs as some sort of security panacea. I was not claiming they are! My primary defense against client side scripting is to disallow it and avoid websites that require it. There are one or two that I need to access, though. I acknowledge that it puts me at risk, but running inside a VM does reduce my vulnerability. It certainly doesn't eliminate it, though!
I'm glad that I have the option to have private tabs and such; but just because I do security work around HTTP, I much prefer completely different browsers--each with its own script-restricting addon, cookie-restricting addon (including Flash cookies), and tracking-control addons (Ghostery and Privacy Badger).
Besides, there's a difference between paranoia as an affliction and paranoia as a hobby. I'm thinking of getting a really fine tin-foil fedora.
That's pretty horrible if it's true.
Or maybe it just times page-load times for other tabs and notices they load faster if they come from the browser cache, which is still pretty horrible, as it means a JavaScript page can enumerate other pages that it shouldn't really know about. I don't know, as my JavaScript skills are created when needed (which is not often) and destroyed when finished with... otherwise known as garbage collection.
It has access like all software does - it runs from the CPU cache, so it can measure how other stuff sharing the same cache is operating, and fingerprint it.
This normally requires precise timing, and thus requires access to high-precision timers that are being locked off from developers for this reason. However, this technique gets around that (rough sketch below).
Check out the linked-to paper, it's why we link to original materials wherever we can.
C.
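For the curious, the "gets around that" part boils down to counting work done per coarse time window instead of timing individual memory accesses, roughly along these lines (buffer size, stride and window length are illustrative, not the paper's parameters):

// Sweep a buffer larger than the last-level cache and count how many sweeps fit
// into each window; the count drops when another tab is busy filling the cache,
// and the sequence of counts forms the trace that gets fed to the classifier.
const buf = new Uint8Array(16 * 1024 * 1024);
function sample(windowMs) {
  let sweeps = 0;
  const end = performance.now() + windowMs;
  while (performance.now() < end) {
    for (let i = 0; i < buf.length; i += 64) buf[i] ^= 1;   // touch every cache line
    sweeps++;
  }
  return sweeps;   // fewer sweeps => more cache contention from elsewhere
}
const trace = [];
for (let k = 0; k < 30; k++) trace.push(sample(100));
console.log(trace);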
Dear Christ, live a little!
‘I turn off JavaScript...’ etc etc. You might as well walk in front of a train with a red flag.
Ok, so yes, some clever chaps have found you can work out roughly what's in the CPU cache, but not a lot, and a proof of concept does not mean that it's a civilisation-ending zombie apocalypse.
JavaScript is a wonderful thing, yes it has some funny quirks and bugs but it also allows us to do amazing stuff like build little calendars on websites that people can use to book appointments to cure cancer.
Funny you'd mention that*, as allowing JS has exactly the effect you describe, i.e. slowing down the 'tarwebs, sometimes to the point of uselessness. When it's not crashing it entirely, that is.
Of course considering the rest of your post it's entirely possible the whole thing went "WOOSH" over my head, my sense of irony being a bit off these days.
*apart from the obvious fact that it was supposed to be cars, not trains
It's not necessary to block all JS, just the malicious one. And since this attack will certainly have many fingerprints identifying it, in very short order there will be tools to identify and remove or mitigate the threat.
In short, the only people who need to worry about this in the long run are those that don't take security seriously and / or people who run unpatched / insecure systems.
It isn't something that just happens. "there will be tools" ...because someone is already down by the hull frantically trying to plug holes, right? Good for us, eh? I hope they make it easy to donate to their efforts, but it's still just the broken window fallacy: problems were created by people being careless and running headlong into trendy nonsense (along with the herd, naturally), and some sharp people's valuable time got pissed away into making the aftermath suck less.
Often it is not possible to just block the malicious one without blocking everything from that site, bricking it. It'll be funny when NoScript likewise tacks on some ML bits for deciding that a lone script or group of interdependent scripts is malicious. And then it'll be defeated when sites learn to always load malicious functions from otherwise "necessary" scripts, as opposed to merely putting them all on the same necessarily whitelisted domain... and then it'll be neat again when NoScript gets even smarter... well, until the web browser gets too smart to be trusted and privacy is over forever.
"It's not necessary to block all JS, just the malicious one"
Well, doh! How the hell can you tell the non-malicious script from the malicious script?
Does it display a nice little smiley face with the text "I'm nice and not malicious", and does the malicious script display a devil's head and the text "I'm evil"?
The "attack" doesn't seek precise timers, so as long as your background task can get hold of the cache faster than (or as fast as) the browser does, I guess it could confuse the attacker indeed. Caveat: most background tasks would be set, by design, to low priority, allowing browser activity to "emerge" nonetheless. Re-nicing Prime95 to -20 and the browser to +20 might fix that, although I'm not sure I'd want to browse the web from such a setup.
Use different VMs to separate general browsing from secure browsing.
My financial sites browser only connects to limited sites with certificate checks. No searches or random sites accesses are done on that VM.
IDK if it has changed, but NoScript doesn't see "blobs" which contain Javascript code. For that, an ad blocker that can block all third-party objects does the job.
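To illustrate what "blobs which contain Javascript code" means here (a hypothetical snippet; whether a given blocker catches it depends on the blocker and its version):

// Script can be smuggled in via a blob: URL instead of a normal third-party src,
// which some content blockers have historically treated differently.
const code = 'console.log("hello from a blob");';
const url = URL.createObjectURL(new Blob([code], { type: 'text/javascript' }));
const s = document.createElement('script');
s.src = url;                     // loads from blob:..., not from a named domain
document.head.appendChild(s);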