
JavaScript is only a threat when it runs, and NoScript stops it from running.
Use NoScript, or any other extension that controls what JavaScript runs and when.
That is what protects us.
Boffins from Graz University of Technology in Austria have devised an automated system for browser profiling using two new side channel attacks that can help expose information about software and hardware to fingerprint browsers and improve the effectiveness of exploits. In a paper, "JavaScript Template Attacks: Automatically …
It depends. Extensions can only work once they've been loaded themselves; they don't disable JS at the browser level and then turn it back on for specific sites, it's left on by default and only blocked when the extension says so. If the browser starts loading the website first (say, you click a link from some other application), the extension might not be available yet to block it.
Use NoScript, or any other extension that controls what JavaScript runs and when.
Genuine question here - how much of the web degrades gracefully if you turn off JavaScript? My expectation based on places I've worked, and admittedly no actual research, is probably very little?
TIA
It depends how well or lazily the website is written. El Reg, for example, works perfectly fine without JS; the only thing you miss is the thumbnail images on the front page. You get good at recognising which sites will work and which won't (they usually have a Metro-esque layout).
It varies depending on the website as well as the functionality intended. Frankly, I don't mind about 80% of the breakage as long as I get a say in what is going on; read: I'm not so lazy as to think that my life is over just because something broke on the web page and I couldn't be bothered to activate the required JS and reload.
In today's modern world, privacy requires both commitment and some effort - nothing comes for free.
"Gracefully?" Approximately 0%. 65% of the time I can turn on scripts from the originating site and "originating site.static-content" it's good to go. Some sites, typically stores, are out of control and too much trouble to figure out, with a third-party cart, multiple behavior trackers, customer relations engine, multiple advertising vendors, edge networking cache drivers, etc, it's a chore to figure out which scripts are part of the site and which are behavior-analyticals garbage. So I end up turning on temporary permissions to buy a toothbrush.
Sadly, the ads always break. It is a tragedy.
Not that much, most of the time.
- many sites work
- some sites don’t work at all
- you can decide whether temporary allows is worthwhile for that site
- you perma trust/distrust some JS sources (the main problem being websites that use 10-20 JS sources besides themselves). Sometimes you need to experiment a bit with temp allows to identify the 2-3 need-to-have sources for a site.
- best to have a backup browser for when you quickly need to do something, NoScript is choking, and you're willing to put up with their crap. Online movie ticket sites are some of the worst here. I use Vivaldi as that fallback.
Overall, not too bad. I think of NoScript less as tracking suppression and more as probabilistic JS malware exploit avoidance - hopefully whoever got compromised this morning is not on your limited trust list.
Plus, I tend to downgrade my interest in sites that are too interested in making you jump through JS hoops. Lazy and intrusive coding and attitude.
We as geeks can simply maintain our blocking lists for NoScript, and when a site wants to execute JavaScript from dozens of external domains, we leave that site.
I think the long-term goal must be to abolish client-side execution of Turing-complete code. Clients must not behave in unpredictable ways. One way of doing this is to switch applications to some sort of "Terminal standard". This could, for example, be done by using WebSocket in some well-defined way to edit the document tree. Alternatively one might approach the issue from the Videotex side and start from text terminals.
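To make that concrete, something like the sketch below is roughly the shape of it. The message format is invented for illustration, and a real "Terminal standard" client would be built into the browser rather than written in JS; this just shows the protocol idea:

    // Illustrative sketch only: the server, not the client, decides what
    // changes in the document tree; the client applies dumb text patches.
    const ws = new WebSocket("wss://example.org/terminal");
    ws.onmessage = (event) => {
      // e.g. {"id": "headline", "text": "New headline"} - made-up format
      const patch = JSON.parse(event.data);
      const node = document.getElementById(patch.id);
      if (node) node.textContent = patch.text; // text only, nothing executes
    };

The point being that the client only ever applies predictable, declarative edits, never arbitrary code.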
I don't understand why you are always so insistent on telling others what to do.
You’re welcome to turn off JS and use Lynx.
People are welcome to manage their browsing as they see fit.
I am pretty sure the non-geek common man would mostly prefer to keep their websites as they are and would massively vote down your idea. I know I don’t want it and struggle to think of anyone I know personally who might see any validity in your proposal.
“But it’s for their own good”.
By that reasoning we should not have access to personal motorized vehicles since they’re a significant risk.
Stop telling people how to live their life!
Text terminal? WTF??? And this coming from someone who’s happiest coding in bash.
People are welcome to manage their browsing as they see fit.
Right, the problem is that's very difficult. "All or nothing" isn't the kind of phrase people usually associate with being given the full freedom of choice.
Simply put, JavaScript owns the Web, and many sites simply do not work without it (eg. they pull data via an API and the site content is not actually included with the initial page request). Companies intentionally record your personal information because it makes money. Allowing users to browse your S00PER C00L!! website without JS loses them money. As an aside, I've heard ideas like paid access to certain sites to replace advertisements and tracking, but there's no way companies would stop doing either even if they got paid for views.
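For what it's worth, the pattern under the hood is usually something like this (endpoint and element ID made up), which is why the initial HTML is little more than an empty shell if JS never runs:

    // The HTML arrives essentially empty; the content only appears after
    // this runs. Endpoint and element ID are invented for illustration.
    fetch("/api/articles/12345")
      .then((response) => response.json())
      .then((article) => {
        document.querySelector("#content").textContent = article.body;
      });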
I think what the guy meant is that we need new, more user-centered standards that don't need something as complicated or heavy (in terms of implementation) as JavaScript just to show the user information. Of course, there are other reasons people use the Internet than for information, so there are other avenues to consider.
And like you said, Jill Jillson just wants to get on Facepoop and doesn't care about anything else, so what does it matter what the site is using under the hood to show her cat videos?
>I think the long-term goal must be to abolish client-side execution of Turing-complete code.
You and Herr WhatsGoodForOthers both realize a Turing-complete language ain't a big thing, right? A PostScript printer could run one. What matters is OS API access and paranoia; all the non-Turing limits you want won't help if the code can reach the OS. JS gets that, though LOTS more sandboxing needs to happen - no need for world+dog code to be hitting high-res JS timers, as we've seen, or pulling off this attack. Java never really got it, and neither did PDF, Flash or Office VBA.
But the kind of idiotic think-of-the-children scheme he proposes ends up with first-time website teen guys and gals having to run a WebSockets service just to serve up a Nirvana tribute page. A graphical one, cuz it's a Nirvana tribute page, FFS. And it stops me from running the dumb Nevermind page, and it's my effin' choice to do so.
I loathe FB, have it DNS sunk at router, along with Zergnet and Taboola whom I loathe more. But yeah, I like to think most people are honest, and clever enough as end users, not to have their tech controlled by either of you.
I don’t want to be told not to tinker with radio sets cuz safer.
https://en.m.wikipedia.org/wiki/PostScript
I think you're missing the point.
It's not about making end users' lives more difficult (though the extremist "let's all go back to text terminals" is surely enough to raise an eyebrow) but about getting rid of all the needless complexities, both for the user and for the hapless webdev.
And like you said, Jill Jillson just wants to get on Facepoop and doesn't care about anything else, so what does it matter what the site is using under the hood to show her cat videos?
Of course it wouldn't be that easy to just replace user-agent JS with something less monstrous while keeping expected site functionality, but the end result would be a net gain, in a perfect world. Imagine every website loading 10x faster, ping times down globally, network traffic and energy use plummeting... but the same amount of human-readable information getting through.
People are actually hired to optimize web code when, really, it has a potential to end up better when you rip out the Reactionary Angulicious Atomic Bootstripper gubbins once and for all. I have had the utter misfortune of working on a project that simultaneously used NPM, Grunt, Gulp, Bower, and Webpack—if you are not aware, they all have features ranging from similar to directly competing. It was a nightmare that I want to nuke from orbit.
JS isn't even that awful anymore and the ECMA specs have slowly been getting more focused in their scope, at least in some areas. I just want this website garbage to stop.
Ghostery has been known to track their users in the past.
https://lifehacker.com/ad-blocking-extension-ghostery-actually-sells-data-to-a-514417864
https://www.businessinsider.com/evidon-sells-ghostery-data-to-advertisers-2013-6
I wouldn't recommend trusting them. Better to use an extension that doesn't phone home.
Right, I forgot about that. So the "we hijack page ad revenue AND show our own ads" thing is still going strong huh?
I have a huge moral issue with that, if that is how they ended up implementing it. I get not wanting to see ads, but then having the gall to replace that lost revenue source with your own? Don't you think that's a little mean?
Of course, blocking ads at all and removing potential revenue streams is mean too, but I'm sure you see my point.
Oh, and by the way ds6, you'll find that that dial-home feature? Not a default. You needed to actively opt into the scheme.
Ghostery were and always have been quite open and transparent, so not really sure why you got your knickers in a twist. I mean you clearly didn't even read the article you posted! lol
Doesn't matter to me if it's opt-in or not; it's the principle of the matter. Additionally, in my experience with them it wasn't so forthright. In the article you posted, they even remark on the old business model being confusing. Knickers, meet twist?
"browser privacy extensions may just make matters worse...
And yes, you could disable JavaScript execution"
Seems a bit contradictory here. Browser privacy extensions might make things worse, but they're also perfectly capable of blocking it entirely.
From a scan of the actual paper, there doesn't appear to be any mention at all of privacy extensions making things worse. Most of the extensions they looked at can be detected, and some circumvented at least to some extent, so they may give a false sense of security but don't actually make things worse. In all cases, detection comes down to knowing in advance which properties they modify and inferring their presence from that, so no additional information is actually leaked by using such extensions. In most cases, while it's possible to detect an extension by seeing what properties it's modified, it's not possible to recover the original information so they do achieve something even if it's not as much as users might expect. Only Canvas Defender (can't say I've ever heard of it) appears to be completely useless.
"You missed this sentence in the article"
No I didn't. As I said, what the article says and what the paper actually says are not the same. The boffins' exploration of the JavaScript environment reveals the ability to infer the presence of privacy extensions by looking at variables that already exist. This doesn't add any additional ability to fingerprint anything, as Paul Kinsler suggests, because no additional information is added. Saying "This person is using Canvas Defender" does not tell you anything more than "This person is using specific pattern x for these 250 variables". In most cases information is actually lost, since you are reduced to only being able to say the former instead of having individual values for all 250 variables.
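To make the distinction concrete, detection usually amounts to nothing more exotic than a check like this (the properties here are just examples, not necessarily what the paper probes):

    // If an extension has wrapped a built-in, its source no longer reads as
    // native code. Knowing which properties a given extension rewrites lets
    // a site say "this person runs extension X" - and nothing more than that.
    function looksOverridden(fn) {
      return !Function.prototype.toString.call(fn).includes("[native code]");
    }
    console.log(looksOverridden(HTMLCanvasElement.prototype.toDataURL));
    console.log(looksOverridden(performance.now));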
I was hoping that the authors would have set up a demo site to try, but neither the paper nor the Reg mention one.
I don't preserve long-lived cookies, but I use Google Maps, meaning that I have to click off Google's laughable privacy policy every so often. I would be very surprised if Google didn't try to fingerprint my browser to bridge the cookie gap.
That's you. For most people, though, breaking scripting breaks the page which they MUST see (Facebook or Bust, Baby), and they outnumber you.
Unless you can rule the world or at least require a license to use the Internet, we're gonna get shouted down every time.
the page is usually more readable without [JavaScript]
Unfortunately I'm coming across more and more pages that show four fifths of fuck all if JS is disabled. I usually can't be arsed with them, but not to show anything at all is either wilfully antisocial or smacks of a dependence on the latest trendy framework.
Right click. View source.
Ctrl+U for me. However, even if the content is actually in the page, it's usually buried in a haystack of analytics, ad slingers, trackers, web fonts, bits of frameworks, styles, more trackers, more bits of frameworks, more bloody ads, more web fonts, and yet more trackers. Oh, and the content bits are all unwrapped so 250+ characters wide with lots of redundant spans throughout.
Yes, I am having a bad day. How did you guess?
Unfortunately I'm coming across more and more pages that show four fifths of fuck all if JS is disabled. I usually can't be arsed with them, but not to show anything at all is either wilfully antisocial or smacks of a dependence on the latest trendy framework.
I'm arguing at work that anything client-facing should degrade gracefully, but I'm losing the fight. Everyone wants to work with the latest Vue or React type frameworks, meaning it's JavaScript or nothing. Literally nothing.
I'm arguing for presentation of basic HTML & CSS as a downgrade option (for the UX designers), but the bank simply no longer want to pay for "two sites". I kid you not.
What's everyone else doing? How are you winning or holding back against the seemingly relentless march of js at your workplace?
This, a thousand times. "I don't care about your new shiny if it means we're in breach of Disability Discrimination legislation. Yes, that's a thing."
Out-law is, as ever, a handy resource:
Has anyone tried referring the project to your appropriate agency for legal compliance with disabled customers?
Actually, yes, I have. Our websites are AA compliant, but still rendered by JS (React, usually). Unfortunately, "struggles on under-powered machines" isn't in breach of any act.
I may well build sites using JS, but I do push back where I can and as much as possible. I realize this has an air of "I was only obeying orders", but unfortunately, keeping a roof over my daughter's head trumps privacy (yours and mine). It just does. Sorry.
I'm always open to suggestions for how anyone else is building sites while resisting the tidal wave of JS frameworks, and always willing to push back where I can, but unfortunately, it's simply not my decision most of the time - where it is, there's always a minimal-JS version as a "downgrade" option.
A ton of websites use Ajax to update a small portion of the page without the tedium of reloading all the other content. A case in point might be The Guardian comments section, where you can post a comment, scroll through pages of other comments, all very quickly and efficiently. You can kiss that kind of user experience goodbye if you hobble JavaScript.
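For reference, the enhancement itself is only a few lines (form IDs and endpoint invented here); without JS the form simply posts and the page reloads, with JS it refreshes only the comment list:

    // Progressive enhancement: with no JS the form posts and the page
    // reloads; with JS we intercept it and refresh only the comment list.
    const form = document.querySelector("#comment-form");
    if (form) {
      form.addEventListener("submit", async (event) => {
        event.preventDefault();
        const response = await fetch(form.action, {
          method: "POST",
          body: new FormData(form),
        });
        // Assumes the endpoint returns the updated comment list as HTML.
        document.querySelector("#comments").innerHTML = await response.text();
      });
    }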
Then it sounds to me like the site cannot be trusted AT ALL and should be blacklisted, if the ONLY ways it can run are security threats. Well, either that or it tediously reloads the entire page as you've described; it's the only way to be sure, it seems.
Then there's this which was turning up this morning whenever I hit a "raw_input" statement I had in some non-web related Python code for debugging.
WARNING: At least one completion condition is taking too long to complete. Conditions: [{"name":"TelemetryController: shutting down","state":{"initialized":true,"initStarted":true,"haveDelayedInitTask":false,"shutdownBarrier":[{"name":"TelemetrySession: shutting down","state":{"initialized":true,"initStarted":true,"haveDelayedInitTask":true},"filename":"resource://gre/modules/TelemetrySession.jsm","lineNumber":1389,"stack":["resource://gre/modules/TelemetrySession.jsm:setupChromeProcess:1389","resource://gre/modules/TelemetrySession.jsm:Impl.observe:1791","resource://gre/modules/TelemetrySession.jsm:this.TelemetrySession<.observe:638","resource://gre/components/TelemetryStartup.js:TelemetryStartup.prototype.observe:31"]}],"connectionsBarrier":"Not started","sendModule":{"sendingEnabled":false,"pendingPingRequestCount":0,"pendingPingActivityCount":0,"unpersistedPingCount":0,"persistedPingCount":453,"schedulerState":{"shutdown":true,"hasSendTask":false,"sendsFailed":false,"sendTaskState":"bail out - no pings to send","backoffDelay":60000}}},"filename":"resource://gre/modules/TelemetryController.jsm","lineNumber":772,"stack":["resource://gre/modules/TelemetryController.jsm:setupTelemetry:772","resource://gre/modules/TelemetryController.jsm:Impl.observe:868","resource://gre/modules/TelemetryController.jsm:this.TelemetryController<.observe:198","resource://gre/components/TelemetryStartup.js:TelemetryStartup.prototype.observe:30"]}] Barrier: profile-before-change2
Where was it coming from? No clue. Something running in my ancient version of Opera I think.
"Where was it coming from? No clue. Something running in my ancient version of Opera I think."
Actually, I believe that may be coming from your Firefox browser.
More specifically, the "Normandy" or "Studies" features that send telemetry back to Mozilla.
Entering this into the URL bar of my Firefox browser on a Linux box shows me relevant info:
resource://gre/modules/Services.jsm
Perhaps "app.normandy.enabled" is set to "true" in "about:config"?