A very similar security problem was encountered and dealt with by Adobe in their Flash Player about 7 years ago. Welcome to the new era of buggy, security-hole-riddled, crashing "HTML5". And bring on the HTML5 "Skip Intro" pages.
Software developers at Google, Apple, Adobe, and elsewhere are grappling with the security risks posed by an emerging graphics technology, which in its current form could expose millions of web users' sensitive data to attackers. The technology, known as CSS shaders, is designed to render a variety of distortion effects, such as …
"It works by providing programming interfaces web developers can call to invoke powerful functions from an end user's graphics card."
Does this mean that websites will be able to avail themselves of my GPU cycles, and maybe manage to make my graphics card draw 200 W or more? Sounds great - just as great, I am sure, as the sound of a graphics card's fan trying to cool an overheating card running at top speed because I visited a webpage injected with malicious code specifically designed to exploit this new "ability". For, you know, "lulz".
I would be willing to bet any sum of money that websites maliciously exploiting these CSS shaders will outnumber websites using them "benignly" by orders of magnitude. And that, ultimately, there will be no real benefit whatsoever for people browsing even the "benign" webpages.
And isn't this what the world really needs, millions and millions of computers now using even more electricity rendering unneeded graphics on webpages that are already too heavily encumbered with jpgs?
(I wonder how long it will take for criminals to develop "cuda zombie-nets" that will enable them to use infected machines to form a kind of "crime-cloud" for brute-forcing passwords etc. Well, even if that is not possible, there is one thing of which we can be sure: inventive minds are going to find a way to make the vast majority of web users sorry that this was ever created.)
Never attribute to malice that which is adequately explained by stupidity
"Even if you tuned a CSS attack to a given browser whose rendering behavior you understand, it would take many frame times to determine the value of a single pixel and even then I think the accuracy and repeatability would be very low,"
Possibly even lower if the browser does use hardware acceleration, because then you need to also understand and account for differences between makes and models of GPUs and the various driver versions and settings.
Not to mention the make/model of the CPU. Plus the amount and speed of RAM. Oh, and what other tasks might be running.
No, the only way I see this as feasible is if you have a target pool of a million or more devices, all with the same hardware profile, running the same OS, with few to no options to tweak graphics settings, with a lack of or severe constraints on multitasking, and which are only allowed to run one specific browser. But who would be stupid enough to buy something so obviously crippled as that?
"No, the only way I see this as feasible is if you have a target pool of a million or more devices, all with the same hardware profile, running the same OS, with few to no options to tweak graphics settings, with a lack of or severe constraints on multitasking, and which are only allowed to run one specific browser. But who would be stupid enough to buy something so obviously crippled as that?"
Sounds a bit like the iPad. I'm using one now to reply to your message.
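The quoted claim - that recovering even one pixel through frame timing would take many frames and still be inaccurate - can be sketched with a toy simulation. Everything below is illustrative: the cost model, the noise level and the function names are made up for this sketch, not taken from any real attack.

```python
import random

# Illustrative numbers only: assume a timing-vulnerable shader takes a
# fraction of a millisecond longer when the secret pixel is dark, while
# per-frame noise from the GPU, drivers and other tasks is far larger.
BASE_FRAME_MS = 16.7       # nominal 60 fps frame time
SIGNAL_MS = 0.2            # extra time when the secret pixel is dark
NOISE_STDDEV_MS = 5.0      # noise dwarfs the signal

def frame_time(secret_dark, rng):
    """One simulated frame: base time + secret-dependent cost + noise."""
    return BASE_FRAME_MS + SIGNAL_MS * secret_dark + rng.gauss(0.0, NOISE_STDDEV_MS)

def guess_pixel(secret_dark, samples, rng):
    """Attacker averages many frame times and compares to the baseline."""
    mean = sum(frame_time(secret_dark, rng) for _ in range(samples)) / samples
    return 1 if mean > BASE_FRAME_MS + SIGNAL_MS / 2 else 0

rng = random.Random(1)
# With ~100,000 measured frames (roughly half an hour at 60 fps) the
# averaging finally beats the noise - for one pixel, on one fixed machine.
print(guess_pixel(1, 100_000, rng))
```

With only a few hundred frames the guess is close to a coin flip under these assumed numbers, which is the point the posters above are making: the channel exists, but its bandwidth is tiny and every hardware or driver difference changes the baseline.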
instead of "this website is best viewed at 1024x768" due to non-proportional fonts, non-dynamic frames, buttons and whatnot, now we get:-
best viewed on DX11 hardware...
what about netbooks? oh wait - an actual use for OnLive-style GPU streaming? (pay a subscription, a datacentre renders the page and feeds it down to the netbook as streaming video)
this'll work when advertising creative types show off their wares to clients.
Client: "wow awesome! hey wait, this only works on 5% of our already reduced demographic? you're fired!"
@"In my view, the most promising approach is to find a subset of the GLSL shader language in which a shader always takes the same amount of time to run, regardless of the input."
Seriously, WTF?! This "same amount of time to run" idea is doomed to fail in so many ways that it would never be reliable.
For example, what happens on a new, more efficient GPU that hasn't even been designed yet? It could easily take a different amount of time. What happens with new GPU instructions added years from now, which could easily change the execution time? And what about architectural changes that affect execution time and create so much complexity that we can no longer even count cycles, because caches and other pipelining considerations change the execution speed (as they do even now)?

That is before you add in the effect of more cores on some cards competing for the same memory bandwidth, which could stall some cards more than others. GPU board manufacturers also use different data widths for their memory buses all the time, so the code thinks it is running on the same GPU while the memory is a different speed on some cards. Then there are overclocked cards with different-speed memory; very low-powered GPUs, like those in phones, that may need time to wake from low-power modes before they can run code; different cache sizes to fill on different cards; and driver changes, which affect speed and possibly even which instructions a given GPU uses, since drivers can translate shaders into their own card-specific internal languages - languages that could differ between generations of cards and even between driver updates.

There are so many potential ways this "same amount of time to run" could fail that I am utterly amazed he would even suggest such a solution.
So there is no way at all that “same amount of time to run” approach would ever work reliably.
You're confusing "time to run" with "time between client issuing the command and being informed that it has been completed". You're also confusing "same on this hardware in this particular session" with "same on all hardware anywhere ever". Lastly, you're confusing "eliminate this covert channel entirely" with "reduce its bandwidth to a useless level".
The proposed solution strikes me as both affordable and reliable, in theory.
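For what it's worth, the proposal amounts to applying the usual constant-time-programming discipline to shaders: never branch on secret data; compute both alternatives and blend them, the way GLSL's mix(x, y, a) does. A minimal Python sketch of the difference (the function names are mine, purely illustrative):

```python
def branchy_select(secret_bit, cheap, expensive):
    # Data-dependent branch: how long this takes depends on secret_bit,
    # which is exactly the timing leak under discussion.
    return expensive() if secret_bit else cheap()

def mix_select(secret_bit, cheap, expensive):
    # Constant-work version in the spirit of GLSL's mix(): both
    # alternatives always run, so the work done no longer depends on
    # the secret (whether the wall-clock time is also constant on every
    # GPU and driver is a separate question, as the comments above note).
    a, b = cheap(), expensive()
    return a * (1 - secret_bit) + b * secret_bit

print(mix_select(0, lambda: 10, lambda: 99))  # → 10
print(mix_select(1, lambda: 10, lambda: 99))  # → 99
```

The point is not that this makes timing identical across all hardware, but that removing data-dependent control flow shrinks the covert channel's bandwidth, which is the weaker goal the proposal actually needs.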
"It's not frigging rocket science."
Getting a graphics system that works across multiple versions of multiple browsers running on multiple versions of multiple operating systems on countless device types? I'd suggest it's not far off rocket science.
That the Flash player only had a few incremental updates is astonishing given how widespread its user base was.
Text: those "letter viewed through a bottle" and "incriminating note curling as it burns" effects from 1930s movies.

Cat Videos: who wouldn't want to tweak them with custom CSS to turn the original cat into one's own?

Porn: likewise, a webpage with a single, not-particularly-interesting "actress" could, via CSS, look like the celebrity of your choice. A natural Freemium business model: give away the porn, sell the CSS.