FYI: Code compiled to WebAssembly may lack standard security defenses

WebAssembly has been promoted for its security benefits, though researchers in Belgium and New Zealand contend applications built in this binary format lack important protections. WebAssembly, or WASM, is a bytecode language produced by compiling source code written in languages like C/C++ and Rust. These output files are …

  1. karlkarl Silver badge

    Luckily with Emscripten (the C/C++ -> JavaScript/WASM transpiler) you can opt to emit ASM.js rather than WebAssembly.

    We currently do this to generate a fallback for older browsers, but the nice thing about ASM.js is that it's sandboxed by the JavaScript interpreter, whereas the WASM implementation has more freedom.
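
    For anyone wanting to try it, a minimal sketch (the WASM=0 setting is what does the trick in the Emscripten versions we've used; check your version's docs):

        /* hello.c - trivial module to show the two output modes */
        #include <stdio.h>

        int main(void) {
            printf("hello from C\n");
            return 0;
        }

        /* Build as WebAssembly (the default):
         *     emcc hello.c -o hello.html
         * Build as ASM.js instead, as a fallback for older browsers:
         *     emcc hello.c -s WASM=0 -o hello.html
         */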

    Either way, no chance in hell would I deem anything running in a browser as safe.

    1. Pascal Monett Silver badge

      Agreed

      The last time I was called on to develop a mobile-phone-compatible web app interface for a document management system under Notes, I laid down as a rule that no code would be running on the phone. All links would point to a webservice running on the Domino server.

      That way, I had full control over what was expected to happen and anything outside of those boundaries was discarded. A full activity log was obviously in place as well.

      Thankfully, the person who called me in for the job was the head of IT and more than a bit knowledgeable about Notes, so my proposal was accepted without question.

      1. cyberdemon Silver badge
        Devil

        Re: Agreed

        > That way, I had full control over what was expected to happen and anything outside of those boundaries was discarded. A full activity log was obviously in place as well.

        Yes this, obviously this. Otherwise your test-space for critical functionality is practically infinite.

        Am I missing something here? It seems to me that the only use-case where WASM makes sense is DRM.

    2. emfiliane

      I don't think you understand the problem. The failure here isn't that it breaks out of the sandbox, it's that it does unexpected things that are controllable by the attacker. Compiling to ASM.js instead of WASM gives you nothing in that case, because both are still at the mercy of your shitty C code failing to check the length of a string or double-freeing a pointer.
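
      To make that concrete, a sketch of the double-free case (illustrative; the exact fallout depends on the allocator the module was built with):

          /* doublefree.c - nothing here escapes the sandbox, but the
           * allocator's bookkeeping in linear memory is now corrupt,
           * and later allocations can hand out attacker-influenced
           * addresses without any trap. */
          #include <stdlib.h>

          int main(void) {
              char *p = malloc(32);
              free(p);
              free(p);              /* classic double-free: undefined
                                       behaviour, silently tolerated  */
              char *q = malloc(32); /* may now alias freed metadata   */
              (void)q;
              return 0;
          }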

  2. MacroRodent

    Stack smashing

    Something does not compute. Since WebAssembly has a separate control flow stack (a stack of return addresses), smashing the data stack should not allow the attacker to redirect the program by rewriting the return address of a function. Other bad things can happen, but not this. Or am I missing something?
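
    For what it's worth, here's roughly how a C frame splits up when compiled to WASM (a sketch; what stays in managed locals depends on the compiler and optimisation level):

        /* Return addresses live on the engine's internal call stack,
         * out of reach of any store to linear memory. Scalars like
         * 'count' typically become managed WASM locals. Only 'buf'
         * must be spilled to the shadow stack in linear memory,
         * because its address is taken - and that spill area is what
         * an overflow can reach. */
        #include <string.h>

        int frame(const char *src) {
            int count = 0;     /* usually a managed local: unreachable */
            char buf[8];       /* shadow stack in linear memory        */
            strcpy(buf, src);  /* can overrun other spilled data, but  */
                               /* never the return address             */
            count++;
            return count;
        }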

    1. Warm Braw

      Re: Stack smashing

      Or am I missing something?

      Don't think you are. The paper acknowledges this, but says:

      it is still possible for a vulnerable executable to see its control-flow redirected to call it with untrusted data

      Which, I think, means you could potentially alter the data in memory so that branching and looping conditions were changed and unexpected data was passed to sandboxed functions. Which is hardly surprising if you're starting with a fundamentally unsafe language like C.
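
      Something like this, presumably (a sketch; the layout is illustrative, but it's the sort of thing the paper describes):

          /* The overflow can't touch return addresses, but it can
           * rewrite neighbouring data that later decides a branch or
           * gets handed to an imported (sandboxed) host function. */
          #include <string.h>
          #include <stdio.h>

          void handle(const char *user_input) {
              char filename[32] = "public/readme.txt";
              char comment[64];
              strcpy(comment, user_input);    /* can spill into filename */
              FILE *f = fopen(filename, "r"); /* now an attacker-chosen  */
              if (f) fclose(f);               /* path                    */
          }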

      There's also an implication that it's somehow "better" if the program crashes because an out-of-bounds memory write trashes the stack than if the program continues but gives the wrong result because only its data was compromised. My view would be that programs that run but produce the wrong result don't really fall under the "security" heading.

      1. thosrtanner

        Re: Stack smashing

        That rather depends on what the wrong result is and how it is used. Sending a payment to the wrong bank account is definitely some sort of issue.

        And if the webassembly program happened to read a password...

      2. MiguelC Silver badge

        Re: programs that run but produce the wrong result don't really fall under the "security" heading

        As an example, if the program that can be tampered with to give the wrong result is checking for access privileges, what's that if not a security problem?

        1. Warm Braw

          Re: programs that run but produce the wrong result don't really fall under the "security" heading

          Lots of programs produce the wrong result - there's an oft-quoted statistic that nearly 90% of spreadsheets contain errors and the results of those errors can be far-ranging.

          Whether that's a "security" issue depends on the context of the program's use.

          The paper appears to imply that a program that crashes in the face of a specific type of error is more "secure" than a program that doesn't. My view is that a bug that can reproducibly crash your machine is as much of a security issue as a bug that silently fails to check credentials correctly and that it's a category error to suggest they're inherently different things and therefore the "security" label shouldn't be applied to one and not the other.

          1. Ben Tasker

            Re: programs that run but produce the wrong result don't really fall under the "security" heading

            > My view is that a bug that can reproducibly crash your machine is as much of a security issue as a bug that silently fails to check credentials correctly and that it's a category error to suggest they're inherently different things and therefore the "security" label shouldn't be applied to one and not the other.

            TBH, it depends.

            If you have code that

            - accepts the overflow

            - acts on the untrusted data

            - crashes

            Then you're no better off - except that it's a little more detectable in certain circumstances (you'll log that it crashed, or a user will report it crashed)

            If you have code with a stack canary that

            - accepts the overflow

            - Tries to act on the untrusted data and crashes

            Then you're much better off than it being silently affected - although the underlying issue is there, you've made it harder to use it maliciously (you should, of course, still fix it).

            Finally, if your code

            - Accepts it

            - Acts on it

            - Carries on its merry way

            Then you've got the worst of both - you lack the detection vector provided by crashing out *and* have acted on the untrusted input.

            All are vulnerable, two are more easily exploitable, but only one can more easily happen again, and again, unnoticed.

            > that it's a category error to suggest they're inherently different things and therefore the "security" label shouldn't be applied to one and not the other.

            100% agree - the security label should apply to all of the above. It's a security vulnerability and needs to be addressed - the stack canary is a mitigation, not a fix.
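
            To see the difference between the cases natively, something like this works with gcc or clang (flag names as per their manuals):

              /* canary.c
               * Without the canary the overflow is accepted silently
               * (or crashes somewhere much later):
               *     cc -fno-stack-protector canary.c && ./a.out AAAAAAAAAAAAAAAA
               * With it, the program aborts the moment the frame is
               * smashed (glibc prints "*** stack smashing detected ***"):
               *     cc -fstack-protector-all canary.c && ./a.out AAAAAAAAAAAAAAAA
               */
              #include <string.h>

              static void copy(const char *src) {
                  char buf[8];
                  strcpy(buf, src);   /* the overflow under test */
              }

              int main(int argc, char **argv) {
                  if (argc > 1) copy(argv[1]);
                  return 0;
              }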

          2. Allan George Dyer
            Holmes

            Re: programs that run but produce the wrong result don't really fall under the "security" heading

            @Warm Braw - 'The paper appears to imply that a program that crashes in the face of a specific type of error is more "secure" than a program that doesn't.'

            Yes, it's a fail-safe, it fails to a safe condition, giving someone the chance to fix the problem.

            'My view is that a bug that can reproducibly crash your machine is as much of a security issue as a bug that silently fails to check credentials correctly'

            They are both problems that should be fixed, but silent failure doesn't result in an opportunity to find the problem.

            'therefore the "security" label shouldn't be applied to one and not the other'

            Think of it as an example of 'Defence in Depth' - we know we don't have perfect programmers and testers (do developers still test code, or is that left to users nowadays?), so this will stop the program doing Bad Things™ when a common type of programming error is exploited.

        2. Anonymous Coward
          Anonymous Coward

          Re: programs that run but produce the wrong result don't really fall under the "security" heading

          Anyone who allows browser/client execution environment code to perform an application's security checks obviously hasn't taken their "introduction to application security for dummies" book home from the library yet. :)

          You NEVER let client code make security decisions, only decisions about how to *present* information about the state of the secure environment (e.g. enabling/disabling action buttons and menus.)

          1. Blazde Silver badge

            Re: programs that run but produce the wrong result don't really fall under the "security" heading

            From the point of view of server security, yes, but the client may have its own security goals in opposition to the server ('no sensitive information is uploaded to our servers' type guarantees), and will usually share a joint goal with the server of securing the process against third parties between them, or in other applications adjacent to the client.

            "(e.g. enabling/disabling action buttons and menus.)"

            A classic example of something a client can be responsible for the integrity of, with security implications when it fails.

      3. Will Godfrey Silver badge

        Re: Stack smashing

        A company I worked for about 30 years ago produced railway information systems. We were specifically instructed that any error must shut the system down.

        No information is a warning.

        Wrong information could be fatal.

      4. teknopaul

        Re: Stack smashing

        Tempted to agree with this not being a new "security" issue. If every subtly different behaviour on different platforms that doesn't crash, bang, or allow remote exec is considered a security issue, then a whole lot of common bugs become security issues and this muddies the waters. You don't get the same bounty for ordinary bugs, so people are overly keen to find a security angle to potential bugs.

        Almost any type of bug is a security issue in some code but I don't see it helps to categorise them that way.

        Basically, take note: certain types of bug don't crash on wasm, they carry on - so build your code conscious of that. Depending on what you are doing, this wasm feature is likely not a security issue. It's certainly not something you didn't already have to worry about.

  3. Mike 137 Silver badge

    "Thus, common mitigations [...] are not needed by WebAssembly programs"

    Effective defence is necessarily defence in depth (provision of multiple protections to cover a range of alternative attack vectors, with redundancy so that a fragility in one provision is covered by a robust provision protecting against the same threat).

    The assumption that protections can be pared back because others can be relied on is (to use a technical term) barmy. It's comparable to saying that because a troop carrier is fast and therefore harder to hit, it need not be armoured. That only works (if at all) until the enemy invents a better tracking missile, and you'll probably find out they have only when the first one scores a bullseye.

    1. DS999 Silver badge

      Re: "Thus, common mitigations [...] are not needed by WebAssembly programs"

      We've also heard that some new development "eliminates" a class of attacks, only for attackers to find ways around the new defense that its creators did not anticipate.

      Look at all the promise heaped upon stuff like ASLR, no execute permissions on the stack, and on and on that were supposed to make a material difference but just ended up raising an easily avoided speedbump in the path of attackers.

      So I'm skeptical when I hear the promises of WebAssembly. Anytime you are downloading something from the internet to run on a client device, there are associated risks. Trying to increase performance by using native code is not a worthwhile tradeoff as far as I'm concerned. I'd rather have slow and secure (or, if not secure, at least the "known knowns" of something that's been around forever, like Javascript) than fast and risky - especially when the risk is not well quantified, as WebAssembly isn't widespread enough for attackers to have paid much attention to it so far.

      1. -v(o.o)v-

        Re: "Thus, common mitigations [...] are not needed by WebAssembly programs"

        I wonder what's next now that ROP may be destroyed by CET (Intel's Control-flow Enforcement Technology).

  4. DrXym

    The webassembly code is not the place for this

    The runtime should NEVER trust the webassembly to be correct. It doesn't matter what safeguards or checks it might claim to have, or what compiler was used. The reality is that a bad actor could just produce a wasm module which doesn't contain any checks, or whose checks are malicious.

    Instead, the runtime should be inserting its own canaries into the code when it executes it, just in time or otherwise - boundary checks and suchlike - so that regardless of what the code does or doesn't do, it can't break out of its sandbox.
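
    The shape of check I mean, sketched in C (purely illustrative - real engines lean on guard pages and similar tricks so the fast path costs almost nothing):

        /* What a JIT conceptually emits around every wasm i32.load: */
        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        uint32_t load_i32(const uint8_t *mem, uint64_t mem_size,
                          uint32_t addr) {
            if ((uint64_t)addr + sizeof(uint32_t) > mem_size)
                abort();                      /* trap: out of bounds */
            uint32_t v;
            memcpy(&v, mem + addr, sizeof v); /* the actual load     */
            return v;
        }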

  5. _andrew
    Pirate

    WebAssembly execution involves a compilation step...

    so anything that you can add by way of stack canaries in a C compiler you can also add in the WASM back-end. Just another compiler. And anyway: C compilers don't emit "stack canaries" by default, and certainly haven't for most of the history of C compilers. That's new security theatre.
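
    For illustration, the instrumentation a back-end inserts amounts to this, written out by hand (a sketch: a real compiler uses a random per-instance value and places the guard right next to the spilled buffer, which portable C can't guarantee):

        #include <stdlib.h>
        #include <string.h>

        static unsigned long __guard_value = 0x5E0C0DEUL; /* illustrative */

        void copy(const char *src) {
            unsigned long guard = __guard_value;  /* prologue: plant it */
            char buf[8];
            strcpy(buf, src);                     /* the risky copy     */
            if (guard != __guard_value)           /* epilogue: verify   */
                abort();                          /* smashed: fail fast */
        }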

    Buffer overflow is these days strictly a C or C++ problem (not that that makes it a small one!). Other languages, such as the Rust mentioned, trap buffer overruns in the language definition, so as long as the compiler produces a correct translation of the program, the problem is dealt with at a different level.
