Rust can help make software secure – but it's no cure-all

Memory-safety flaws represent the majority of high-severity problems for Google and Microsoft, but they're not necessarily associated with the majority of vulnerabilities that actually get exploited. So while coding with Rust can help reduce memory safety vulnerabilities, it won't fix everything. Security biz Horizon3.ai has …

  1. Lee D Silver badge

    Rust is only useful where all your Rust code resides in a "safe" block. Literally all.

    Because one "unsafe" block can break the guarantees of safe code around it.

    And guess what you need if you want to convert arbitrary memory into a data structure - like you would for pretty much every driver, hardware interface, and all kinds of lower-level activities? An unsafe block.

    Yes, now you're back to square one running unsafe code making assumptions about arbitrary memory contents to try to turn them into data structures, function calls, etc.

    Great for basic applications. Not so great for anything that deals with OS or hardware.
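
    A minimal sketch of the kind of cast being described, with a made-up register layout (the names are hypothetical, not taken from any real driver):

    #[repr(C)]
    struct StatusRegister {
        flags: u32,
        count: u32,
    }

    /// Safety: the caller must guarantee `addr` points at a mapped, properly aligned
    /// region laid out exactly like `StatusRegister` - none of which the compiler can check.
    unsafe fn read_status(addr: *const u8) -> StatusRegister {
        std::ptr::read_volatile(addr as *const StatusRegister)
    }

    fn main() {
        // Simulate a mapped device region with an ordinary, correctly aligned buffer so the
        // sketch is runnable; real driver code would get `addr` from the OS or an MMIO map.
        let fake_device: [u32; 2] = [0b1010, 42];
        let reg = unsafe { read_status(fake_device.as_ptr() as *const u8) };
        println!("flags={:#b} count={}", reg.flags, reg.count);
    }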

    1. Blazde Silver badge

      You're not back to square one. You've profoundly reduced the footprint of dangerous code that needs reviewing, and if you're anything like the average Rust coder you'll work zealously to reduce or eliminate unsafe blocks and refactor them into the simplest, most readable, most likely to be correct code that's ever been written.

      (I tried to find a good example but the first 3 driver code bases I checked had no unsafe code. Two of them even had #![forbid(unsafe_code)] at the top. The 4th I checked had four unsafe one-liners. Two were straightforwardly unnecessary. The other two indeed involved a risky pointer re-cast which I couldn't immediately judge the safety or necessity of because it involved device specific knowledge, but I know if I wanted to it'd be several orders of magnitude easier than checking an equivalent C driver for memory-management related vulnerabilities).
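
      (For reference, a minimal sketch of what those two crates declare, assuming nothing beyond the standard attribute:)

      // Crate-level attribute: any `unsafe` block anywhere in the crate becomes a hard
      // compile error rather than a warning.
      #![forbid(unsafe_code)]

      fn main() {
          // unsafe { std::ptr::null::<u8>().read(); } // uncommenting this fails to compile
          println!("this crate is statically guaranteed to contain no unsafe blocks");
      }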

    2. karlkarl Silver badge

      > Because one "unsafe" block can break the guarantees of safe code around it.

      Indeed. The unsafe block isolates the "entry" and "exit" points where, for instance, memory bugs can manifest themselves. However, the bug is likely not at these points but in the wider software design.

      This is further exacerbated by the fact that almost all Rust software relies on one or more *-sys crates linking it to the underlying OS (almost exclusively a C API). So there are many of these unsafe points that people don't even know about; how can they know to be careful?
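
      A minimal sketch of what a *-sys style binding does under the hood, using libc's strlen as a stand-in for "the underlying OS C API" (the wrapper name is made up):

      use std::ffi::CString;
      use std::os::raw::c_char;

      extern "C" {
          fn strlen(s: *const c_char) -> usize; // the raw C declaration, as a -sys crate would ship it
      }

      fn c_string_len(s: &str) -> usize {
          let c = CString::new(s).expect("no interior NUL bytes");
          // The hidden unsafe point: callers of c_string_len never see it, yet the whole
          // safety argument (valid, NUL-terminated pointer) lives on this one line.
          unsafe { strlen(c.as_ptr()) }
      }

      fn main() {
          println!("{}", c_string_len("hello")); // 5
      }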

      Rust is close to safe. Closer than Ada in my experience, especially where C boundaries are concerned. However, in more use cases than we like to admit, sticking with homogeneous C (or even C++ with its direct interop with C) is more appropriate.

      1. jaypyahoo

        Actually, the Ada subset SPARK is a more secure way to code. Ada doesn't compete directly with Rust.

    3. fg_swe Silver badge

      FALSE

      If 99 out of 100 lines of code are memory safe, you gain massively compared with 100 percent unsafe code: a gain in security, and a well-defined halt as opposed to undetected memory cancer.

      I know this from my own language and projects.

      E.g. the memory safe web server Gauss.

      http://gauss.ddnss.de

      1. karlkarl Silver badge

        Re: FALSE

        Imagine you have 2 unsafe sections:

        1) One that allocates some data and passes it out as a wrapped object.

        2) Another that causes the data to dangle.

        You are passing that wrapped object around potentially the entire codebase. I would now say that 100% of the Rust codebase is compromised. The issue is the order / state of calls that ultimately called that 2) section. This is likely why Rust is finding it hard to establish a common GUI library; those things are very risky with regards to dangling data when wrapping and making bindings.
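
        One possible shape of that scenario, as a sketch (the types are made up): section 1) hands out a wrapper that secretly contains a raw pointer, and the backing data is freed on the way out.

        struct Wrapped {
            ptr: *const u8,
        }

        fn section_one() -> Wrapped {
            let buf = vec![1u8, 2, 3];
            Wrapped { ptr: buf.as_ptr() }
            // section 2): `buf` is dropped here, so `ptr` now dangles - and the compiler can't
            // see it, because creating a raw pointer is safe; only dereferencing needs `unsafe`.
        }

        fn main() {
            let w = section_one();
            // Any later read through the wrapper is a use-after-free, however far from
            // sections 1) and 2) it happens - the "whole codebase is compromised" worry:
            // let first = unsafe { *w.ptr }; // undefined behaviour
            let _ = w.ptr;
        }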

        Yes, this is better than an entire codebase of C that could also be compromised but that loss of direct interop with C may not be worth the difference. Or at the very least, C++ is a pretty strong compromise between the two.

        1. Blazde Silver badge

          Re: FALSE

          > The issue is the order / state of calls that ultimately called that 2) section.

          No. The issue is that you're dropping data without being sure there aren't continuing references to it (if I understand the example). So the issue is in section 2 - your unsafe code is bad. The fact that there *might* not be continuing references, if all the rest of your code colludes with section 2, is irrelevant. The big clue that the problem is in section 2 and not somewhere else is that you've wrapped it in an unsafe block. That's what Rust does for you: it helps you understand the root cause of the problem.

          The solution is to find a memory-safe way to do it (e.g. wrap the dropped data in an outer reference which remains valid and make sure the real reference only exists once, or replace it with dummy data which remains valid, or make it undroppable as long as your program runs). It is true that you might incur a small overhead to fix the problem, but it's nothing you shouldn't be doing in any language if you want bug-free, maintainable code.
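
          One memory-safe way to do it, sketched with made-up names: let an owning, reference-counted handle travel inside the wrapper instead of a raw pointer, so nothing can dangle and no unsafe block is needed at all.

          use std::rc::Rc;

          struct Wrapped {
              buf: Rc<Vec<u8>>,
          }

          fn section_one() -> Wrapped {
              Wrapped { buf: Rc::new(vec![1u8, 2, 3]) } // ownership is tracked; nothing to dangle
          }

          fn main() {
              let w = section_one();
              println!("{}", w.buf[0]); // plain safe indexing; the data lives as long as anyone uses it
          }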

        2. fg_swe Silver badge

          Re: FALSE

          It is all about likelihood. If your inline_cpp[[]] sections are small, few and well-reviewed by senior C++ developers, chances are high that you preserve memory safety.

          If, on the other hand, you link a huge C++ program to your memory safe code, chances are that you will have the dreaded memory cancer. As with all non trivial C++ programs.

  2. Mike 137 Silver badge

    "Security is a process, not a product. Nor a language"

    More than any of these, it's an outcome of a mindset. In addition to tech-specific factual knowledge, it requires solid understanding of first principles, attention to detail, the ability to synthesise possible outcomes from what's observed, and most of all an ethical commitment to taking responsibility for the results. No tools, languages, dev systems or other tech can substitute for these human capacities, which are in reality elements of character. But, unfortunately, exercising them during development and testing slows it down and greatly increases the cost of production.

    These capacities also have to be learnt, and they're not being taught. Just for example, a recent report by the Social Market Foundation entitled 'Character building: Why character is essential for career readiness' mentions '• Self-belief • Determination • Self-control • Coping skills' but makes no reference either to cultivating enquiring minds or to commitment. And IMHO self-belief should not top the list as an objective in itself, but be an outcome of verified capacities. There's far too much unfounded self-belief in the IT sector, witness the deluge of project failures and the parlous state of infosec.

    1. Charlie Clark Silver badge

      Re: "Security is a process, not a product. Nor a language"

      Wanted to make a similar point. As other areas of engineering have shown, security and safety are cultures, and they require a no-blame environment so that when problems occur, and they always will, minds are focussed on solutions and not on blame.

    2. AndrueC Silver badge
      Boffin

      Re: "Security is a process, not a product. Nor a language"

      A degree of paranoia helps. I spent fifteen years in data recovery and developing software for the same. You eventually learn where to put your gatekeeping logic and to be infinitely suspicious of any data structure that you have obtained from external code or external data. You might think that you've just read a directory record but it could actually be any old crap and lead you on a not-so-merry dance ending with a crash. I wrote applications that processed hundreds of gigabytes of partially corrupt data and needed to run for a couple of days (this was in the 90s). One thing you don't need when there's a customer waiting to get their data back is to arrive in the office at 9am and discover that your data extraction failed at 11pm the previous day.
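
      A minimal sketch of that gatekeeping idea, with a made-up record layout: treat the bytes as hostile until every field checks out, and refuse them otherwise.

      struct DirRecord {
          name: String,
      }

      fn parse_record(raw: &[u8]) -> Option<DirRecord> {
          let name_len = *raw.first()? as usize;
          if name_len == 0 || raw.len() < 1 + name_len {
              return None; // "any old crap": a length field that doesn't match the data present
          }
          let name = std::str::from_utf8(&raw[1..1 + name_len]).ok()?.to_string();
          Some(DirRecord { name })
      }

      fn main() {
          assert!(parse_record(&[0xFF, b'a']).is_none());          // claims 255 bytes, has 1
          let rec = parse_record(&[3, b'f', b'o', b'o']).unwrap(); // well-formed record
          println!("{}", rec.name);                                // foo
      }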

      It meant that I missed the other half of the equation, which is passing your data safely and securely to external code, but it's a good starting point.

      1. Paul Crawford Silver badge

        Re: "Security is a process, not a product. Nor a language"

        It is not paranoia - somebody is out to get you.

        It might be the $BADGUYS in t'Internet, or it might be Smithers in accounts who carelessly copies and pastes, or it might be one of your dev team who didn't pay enough attention to $SYSTEMCALL, but they are out there, so you need to be building defences in from the very start...

        1. Anonymous Coward
          Anonymous Coward

          Re: "Security is a process, not a product. Nor a language"

          Ugh. I've talked to far too many "developers" where you say "what if someone does a Little Bobby Tables here?" and they go "but why would anyone do that?"

        2. fg_swe Silver badge

          ₽badguy

          They enjoy the protection of their own government, because ours is at odds with theirs.

          Very dangerous.

    3. fg_swe Silver badge

      Deflection

      Memory safety is another arrow in your security quiver. Similar to ABS in cars and airplanes.

      It adds to your hopefully great software engineering and security processes. Please do not belittle it.

  3. halfstackdev

    A few random points

    - It has already been demonstrated that safe Rust is still vulnerable to speculative-execution exploits and buffer over-read attacks

    - Rust sits on top of LLVM, which has thousands of outstanding issues, and a slow response time to getting them fixed

    - Code that the Rust compiler can prove to be safe is only a subset of code that is safe. You can write well-designed, safe code that the borrow checker cannot verify; getting it through the compiler means breaking a good design and making compromises (a sketch follows this list). A lot of people consider this a good thing … a different type of person considers it a bad thing. YMMV

    - If you have ever spent more than a week working triage on large systems, you will already know where most of the bugs come from. It's usually the code that is hard to read and hard to review. Complex, clever, overly verbose abstractions and data structures have a unique "unsafety" all of their own

    - If your product is a static native binary that your users download and run, then yes, memory safety is obviously important. If your product is a hosted service that you maintain, what is important is the turnaround time for finding + fixing + releasing the bug update. A lot of the time, it is not even the original author who has to do this. The turnaround time is generally a function of how fked up and abstracted the code is.
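
    A minimal sketch of the third point above - safe-by-design code the borrow checker rejects, and the reshaping it forces:

    fn main() {
        let mut v = vec![1, 2, 3];

        // Perfectly safe in design - the two elements don't overlap - but rejected,
        // because the checker sees two simultaneous mutable borrows of `v`:
        // let a = &mut v[0];
        // let b = &mut v[2];
        // std::mem::swap(a, b);

        // The code has to be reshaped into a form the checker can prove:
        let (left, right) = v.split_at_mut(1);
        std::mem::swap(&mut left[0], &mut right[1]);
        println!("{v:?}"); // [3, 2, 1]
    }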

    1. Bebu
      Windows

      Curious

      "Code that the Rust compiler can prove to be safe is only a subset of code that is safe."

      Is this necessarily so? I.e. is it that no finite computation can always determine the memory safety of an arbitrary chunk of code? I don't imagine it's equivalent to the halting problem, but I could imagine that for any given memory-safety checking algorithm an input might be constructed whose memory safety cannot be determined by that algorithm.

      I have to wonder whether static memory safety checking could be hybridized with run time automatic memory management (garbage collection) to the benefit of both?

      1. fg_swe Silver badge

        Re: Curious

        I recall reading that some JVMs do use Stack Allocation of memory that is requested with the "new" operator when this is safe.

        The problem of GC is its inefficiency. GC does waste memory for very basic reasons and it interrupts the program at unpredictable points in time.

        That is why Rust and Sappeur use reference counting (ARC). With ARC, the developer can finely control the deallocation of heap objects. The price is that ARC cannot collect cycles of garbage; the developer must first break the cycles with explicit code.
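
        A minimal sketch of breaking such a cycle by hand: the child's back-pointer is a Weak reference, so parent and child can still be freed (the Node type is made up).

        use std::cell::RefCell;
        use std::rc::{Rc, Weak};

        struct Node {
            parent: RefCell<Weak<Node>>,
            children: RefCell<Vec<Rc<Node>>>,
        }

        fn main() {
            let parent = Rc::new(Node {
                parent: RefCell::new(Weak::new()),
                children: RefCell::new(Vec::new()),
            });
            let child = Rc::new(Node {
                parent: RefCell::new(Rc::downgrade(&parent)), // Weak: does not keep `parent` alive
                children: RefCell::new(Vec::new()),
            });
            parent.children.borrow_mut().push(Rc::clone(&child));

            println!("child has parent: {}", child.parent.borrow().upgrade().is_some());
            // When both handles go out of scope everything is freed; had the back-pointer
            // been a strong Rc, the cycle would have kept the pair alive forever.
        }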

      2. Daniel Pfeiffer

        Re: Curious

        You are right. There are scenarios where the static safety gets in the way. For this Rust has RefCell: a mutable memory location with dynamically checked borrow rules. Same safety, but guaranteed at run time. If you must share it concurrently, there's RwLock, which implements the borrow checker's one-writer-or-many-readers rule across task or thread boundaries. And still no erratic Garbage Confounder needed.
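
        A minimal sketch of both escape hatches mentioned above:

        use std::cell::RefCell;
        use std::sync::RwLock;

        fn main() {
            // RefCell: borrow rules checked at run time instead of compile time.
            let cell = RefCell::new(5);
            *cell.borrow_mut() += 1;       // exclusive borrow, released at the end of the statement
            println!("{}", cell.borrow()); // shared borrow: prints 6
            // Taking borrow_mut() while a borrow() is still live panics instead of corrupting memory.

            // RwLock: the same one-writer-or-many-readers rule, enforced across threads.
            let lock = RwLock::new(5);
            *lock.write().unwrap() += 1;
            println!("{}", lock.read().unwrap()); // 6
        }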

      3. SCP

        Re: Curious

        > Is this necessarily so? I.e. is it that no finite computation can always determine the memory safety of an arbitrary chunk of code? I don't imagine it's equivalent to the halting problem

        To me this looks pretty much identical to the Halting Problem. Adapting the classic simple proof ...

        Assume we have a function MemSafe(Code) that returns true/false depending on whether the supplied Code (for this we can take it to be the text of the code) is memory safe or not.

        We now create the following code, where Code is the text of this very program (assuming the programming language we are using allows us to do something that is not memory safe):

        if MemSafe(Code) then

        <<Do something that is not memory safe>>

        end if;

        If we now apply MemSafe() to this code does it return true or false? It is a classic self-reference paradox (remembering that 'arbitrary' is a rather broad category). The natural conclusion is that we cannot create the function MemSafe() that will work for any arbitrary code.

        Of course if the programming language does not allow violation of memory safety the whole question is rendered moot and the function can simply be "return true;"

        Anyhow, whilst this is an interesting point in Computer Science it really only serves to show that you cannot write arbitrary code and hope to always be able to prove it is safe.

        Those with an interest in proving memory safety should not be writing arbitrary code. They should be following design and coding practices that make proving memory safety straight-forward - in the same way that those concerned with proving the real-time behaviour of their code should not be adopting design and coding practices that make establishing worst case execution time difficult (which would certainly preclude code where it is not possible to determine if it halts).

        This all leads back to the inconvenient truth that safety and security (and verifying it) needs to be considered as a fundamental aspect of design and implementation.

  4. elsergiovolador Silver badge

    Classic

    There is a legacy system that has a pile of tech debt and unresolved issues, but it is stable and works 99.999% of the time.

    New young and determined manager comes in. Looks at issues.

    "We need to rewrite it in Rust. It will solve all our problems and we will be able to hire cheaper developers because Rust doesn't allow to make any mistakes that will impact security".

    Fast forward two years.

    The legacy system is still in use; however, it has not been maintained because all resources were committed to the rewrite.

    Cheaper developers have no clue how to code something slightly more complex than CRUD.

    Best since sliced bread new system doesn't work.

    Manager who came up with the idea collected the bonus for cost cut and is long gone.

    1. Adrian 4

      Re: Classic

      "We need to rewrite it in Rust. It will solve all our problems and we will be able to hire cheaper developers because Rust doesn't allow to make any mistakes that will impact security".

      Well said.

      I have no doubt that the writers of Rust have the best of intentions. But they probably write good C code too and don't make rookie errors already. The worry is the fanbois manager who thinks it will solve all those memory allocation bugs he's suffered from his inexperienced C programmers.

      And again, while the Rust devotees are likely careful souls who'll think about all the other issues too, someone pushed into using it because it will make them more reliable doesn't have the same attitude - they'll use whatever hack is needed to get their code to compile.

      Clue : inexperienced programmers will make mistakes in any language.

      1. fg_swe Silver badge

        FALSE

        What you describe as a "rookie error" exists in ANY large C program, from VxWorks to Windows to all the Unix variants. The first time the Unix tools such as sed, grep and uniq were run under valgrind, quite a few memory errors were detected. If your claim were true, Unix would have been written by rookies and then never touched for decades. All of which is untrue.

        Likewise, the VxWorks TCP/IP stack had several remotely exploitable memory bugs. Windows and Linux kernel memory bugs existed in the hundreds and probably some of them are still undiscovered. OpenSSL's horrible memory management practices are legion.

        In other words: your description is whitewashing the situation. It is the old "if only developers were perfect" fallacy. They are not. They have bad days. They must deliver to a deadline. They have a cold. They had one beer too many the evening before they created a bug.

  5. Rich 2 Silver badge

    "Rust will stop you using data after it's been freed"

    [see title]

    There are a couple of aspects to this.

    Firstly, who makes memory access errors like this in their code? I use C and C++ every day (mostly C, which seems to get slammed at every opportunity for being "unsafe" and the work of the devil) and I literally cannot remember the last time I created a memory access fault of the types Rust protects against. If you are writing these sorts of bugs then, quite frankly, you are either inexperienced (no fault of yours, but you need to get up to speed, and quickly) or you are incompetent (in which case you need to go back to school or find another line of work), or there is a serious failure in the program structure (likely, poor division of responsibility/ownership), in which case the program architect needs to go back to school.

    It's similar to the argument about 'Garbage Collection' and the fact that C++ doesn't have this "feature". The thing is, GC **ONLY** cleans up dead memory. Nothing else. It doesn't clean up dead file handles, dead socket handles, dead driver handles, etc etc etc. In contrast, C++'s destructor model can handle ALL these things. Yes, C++ is horribly HORRIBLY complicated, but in this aspect it got it right. GC works. But only for very limited cases.
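
    For comparison, the same deterministic-cleanup idea in Rust's Drop trait, as a minimal sketch (the TempFile type is made up): it runs synchronously when the value goes out of scope, so it can release files, sockets or locks, not just memory.

    use std::path::PathBuf;

    struct TempFile {
        path: PathBuf,
    }

    impl Drop for TempFile {
        fn drop(&mut self) {
            // Runs at a predictable point, like a C++ destructor - no GC, no finalizer.
            let _ = std::fs::remove_file(&self.path);
        }
    }

    fn main() -> std::io::Result<()> {
        let tmp = TempFile { path: PathBuf::from("scratch.txt") };
        std::fs::write(&tmp.path, b"work in progress")?;
        // ... use the file ...
        Ok(())
    } // `tmp` is dropped here: the file is removed whether we return normally or via `?`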

    1. AndrueC Silver badge
      Boffin

      Re: "Rust will stop you using data after it's been freed"

      You can fix a lot of memory issues in C++ by using RAII and keeping as much as possible on the stack or in fields. Make good use of const & and implement private copy ctors by default.

      And FFS use proven libraries eg the STL and friends.

      The thing I found with C++ is that if you're lazy it's easy to screw up. But if you make the effort and invest it in libraries (yours or proven external ones) it can be safe. Its downside is that a lot of developers lack the skills or management won't give them the time to do the job properly. In that respect languages like C#, Java or Rust win out.

      It's all well and good to say that C++ can be safe in the hands of a competent programmer but unfortunately the world doesn't have enough of those. Quite frankly it doesn't have enough 'vaguely competent' programmers. For all our sakes we need to use languages that support and assist mediocrity in the people using them.

      1. fg_swe Silver badge

        Re: "Rust will stop you using data after it's been freed"

        According to Sir Tony Hoare there are plenty of scientific FORTRAN programs with memory index errors, too.

        The same was detected in lots of "tried and tested" Unix tools when first run under valgrind.

        To err is human, who would have thought ?

        1. Anonymous Coward
          Anonymous Coward

          Re: "Rust will stop you using data after it's been freed"

          Sir Tony Hoare invented (in the Elliott dialect of Algol) pointers as a general-purpose high-level programming construct - i.e. the idea of turning an object reference into a plain integer, that you could do arithmetic on, and then turning that back into an object reference. This then got adopted into BCPL, and then C. It's all his fault. People have had their knighthoods stripped for less.

        2. technovelist

          Re: "Rust will stop you using data after it's been freed"

          "A real programmer can write FORTRAN in any language" -- anonymous

    2. fg_swe Silver badge

      RAII

      So you are complaining about the lack of destructors and RAII in C# and Java.

      See my memory safe C++ variant Sappeur for both.

      1. This post has been deleted by its author

        1. fg_swe Silver badge

          Re: RAII

          You could say that almost all types of resource handling are easier if you have synchronous destructors, as C++ and Sappeur have. Destructors are a gem!

          In Java, Go and C# the developer needs to create loads of extra code to handle that.

          1. Daniel Pfeiffer

            Re: RAII

            Well, Java owned up to f*cking up when they stripped destructors. They brought them back, but in the ugliest possible way: a class has to implement AutoCloseable, and even then you only get them if you use the weird try-with-resources syntax.

    3. brand_x

      Re: "Rust will stop you using data after it's been freed"

      The one thing I'd like to point out here. For background, I'm a C++ veteran. C as well, but I switched to C++ as my primary in the mid 1990s. I led moderately large teams of C++ devs - between 15 and 120 - from the mid 2000s through the mid 2010s. In that time, I authored and architected everything from concurrent executors to Unicode libraries. I made the jump to Rust in 2018.

      Rust protects against a class of errors most articles like this entirely miss: concurrent-access memory errors. It also dramatically reduces the criticality of the damage done by novices. The most common novice error is excessive or even logically incorrect cloning - making deep copies rather than correctly solving reference management. Contrast this to the common novice mistakes in Java, Go, C, or C++, and you're looking at a much better outcome.
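
      A minimal sketch of the concurrent-access protection being described: sharing a counter across threads simply does not compile without something like Arc + Mutex, so the data race is ruled out before the program ever runs.

      use std::sync::{Arc, Mutex};
      use std::thread;

      fn main() {
          let counter = Arc::new(Mutex::new(0u32));
          let handles: Vec<_> = (0..4)
              .map(|_| {
                  let counter = Arc::clone(&counter);
                  thread::spawn(move || {
                      *counter.lock().unwrap() += 1; // exclusive access enforced by the type system
                  })
              })
              .collect();
          for handle in handles {
              handle.join().unwrap();
          }
          println!("{}", *counter.lock().unwrap()); // always 4, never a torn or lost update
      }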

      The premise is correct, security is a process and a state of mind. But the worst vulnerability I've ever seen was introduced by a thirty-year veteran C programmer with tremendous self-assurance, and by all rights the chops to back it up. Except that he hadn't been using threads all that long, gcc had some very arrogant and opinionated maintainers at the time who insisted on ignoring memory barriers in their optimizations, and a critical bounds check wasn't executed when and in the order he thought it would be. How do you reintroduce inspection of asm at critical points of the code? Especially as we move to containerized VMs.

      1. fg_swe Silver badge

        Re: "Rust will stop you using data after it's been freed"

        Rust and Sappeur are the first languages really dealing with the idea of "memory shared between threads in a robust manner". C and C++ came from the world of single-threaded execution and have still not addressed the problem of "accidental sharing between threads".

  6. Bogusz

    I don't care about security in Rust so much; it is speed that is the reason I use it. API & middleware. I am quite surprised how many people are lazy and still use Java/C#.

  7. coconuthead

    rewriting it in Rust instead of something safer

    One thing I notice about many of the “rewrite in Rust” exercises is that Rust is still too low a level language for the problem.

    It seems to be repeating the industry-wide mistake of 30 years ago of rewriting things in C. A compiler or text processing tool in C? Why on earth? Back then, things like Modula-2 would have been better, but industry leaders turned up their noses at them because they were too easy to use. True to type, Rust code looks inscrutable, which makes its proponents feel good about themselves.

    Inscrutable code harbours bugs.

    1. fg_swe Silver badge

      "Something safer"

      I agree with your analysis that Rust syntax is somewhat cryptic.

      But I disagree that there are languages which are more memory safe than Rust. Java, Go and C# have no notion of multithread-safety, for example.
