back to article The empire of C++ strikes back with Safe C++ blueprint

After two years of being beaten with the memory-safety stick, the C++ community has published a proposal to help developers write less vulnerable code. The Safe C++ Extensions proposal aims to address the vulnerable programming language's Achilles' heel, the challenge of ensuring that code is free of memory safety bugs. "This …

  1. karlkarl Silver badge

    I look forward to what they come up with. I dislike bindings and excess dependencies, so Rust is not an option for me personally.

    Currently my compromise has always been a "safe" STL alternative that offers runtime safety checking (and overhead) in the debug builds. If I can get access to this stuff at compile time, I am very interested.

    1. matjaggard

      I don't quite get the connection between Rust and "bindings and excess dependencies". Surely all languages need bindings; do you mean cross-language bindings because you use some other language elsewhere? Similarly, why would Rust mean excess dependencies? I'd think crates that compile in and libraries that you link to are equivalent to things you'd use in C and other languages.

      1. Jon 37 Silver badge

        For C programmers, since all the world is C, gradually moving to Rust requires bindings to existing C code. Staying in the C world does not require those bindings.

        If you live in the Rust world, then you will have a different perspective, where all the world is Rust and you use native Rust libraries, including libraries that wrap the few C APIs that you use. You don't need to write bindings.

        Different perspectives, different worlds, both valid.

        1. Richard 12 Silver badge
          FAIL

          Rust doesn't do dynamic linking, at all.

          Which means it's unsafe every time it has to interface with anything - including another Rust module - as Rust must use a C binding to bind to Rust!

          C is the universal ABI, the standard specifies it all.

          C++ has a toolchain-specific ABI, as the standard only specifies the memory layout of a small set of primitives, not everything. E.g. a clang std::string can't be read by MSVC, but MSVC modules can dynamically link to other MSVC modules, clang to clang, etc.

          (Within certain version ranges)

          1. ssokolow

            There's unsafe and then there's "unsafe"

            The Rust ecosystem has a bunch of crates like abi_stable (Rust<->Rust), PyO3 (Rust<->Python), and so on which let you write high-level, safe-Rust APIs and have it generate the marshal-through-C-ABI code under the hood for you to provide a compile-time guarantee that everything matches up.

            Given prior art, it's part of the Rust philosophy that they're very conservative about putting stuff into the core language or standard library (and, thus, promising to include and maintain it in the core Rust toolchain download forever) when it can be implemented as a crate. (And, thanks to procedural macros, a lot can fall under that "prove there's that much demand for a specific design first" heading.)

            ...and, so far, it's served them well, with things like once_cell getting into the standard library instead of precursors with less comfortable APIs like lazy_static.

          2. FIA Silver badge

            C is the universal ABI, the standard specifies it all.

            So... how big is that char? ;)

            (also... which C ABI?)

            <grin>

            1. Richard 12 Silver badge

              Yeah, I did rather walk into that barn.

              It's mostly true on any given target platform, with a small number of exceptions.

              1. joeldillon

                So....just like your C++ example then? There are essentially two C++ ABIs, MSVC and the various Unixes (these days mostly meaning gcc+clang), all of which these days follow architecture-specific variations of what used to be called the Itanium ABI.

                Meanwhile Windows and the various Unixes can and do have different C function calling conventions on the same architecture (x86-64 for example) in exactly the same way.

          3. Roo
            Windows

            "C is the universal ABI, the standard specifies it all."

            Nah, that's FORTRAN, you can write FORTRAN in any language. :)

    2. bombastic bob Silver badge
      Devil

      I came up with this idea a whikle ago...

      to prevent use-after-free bugs, try this:

      #define FREE(X) { if(X) { free(X); (X) = NULL; } }

      then use 'FREE()' rather than 'free()'

      Similarly

      #define DELETE(X) { if(X) { delete (X); (X) = NULL; } }

      too easy, yeah. Also implied: always pre-assign pointers to NULL, or 'malloc()'/'new' immediately when declared, so no unassigned pointers ever get free'd

      [sometimes the easy solutions just stare you right in the face, and do not require a new, shiny lingo nor a new, shiny lingo spec, and can be implemented with 'sed']

      1. Blazde Silver badge

        Re: I came up with this idea a whikle ago...

        #include <stdlib.h>

        #define FREE(X) { if(X) { free(X); (X) = NULL; } }

        int main() {
            char* buffer = malloc(100);
            char* also_buffer = buffer;
            FREE(buffer);
            FREE(also_buffer);
        }

        free(): double free detected in tcache 2

        https://godbolt.org/z/187WqbGPb

        Nope, doesn't help.

        1. toejam++

          Re: I came up with this idea a whikle ago...

          One less than ideal solution would be a policy against freeing copies of a pointer. Have something in the variable name to indicate that it is a copy and smack any programmer that violates the policy when you grep for "free" and "also_" in your source files.

          Another solution I've seen is to use a structure with the heap pointer as the first member and a pointer to an "in use" flag as the second member. Since you're dealing with a pointer to your flag, copies don't get out of sync. You can even check the flag before writing to memory if you're especially paranoid.

        2. Michael Strorm Silver badge

          Re: I came up with this idea a whikle ago...

          > > bombastic bob: "sometimes the easy solutions just stare you right in the face, and do not require new,shiny lingo nor new,shiny lingo spec"

          > Blazde: [ Demolishes Bob's "solution" with a borderline-trivial counter-example ]

          As H.L. Mencken once said, "For every complex problem there is an answer that is clear, simple, and wrong."

          Personally, if I thought I'd found a simple solution to a problem that countless experts in the field of language design who'd devoted their time to it had apparently overlooked or dismissed, I'd be wondering what *I'd* failed to consider.

          (Doubly so if that "solution" consisted of an incredibly simple one-line macro. Especially given my gut instinct against anything trying to be overly clever and implement advanced language features via macros, which are still basically just jumped-up text replacement facilities. They have their place, but that sort of thing isn't it.)

      2. AndrueC Silver badge

        Re: I came up with this idea a whikle ago...

        You shouldn't be calling free() or malloc() in C++. Both should be deprecated. If you want to create a guard macro make it generate a compilation error.

        #define malloc Do_NOT_use_malloc

        Though I suspect this will break too much legacy/external badly written code.

        There is no reason to check for NULL when calling delete. I do however agree with having a macro to set a pointer to null after calling delete or delete[] on it. It saves typing and will reduce the chances of anything else hanging on to the pointer.

        But my main recommendation is to understand and embrace RAII. Even using new/delete should be a rare occurrence. Stick everything(*) on the stack.

        std::vector<std::byte> arrayOf100Bytes(100);

        No need for us to write clean-up code; that gets done when arrayOf100Bytes goes out of scope.

        class wibble
        {
            std::vector<std::byte> nestedArray = std::vector<std::byte>(100);
        };

        ...

        std::vector<wibble> lotsOfWibbles(100);

        That allocates 100 objects each of which has 100 bytes of storage. And as above it all gets cleaned up automatically when lotsOfWibbles goes out of scope.

        (*)In the above example the array itself is almost certainly on the heap but that is invisible to us. We can trust that std::vector will take care of its lifetime.

        1. Richard 12 Silver badge

          Re: I came up with this idea a whikle ago...

          std::unique_ptr, std::make_unique et al.

          Most toolchains already warn about raw pointers and suggest using one of the "smart" pointers instead.

      3. Tom66

        Re: I came up with this idea a whikle ago...

        OK, now what happens if that pointer is referenced in multiple places?

        Use-after-free is still easily possible in that case.

        (Smart pointers in C++ resolve some of these instances.)

      4. Paul Floyd

        Re: I came up with this idea a whikle ago...

        Jesus.

        Why can't C programmers do anything that doesn't involve I_WANT_TO_PUKE macros?

        1. martinusher Silver badge

          Re: I came up with this idea a whikle ago...

          >Why can't C programmers do anything that doesn't involve I_WANT_TO_PUKE macros?

          Macros get evaluated by the preprocessor so provide an opportunity to customize the language for a specific use circumstance.

          I wouldn't go as far as planting compilation errors if someone dared to use malloc/free (after all, what do you think is at the bottom of new/delete?) but there are situations where this might be warranted. Not everyone writes code for a large RAM based system that has virtual memory and an infinite stack; this just happens to be a characteristic of the machine you're typing on. (Forcing unsuitable architectures to conform to this model because that's the only one you grew up with and so know has caused an amazing amount of trouble!)

          1. AndrueC Silver badge
            Boffin

            Re: I came up with this idea a whikle ago...

            I wouldn't go as far as planting compilation errors if someone dared to use malloc/free (after all, what do you think is at the bottom of new/delete?)

            To be fair I wouldn't either because it might break code but if there was a compiler option to generate a warning on their use I'd switch it on and treat warnings as errors.

            But the fact new/delete might be using malloc/free underneath isn't relevant. They are implemented in the Runtime Library so aren't part of the compilation process for 99% of use cases. There's actually a reasonable chance that breaking malloc as I suggested would do exactly what I'm suggesting without causing any grief. Or at least no more grief than macros can cause anyway.

        2. Anonymous Coward
          Anonymous Coward

          Re: I came up with this idea a whikle ago...

          Yes.

          Don't write macros for trivial bits of code, because people debugging your code have to keep going and looking up what actual code the macro generates.

          It just obfuscates your actual code and makes it harder to read.

          Don't write macros for larger bits of code either, write a function instead, that's what they are for.

          A valid use of macros is, for example, if you need to generate a lot of repetitive const data at compile time.

          1. AndrueC Silver badge
            Meh

            Re: I came up with this idea a whikle ago...

            Don't write macros for trivial bits of code, because people debugging your code have to keep going and looking up what actual code the macro generates.

            That depends on your IDE and how many macros you have and how complex they are. Large macros are always a bad idea but VS at least is capable of displaying the macro definition in a pop-up when you hover the mouse over its identifier. And macros for things you're doing a lot(*) are fine. Everyone on your team should soon become familiar.

            (*)As my first post indicated: new/delete should not be used a lot in C/C++ source code anyway.

      5. herman Silver badge

        Re: I came up with this idea a whikle ago...

        There is a conservative garbage collector for C. I always use that and make a macro that redefines free to nothing. It is a simple way to solve many obscure bugs in old program code.

      6. bystander

        Re: I came up with this idea a whikle ago...

        "Use after free" errors happen not only through a reference to the beginning of the allocated memory; access through any derived reference causes the same issue. So replacing the free function with a macro that assigns NULL to one pointer would not prevent it.

  2. Paul Crawford Silver badge

    Sounds sensible if you can massively reduce the cost of a re-write.

    Even if a "safe" C++ only stops 80% of memory bugs: if memory bugs are, say, up to 70% of all key CVE issues (the others being logic flaws or hard-coded passwords, etc.), then the memory-bug share of the remaining CVEs drops to about 31% (0.7*0.2 versus 0.7*0.2 + 0.3), and your efforts are better spent dealing with other issues of dumbness or lack of testing/fuzzing/etc.

    Rust purists might not like it, but those with millions of lines of C/C++ have a better chance of stamping out bugs with a lot less code-translation effort.

    1. matjaggard

      I'm not sure I'd agree with "a lot" because the hardest thing for me as a C-style language developer to learn was the borrowing of data and variables in Rust. I think you'll basically have to rewrite all the C++ code anyway to make it provably safe.

      I'm happy if we have a new Safe C++ language to compete with Rust to keep them going in the right direction, but it's not going to be an order of magnitude easier to learn for a traditional C++ developer.

      1. Blazde Silver badge

        My feeling is those who struggle most to adapt to the 'borrow checking/ownership' style are probably those who most need to. It's about having clear chains of responsibilities for data. Without that it's too easy to tie yourself in knots.

        And here's the thing. A C++ codebase that's already well written may require only line-level changes to satisfy a C++ borrow checker (scanning the proposal, these might be extensive, since for example pointer use is essentially banned in a safe context, but they stay relatively simple). A codebase that doesn't already have good ownership self-restraint may require fairly big design-level changes, including to external APIs. A significant subset of C++ developers who are right now strongly opposed to Rust will probably also be strongly opposed to that level of rewrite, regarding it as unnecessary. And so it could go unused where it's most needed, and not have the '% memory bugs' impact hoped for.

        But a Safe C++ could significantly lower the bar to adoption of memory safety in new code. I wish them all the luck.

      2. O'Reg Inalsin

        The appeal of the new "safe" C++ would probably be that true memory safety is opt-in, not opt-out (Rust is opt-out), to keep existing code compatible. Which also makes checking that "safe" C++ is truly safely written very hard.

  3. sitta_europea Silver badge

    It seems obvious to me that if you can rewrite the C++ compiler to handle the memory safety issues, and then just recompile a few million projects with it as and when the opportunities present themselves, that *has* to be a better option than rewriting a few million projects in a completely different language - which will not only take millions of times as much effort, but probably also introduce at least as many issues as it fixes.

    What's not to like?

    1. MikeTheHill

      "..just recompile a few million projects".

      Excepting that's not how it will work. The rewritten C++ compiler will be much stricter and as a consequence it will barf up an extraordinary number of warnings and errors when compiling existing projects, because most of those projects will contain code which is verboten in this stricter world. For example any program which uses pointers or indexes into arrays may well fail to compile.

      More likely, if a safe C++ profile can be created, its target will be new code, such as code one might put into a kernel when wishing to move on from C to something safer.

      1. Dagg Silver badge

        uses pointers or indexes into arrays may well fail to compile

        At least the suspect code is detected. You can check that it doesn't cause problems and then, as in current C/C++, add a compiler directive to ignore the warning/error. The code is thereby marked, and the next time any work is carried out on it, it can be refactored.

        1. bombastic bob Silver badge
          Devil

          I'd rather have a lint-like utility to look for those things

          1. Mishak Silver badge

            Basic static analysis is only able to detect "trivial" memory safety issues.

            If you want to detect them using a tool at compile time you are going to need to perform static data flow analysis (SDA), and even that is not going to catch all of the remaining faults: even if the tool spends days examining the code, many faults are related to data-driven control flow and other undecidable conditions.

            SDA can be improved by subsetting the language and using contracts/annotations, but these are not commonly used.

            1. fg_swe Silver badge

              Real World Lint and PolySpace

              I have seen both tools used with great, beneficial effect.

              Even senior engineers' code contains bugs, which Lint can find. Because even senior folks are sometimes sick, have a sick child and a sleepless night, a fight with wifey, etc.

      2. yetanotheraoc Silver badge

        I'm doomed

        "any program which uses pointers or (sic) indexes into arrays may well fail to compile"

        "or" ? I'm hoping you meant "for" !

        Not counting machine language code for lab controllers, just about every program I've ever written uses indexes into arrays.

        1. ssokolow

          Re: I'm doomed

          The key detail is the word "may".

          It's not "Indexing into arrays is bad"... it's "Indexing into arrays is such a generic construct that it enables you to easily write access patterns that can't be mechanically verified".

          (Though, in Rust, aversion to array indexing is more about "Array indexing performs runtime bounds checks unless you use the unsafe-marked getter. Use iterators where possible.")

        2. the spectacularly refined chap Silver badge

          Re: I'm doomed

          Not counting machine language code for lab controllers, just about every program I've ever written uses indexes into arrays.

          That shows more a lack of breadth in your experience than anything else. Many of the functional languages get rid of both arrays and explicit pointers. You get lists as a basic type for aggregate data instead.

          It makes truly robust reasoning about code easier in that it instantly removes so many corner cases: a list is either empty or you can look at the first item. The precise semantics of arrays become more philosophical than anything else in cases where e.g. array[12] is initialised but array[4...11] are not.

          1. joeldillon

            Re: I'm doomed

            Sure, I'll just go out there and try and find me someone who'll pay me to work in Standard ML. 'I've never had a job working with functional languages' is kind of the norm, not the exception.

        3. JoeCool Silver badge

          Re: I'm doomed

          char x[1];
          x[1] = ' ';

      3. Dan 55 Silver badge

        Excepting that's not how it will work. The rewritten C++ compiler will be much stricter and as a consequence it will barf up an extraordinary number of warnings and errors when compiling existing projects

        Excepting it will work like that. In 1.2 it says you opt in to safety on a per variable/class/file basis, maintaining compatibility just as you do now when you refactor C code into C++ code or C++11 code into C++17 code, refactoring at a pace which suits the project.

        Depending on how this is implemented the compiler could point out unsafe code with warnings. Then if you have warnings set to errors then it will not compile, but that will be your choice.

        1. the spectacularly refined chap Silver badge

          So it's the C++ equivalent of Fortran's IMPLICIT NONE?

      4. Anonymous Coward
        Anonymous Coward

        If the stricter compiler barfs on your code, that means it thinks your code is unsafe.

        Isn't the point that it would be possible to automate conversion of your unsafe C++ code to safe C++ code but not to convert it to Rust code?

        1. ssokolow

          No, because "barfs on your code" in the context of a borrow-checker means "compiler needs you to clarify your intent".

          For example, if you have a Rust-style borrow-checker with shared/mutable borrow semantics (i.e. compile-time reader-writer locking) and you get a lifetime error, you probably don't want the GC-esque solution of "automatically promote it to a heap allocation behind a std::shared_ptr to make it live long enough". You're using C++, so I assume you want a high degree of control over allocation behaviour and how it affects memory footprint and performance.

          Likewise, if it sees you trying to take a shared and a mutable reference at the same time, should it make a copy? Wrap it in a mutex? Reorder the code so neither is needed? Each one will sometimes be the correct reaction to a borrow-checker error of that type.

          It's sort of like how PHP tried to "just make it work" and wound up with `echo 10 + "10,000 eggs";` producing "20" as output.

    2. sarusa Silver badge
      Childcatcher

      Those millions of projects are going to be revealed as memory safety disasters needing rework. Which will, yes, be less work than writing them from scratch in Rust, but good luck convincing corporate overlords it even needs to be done. 'We haven't gotten any real complaints and only about six CVEs on that codebase, 8 severity at the most! Why bother?'

      Of course, yes, it's better than nothing or (for most C++ programmers) porting to Rust, but immediate impact will be small other than letting people wave it at people who suggest porting to Rust.

  4. Anonymous Coward
    Anonymous Coward

    If you write C++ using RAII and you set the STL to use the out of bounds checks on iterators, it is indeed safe in that use after free and out of bounds accesses are compile or runtime errors, and not silent errors.

    There are already a host of static and runtime tools that will gladly tell you where things are evil, to the extent that 5-6 years ago AddressSanitizer could flag offsets written to invalid locations, with a corresponding color coded memory map.

    This is not an argument against Rust, which indeed has elegant properties, but let's not feed the 'from scratch' paradigm so endemic in computer science, in which everything must now be done in a new language. Adding another C++ dialect is not the solution for general-purpose C++ usage either, because support will likely vary per platform, and so you settle on what's available on all platforms (see C++ feature support across platforms and how long you have to wait until _all_ features are implemented in the two main compilers).

    The borrow checker model of Rust should be implicit in trained programmers, in that you're supposed to think in object lifetimes and when/how they should cross function call boundaries.

    You pay a price for a language that enforces that, Rust is powerful, but forces a style on you.

    You can do borrow checking with the smart pointers in C++ and RAII if you're willing to pay the performance price.

    For most applications this is negligible and the debugging cost of memory corruption 10-100x worse. For performance sensitive applications you can write your own allocators and manage memory yourself, if you know what you are doing.

    Where borrow checking REALLY shines is in avoiding race conditions, and those are _far_ harder to debug, and far easier to trigger.

    1. Groo The Wanderer

      The fact that you have to wait an eternity for g++ to catch up does not mean everyone else should hold back on a good idea. Granted, there are many ways to do what is required, but don't forget: under the hood of C++ is good old-fashioned C with assembler snippets, so in C++, writing an extension/package to do just about anything has always been theoretically possible.

  5. Dazzleworth

    I thought Microsoft have been doing this already with managed C++ in their Visual C++ family of products?

    1. m4r35n357 Silver badge

      Always a sound plan to assume that M$ are "doing the right thing".

      BTW welcome to Earth.

      1. Paul Floyd

        Well at least Microsoft are still a major contributor to C++ standardization and didn't throw a tantrum and mostly leave when they didn't get their way.

        1. m4r35n357 Silver badge

          erm, wot?

          1. m4r35n357 Silver badge

            seriously, what are you on about?

    2. Anonymous Coward
      Anonymous Coward

      Actually I quite liked Managed C++. It p*ssed off real C++ developers, but I see that as a bonus.

      1. karlkarl Silver badge

        I also was quite a fan of it. Extending an industry standard language is IMO the best way forward.

        The alternative is dicking about writing bindings for CSharp.NET or VB.NET. No thanks!

  6. Groo The Wanderer

    I see a lot of Java and other "object oriented language" code that looks and smells procedural. I think you'll find Rust suffers from similar issues, where rather than going through anything complex, programmers simply mark something as "unsafe" and do it the same way they always did because "it works, damn it! Don't mess with it!"

    Although you can't write memory-"unsafe" code with languages like Java, I can and have written code that produces memory allocation overruns under oddball conditions, consuming all available system resources resulting in program termination by the OS. I've also seen plenty of cases of corruption of data when errors occur because most people write their Java on the assumption that everything will go as planned and that "the system takes care of it." It does not.

    Without a great deal of effort, any programmer can write bad code that is going to cause serious data corruption and security problems. A language alone will not protect you from the root issue of application vulnerabilities allowing miscreants to mess with systems and data.

    1. fg_swe Silver badge

      Nobody claimed that memory-safe languages will eliminate ALL faults. It will eliminate about 70% of CVE exploits, though. The other 30% you have to tackle with things like V-Model, fuzzing, mathematical proof and so on.

      1. Caspian Prince

        And it is extraordinarily *hard* to write pathologically nasty code in Java. The VM conveniently provides a sandbox to stop RAM overcommitment snafus, although unfortunately thread creation is still unbounded.

        To toss petrol on the fire ... 90% of all the C++ code out there could be rewritten in a third of the time in Java or C# and be just as effective as it was before, but without the memory safety issues.

        1. Paul Floyd

          Number crunching in Java. Yeah.

          1. Anonymous Coward
            Anonymous Coward

            That most number crunching is done in Python of all things (numpy and pandas) shows that mathematics is down to the libraries rather than the language...

            1. Anonymous Coward
              Anonymous Coward

              It's down to universities and oss purists who don't want to give Mathworks the danegeld for Matlab.

              (And programmers who snootily call it VB for engineers)

          2. Anonymous Coward
            Anonymous Coward

            "Number crunching in Java. Yeah."

            Generally faster than C++ given that it supports highly optimised representations of large numbers out of the box and can optimise at runtime (HotSpot compilation). Whereas with C++ your first problem is choosing which poorly written and unmaintained third party library to use.

            1. Groo The Wanderer

              It's those weird one-shot library interfaces for arbitrary-precision types that get me. What an ugly mess!

    2. Avfusion

      > [Rust] programmers simply mark something as "unsafe" and do it the same way they always did because "it works, damn it! Don't mess with it!"

      You have a fundamental misunderstanding of how unsafe works.

      1. ssokolow

        *nod*

        `unsafe` doesn't alter how the main body of the language works. It just grants access to a few extra operations.

        (dereferencing raw pointers, calling functions and methods marked unsafe, implementing traits (basically interfaces) marked unsafe, mutating mutable statics (as opposed to putting something like Mutex<T> with interior mutability into an immutable static), and accessing fields of untagged unions.)

        Stuff like creating two mutable references (references, not raw pointers) to the same memory is still undefined behaviour and will run afoul of things like the LLVM IR noalias attributes rustc sticks on them (what Clang turns the C `restrict` qualifier into).

  7. Dostoevsky Bronze badge

    Counter reset.

    Days Since C++ Tried to Shoehorn in Memory Safety: 0

    1. Anonymous Coward
      Anonymous Coward

      Re: Counter reset.

      LOL. Except that for the last 30 years they refused to admit memory safety is even important.

  8. Will Godfrey Silver badge
    Coat

    One size fits all?

    Short answer: No it doesn't.

    All this focus on memory safety and developing memory-safe languages is missing the point. It is possible to write generally unsafe programs in any language. It is also possible to write safe code in any language, but only if you really understand what you are attempting to do and the limitations of your tools. Of course, as in any other profession, it's much easier if you pick the right tool for the job.

    As a matter of interest, over the years I've done bits of work in a number of languages, but oddly the only thing I really grokked and could mentally 'see' was ARM2 assembly (currently doing moderately OK with C++).

    1. fg_swe Silver badge

      FALSE

      Of course you can write faulty programs in any language. BUT, if you have proper processes (e.g. the V-Model) and proper algorithms and data structures in place, then chances are you will have far fewer hard-to-find defects with a memory-safe language.

      Cynicism should not be your driving force.

      There exists high-quality software you can entrust your life to, e.g. AIRBUS flight-control software in various types from the A310 to the Jäger90. Afaik they use Ada and of course a proper V-Model development process. They know what they do.

      1. fg_swe Silver badge

        Re: FALSE

        I forgot: no loss of airframe due to core software engineering faults. There was a loss of an A400M, due to a software parameter installation fault by the manufacturing line, though. Four airmen killed in the first factory flight.

        https://en.wikipedia.org/wiki/2015_Seville_Airbus_A400M_crash

      2. Groo The Wanderer

        Re: FALSE

        There is one key requirement that most projects sorely lack, though: a clear and concise definition of the problem to solve and how to go about doing so in a predictable and reliable algorithmic approach. The wad of baling twine, barbed wire, and paper clips in a project of any duration is usually where the major vulnerabilities in the code exist. And the only way to avoid those snarls of code is to have a clean, clear, and rigorously testable design.

        1. fg_swe Silver badge

          Right

          Writing and maintaining well-defined System Requirements ("LastenHeft") is a tough challenge for most engineers. Requires multi-year experience in the trenches of systems engineering, proper domain knowledge(physics, chemistry, finance, insurance,...) and the ability to think rigorously.

          In the end it boils down to hiring highly experienced and self-confident engineers. People who can see through the bullshit of the MBA types (see Boeing MCAS) and who are not confused by the Regulatory Paper Mountains of ASPICE, DO-178 and so on.

          MCAS was a true clusterfuck, a rookie mistake. That happens when engineers cannot cut through the BS. I assume they(FAA, Boeing, SW Contractor) worshipped the DO-178 church, but never had a mental model of the Complete Signal Chain(sensor to control surface) in their brains.

          A properly executed V-Model would have simulated a Sensor Fault on system level(aka HIL testing) and immediately seen the horizontal stabilizer run-off. Simulator Tests should have revealed the same. But all of that means zero to an MBA type and his box-checking.

          The simplistic world of moneyman tyranny: rookie mistake killing upwards of 250 passengers.

  9. Anonymous Coward
    Anonymous Coward

    Oh come on, where's the fun in owning a chainsaw and not having the risk? The fear of severe, life threatening danger ensures you're more careful and way less blase about its use. Same applies to most things in life, some fear and danger is necessary in order to instill care and respect.

    You could argue that it's just making C++ slightly better, but the downside is that it makes people lazy: they expect safety to cover their arses until one day something doesn't, and no one knows why; whereas if you live with the fear and are constantly paying attention, then you will make few mistakes.

    1. fg_swe Silver badge

      Also FALSE

      Both from a safety and a security point of view, memory safety will detect software faults which would go undetected for a long time with C, C++ and assembly.

      That is true for both development and for operational phase of software execution.

      What you want are well-defined crashes/stop of execution instead of Silent Corruption and Mysterious Behaviour. Also, you do not want Silent Subversion by a cybernetic attacker and you will greatly prefer a well defined program stop as opposed to attacker's code injection.

      It is very naive to assume your program will see all possible inputs during the validation phase. There is no time and no money to achieve this in most settings. Also, "equivalence classes" of input are very hard to successfully define, as they require knowledge of program internals, which defeats the idea of independent test case creation.

    2. Missing Semicolon Silver badge

      Complexity

      So the solution to C++ lacking a feature, is ... more C++! It is already comically overcomplicated.

      1. Groo The Wanderer

        Re: Complexity

        Take another look at Rust semantics and syntax before you pooh-pooh the "effort" of adding safety to C++. I just read the Safe C++ proposal a while ago, and I think they've done a great V0.1 planning document for discussion and expounding upon. I advise you to do the same; the syntax and semantics they're suggesting are quite succinct and clear. If anything of the proposal is going to give C++ programmers grief, it is the borrow checking portion of the proposal, but as that solves a key issue in a far more performant fashion than a general purpose scan-and-sweep garbage collector does, the syntax is worth thinking about hard before you reject it out of hand.

        Is C++ a "big" language? No doubt it is if you include the modules and packages of components like the STL. But Java and C# just hide their complexity behind a browser, belying the fact that a printed copy of their documentation would seriously dwarf that of C++.

        1. This post has been deleted by its author

          1. Dan 55 Silver badge

            Re: Confusion

            Er, that was precisely the point he was making.

            1. fg_swe Silver badge

              Re: Confusion

              Ok, thanks, reading fault :-)

      2. AndrueC Silver badge
        Boffin

        Re: Complexity

        It can seem complicated but it's mostly built up from a common set of rules or principles. If you understand the basic rules and principles the language becomes quite easy to work with. The associated libraries can be large and complex, but the important ones like the STL are also self-describing, so again, if you know the basic rules you can more easily understand something you haven't seen before.

  10. An_Old_Dog Silver badge
    Headmaster

    Ambiguity

    "a proposal to help developers write less vulnerable code"

    Is this supposed to mean, "less code which is vulnerable", or, "code which has fewer vulnerabilities than previously-written code"?

    (Either would be an improvement.)

    1. fg_swe Silver badge

      Re: Ambiguity

      Given identical software engineer competence, memory safety will neuter 70% of bugs (or at least detect them early and stop program execution).

      Another few percent you can get from Ada-style Number Domains, which will catch over- and underflows of numeric variables.

      Just never forget to execute the V-Model properly, because only a proper Test Battery will generate the required testing input to your program to trigger the numeric faults.

      1. Mishak Silver badge

        Proper Test Battery

        It's scary the number of test sets that I've seen that have been created to give 100% code coverage (and maybe also MCDC) - but they don't appear to care about the values that are generated or testing the requirements (probably because there are none or they are of low quality).

        Another case of "the process says 100% coverage is required" - often for compliance with ISO 26262 (et al.).

        1. fg_swe Silver badge

          Corrupt Engineering

          What you describe is a "popular" way of gaming the V-Model. Only junior or rotten engineers do that. And box-ticking "leaders" who are either midgets or corrupt folks.

          Have seen that, I must admit.

          Proper Unit Testing must always be traced to Unit Requirements(which in turn come from the top left side of the V Model) and it must check expected output (defined by requirements). Branch and MCDC coverage rates should merely be an indicator whether the team has forgotten a test case.

  11. Anonymous Coward
    Anonymous Coward

    Having given the proposal a peruse, I think this is fascinating and seems very mature and well thought through.

    So I hope it doesn't get bogged down in committee arguments, grandstanding and the usual filibustering and ego-waving that can happen when more than one person has to decide on an issue.

    And it probably wouldn't ever have happened without Rust.

    1. JLV
      Thumb Up

      Competition and cross-pollination are a good thing, aren't they? "All in C/C++", while certainly more pleasant than "all in Java", will stifle innovation if used immoderately. Who'd want to go back to "all in COBOL"?

      Even "all in Rust" isn't that great an idea.

      Best wishes on C++ pulling it off.

  12. martinusher Silver badge

    Round and Round We Go

    The first reaction to problems with a language seems to be to design yet another new one; it's something that's as old as computer languages. It's rare to come across something that's entirely new, though, so it's not surprising that C++ -- or C -- could be tweaked to be memory safe. Many programmers wrap memory allocation functions with guards during debugging to probe for faults and get some idea of pool fragmentation, but this gets removed once the code has been checked out.

    This ultimately may be the issue. I don't know how many people actively test their code, rather than assuming that because it's not crashing it's working. Active testing, the sort of thing that's done on embedded units, can take rather more effort than actually writing the code in the first place, and so is very upsetting to management, who always want to ship, to get revenue, as soon as the thing runs overnight. One way to make them happy is to use programming tools that prevent common coding errors, but all too often these emphasize form over function ("coding standards") and don't address the fundamental logic of what the code's trying to do.

    1. fg_swe Silver badge

      V Model

      Of course proper documentation+testing is a very serious effort. Something which is often omitted in non safety critical software.

      But we know at least in theory, how to do things properly. No more quick+dirty, informally tested cr4apware. Requirements properly documented in a req. mgmt system (DOORS, Jama, ...) as opposed to Email chains. Honest, large scale testing from unit to system as opposed to faux testing.

      This is clearly the way forward, even if it will not be fully or not correctly used in many projects of the near future.

      1. Anonymous Coward
        Anonymous Coward

        Re: V Model

        I can assure you there is plenty of room for ambiguity even with DOORS piped into something like Jira. The customer can turn around and say the software wasn't meant to do this and after yet another meeting, indeed it wasn't. Requirements re-written, tickets recreated, code backed out, tables flipped over, etc...

        1. fg_swe Silver badge

          Re: V Model

          Of course you can corrupt any great idea. But that does not invalidate the great idea, in this case the V-Model. There are companies and organizations who have used it with great success. Train signal systems, ABS brakes, flight control, electric steering and many more.

          And surely I have seen more than one cr4ptastic LastenHeft in my career. Writing a good requirements document is one of the most demanding engineering tasks. Bad management can mess it up by late time changing, by ambiguous language, by contradictions and so on. Good leaders and senior engineers will fight to avoid this and to polish imperfect requirements documents. Weak leaders will bend over and accept crazy changes and other bad stuff.

  13. jsmith84

    Rustification of C++?

    I looked at the link provided, and could not help noticing the rustification of C++, i.e. the usage of the new "box" and "arc" library elements.

    I must admit I did not read everything, but it seems that new C++ "variables" are constant by default (like Rust), and modifications must be prefixed by mut (while in Rust, mut is part of the declaration only).

    To me, it seems redundant, because if I declare "vector<string_view> views { };", unless a non-const function is called on views, or views is used as a (mutable) reference, there is no need to place "mut" in front of it, e.g. in "mut views.push_back("From a string literal");".

    I know, and the compiler would know, that at the point of entry of push_back, views is mutated (and the warning remains valid).

    In 2024, surely, we can do better in terms of type inference and constness verification.

    x = 1; (why would I put auto or let on the left side of x? And why would I spell out the type? 1 is an int, and if x is not modified anywhere, surely it should be const, or better constexpr)

    x += 1; I know at this stage x was a variable

    I do not see the point of Rust's full declaration "let mut x: int = 1;".

    1. diodesign (Written by Reg staff) Silver badge

      let mut x: int = 1;

      It a) declares to the compiler that x is mutable (ie, can be changed; Rust defaults to immutable variables for safety) and b) that x is type 'int'. If you try to use it as something else later, that's a build-time error.

      They are both to avoid code being written at one point in time with assumptions in mind, and then at some other time, the code being changed or added to without those assumptions in mind.

      C.

      1. druck Silver badge

        Re: let mut x: int = 1;

        Rust defaults to immutable variables for safety

        The entire point of a variable is to vary, that is the default of any sensible language.

        A non varying variable is a special case called a constant.

        1. jsmith84

          Re: let mut x: int = 1;

          I can read (and write in) Rust thank you.

          My point is that the type can easily be inferred, and the constness can be inferred too from how the variable is used.

          If you are writing code where a variable is changed, you need to go "back up" and add a mut, the same way as in "old C++ style" code you need to change the declared type. Nowadays, a lot of C++ code uses auto and lets the compiler infer the type (my point is that auto is also pointless).

        2. Will Godfrey Silver badge

          Re: let mut x: int = 1;

          Indeed, and in C/C++ you have a special word for it.

          const int x = 4;

          1. jsmith84

            Re: let mut x: int = 1;

            You are really missing my point and are burying yourself in small details.

            const simply indicates: 1. that the compiler must check the value does not change, and 2. that the compiler can assume the value does not change (as opposed to a const volatile).

            In most cases, const brings nothing, because the compiler should be smart enough to see this is a const (please note the "should"), or infer it is a constexpr (which would be more suitable than your suggestion of using const), etc.

            The same applies with int. Here, we know the literal is 1, therefore an int by default, unless I set it explicitly to be different, e.g. "1." or "1.f".

            We also know I am declaring a variable called x, it's kind of obvious, so why have "let"?

            My view is that whole thing could be written a "x = 1;" and nothing more, like in Python (we still have the ";" though).

            Fixing the mutability does not add much to the code in that case, because if it was "immutable", and we decided to make it mutable, we would simply remove the const... so, let's not have it in the first place (for some reason, I believe (some) very old C compilers were accepting i = 1, and assumed the type was int if it was not declared).

            Now, about Rust...

            The language is so verbose, that many people unwrap like crazy anyway.

            Panicking on integer overflow in debug mode is overkill, because integer arithmetic has been well known from the beginning of time: for a u8, 255+1=0 (by the way, I like i8, s8, ... f32, f64, as I used to with those in C and C++).

            If you really want to check overflow, or have "clamped types" (255+1=255, 0-1=0), it would be better to add new types with a specific arithmetic and a specifically defined behaviour, rather than changing the expected "machine" behaviour, avoiding the inconsistency between debug and release.

            Another thing that annoyed/annoys me was [from memory] std::Rc allowing some construct and later on throwing an exception because there was a circular reference. Talk about safety: building a program, testing it, and maybe getting, at runtime, an exception because the data were visited differently (note: I can't remember what displeased Rust, but I found a way to avoid it, and was not impressed by the panic, which had no rationale).

            My view on Rust is that it has nice things, such as macros and a good amount of libraries; however, I find Rust too verbose to be productive, and the "absolute safety promise" is not true (it's better, though).

            I would rather have a language with superb type inference, or other bits I can't think of off the top of my head right now.

            I don't think having a native Linux module (for want of a better word) is going to be a key selling point, or seen as a massive achievement, by most of the people outside the hardcore Rust backers.

    2. ssokolow

      Re: Rustification of C++?

      1. Rust's rationale for having a fixed "let" token instead of prefixing the type is twofold: First, it simplifies writing a parser because you don't need the lexer hack and, second, the presence of `let` or `auto` distinguishes variable shadowing from variable assignment when you've got no explicit type because you're letting it be inferred. (It's also one of those places where Rust's Ocaml heritage shows through.)

      2. In Rust, it's not an "int". Instead of having automatic numeric type coercion, the types of numeric literals are inferred and fixed at the point of declaration/assignment. (Though floating point literals will default to f64 (i.e. double) and integer literals will default to i32 (32-bit signed int) if unconstrained.)

      3. Due to type inference, you generally only need to explicitly specify the type in a `let` if there is insufficient information to infer it. Type signatures are "infallible patterns" so, outside of places like function signatures where inference is disallowed, they can be left incomplete. (e.g. the `.collect()` method on iterators can return multiple different types, so it's common to write something like `let result: Vec<_> = the_iterator.collect();` to specify you want to collect into a Vec if the way you use it doesn't make that clear.) ("Fallible patterns" are what you see in constructs like `match`, `if let`, and `while let`.)

      4. I'm not sure I follow your point about mutability. Could you elaborate?

  14. mevets

    All In!

    Every software developer should be all in on these sorts of initiatives.

    At a point where regenerative array multiplying was verging on automating software development, the emergence of yet another boondoggle should guarantee jobs for a good few years yet.

    These fads only come along so often, so hop in. The water is fine.

    1. Anonymous Coward
      Anonymous Coward

      Re: All In!

      Heavy pushes for rewrites of major old applications trigger my Jia Tan/xz bullshit detector since Easter 2024.

  15. fg_swe Silver badge

    Herb Sutter on Safety+Security

    https://www.youtube.com/watch?v=EB7yR-1317k

    1. award

      Re: Herb Sutter on Safety+Security

      And Herb again, only a little more recently :-)

      https://cppcon.programmingarchive.com/?mgi_13=7781/peering-forward-cs-next-decade-herb-sutter-cppcon-2024

  16. billdehaan
    Unhappy

    Closing the barn door

    I pretty much gave up on the idea of C++ ever being safe when I heard two architects debating, seriously, the difference between "protected abstract virtual base pure const virtual private" destructors and "protected virtual abstract base pure virtual private const' destructors.

    When you see phrases like "transflective binodal surrogate", and dragging things through reinterpret_casts of dynamic_pointer_casts of static_pointer_casts, you've reached a level of complexity and abstraction where it's pretty much impossible to account for memory safety.

    The funny thing is that C++ was supposed to make C programming cleaner and easier, but in many ways, it's done the opposite.

    At least in the days of C, we had lint to keep us honest.

    1. Mike 125

      Re: Closing the barn door

      > ...I heard two architects debating, seriously, the difference between 'protected abstract virtual base pure const virtual private' destructors and 'protected virtual abstract base pure virtual private const' destructors.

      The only software 'architects' I've worked with clearly got that title to promote them out of harm's way.

      1. billdehaan

        Re: Closing the barn door

        I was a contractor for 20 years. I am a duct-tape programmer type, and I would say that at least 80% of my work was undoing the damage of architecture astronauts.

        The problem is that many companies mistake complexity for intelligence, and equate buzzwords with intelligence. So the more complex and abstract something is, the more impressed they are by it. They reward complexity, and the result is that they end up with systems that are so complicated that they can't be understood. And often, that complexity is completely unnecessary.

        I've replaced 3,500 lines of C++ inheritance with 30 lines of code. I've replaced 18 pages of Pascal code with a one line set definition and a four line boolean function. And in both cases, the architects fought tooth and nail to keep their existing megabytes of navel-gazing code that did absolutely nothing that my half page routines didn't do.

        I joked at one company that their architects couldn't write "hello world" without using parameterized templates and code generation. The PM (project manager) I was talking with told me not to exaggerate. Three weeks later, as he was reviewing bug fixes made to projects to determine whether he should approve their being ported to the main product, he read a bug fix that had a title like "Enhancement: automate adaptable functor generation via variadic template to allow for polymorphic reflection". Other than generation, he had no idea what any of those words meant. He walked up to me with a printout of it, and said "I kind of thought you were kidding". Would that I was.

        And yet, he approved it, because he was afraid not to.

        And that's the problem with architecture astronauts. They get away with their nonsense because everyone is afraid to touch it because they don't understand it.

    2. fg_swe Silver badge

      Rational Innovation

      IF you use inheritance, then a proper understanding of destructors and virtual functions is of course necessary.

      BUT - only use inheritance if you REALLY need it.

      Even without inheritance, there are great C++ features such as "simple" destructors, that clean up(e.g. release memory, release file handles, close DB connections, RAII, ...) after the use of an object.

      Unlike Java's finalize() , these destructors are synchronous, which is what you want in most cases.

      As always in engineering: only use approaches you fully understand, if safety and security are concerns. Don't be a poser and use things just because they have become popular. Rather, learn new things, fully understand them and then apply only if useful for the problem at hand.

      1. billdehaan

        Re: Rational Innovation

        Oh, I agree, absolutely.

        I started with C++ in 1987 or 1988. We were doing C work using Lattice and later Microsoft C (the early versions of Microsoft C were just rebranded Lattice, then they went separate ways). We got a Zortech C++ compiler and played around to see what we could do with it.

        At least back then, C++ was basically "smarter C". Most of the structure that C++ imposed was things we were already doing with our coding guidelines. If you think of classes as being structures with associated functions, it's just a cleaner syntax.

        I've used inheritance, and templates myself. But I've used them sparingly, with simple base classes.

        But I routinely see code that inherits from 47 base classes, which are passed into triply-templated layers of abstraction. The average coder can't understand that, and the average architect gets snotty about it when questioned, and usually tells management the problem is that the developers are too stupid to understand it.

        In bad companies, management agrees, and they end up with unmaintainable systems.

        In good companies (and I've seen many), management calls the architect's bluff, and makes him clean up his own mess. It's always amusing to see an architect, who has said that any developer who takes more than two weeks to implement an XYZ function with his gee-whiz framework should be fired, ordered to implement XYZ himself, and to watch him struggle with it for months. I once gave an estimate of 1,000 hours, i.e. six months, to do something that the architect assured the PM could be done in a week, easy. After four months on it, when it still didn't work, the architect upgraded his estimate from 40 hours to 6,500 - more than three years.

  17. G40

    Tiresome nonsense

    This article is.

  18. timrowledge

    If you need ‘extensions’ then you are never going to actually make it safe. A sensible language doesn’t require you to think about memory allocation or freeing

    1. Groo The Wanderer

      Only if you and your team are incompetent children, not actual programmers.

      1. Anonymous Coward
        Anonymous Coward

        Ah! My experience with all C++ developers in a nutshell.

        Do you also insist that the snapshot of C++ 2004 you happen to know, along with your own personal coding style, is C++ and everything else is "bad syntax"?

        1. Groo The Wanderer

          Claiming that only "sensible" languages require you to think about memory is immature and childish. Sooner or later, memory is always involved: these are computers.

          As I said: childish IDIOTS!

  19. Locomotion69 Bronze badge

    It is all about understanding your tools. Like the discussion about the NULL pointer: it is vital to understand the structures you work with.

    Abstraction made this worse: there was no longer a need for that in-depth understanding, and everybody could become a programmer.

    And everybody did...

    Key is to understand what you are actually doing, and what it is you want to achieve (and to notice the difference).

    And Java is not the Holy Grail - I have seen my share of "null pointer exceptions"....

  20. sabroni Silver badge
    Happy

    Good to be back!

    I've been away for a month but it was worth coming back for this!!

    C programmers promise to manage memory safely this time!!

    Fucking classic!

  21. bystander

    Memory safe C/C++ is good. But...

    Why not use static analysis on C/C++ code? It catches most of the memory access issues just fine. It does not restrict the way memory objects are used (referenced). And the time the analysis takes is comparable to Rust compilation.

    1. MatthiasU

      Re: Memory safe C/C++ is good. But...

      You don't want to catch "most" errors. You want to catch all of them, and you want to be able to prove that you did, even across library boundaries.

  22. MatthiasU

    Syntax schmyntax

    From the proposal:

    > Rust’s functions are safe by default. C++’s are unsafe by default. But that’s now just a syntax difference.

    NO IT IS NOT.

    Safety by default means you can use a simple "grep" to determine whether your program contains unsafe code. Unsafety by default means adapting any of this will be an uphill battle.

    But I digress. By the time C++ gets somewhat close to Rust-level safety guarantees, "[un]safe" keyword or not, the complete Linux kernel will have been rewritten in Rust anyway.

  23. efa

    sanitizer

    All modern compilers have an option to add runtime code that checks data boundaries, double frees, use after free, missed frees, stack overflows, and so on.

    It began with Clang/LLVM 3.1, but GCC 4.8+ (March 2013) has had it for some years now.

    That runtime code slows the binary down to the level of C#, Java and Rust, so it should be used only in debug builds with -g.

    Once the code is clean, you can remove the address sanitizer and get fast, pure compiled C code that is memory safe, without compromise.

    Today we do not need slow memory safe languages, simply informed programmers.
