Rust haters, unite! Fil-C aims to Make C Great Again

Developers looking to continue working in the C and C++ programming languages amid the global push to promote memory-safe programming now have another option that doesn't involve learning Rust. Filip Pizlo, senior director of language engineering at Epic Games, has created his own memory-safe flavor of C and – because why not …

  1. karlkarl Silver badge

    > But the thing about Rust is that it's not all that easy to learn

    That's not really the reason. If people can learn C++, then Rust is no issue.

    The real issue is with Rust they would need to spend time writing bindings, cleaning generated bindings (bindgen/SWIG) or maintaining rotten bindings abandoned by others.

    So Rust and any other language that can't directly consume C code are, by design, unsuitable for a large number of use-cases that a "Safe" C compiler is perfect for.

    The world already has more C compilers than all the other languages put together. But now we are seeing a new renaissance in writing more. This is great news! More choice.

    > it's slow – about 1.5x-5x slower than legacy C

    We have a similar tool (albeit for C++). Even when the code is ultimately compiled in release mode (with a regular C++ compiler) for high performance requirements, the additional checking during thorough branch testing with the "Safe" compiler is still very, very useful.

  2. A Non e-mouse Silver badge
    Meh

    I'm not convinced that a new compiler can make C "safe" without breaking existing C programs.

    If you want to make C safe, I think you need to start with a fresh language. Sure, you may make it look a lot like C, but you can't claim it is C (or even C compatible).

    1. Crypto Monad Silver badge

      It depends what you mean by "safe".

      If you simply redefined C so that all "undefined behaviour" became "must immediately crash", that by itself would be a huge improvement in safety. At the moment, any C program which triggers "undefined behaviour" can literally do *anything at all*, including reformatting your hard drive, and it is correct for the compiler to allow that. e.g.

      https://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html

      https://blog.regehr.org/archives/213

      1. Paul Crawford Silver badge

        For some cases of mistakes with malloc() memory you can use the electric-fence library so it segfaults immediately. That's useful for testing, as there's not much other penalty, and you can use the resulting core dump to debug why.

        But other bugs from heap-allocated arrays being overrun are not so easy to trace and a compiler/setting to trap those would be good.

        1. Vometia has insomnia. Again.

          It might be expedient to make programmers actually bother to check return values: I once worked with someone whose code never, ever did this, whether it was malloc or anything else. No error checking, no sanity checking, nothing. They'd previously worked for a software house whose stuff I'd used around the same time and I still had painful memories of its habit of crashing often. Now I know why.

          Rust isn't going to fix this sort of bad coding. It may avoid some pitfalls but it won't turn bad/lazy programmers into awesome ones; on the contrary, judging by the fanboyism, it'll just cause Volvo Driver Syndrome.

          1. LionelB Silver badge

            > It might be expedient to make programmers actually bother to check return values: I once worked with someone whose code never, ever did this, whether it was malloc or anything else.

            Is it not the case, though, that on Linux, infamously, "malloc (almost[1]) never fails" - i.e., it (almost[2]) always returns a pointer to a valid memory address, regardless of whether there is sufficient physical memory to allocate the requested space... in which case the OOM killer steps in and blats some process (not necessarily the requesting process), with potentially disastrous results. So testing the return value of malloc may (or may not) be so useful.

            [1] At least in most default system configurations which enable overcommit, although this behaviour may be changed.

            [2] Unless it runs out of address space.

            1. Jaybus

              It is true. The malloc() call itself almost never fails. The problem lies not in allocation, but in later use. For example, consider the string.h functions that return a pointer, strcat() and friends. They are the perfect functions if you want to implement a buffer overflow. To mitigate the situation, strncat() was created to limit the size of the src buffer, but it forces the coder to calculate n such that sizeof(dst) >= strlen(dst) + n + 1. At least strncpy() prevents an endless overwrite should the src string not be nul-terminated, but it lacks the bounds check on dst. As a result, strlcpy() was created to specify the size of the dst buffer in order to prevent the buffer overflow and to guarantee dst is nul-terminated. Yay! Except it didn't retain strncpy()'s src string length limit, and if src is not nul-terminated, reading past the missing terminating null byte might copy the keys to the kingdom into the dst string and subsequently publish them on the Internet.

              Oh, but the malloc() worked flawlessly.

            2. Tom66

              On an embedded system running Linux I have encountered malloc() failing when the heap was too fractured to allocate a 16MB buffer. On 'paper' the system there was >>32MB of free memory left, but it wasn't in a contiguous block and so malloc couldn't satisfy my request.

              I would, just based on that experience, recommend always checking malloc()'s return value, or for simpler programs you can create a wrapper around malloc that just calls exit if the call fails, assuming exiting in the middle of the call is safe enough and that memory allocation failures are always fatal.

              1. may_i Silver badge

                People who write C code where they don't check the return value of every function which returns one should not be allowed near keyboards.

        2. bazza Silver badge

          Or just try another OS / C allocator pairing.

          I've seen code (GNU Radio) run just fine on Linux, but the same thing compiled and run on FreeBSD segfaulted with a memory leak. No one was ever going to find that bug on Linux...

          Ideally, OSes / libc's would offer two modes; hyper-optimised, minimum OS interaction, fastest possible allocation, or super-defensive, lots of space between allocations, all freed memory immediately unmapped and returned to the OS.

          1. Paul Crawford Silver badge

            To be fair, GNU Radio is a complete basket case of a project in the first place. I once tried to compile it from official source and it failed to build, presumably because there was something odd about the configuration of the developers' machines that didn't hold for my from-scratch setup. So I stuck with the one in Ubuntu and gave up on any attempt to fix/improve it.

          2. bombastic bob Silver badge
            Devil

            memory safe version of std

            perhaps all that is really necessary is a memory safe version of 'std', though memory safe headers that re-define common utils with memory-safe versions might also work.

            Or people could just roll their own C++ objects with validation (where needed). Some of MFC classes could easily be written this way.

            It makes me wonder if a new 'bounds' instruction (or modifier) might be useful in the CPU itself, to auto-apply bounds checking in the microcode and branch on fail (or set a flag in the status register, "bounds exceeded"). If you did this maybe you could do MOV (EAX),EBX BOUNDS base-address,size [or something like that]. Or a bounds register would have the base address and size, and you'd just mention it as part of an instruction so that the bounds check values are cached. Anyway, stuff for CPU designers to consider.

            I had considered a sub-allocator to do this with pre-allocated chunks which reserved the first 4 or 8 bytes as the chunk size. Useful for string manipulation, for sure. Never implemented it, though.

      2. martinusher Silver badge

        Compiling a large project will generate a lot of warnings, most of which can be ignored. But it always pays to get rid of all warnings in production code -- treat them like errors.

    2. Paul Crawford Silver badge

      If you want to make C safe, I think you need to start with a fresh language.

      Which is Rust, or another.

      But in the real world with tones of C or traditional C++ code and limited budgets for massive re-writes, anything that takes away 90% of mistakes and has a small run-time or code-change overhead is a wonderful alternative.

      1. bazza Silver badge

        Which indeed is Rust, or another.

        Thing is, C-like languages - to be rendered "safe", or at least "known behaviour" - require quite extensive runtimes, particularly if you've got separate threads sharing data. And that's how you end up with things like C#, Java, etc. I've not looked at either TopC or Fil-C, but I find it difficult to believe that they achieve complete memory safety when there's threads involved (unless they too have a heavyweight see-everything runtime). Taking away 90% of mistakes is a start, but I fear that the remaining 10% would be the really tough bugs to find, and also the most important ones to find.

        I can see another advantage. C to Rust conversion is said to be tricky, I suspect because first you have to fix all the mistakes in the C/C++ before it's in a Rust-like shape. C/C++ compilers that - at run time - can give you error output akin to "ah well, you've got this wrong here" could serve a useful purpose in allowing code to be bashed into better shape in a language which is familiar, rather than doing so whilst also translating to a language that is not.

        General Dilemma

        Rust has certainly thrown in a curve ball. For the first time in decades, there is a truly plausible and arguably superior alternative to C / C++. Unless Rust actually dies, the arguments against migrating to it are going to get thinner and thinner. For large C/C++ projects, it's a horrid choice between sticking with what's known for an easy life, hoping for the best but risking project death (as C/C++ gets abandoned), or biting the bullet now and cracking on with converting, or (even harder) both.

        This was always going to happen at some point or other in the history of computing. Indeed, it's already happened; once, there were assembler programmers convinced that assembler would live on forever in, say, OS development and hot apps. C soon wiped that out. If we're to draw a lesson from history at all, it's that C / C++ is going to lose, and probably faster than anyone thinks.

        1. O'Reg Inalsin

          Just one minor point - there is one important difference between the C-to-assembler comparison and the Rust-to-C comparison. Assembler is always specific to a CPU's instruction set, and so dies when the instruction set dies. C escapes that by way of the compiler. C and Rust are equals on that level as they are both compiled languages. (Not my downvote).

          As for the rest - Fortran is still alive and well in the various areas of scientific computing and engineering, specifically, for things like numerical weather simulations, computational fluid dynamics, and other simulations that run on HPC systems. Legacy software is everywhere.

          1. bazza Silver badge

            The non-portability of assembler is certainly a severe down-tick for it, for large projects!

            Portability of Optimisation

            One aspect in the Asm-C debate was "you can never write a compiler that produces code as optimised as a skilled asm programmer can". Of course, CPUs (their pipelines, caches, etc) became so complex that optimising by hand became really hard, and C compilers could cope with that and - for most purposes - won. The only place where compilers failed was on Itanium, but that's not their fault!

            There is still a place for hand-written asm in, say, DSP applications; the best fft() I ever came across was handwritten asm for PowerPC/Altivec, proprietary, crafted by a company that had a collection of devs who really, really understood PowerPC, its pipelines and caches. It was 30% quicker than the next best thing (FFTW). It gave the company (who also did hardware) an enormous advantage in the market because applications built using their math library required 30% less hardware than competitors'. When you're selling into the military aviation market, a 30% reduction in hardware requirement for an application is worth $billions.

            Future Aspects of Functional Portability, if Hardware Changes Radically

            C and Rust are on a par regarding portability, for the moment. As I've highlighted in other posts, Rust's "knowledge" of data ownership does mean that it could in theory be evolved to a point where function calls / threads are automatically mapped onto Communicating Sequential Processes (akin to Go's goroutines). So if one did build a Rust compiler for CSP hardware (i.e. not today's SMP hardware, riddled with Meltdown / Spectre bugs), one's Rust source code could be automatically compiled for it and benefit from it. That'd never be possible with C.

            This possibility in Rust is hinted at in a blog post. Under the hood, Rust could build code for non-SMP hardware just as easily as it can for SMP hardware. It doesn't at the moment because there's (effectively) no such hardware, but there's no theoretical reason why not.

            Actually, it's not quite true that there's no such hardware. Super Computer clusters are not SMP environments.

        2. martinusher Silver badge

          Assembler does "live on for ever" because code generators won't know how to operate processor and support logic hardware. A lot of the discussion about "A being better than (old) B" is from people who don't spend much time in the basement; there's no need for them to rummage around down there because their computers come with fully formed system code. If you're working at that level, even if you're writing ostensibly system code such as network stacks or storage drivers, you're writing a form of application that would benefit from something like memory safety. But that, for many of us, is relatively high level code; there's actually a whole lot going on underneath it, and someone's got to write and maintain it.

          A decent programmer will not only know how to use a number of programming techniques -- languages, if you will -- but will know when to use one or the other. Nobody writes in just one language (although if you make a language sufficiently complex there may never be time to actually learn it all before the next version comes out!).

      2. Anonymous Coward
        1. Paul Herber Silver badge

          Re: C Tones

          C sharp or D flat?

          1. Andrew Scott Bronze badge

            Re: C Tones

            if you don't c# you'll b flat.

    3. abend0c4 Silver badge

      I think it's possible that a new compiler (or indeed pre-processor, since ending up with C would be a lot less trouble than having to write compiler backends for multiple platforms) could by default flag up potentially unsafe uses and allow incremental declaration annotations and minor amends to gradually make the warnings go away. Though how far people would actually bother is a different matter.

      I'm not entirely sure about a lack of ABI compatibility. I see the reason, but I'm not sure the overhead would be acceptable in kernel code, for example.

      I think a good test would be whether you could ultimately write an arbitrary part of the Linux networking stack in the revised language and have it interoperate with the other components. There's a lot going on there in terms of byte-alignment, endian-ness, objects of different lifetimes (eg: sk_buffs may have a different lifetime to the data they point to) and changing ownership (as they move from queue to queue) and if you can crack that, you may well be on to something.

    4. Anonymous Coward
      Anonymous Coward

      Doesn't help

      New languages bring new problems with them anyway.

      And I've seen 'memory safe' languages like Java leak so much memory they wedge the machine due to far too complex frameworks and clumsy programming. Those problems are far harder to find and fix than a simple C buffer overrun.

      History is scattered with languages that didn't make the cut - Algol, Pascal, ADA (Another US govt. inspired effort ;)), Prolog - I could go on but all were intended to solve problems but simply introduced new problems or hit productivity so hard that sanity eventually prevailed.

      The reality is the world needs better programmers, not better languages.

      1. fg_swe Silver badge

        FALSE

        Memory errors of C and C++ programs can be very hard to track down, especially in multithreaded programs. They also enable Silent Subversion of anything facing the outside world.

        The "programmers should be better" argument has proven to be unrealistic. Hundreds of times in anything from VxWorks to the Windows Kernel. Or SSL implementations.

        1. Paul Crawford Silver badge

          Re: FALSE

          Memory errors of C and C++ programs can be very hard to track down, especially in multithreaded programs.

          Multi-threaded programming has lots of problems that go beyond memory ownership. Well actually, just the one problem: order of thread execution, and the horrors of trying to debug stuff that did not have the necessary atomic operations to deal with it and where faults manifest depending on overall system loading, order of inputs, etc.

          When I needed to run parallel code for a DSP project on multi-core x86 machines I took the coward's way out and ran multiple single-thread programs with the data & processing suitably segmented. Let the OS writers deal with concurrency and memory protection...

          1. StrangerHereMyself Silver badge

            Re: FALSE

            The one thing I learned with Rust is that multi-threading and ownership are actually two sides of the same coin.

        2. JoeCool Silver badge

          Re: FALSE

          ""programmers should be better" "

          Not quite; the thrust of that is closer to the sentiment "employ programmers that are better (and stop painting me with the taint of morons)".

      2. tsuch

        Re: Doesn't help

        Algol, Pascal, Ada -- didn't make the cut? Although not used much in the US, Algol was certainly widely used elsewhere. Ever heard of TurboPascal? Ada was (and may still be) widely used for mission-critical code (e.g., flight software) in the aerospace industry, for example. You'll have to convince me that another language in the same family as those three, C, winning out was sanity eventually prevailing. I don't think Prolog was ever intended as a general-purpose language, notwithstanding being part of the chosen paradigm (logic computing) for Japan's ultimately unsuccessful Fifth Generation computer project back in the 1980s.

      3. Caspian Prince

        Re: Doesn't help

        These are not actually memory leaks, they are object leaks. The difference is a memory leak just soaks away into nothing and becomes untraceable and leads to all sorts of bullshit like use-after-free etc., whereas an object leak is entirely traceable, and the only side effect is an OOME rather than undefined or hackable behaviour of use-after-free.

    5. Lee D Silver badge

      C can be compiled down to wasm/js (e.g. Emscripten) in a "safe" manner.

      It merely simulates pointers using arrays and the like. It has to run inside the browser DOM, so it's "safe" for individual apps (but it doesn't stop an app breaking itself by revealing data of its own, etc.).

      It's slow but it works. You can take C code, recompile it, and it "just works" (you do have to make adjustments for limitation of the browser DOM in terms of networking and filesystem access, but the C code doesn't care and most code - even full 3D games using SDL, OpenGL, etc. "just work" - it usually consists of bundling the filesystem inside a file with the web app, and using WebSockets instead of direct socket access, which it does for you, you just need to make sure the server is doing the same and not expecting direct socket access from the C client app, etc.).

      I've used it several times and a lot of projects on the web use it. The changes you make are basically never to the C code, even with lots of clever pointer manipulation tricks. It's more about what you bundle and how you handle a WebSocket, which insulates your network access on the other end (i.e. not in the browser / app itself).

      You can make C safe in that way, no problem at all. The question is why would you bother? C code that does dangerous, speed-critical or low-level stuff is in C for a reason... it's because you need direct pointer manipulation, no emulated environment, raw access to all of RAM, etc. etc. Things like device drivers and the like. And that's where Rust also just gives up the ghost and says "just wrap it in unsafe", because you have no real choice.

      But a C app can be made "safe" in a virtualised / emulated environment no problem at all - we've been doing it for years. Simon Tatham's (of PuTTY fame) Portable Puzzle Collection was for years a bunch of C apps that were also then compiled with a MIPS compiler that produced output that a Java app could interpret, so they could run in your browser... long before wasm and the like existed or were popular. Those apps are "C-first" and no special behaviour was coded in to deal with that conversion, the raw C just gets compiled through a toolchain and ends up as a Java app in your browser.

      It's more a question of why would you - C assumes whole-memory access, it encourages you to play pointer manipulation tricks, to interpret raw data as structured data without checks, etc. etc. and that's where the problems lie. You can simulate all of those effectively, with performance hits, but it would be a better idea to just move away from such things. Taking a pointer, adding some numbers to it and then it accessing a completely different variable, etc. is a dangerous thing to be able to do, and I don't think we should encourage models like that.

      FYI: I program almost exclusively in C99 nowadays.

      1. ChromenulAI

        C is assembly in a human-readable syntax. Iterating through a structure's members by incrementing its base pointer is how the compiler constructs memory access in assembly. In light of this fact, I always encourage those around me to access variables by offsetting from a base. It helps to bring their thinking in line with assembly. Of course, somebody will point out how "dangerous" this is, to which I remind them we're writing applications targeting a customer base that has a surface knowledge of computing, not the space shuttle.

        Writing code in unsafe ways that is destined for a space shuttle or critical infrastructure is dangerous, since people can die. Last I checked, bugs in Chrome never caused anybody's death, and thus are not inherently dangerous.

    6. StrangerHereMyself Silver badge

      I keep stressing that C is a Systems Programming Language which is being abused as an Application Programming Language.

      The answer is not to make a "safe" Systems Programming Language but an Application Programming Language with limited capabilities. Such a language could have a C-like syntax, but would not have raw access to pointers, for example. It would have "managed pointers" which would allow you to make efficient algorithms with pointers without having physical access to them. It looks like Fil-C already uses such a scheme.

      1. Jason Bloomberg Silver badge

        Having C with 'managed pointers' and 'unmanaged pointer' compiler options would seem a better option than having two separate languages - It's not worth trying to fight the "C is the one and only true language" crowd.

        Some will always run "unmanaged", just as they do now. It's the job of QA to keep them in line, to ensure the appropriate regime is used, and to ensure that what is "unmanaged" is safe. Having a smaller set of "unmanaged" libraries is much easier to deal with than knowing a bug could be lurking anywhere in a project, quite probably everywhere.

        1. StrangerHereMyself Silver badge

          That's exactly the problem I see: that most would simply run "unmanaged" to gain that last few percent more speed. We need a clear distinction between the two paradigms, with different capabilities and somewhat different syntax. Sort of like C# is today.

          The only reason I'm not vying for C# as that Application Programming Language is that it isn't designed from the ground up as an AOT compiled language. In all other aspects it fits the role perfectly.

  3. m4r35n357 Silver badge

    Crawling out of the woodwork . . .

    Seems to be a procession of these, none actually available, natch.

    I'm going to hold out for a cloud implementation, running in a cloud IDE, with four ML "pilots" to take care of me!

    1. Anonymous Coward
      Anonymous Coward

      Re: Crawling out of the woodwork . . .

      Seems to be a procession of these, none actually available, natch.

      TFA has a link to the compiler.

      1. m4r35n357 Silver badge

        Re: Crawling out of the woodwork . . .

        OK ta, I looked but didn't see it when I read the first time!

        Seriously though, my money(?) is still on Zig, assuming it becomes stable in the foreseeable future (it has been available for testing for years already). It seems to solve most of C's problems, not just one or two.

  4. G40

    Excellent…

    This is precisely the approach that we want and need.

    Props to the author for bringing this project to light.

  5. Bitsminer Silver badge

    1.5x slower....

    No matter how fast you make the hardware, the software boys (and girls) piss it away.

    1. G40

      Re: 1.5x slower....

      Cheer up. It's for an excellent cause. And look at the results: apply it during testing, find a bug in the Python runtime. Maybe, just maybe, you don't have to deploy the checked version into release code.

    2. abend0c4 Silver badge

      Re: 1.5x slower....

      From the perspective of a (mostly) software boy, trying to make the hardware faster might just have been part of the problem.

      1. bazza Silver badge

        Re: 1.5x slower....

        Yep, and there are some CS types who've called for the abandonment of SMP. Attempting to fake an SMP environment on today's chips is what's led to such silicon flaws.

        Rust is interesting because - with its total knowledge of what is accessing what memory - it is very well suited to CSP environments, which, if implemented in hardware (remember Transputers?), are far less likely to have flaws like Meltdown and Spectre. With Rust, passing data from function to function whilst having knowledge of ownership could be used to transfer data over CSP channels from CSP node to CSP node. I don't know if Rust actually does that at present, but it could. I know there are CSP implementations for Rust, just like Go.

        So Rust might be a half-way house between SMP code and CSP code, in that it "feels" like you're writing for SMP, but actually underneath it could all be running on CSP hardware, and you'd never know the difference.

        1. sitta_europea Silver badge

          Re: 1.5x slower....

          "... (remember Transputers?) ..."

          Remember them? I still have some.

          1. Anonymous Coward
            Anonymous Coward

            Re: 1.5x slower....

            Transputer: An Intel processor that really wants to transition to an ARM and willing to take the RISC?

    3. Blazde Silver badge

      Re: 1.5x slower....

      It seems important to note this bit too:

      The Fil-C Memory Safety Manifesto: FUGC Yeah!

      ...

      All allocations are garbage collected using FUGC (Fil's Unbelievable Garbage Collector). FUGC is a concurrent, real time, accurate garbage collector.

      So, slower, and a bigger, less-controllable, less cache efficient memory footprint. Definitely a neat tool for testing existing codebases, but making C code slow and garbage collected is a bit like getting rid of the white-space in Python code to make it more byte-efficient on embedded devices. You can't really see it catching on among the language faithful.

      Besides that, anyone hoping to challenge Rust needs to bring similar concurrency guarantees. This seems to get lost a bit in the drive for basic memory safety, probably because by now everyone understands buffer overflows and use-after-frees and so on. They're yesterday's problem (even though it's not solved yet). Today's problem is large core counts in every device and the most basic applications being multithreaded. It's hordes of programmers who can't all truly appreciate how tricky sharing data between threads is muddling through it anyway, and it working fine in production, and in testing, and deployment. But not working at all fine in an adversarial context when the attacker is trying to trigger your race and exploit your code all without breaking the memory layout.

      What we have here is arguably 'allocation safety' not full memory safety.

      The Safe C++ proposal does attempt to implement Rust's concurrency model (with some very difficult edge cases to solve iirc). That makes it much more interesting.

      1. may_i Silver badge

        Re: 1.5x slower....

        I think the proliferation of garbage collected languages bears part of the responsibility for creating the lazy, incompetent programmers who can't write C properly in the first place. If languages themselves encourage bad habits then those bad habits become ingrained and as soon as the lazy, incompetent programmer needs to work in a close to the metal language like C, you get crap, unreliable code as the result.

        Not to mention the fact that code reviews would catch a lot of the bad programming that makes many C programs insecure, but manglement doesn't insist on code reviews and too many programmers are prima donnas who feel threatened by having their code peer reviewed.

        As to concurrency; if your code needs concurrency which can only be provided via locks, semaphores and other techniques, what you really need to do is start again and design your program properly so that concurrency is eliminated.

        1. Blazde Silver badge

          Re: 1.5x slower....

          Nice theory, but back before GC languages were commonplace almost all C code was full of much simpler stack buffer overflows that are very rare now, and programmers just blamed users for feeding their applications the wrong input. Lazy, incompetent(*), and ignorant to the security risk. Things have really improved a lot. (*)That's not really fair, it was a more innocent time and most coders had no formal programming education or access to online resources.

          As to concurrency; if your code needs concurrency which can only be provided via locks, semaphores and other techniques, what you really need to do is start again and design your program properly so that concurrency is eliminated.

          If you eliminate concurrency you don't have a concurrent program. I'm not sure what your point is. The beauty of Rust is it will enforce the minimum concurrency guarantees your code needs. Immutable resources can usually be shared freely without locks (in the lingo it's shareable if it's Sync, which is most stuff you're not trying to write to). Lock free writing is also fine if compile-time ownership analysis confirms only one thread is using the resource at any one time (Send in the lingo). Otherwise the compiler tells you you can't share what you're trying to and you have the option of redesigning, or wrapping the resource with a lock or making it atomic, always explicitly. The only catch is the lazy programmers end up wrapping too much in atomically-reference-counted mutexes, because that always satisfies the compiler by covering all bases without needing much thought. But that's still preferable to data races.

        2. claude j greengrass

          Re: 1.5x slower....

          > if your code needs concurrency which can only be provided via locks, semaphores and other techniques, what you really need to do is start again and design your program properly so that concurrency is eliminated....

          unless you are working in a near real-time environment on a limited-resource microcontroller and need to measure and record data and interact with the user at the same time. IMOSHO

          1. fg_swe Silver badge

            Male Cow E..

            All multithreading needs atomic operations for the collaboration of threads. All you can and should do is to limit the atomic operations to a minimum. But not zero.

        3. PerlyKing
          WTF?

          Re: design your program properly so that concurrency is eliminated

          Good luck with that!

      2. sitta_europea Silver badge

        Re: 1.5x slower....

        "... bigger ..."

        I just looked at the Zortech compiler that I've been using for around thirty-five years.

        The entire compiler suite (C,C++,linker) is about half a megabyte -- 'make' is just under 24kBytes.

        I wrote a replacement for malloc() to prevent any follies of that kind: it puts guard bytes around every array and verifies them on every access.

        Haven't seen a crash in decades. It *will* compile with gcc and run under Linux but I don't trust it yet because the application handles real money for real people.

        I downloaded the Fil-C compressed archive. 372.6 megabytes.

        Sorry, too rich for my blood. I deleted it.

        1. fg_swe Silver badge

          Bloat

          There is zero need for bloat due to memory safety.

          My memory safe transpiler needs in the order of 10000 LOC. Generated programs can be tiny, too, if only few standard libraries are used.

          http://sappeur.di-fg.de

          1. ChromenulAI

            Re: Bloat

            Without bloated code, it's hard to justify bloated investments. More LOC == More Engineering Hands == More Money

            It's a win-win-win-win for everybody.

        2. ChromenulAI

          Re: 1.5x slower....

          He's an Unreal Engine dev so it makes sense. That engine is over-engineered and bloated to the max.

    4. StrangerHereMyself Silver badge

      Re: 1.5x slower....

      I guess Rust has a leg up here since it's just as fast as C/C++ but with a guarantee of memory safety. The drawback is that it's much harder to learn and more cumbersome to develop in.

    5. ChromenulAI

      Re: 1.5x slower....

      That's the point. If the software isn't saturating the hardware, then there is no incentive to make faster hardware. You think people would continue buying the latest GPUs if nobody pushes the metal? Think about all the lost revenue.

  6. Rich 2 Silver badge

    Good idea

    I’m not knocking this at all - it all sounds great

    It also sounds a lot like what Zig does - well worth checking out if you’re interested in this sort of thing

  7. Dostoevsky Bronze badge

    Yolo-C/C++

    This was a great article; I didn't realize El Reg was running a funnies section now. Particularly liked this. ^^^

    Some more memorable jokes from the repo:

    > "Fil-C uses a pointer representation that is a 16-byte atomic tuple..."

    Yeah, because we can write operating systems using atomics everywhere. Totally no overhead involved.

    > "All allocations are garbage collected using FUGC (Fil's Unbelievable Garbage Collector)."

    What!? JavaOS hasn't died yet? Look, if you need a GC'ed memory-safe language, it's called Go. Way better concurrency and everything. Just don't try to use either in embedded systems or mission-critical stuff like flight control software.

    > "There's no unsafe keyword."

    No, because you've just recreated Java with C syntax. Try compiling the Linux kernel with this; LMK how far you get.

    1. Blazde Silver badge

      Re: Yolo-C/C++

      It's user-mode x64 Linux only at the moment, so no need to lose sleep about writing operating systems in it. Of course, even on modern x64, outside code as performance-sensitive as a kernel, a 16-byte atomic is in no way a cheap operation, and (as far as I can tell) that's on EVERY single pointer access? (It has to be, because the 16-byte memory location isn't atomic if even one thread treats it as non-atomic).

      According to Agner Fog:

      Zen 3 - LOCK CMPXCHG16B - 28 cycles + 15 dependency latency

      Tiger Lake - LOCK CMPXCHG16B - 24 cycles, 32 cycle throughput (independent)

      Plus it will cause cache mayhem if another thread is accessing the same pointer, because even if everyone is notionally only reading the pointer each core must obtain exclusive access to the entire cache line.

      Contrast that with using an 8-byte pointer non-atomically: you can do that all day long at 2-3 dereferences per cycle with 3 or fewer cycles of dependency latency (I think, depending on how well the speculative execution is working), and you can do that PER CORE if it's all reads.

      1. Dostoevsky Bronze badge

        Re: Yolo-C/C++

        > ...no need to lose sleep about writing operating systems in it.

        I'm certainly not losing any! That's just where the most C is used. Atomics everywhere is not a path to a good operating system, though I suspect it wouldn't hurt Windows' performance much (worse). Good points!

      2. mevets

        Re: Yolo-C/C++

        "(as far as I can tell) that's on EVERY single pointer access?"

        Why, exactly, do you think it is on EVERY (I wish I could type that in twice-as-high capitals)?

        With a little bit of thought, you can imagine how a system can well use this.

        If not, with a bit of reading, you can find the groundwork for it.

        That is the endless joy, to and fro, learn and know.

        1. Blazde Silver badge

          Re: Yolo-C/C++

          I said afterwards exactly why (because it's not an atomic object if you sometimes access it non-atomically). If one thread is in the middle of dereferencing the pointer, getting its bounds check and type guarantees from the extra 8 bytes, then another thread cannot free the resource in the middle of that, hence the dereference must be atomic. I'm not immediately clear how it works without locks either, but that should be much easier to achieve so I'm willing to just believe it.

          If you know better how it works differently I'm all for the joy of understanding and learning. I did spend quite a while sifting through the source and couldn't really figure it out because the entire project is a fork of LLVM and a couple of other big projects with changes.

      3. fg_swe Silver badge

        Re: Yolo-C/C++

        There is zero need to make every pointer operation atomic, IF the type system has a notion of single- and multithreaded data types. ST code needs only simple refcounted pointers for memory safety. MT data structures/objects do need atomic locks, though. Efficient, well designed code will operate on ST data 99.99% of the time.

        see http://sappeur.di-fg.de

  8. Chris Gray 1

    Differing purposes

    I first learned C with the real K & R C. C has become universal, because you can do whatever you need (or want!) in it, because it is fairly easy to write an acceptable compiler for it, and because it was (not is!) fairly easy to master. C++ extends and generalizes C. It is not easy to write a C++ compiler, nor is it easy to master C++. (Quick: explain up-casts and down-casts; explain how you specify a specific allocator in a constructor; explain *exactly* how identical method names are disambiguated; templates; ...) I will admit that I don't really know the answers to any of those, since I'm not a C++ guru. I understand basically *what* C++ does, but the syntax details and detailed rules, no.

    There are a couple of problems with C that most folks acknowledge, but can't be changed. One is the confusion between pointers and arrays - how many bugs has that caused? Another is declaration syntax - making the declaration of a variable be the same as the use of it was a fine idea, but it led directly to the "cdecl" program. In Zed, my answer to that one is twofold: make "*" postfix, not prefix; and make the order in declarations be the *reverse* of the order in usage.

    Anyway, all of these "safe" C variants are constrained by the way the language(s) can be used. Many of those issues go away with fairly small changes in the languages themselves. But, you can't change the languages. Automatically compiling a modified C/C++ syntax into the one compilers accept has been done, I believe. But, they didn't gain a lot of use, as I recall. I imagine that's partly because they didn't really solve any of the root issues in how the languages can be (ab)used.

    So, we start afresh with a new language. But, I believe that significantly changing the syntax style of programs (which I see in Rust, and in Zig), only makes it harder for programmers to adapt, which inevitably lowers the adoption rate. Make the syntax too punctuation dependent and you make the poor typists happy, but you make the language harder to dabble in. Make the language too verbose and you make readers happy, but poor typists unhappy. You have to find the right balance.

    What good is a new language when there is sooooo much existing C/C++ code to deal with? You handle it by writing large chunks of new or replacement stuff in the new language, and by making calling conventions match so you can plug the new stuff in. Your entire program/system is not "safe" until all of it is changed over, but you are able to make continual progress.

    And there is more to it than just memory safety. C enums should be "Enumerations", not just a way to define some names. If you need the latter ability occasionally, have a different kind of type for them, which is hopefully used infrequently. (In Zed, I have 'enum' types and 'oneof' types which match C "enum"s) What about arithmetic over/under-flow? Both can lead to what seem to be memory problems, but aren't. In Zed, I run-time check 'enum' and integral operations - if you don't want them then use e.g. 'bits8'. Many of the checks can be omitted by code generators based on knowledge from the common semantic code. C 'for' loops should be replaced by proper loop constructs which do what is really needed in 99.99% of cases. For the rest, use "while". Fallthrough in "switch" statements should be explicit, with the default being to not fall through. Then you can get rid of "break", and the semantics of your code becomes clearer. If you need to get out of loops prematurely, then put the loop in a function and use "return". Semantics are much clearer.

    Etc.

    1. mevets

      Re: Differing purposes

      You might find this fun:

      https://github.com/Spydr06/BCause -- it is a "B" compiler which kinda/sorta works on modern systems. It is crazy small, and easy to manipulate.

      There are others. This, I understand, is close to the original:

      https://github.com/AlexCeleste/ybc

      There is no reason to end the joy just cause the for $$ folk want to bore you...

    2. ssokolow

      Re: Differing purposes

      > But, I believe that significantly changing the syntax style of programs (which I see in Rust, and in Zig)

      The funny thing about Rust is that, aside from the bits of syntax they changed to avoid needing the Lexer Hack, Rust used C++ syntax wherever it could.

      Rust is literally a GC-less OCaml derivative with syntax designed on the principle of "In the name of not being Just Another Research Language, let's take C++ syntax where such syntax exists and then fill the gaps with OCaml syntax".

      That weird, ugly 'a syntax for lifetimes is OCaml's equivalent to <T> because, on an abstract level, lifetimes are a special kind of generic type parameter.

      ->, let, match, Some, None, option, colon-separated postfix type declaration... all OCaml.

      Rust's decision to call its tagged unions "enums"? That's standard in the academic/functional world where they're known as "data-bearing enums" because, in the reverse of a C union, the discriminant is mandatory but the payload is optional.

      1. fg_swe Silver badge

        Syntax Not Equal Semantics

        Focus on semantics: Rust is imperative, while OCaml's key aspect is functional programming.

      2. Chris Gray 1

        Re: Differing purposes

        "Rust's decision to call its tagged unions "enums"? That's standard in the academic/functional world where they're known as "data-bearing enums" because, in the reverse of a C union, the discriminant is mandatory but the payload is optional."

        Interesting - I hadn't thought of them that way. In Zed I can have a 'case' field in a record (my mind was tracking Pascal variant records), and enums, so it's explicit. In Rust, can you use one of the "enum" types as an array bound type, thus requiring all indexes to be "enum" members? I've found that to be a useful concept.

  9. Paul Herber Silver badge

    Make Algol Great Again.

    As if it ever was!

    1. Chris Gray 1

      Probably it was, for the time

      Wasn't most of the Burroughs mainframe OS written in Burroughs Algol? I'm guessing the compiler hosted itself as soon as possible.

      I imagine they had Fortran and Cobol (maybe even RPG) compilers as well, likely written in their Algol. And no, I have no direct experience.

      1. fg_swe Silver badge

        Re: Probably it was, for the time

            According to Sir Tony Hoare, there was a Fortran-to-Algol transpiler for an ICL machine. It exposed tons of indexing bugs in "proven" Fortran code.

    2. fg_swe Silver badge

      Indeed !

      Algol was high quality technology as opposed to the C-Hamburger-fast-food stuff.

    3. tsuch

      It was great! Algol 60 was widely used outside of the U.S. Algol 68 did not become popular, but C's designers were well aware of it and C was heavily influenced by it.

    4. Torben Mogensen

      Algol

      "Make Algol Great Again.

      As if it ever was!"

      Tony Hoare remarked about Algol 60: "Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors".

      So, yeah, it was great. Also, C borrows a lot from Algol, mainly just replacing begin and end by { and } and shortening a lot of operator names to one or two characters. Oh, and removing all run-time safety properties.

  10. sarusa Silver badge
    Meh

    A nice personal project, but...

    Looking at all the caveats on this I don't see any reason you'd use this unless you had a C program that (deep breath) didn't need to co-exist with any other C libraries or ABIs, you didn't care if it ran 2-5x slower, you only needed it on linux, you don't mind that it's using garbage collection, you don't mind 16-bit atomics, you don't mind one guy who named it after himself having complete control over it, etc. etc. and you still didn't want to just rewrite it in something else.

    I am sure those use cases exist for some people, but the lack of being able to use existing libraries really makes this a 'why use this instead of another language?' for me personally. Of course he claims some of these will be fixed eventually (and some won't, like the library thing), but we have working alternatives that work now. So this is just an interesting curiosity to me.

    1. klh

      Re: A nice personal project, but...

      Just use Go at that point, it's probably going to be better.

      Slight correction, the guy who named it after himself apparently did it on company time/hardware and one of the rather litigious corporations owns the full copyright.

      The irony of mentioning that after recommending Golang doesn't escape me of course :)

    2. Anonymous Coward
      Anonymous Coward

      Re: A nice personal project, but...

      16 *byte* atomics, no?

      1. sarusa Silver badge

        Re: A nice personal project, but...

        > 16 *byte* atomics, no?

        Yes, totally my bad and thanks for pointing that out (have an upvote). I knew it was 16-byte (or would not have complained, 16-bit atomics are fine), but apparently my fingers rebelled at actually typing '16-byte atomics'. Because who would ever?

        1. fg_swe Silver badge

          Fat Pointer Disease

          Sane languages need only 16, 32 or 64 bit (depending on hardware size) pointers. Sane languages do not have pointer arithmetics. The objects/structs pointed at typically need 16 or 32 bit of reference counter. Only multithreaded objects need to operate on this counter using atomic instructions. The language type system should make the difference between single and multithreaded records/objects/structs crystal clear.

          Then memory safe pointers/references can be compact and very fast.

          1. G40

            Re: Fat Pointer Disease

            Who keeps anonymously downvoting? Caddish behaviour!

            1. Anonymous Coward
              Anonymous Coward

              Re: Fat Pointer Disease

              How could C still be C without pointer arithmetic? If somehow there were a version of C which didn't allow it, developers could still cast a pointer to some size of int and back again, and the consequences would probably be worse unless care is taken to find out the pointer size for the current CPU architecture. And if casting weren't allowed, a union could be used to cast to/from pointers. And if unions weren't allowed, you could probably do something by fudging memcpy. And so on... In the end a good portion of the language would be removed to stop pointer arithmetic, and developers would find they couldn't even loop through that most basic of things, character arrays. An utterly pointless proposal.

              I've not looked at how Fil-C works, but I think the writer of TrapC also knows it's pointless to try and ban pointer arithmetic and so chose to modify how it works instead of removing it from the language.

              It also seems people here are replying as if they aren't aware of -Warray-bounds (included in -Wall).

              And that is the reason for the downvotes.

              1. fg_swe Silver badge

                Pointer Arithmetics, Funny Casting

                It is very much possible to write fully functional C application programs without these two things.

                The only reason for pointer-magic I can see is system-level programs, which need to perform special things such as copying program images and the like.

                1. Anonymous Coward
                  Anonymous Coward

                  Re: Pointer Arithmetics, Funny Casting

                  So as you admit, you can't remove it from the language, so it's a pointless proposal. You have to work with it, as these two recent versions of C are doing.

  11. JoeCool Silver badge

    C and C++ require manual memory management. Good god.

    It's really hard to take seriously an article that makes fundamental mistakes about programming. It is trivial to write non-trivial systems that eschew "manual" or dynamic memory management in C. It is even easier with C++.

  12. MakeAvoy

    Personally I think Rust is a joy to code in, and the syntactic sugar is tasty. But I know writing lifetimes is frustrating, there are still scenarios where unsafe is needed, and async between threads is hellish. But the compiler, with its clear and concise error output, instills a lot of confidence in my code that no other language has. If these new C compilers really can guarantee safety maybe I'll try going back to C, but it's going to take a lot to pull me away from cargo.

    1. fg_swe Silver badge

      C can only be made "somewhat" memory safe by extremely expensive efforts such as

      A) valgrind

      B) 16 octet fat pointers

      C) Always-atomic, always-expensive reference counting

      I still fail to see how "Fil-C" can stop the casting of an integer into a pointer (it can be part of a struct that is cast to) and then all hell breaking loose.

      1. Anonymous Coward
        Anonymous Coward

        Are you planning on lobotomising the language into uselessness so nothing can be done, or using lobotomised developers who would still manage to cause a segfault in spite of disabling a load of language features? If not, it doesn't matter.

  13. Manveru

    Safe C or safe binary?

    From the description it looks like we're not talking about a safe language but changes in compiler semantics to build in some runtime checks. Pascal was doing safety checks at runtime in the '80s.

    Circle/Safe-C++ is a modified language, a subset of the original plus extensions. Leaning toward Rust much more.

    So, no I remain unconvinced this is the right direction.

  14. LoveYouLoads

    Valgrind?

    I’ve always found Valgrind good for helping maintain memory safety of my C code. I read through all the comments here and I couldn’t find mention of it. I’ve been out of the C world for a while …. is it not something developers use anymore?

    1. fg_swe Silver badge

      Somewhat Yes

      valgrind will slow down your program by a factor of 100 to perform type checking. Other memory safe languages such as Java, Rust, C#, Sappeur only reduce performance by a factor of 3 to 7.

    2. mfalcon
      Thumb Up

      Re: Valgrind?

      The current batch of articles about the latest silver bullets that will get the industry to a better place often completely miss the point that good design methodology and a well designed test framework are more important than the language you code in.

      If programmers only start thinking about how to test their work toward the end of the project then it is already far too late. I write my unit tests in parallel to the code. This helps refine my design and find errors. Win Win. Sure, Rust and other safe languages will help with some classes of errors, but there are lots of other errors where they make no difference.

      The endless search for the next silver bullet is always doomed to fail in the long run. Valgrind is fabulous. If you can keep unit tests to a reasonable size then Valgrind being slow is usually not much of a problem.

      A program designed to be testable will always be better than one where the programmer expected the implementation language to save them from mistakes.

  15. Locomotion69 Bronze badge

    Oh dear

    With all these initiatives making C(++) memory safe, what shall the ordinary C(++) developer use?

    In practice I am afraid they will settle for the environment with the least compile time errors/warnings (I would...).

    It is clear now that Rust has its merits in addressing memory safety, but using it to convert existing code bases should be discouraged - this will not provide the desired improvement without (a lot of) redesign.

  16. Anonymous Coward
    Anonymous Coward

    You can either write your programs in C++

    or in crayon.

    1. Anonymous Coward
      Anonymous Coward

      Re: You can either write your programs in C++ or in crayon

      Could you consider getting over yourself for a second?

      You can either write your programs in C++ or you can have them memory safe.

      FTFY!

    2. Apocalypso - a cheery end to the world
      Joke

      Re: You can either write your programs in C++

      > or in crayon

      In the same spirit, I propose freemalloc() - a function that auto-free's the allocated memory after exactly 10ms. All you have to do is write fast code that does what it needs to do in < 10ms. Simples. ;-)

  17. Torben Mogensen

    What is needed to make C safe

    C (and, by extension, C++) is unsafe in so many ways that making a compiler + runtime system for C that makes it safe is bound to make programs run slower. So while this may be a solution for compiling "dusty deck" programs without modification for applications where a ×2 to ×5 slowdown is not important, I can't see a way around replacing C by languages that are safe by design for applications where speed is important.

    It has long been known how to make C safe(r):

    - Add garbage collection. Replace free() by a no-operation and free memory by GC. Because C can do all sorts of stuff with pointers, this requires conservative GC: Any value that could be a pointer into the heap is considered to be a pointer into the heap. So if an integer by chance happens to be in the range of values that (if it were cast to a pointer) points to the heap, we must preserve the object it points to. But C pointers need not point to the header of a heap-allocated object: they can point anywhere from the start of an object to one word after its end. Anything else is undefined behaviour. So to identify objects, we need to know where objects start. This can be done by a global table of start and end addresses of heap objects, where the GC compares a value to these to find the header of the object. This gets expensive if there are many heap-allocated objects. Alternatively, every heap object starts with a 64-bit "magic word", which is a value that is unlikely to be generated by computation. You can then search backwards in memory until you find a magic word, and you have found the header of the object. Not 100% safe, but works most of the time. Alternatively, use fat pointers.

    - Fat pointers are represented by two machine words: One that indicates the start of the object into which the pointer points, another that is the actual value of the pointer. This makes it easy to find the headers of objects, and you can also do range checking (as the headers indicate the size of objects). It makes pointers bigger, and range checking costs, but it allows precise GC. Casting integers to pointers (and back) is a problem, though. Like above, you can search for the header of the object to which the new pointer points (and report an error if it doesn't point to any), but this is costly and doesn't give strong guarantees. In addition to explicit casts, storing a non-pointer value into a union and taking it out as a pointer is problematic. So unions should be tagged with field indicators and checked when you store and read values from the union. And since any integer can be cast to a pointer, you can never be sure when a heap object is dead: It may be accessed later when an integer is cast to a pointer. There are coding tricks such as using XOR to traverse lists bidirectionally that will make this happen, so you cannot guarantee 100% memory safety.

    So, it is a better solution to design a language where you can not cast integers (or any other value) to pointers, and where pointers always point to the headers of objects. This allows single-word pointers, and by reading size information from object headers, range checks can be made. You can no longer just increment a pointer in a loop to traverse an array (you have to use offsets from the base pointer), but that is a small cost -- usually, base+offset addressing is supported in instruction sets. And the compiler may do strength reduction to transform pointer+offset to direct pointer when it is safe to do so.

    Unchecked unions should also be avoided, as should null pointers. You can use option types instead. Compilers can compile these into values where 0 means "none" and any non-zero value is a real pointer, so there is no run-time overhead (apart from checking if the value is 0, which is required to avoid following null pointers). Rust does this.

    Implicit casts should also be avoided. An explicit cast need not have any runtime cost and not making them explicit is a sign of programmer laziness. Null-terminated strings are not exactly safe either.

    Some will say that GC is costly. Well, malloc() and free() are not exactly free either, and they are prone to fragmentation which can not be avoided as long as you can cast integers to and from pointers, as this prevents compacting the heap to close gaps.

    1. fg_swe Silver badge

      Garbage Collection

      Typically, garbage collected systems need 2x the amount of RAM an equivalent reference counted system needs. Reason is simple: you cannot run the GC all the time or efficiency goes to zero. So the program must "accumulate" serious amount of garbage, before the next GC run.

      Also, the non deterministic GC execution point in time is bad for semi-realtime things such as ergonomics.

      GC is great for academic systems such as functional languages or for various accounting efforts. Not so much for the real world that interacts with fingers, signals, sensors, actuators, motors, brakes and so on.

    2. JoeCool Silver badge

      Re: What is needed to make C safe

      "making a compiler + runtime system for C that makes it safe is bound to make programs run slower. "

      That's a general tradeoff for any language. There is no Free Lunch.

      C++ has smart pointers. Wouldn't that be the far better path for any C program to "upgrade" its heap management?

      "Compilers can compile these into values where 0 means "none" and any non-zero value is a real pointer"

      Isn't that the C++ def of "nullptr" ?

      "Implicit casts should also be avoided. An explicit cast need not have any runtime cost"

      Implicit casts are those that the compiler can perform safely. Explicit casts can be safe, or can trigger a compiler warning that should not be ignored, or better yet promoted to an error.

  18. ChromenulAI

    I like to incorporate memory errors, crashes and vulnerabilites into the software I write. This introduces opportunities to build better customer-relations as they will almost always reach out to me in need of support. Of course, I almost always have a quick turn around patch that solves their problem. I also give them a discount on the next version upgrade that is obscenely overpriced to begin with. By the time the whole process is done, I will have a happy customer for life that will remember how awesome I was at fixing their problem. You know what happy people love to do more than anything? Gossip and brag about their awesome choices in software that their friends didn't make because they stuck with the big software houses that refuse to fix even basic memory leaks which is a big +1 for them and a mega +10 for me.

    Now, if I produced software by the standards that some of these blowhards in this comment section are speaking volumes to, then people would buy my software and completely forget I even existed. A successful business is all about developing quality relationships with your customers. It's very difficult to establish and build a relationship of any quality when your software works all the time.

    That's why I write my software in C++ and not Rust.

    Good day!

    1. sabroni Silver badge
      Happy

      Quality!

    2. fg_swe Silver badge

      Broken Window Fallacy

      Europe became the leader of all human knowledge by not believing in such stuff.

      https://en.wikipedia.org/wiki/Parable_of_the_broken_window

      Kepler, Gauss, Newton, Leibniz, Volta, Ampere, Zuse, Planck, Heisenberg, Turing, Gödel, Shannon, Wirth, Hoare - stand on their shoulders !

  19. StrangerHereMyself Silver badge

    See C#

    If Microsoft had made C# an AOT compiled language from the start there wouldn't have been a need for Rust. Or Fil-C for that matter.

    Fil-C does have the advantage that it can compile a huge code base out of the box, thereby making it "safe" or at least helping in the detection of memory-related bugs.

  20. TeeCee Gold badge
    Facepalm

    Memory safe language number scramson.

    Do you know why C and C++ remain so popular?

    Because there's only one of each of them, so you don't have to start from scratch whenever the flavour of the month changes.

    1. ChromenulAI

      Re: Memory safe language number scramson.

      They remain so popular because it allows people who have been abused to return the favor to the next generation. Remember to pay it forward. Not backwards. Fuck the boomers.
