Famed software engineer DJB tries Fil-C… and likes what he sees

Famed mathematician, cryptographer and coder Daniel J. Bernstein has tried out the new type-safe C/C++ compiler, and he's given it a favorable report. The modestly titled Notes by djb on using Fil-C doesn't sound like much, and indeed, the introduction is similarly modest: I'm impressed with the level of compatibility of the …

  1. m4r35n357 Silver badge

    Interesting

    DJB's attitude to Debian seems to have softened with time - I remember in the 1990s he absolutely refused to support Qmail on Debian because they insisted on their own directory layout, which conflicted with his intentions.

    I think Qmail would have been more widely used if they had been able to agree - it was certainly a unique approach, leaning heavily on the file system for configuration!

    He had an alternative DNS server too . . .

    1. Alan J. Wylie

      Re: Interesting

      There used to be a licensing issue, too.

      Debian Mail Questions

      qmail is distributed under a licence which prohibits the distribution of modified binaries. Debian's policy requires that mail transfer agents conform to certain standards in order to be included in Debian as packages. qmail doesn't meet all of these standards in its standard upstream form, so it's impossible for Debian to distribute a qmail binary package which satisfies both policy and upstream.

      Though 18 years ago, DJB changed it to "Public Domain".

    2. Liam Proven (Written by Reg staff) Silver badge

      Re: Interesting

      > He had an alternative DNS server too . . .

      I know. I mentioned it and linked to it. It was cunningly hidden in the line that read:

      «

      his DNS server, djbdns.

      »

      1. Anonymous Coward
        Anonymous Coward

        Re: Interesting

        It was cunningly hidden in the line ... *

        Liam !!!

        You sly dog ... 8^D

        * Thanks for that, I was in such a bad mood.

        .

  2. Alan J. Wylie

    K&R C

    A decade ago I worked for a cyber security company that used qmail. I tried to drag the code kicking and screaming into the 21st century, and failed.

    It reminds me of the code I wrote in the '80s (though I did a lot of linting): no function prototypes or type checking.

    1. Alan J. Wylie

      Re: K&R C

      Example:

      $ grep -A3 sig_catch sig_catch.c
      void sig_catch(sig,f)
      int sig;
      void (*f)();
      {
      $ grep sig_catch *.h
      sig.h:extern void sig_catch();
      $
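
      For contrast, the ANSI/ISO prototype form of the same interface (a sketch only - the int-taking handler is the standard signal-handler shape, not necessarily qmail's exact code):

      /* sig.h -- ANSI-style declaration: argument count and types are checked */
      extern void sig_catch(int sig, void (*f)(int));

      /* sig_catch.c */
      void sig_catch(int sig, void (*f)(int))
      {
          /* body as before; the compiler now rejects e.g. sig_catch("TERM", f) */
      }

      With a prototype in scope, passing the wrong number or type of arguments is a compile-time error; with the K&R form above, it is silently accepted.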

  3. jake Silver badge

    Not quite ...

    "the language's extreme lack of safety is responsible for the bulk of the software vulnerabilities that require constant updates."

    The language is responsible?

    Is the hammer responsible for bending nails?

    1. FIA Silver badge

      Re: Not quite ...

      "the language's extreme lack of safety is responsible for the bulk of the software vulnerabilities that require constant updates."

      The language is responsible?

      The language's lack of built-in safety is a major contributing factor. Yes.

      The attitude that 'it's all the stupid programmers' fault and they should just try harder' is another one.

      It's the IT equivalent of the macho man... "I can write safe code... You must be wrong."

      No, you're just a better programmer than me, but unless you can guarantee all code is written by coders of your level then maybe make the language safer... We have modern CPUs and memory, so we can do far more rigorous pre-compilation checks than the PDP-11 could. Why we wouldn't want to do this is beyond me.

      (It never ceases to amaze me how people who spent their youth embracing the new and exciting suddenly start to eschew it when it wasn't developed by their generation.)

      Is the hammer responsible for bending nails?

      If you embrace modern tooling suddenly not everything is a nail. ;)

      1. JacobZ

        Re: Not quite ...

        Is the hammer responsible for bending nails?

        Tony Hoare certainly thought so, and he knew a thing or two about writing languages: "The author of a language is responsible for errors commonly made by its programmers" (wording may not be exact, intention is.)

        When I was learning Rust and I got to the sentence that began "A common mistake made by Rust programmers..." I threw the book against the wall. (Not my only issue with Rust).

        Look at it this way: if you had two hammers, and in the hands of the same skilled carpenter one drove straight every time and the other bent one nail in ten, would you say that the hammer was responsible? Because I sure would.

        1. jake Silver badge

          Re: Not quite ...

          It's a poor craftsman who blames his tools.

          1. VoiceOfTruth Silver badge

            Re: Not quite ...

            This is a thoroughly misunderstood quote. It dates back to when craftsmen owned their tools; they weren't borrowing somebody else's. I don't expect every programmer to write their own compiler or language, so they do borrow somebody else's tools. Perhaps it is fairer today to say it is a poor programmer who blames the compiler or language.

            1. jake Silver badge

              Re: Not quite ...

              I personally use the term based on long observation of so-called "craftsmen" (ab)using their tools. And then blaming the tool.

              What? Kids today don't have to write their own compiler in order to get a degree? Well, THERE'S a part of your problem ...

              1. Ken Hagan Gold badge

                Re: Not quite ...

                I'm pretty sure you can get a CS degree in the UK without being aware that there are such things as lexer and parser generators, without actually writing anything in assembly language, and without knowing more than three programming languages. (And no, HTML and CSS do not count as programming languages.)

                In fairness, I'm equally sure you could have got a CS degree 50 years ago without any knowledge of GPGPU programming, MMLs, TCP/IP and all that runs atop of it, and GUI frameworks.

            2. Anonymous Coward
              Anonymous Coward

              Re: Not quite ...

              Is the same ‘poor’ individual to blame with their other assorted tools that lead to poorly delivered/dangerous outcomes (at a personal or societal level)?

              - speeding/bad driving

              - Lycra-clad pervert habitually ignoring rules of road on their righteous bicycle

              - not being able to do man-jobs like holding a BBQ without burning everything

              - inability to do DIY??

              - being a douche and parking in a disabled bay or the Fire Lane?

              .. etc.

          2. Bill Gray Silver badge

            Re: Not quite ...

            It's a poor craftsman who blames his tools.

            Well, yes. But it's also a poor craftsman who won't occasionally say "wow, nice tool; I could do better work with that."

            I'm a pretty good programmer (in C and a bit of C++). I still routinely make misteaks in how I handle memory (just fixed a really embarrassingly stupid one about an hour ago). I do so more rarely now than I did as a PFY. That's partly because I've learned a lot, but also partly because I've picked up a few better tools. I am immediately distrustful of anyone who thinks their code is bug-free.

            Among other things, I'll sometimes run my code through various compilers (gcc, clang, Microsoft®, sometimes older ones like OpenWATCOM) to see if one picks up bugs or emits warnings that the others didn't. I may give Fil-C a try. It wouldn't have to catch many problems to be worthwhile.

      2. jake Silver badge

        Re: Not quite ...

        That was a long-winded way of saying "C is HARD!".

      3. martinusher Silver badge

        Re: Not quite ...

        Memory safety is just not a property of a language like C. If you need memory safety then you either have to design it in or use a language and technique that manages memory use for you. If you wanted to add it to C you could, but you'd be using a library (written in C, most likely) that would implement it for you.

        One of the most important properties of C is that it distinguishes between the language implementation and its libraries. Many programmers, especially newer ones, don't quite get this distinction.
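
        As a minimal sketch of "memory safety as a library, written in C" (hypothetical names; assert() stands in for whatever error handling you'd really want):

        #include <assert.h>
        #include <stddef.h>

        /* A bounds-checked buffer: the check C itself never performs */
        typedef struct {
            unsigned char *data;
            size_t len;
        } safe_buf;

        unsigned char safe_get(const safe_buf *b, size_t i)
        {
            assert(i < b->len);
            return b->data[i];
        }

        void safe_set(safe_buf *b, size_t i, unsigned char v)
        {
            assert(i < b->len);
            b->data[i] = v;
        }

        An out-of-range index aborts noisily instead of corrupting whatever lives next door - but note the safety lives entirely in the library's discipline, not in the language.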

    2. david 12 Silver badge

      Re: Not quite ...

      Is the hammer responsible for bending nails?

      If you are a professional programmer, you should be using a nail gun, not a hammer.

      1. jake Silver badge

        Re: Not quite ...

        A true professional has both nail guns and hammers in his toolbox ... and knows which tool to choose for a given job.

        1. herman Silver badge

          Re: Not quite ...

          Yup - Hammers are strictly used for driving screws.

          1. jake Silver badge

            Re: Not quite ...

            Common misconception.

            Those are spiral-shanked or ring-shanked nails, not screws.

            1. JWLong Silver badge

              Re: Not quite ...

              They're called box nails.

              From days of old.

          2. Bebu sa Ware Silver badge

            Re: Not quite ...

            Yup - Hammers are strictly used for driving screws.

            I recall as a kid being shown that when turning in a wood screw, when there is a ¼ to ½ turn to go, you nail it in with a hammer. No idea why. Makes more sense to nail it in and give it a final half turn. Possibly the blind leading the blind.

            1. Blazde Silver badge

              Re: Not quite ...

              Hammering it in the last bit is important because it makes it easier to get out again by making a complete hash of the otherwise perfectly tight thread you've created in the wood.

              Then, once you've made whatever silly mistake inevitably leads to needing to take the screw out again (8 times out of 10 it's failure to measure twice) the trick is to squeeze a bit of wood glue into the gaping mess of a hole you've created and quickly turn the screw back in, jostle it around until it's kinda flush with the face of the wood, and then try not to disturb it while the glue dries. If the hammering created any nasty splits in the wood this is also the time to try to make those good enough with generous squirts of glue. Don't be afraid to go mad with glue - the orbital sander is your friend. If the screw still isn't working out, don't sweat it. Dremel off the head to sufficient depth, filler in the hole, and repeat the whole process again nearby while patting yourself on the back for adding character to the piece.

          3. Ochib

            Re: Not quite ...

            The good old Birmingham Screwdriver

          4. Anonymous Coward Silver badge
            Gimp

            Re: Not quite ...

            Hammers are strictly used to re-educate the apprentice

    3. Michael

      Re: Not quite ...

      I had a beautifully crafted C and assembly-based application for a microcontroller which I was certain had no memory faults.

      It had a memory issue when I reprogrammed it over the air. After weeks of testing I found the issue: it ended up being a bug in the AVR GCC compiler.

      My code was fine. No third-party code was used - apart from the compiler.

      I should have just written it all in assembly...

    4. Anonymous Coward
      Anonymous Coward

      Re: Not quite ...

      Is the hammer responsible for bending nails?

      +1

      True fact:

      There are programmers and there are programmers.

      The first ones know how to code properly and the rest have a marked tendency towards the bending of nails.

      .

    5. flayman Bronze badge

      Re: Not quite ...

      Grammar fail. It's the "extreme lack of safety" that is the subject in that sentence.

  4. FIA Silver badge

    When it comes to C, Dan Bernstein should know. He wrote some of the safest C code out there

    Doesn't that make him one of the worst people to assess this tool? We all know that some people (mainly commentators here) can write 100% completely safe C with little or no effort.

    However, I'm not one of those people... I write buggy shite... like most C developers... isn't it someone like that who should be assessing it? (I'm not volunteering; I'm not that competent.)

    1. Liam Proven (Written by Reg staff) Silver badge

      > Doesn't that make him one of the worst people to assess this tool?

      Oh come on. Didn't you follow the link and at least skim the intro to his "notes"?

      He recompiled a large part of Debian 13.

      That's the whole point here: it _wasn't_ his own code.

      > some people ... can write 100% completely safe C

      Of course -- and I think this was your intent -- what is really scary and really dangerous are the many who _think_ they can write safe, correct C.

      I trust someone who says "I can't write safe C and neither can you" far more than anyone who says "most people can't write safe C but I can."

      But I am sure we all know that. Except, obviously, those who think it doesn't apply to them, not realising that it _especially_ applies to them.

    2. Anonymous Coward
      Anonymous Coward

      DJB's safest C code

      "He wrote some of the safest C code out there"

      Depends on your definition of safe. I suppose djbdns can be considered safe because nobody uses it. The code's been abandonware for well over a decade. It's never been used for anything important.

      Incidentally, his so-called security guarantee isn't worth anything because His DJBness gets to decide what is and isn't a security hole. I quote from his website: "My judgment is final as to what constitutes a security hole in djbdns."

      1. Blazde Silver badge

        Re: DJB's safest C code

        It was widely used back in the day when literally every other codebase widely used for critical networked services, including BIND, was full of simple stack overflows.

        As an aside he offered $500 at least as far back as 2001, which makes it possibly the earliest security-focused bug bounty programme? (He only bumped it to $1000 shortly before having to pay out, ironically).

        None of the vulnerability hunters I knew of got anywhere looking back then; it was a tight code base. However, it wasn't coded especially defensively, which is very tough to do in C anyway, and even tougher to do back then without sacrificing performance, because compilers weren't as mighty as they are now. It relied on DJB being extremely careful and having complete control of the project. There were places where a careless but innocent-looking local change could have introduced vulnerabilities elsewhere.

    3. Roland6 Silver badge

      >” Doesn't that make him one of the worst people to assess this tool?”

      Depends on what you are assessing. It would seem the assessment here wasn't whether it did or didn't find unsafe code; it's that the tool would accept and compile long-established C code.

      Obviously, we need to use other tools to assess whether the Debian 13 code is or isn't safe. But the implication of DJB's tests is that you can replace your existing compiler and not have to be unduly worried about refactoring your old code.

    4. retiredFool

      If you really do write bad code, check out valgrind if you make memory booboos. Fantastic run-time tool, no recompile needed. I do recommend -g for the best messages, though.

      1. Bill Gray Silver badge

        Valgrind

        Upvoted and vigorously agreed with. I was late to the Valgrind party, but now consider no project complete until Valgrind gives it a clean bill of health.

        (I think some may downvote this on the grounds that no tool can catch all C errors, or even probably a majority of them. No argument there. But use of Valgrind, or compiling with -Wall -Wextra -Werror, or similar tricks, will catch a lot of errors quickly and easily. Why would I not embrace that?)
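
        For anyone who hasn't tried it, a minimal session (the bugs are contrived, but they're the classic pair Memcheck reports):

        /* leak.c -- one heap overrun, one leak */
        #include <stdlib.h>

        int main(void)
        {
            int *p = malloc(4 * sizeof *p);
            p[0] = 1;
            int x = p[4];   /* "Invalid read of size 4", one element past the block */
            return x & 0;   /* p never freed: "definitely lost" in the leak summary */
        }

        $ gcc -g leak.c -o leak
        $ valgrind --leak-check=full ./leak

        (The -g is what gives Memcheck file names and line numbers in its report.)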

        1. Lee D Silver badge

          Re: Valgrind

          "Necessary but not sufficient" is the phrase.

        2. Chris Gray 1

          Re: Valgrind

          Thanks to both of you for bringing up Valgrind. Caused me to read a bit more on Wikipedia, answering my question about whether it worked on C sources or on binary files. The fact it uses the latter is good, since it should work on binaries produced by my Zed compiler.

          A friend ran valgrind (Memcheck I think) on my earlier Zed stuff and found one uninitialized variable, easily fixed. It also reported a problem that I was not able to track down - the badness seems to have been going through my bytecode engine's memory.... That *should* have been OK, but it would have required some "interesting" hacks to try to chase it further.

          Now that I have "zedc" the standalone compiler, I should be able to valgrind it (biggest program I have used it on so far is 2K lines of Zed). Valgrind the compilation, then Valgrind the test program - whee!

          1. retiredFool

            Re: Valgrind

            I use valgrind on my product, around 1M lines of C. For compute-heavy areas it can be pretty slow. For GUI stuff, where most of my errors are edge cases, not bad at all. I am still amazed at the quality of this free program. Years ago there was a program from a company that did something similar on Suns, and it was thousands per copy.

  5. MarkMLl
    Coat

    Type checking and compatibility

    I was thinking the other day that it's interesting that some of the earliest "surviving" languages (ALGOL-W and Pascal) plus of course C have type as an attribute of a variable and massage the result of an expression to allow it to be assigned. This automatic type conversion is a real hazard, particularly where assignments are chained.
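
    A minimal C illustration of the hazard (contrived values; assumes a 32-bit int and an 8-bit signed char):

    #include <stdio.h>

    int main(void)
    {
        long big = 100000L;
        int i;
        char c;

        c = i = big;        /* chained assignment: big fits in i, then is silently
                               truncated to fit c (100000 % 256 = 160, i.e. -96 as
                               a signed 8-bit char); no diagnostic is required */
        printf("%d\n", c);

        unsigned int u = 1;
        int s = -1;
        if (s < u)          /* s is converted to unsigned, becoming UINT_MAX... */
            printf("never reached\n");   /* ...so this never prints */
        return 0;
    }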

    As I understand it Python v3 does the exact opposite: variables are initially untyped but an expression has a type.

    And Wirth's later languages tightened up the assignment and automatic conversion rules a great deal.

    It appears to be fashionable for commentators to argue that type-safe languages aren't needed since the majority of recent security vulnerabilities etc. are logic errors rather than simple off-by-one or use-after-free. But that ignores the fact that a high proportion of current software is 20+ years old and that dozens of errors have been weeded out over time: sometimes resulting in real damage to individuals and companies using them.

    Strong typing would have prevented many of those errors getting into production code. As such anything that can be done to promote it is a Good Thing, particularly if it has some chance of being applied to legacy code since rewriting it risks introducing more subtle logic errors.

    1. Chris Gray 1

      Re: Type checking and compatibility

      Compiled languages don't do the kind of type conversions that run-time-typed languages do (can't say about Python - it's years since I did a few experiments with it). I've actually used both Algol-W and Pascal (long, long ago), and neither does that sort of thing.

      The problem with not declaring a type for variables is that the variables can then be used in expressions, and then you don't really know what is going on without finding the variable's declaration and carefully evaluating the type resulting from the expression assigned to it.

      By declaring the type of a variable you reduce the effort needed to understand the code - you only need to look at the declaration to know - not any later uses. Taking the type from the initialization expression can be difficult for a human reading unfamiliar code.

      One of the minor quandaries I have when programming in my Zed language (new, compiled, strongly typed, to native binary) is which form to use for simple declarations. "bool flag := <bool expression>" and "var flag := <bool expression>" both do the same thing. But with "var" (more often "con" when it is something that won't change later in the code) you can set up multiple simple variables of differing types on the same line. With "bool" starting the declaration you can only set up "bool" variables. Most languages I know have these niggles.

      ((Liam: progress is slow - currently I use "gcc" to link the compiler, and in my latest experiments it crashes something called "bfd" which seems to be part of "ld". My old route of creating several .a's and linking with those still works fine.))

      1. JacobZ

        Re: Type checking and compatibility

        FWIW, and that's probably not much, I prefer "bool flag := <bool expression>". It is a useful redundancy that ensures that the (potentially hard to read) expression is of the type that the author and later reader think it should be. While this is not a big deal for bool, it can be a big deal for numeric types or for references (how many C bugs occur because something is a pointer to a pointer, rather than just a pointer?)

        Also FWIW my own language project [actually a pre-processor to Go since they won't add it to the language] goes even further with compile-time checking. It allows the programmer to define named types, much like any language, and then also "compound types", or as I call them, dimensions. For example, in C-ish pseudo-code it might look something like...

        type Meters double;
        type Seconds double;
        type Velocity (Meters/Seconds);

        And then if you have m, a variable of type Meters, s of type Seconds, and v of type Velocity, you can write:

        v = m/s;

        or

        m = v*s;

        but not

        v = m*s; // wrong dimensions, failed by dimension checker even though underlying types are all double

        The dimensions will be checked at compile time, leaving runtime code as efficient as if the dimension checker never existed.

        Of course, this is a relatively trivial example. The value becomes more apparent the more complex an expression becomes, and when calculations are chained together.

        1. LVPC Bronze badge

          Re: Type checking and compatibility

          On a side note, Boolean types are defective by design. In real life, they should be TRUE, FALSE, UNINITIALIZED, and ERROR. It's only two bits, but it avoids making stupid assumptions. Just my two bits.

          And it's an example of how a language construct can be made less error-prone.

          Started doing that decades ago. It works.
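
          A minimal C sketch of the idea (hypothetical names; in C it costs an enum rather than two bits, and zero deliberately means "uninitialized" so a forgotten assignment is detectable):

          typedef enum {
              B4_UNINITIALIZED = 0,   /* zero-initialised statics land here, not on "false" */
              B4_FALSE,
              B4_TRUE,
              B4_ERROR
          } bool4;

          int b4_usable(bool4 b)
          {
              return b == B4_TRUE || b == B4_FALSE;   /* anything else is a bug or a fault */
          }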

          1. MarkMLl

            Re: Type checking and compatibility

            > TRUE, FALSE, UNINITIALIZED, and ERROR

            I once worked on a preprocessor for documents to be submitted to Ventura Publisher, and working from memory there were something like seven states of a flag. I really can't remember the details after 35 years or so, but it's very easy to imagine there being separate states for "undefined", "undeclared", "true-but-may-be-overridden" and so on.

          2. Dan 55 Silver badge

            Re: Type checking and compatibility

            TRUE, FALSE, UNINITIALIZED, and ERROR

            But having that for booleans, wouldn't you want it for the rest of the types too? And if so, how would you implement it?

            Wouldn't it be clearer to just have some special return code enum for all functions and not mix it with boolean?

          3. Chris Gray 1

            Re: Type checking and compatibility

            Disagree, at least as far as strongly typed compiled languages are concerned. Type 'bool' is true/false. If you want more, then feel free to add such a type to your programming language, but the historic use of the simple 'bool' is very strong.

            In my AmigaMUD programming language, type "status" had values "success", "failure", "continue" to deal with attempts to do things in the game world. What you need/want depends on what you are doing and the context. Leave 'bool' alone.

          4. Gene Cash Silver badge

            Re: Type checking and compatibility

            > On a side note, Boolean types are defective by design. In real life, they should be TRUE, FALSE, UNINITIALIZED, and ERROR. It's only two bits, but it avoids making stupid assumptions. Just my two bits.

            No... that is just not a Boolean. If you want that, declare your enumerated type. Leave my Booleans alone.

          5. Roland6 Silver badge

            Re: Type checking and compatibility

            I think way back (K&R white book) C booleans were set to zero/false by default on initialisation. It was only later that they became undefined until the user set a value.

            Undefined is a state that should only exist in debug, i.e. it gets discovered and corrected in debug.
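
            For reference, the rule that has been stable since the white book, shown with int since C had no boolean type before C99 (a minimal sketch):

            #include <stdio.h>

            int global_flag;            /* static storage duration: guaranteed zero */

            int main(void)
            {
                static int s_flag;      /* also guaranteed zero */
                int a_flag;             /* automatic: indeterminate until assigned */

                printf("%d %d\n", global_flag, s_flag);   /* always "0 0" */
                /* reading a_flag here is undefined behaviour -- the state that
                   should only ever be seen, and fixed, in debug */
                return 0;
            }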

            1. HereIAmJH Silver badge

              Re: Type checking and compatibility

              Boolean is a representation of a bit that is mapped to true/false. A bit can only have two values, 0 and 1. If it is uninitialized it can still only have two values, it's not like SQL where an entity can be null. It's just that you don't know what the value is because you never set it. Uninitialized values are unpredictable because it's simply allocating memory. If you don't set a value then whatever was there before is what you will get.

            2. Dan 55 Silver badge

              Re: Type checking and compatibility

              Booleans were a C99 addition, weren't they?

          6. ChoHag Silver badge

            Re: Type checking and compatibility

            Real booleans can be TRUE, FALSE or FILE_NOT_FOUND.

        2. Chris Gray 1

          Re: Type checking and compatibility

          Fun! How much do you use it? I put a full "Measures and Units" facility into my Zed language. My thought is that it could help folks doing scientific, etc. programming. I've never used it myself. :/) The only run-time aspect is that of finding a good scale factor for output. Took a bit, but I found my fun example: (Sorry, don't know how to format stuff for El Reg comments!)

          proc
          BttF(bool useLightning)void:
              var ampVec := getCurrentCurrents(), totalCurrent := 0.(A);
              for i from 0 upto getBound(ampVec) - 1 do
                  totalCurrent := totalCurrent + ampVec[i];
              od;
              float(V) voltage := if useLightning then 1_000_000.(V) else 12.(V) fi;
              /* *How* many jigga-watts?!!?! */
              con power := totalCurrent * voltage;
              Fmt("Power = ", power, " [", power :: gUs, "] {", power :: gUsn, "}");
          corp;

          Output:

          Power = +0.121006e+010(A*V) [1.21(GW)] {1.21(gigawatt)}

          1. Bebu sa Ware Silver badge

            Re: Type checking and compatibility

            Not only physical units.

            You might imagine currencies being of a common type money_t and units of £, $, € etc or more precisely GBP, USD, EUR. When converting between currencies the dimensionally correct factor must be used.

            e.g.

            money_t : GBP cost = 120 : GBP;
            money_t : USD payment = cost;                       // error
            double : USD/GBP conversion = 0.76 : USD/GBP;
            money_t : USD payment = conversion * cost;          // ok
            cost + payment;                                     // error
            money_t : USD/widget unit_cost = 13 : USD/widget;
            count_t : widget howmany = (100 : USD) / unit_cost;

            Obviously for counts you need some type/unit hierarchy, as you might realistically add an inventory of apples and oranges as a total of fruit, or masses (kg) generally.

            1. Chris Gray 1

              Re: Type checking and compatibility

              Conceptually makes sense. The big issue, at least in my case, is that the conversion factors are continually changing. So, they could not be compiled into a "program". Even if you only ran the "program" right after updating it, you'd want the updating to be automatic, so you would need your "compiler" able to go fetch them from whatever authority maintains them. (Is there such a single authority? I'm a bytes guy, not a money guy.)

          2. Chris Gray 1

            Re: Type checking and compatibility

            I've read up on the stuff about formatting comments here. I don't have a badge, so I can't use < pre >. Gonna try anyway:

            Nope, nor does blockquote or code (I don't know that one)

            Any other suggestions I can try?

            As a programmer, posting little code snippets is a natural thing for me.

      2. MarkMLl

        Re: Type checking and compatibility

        > Compiled languages don't do the kind of type conversions that run-time-typed languages do (can't say about Python - it's years since I did a few experiments with it). I've actually used both Algol-W and Pascal (long, long ago), and neither does that sort of thing.

        But (choosing the trivial example) they do promote byte to word and so on, sometimes with very unclear definitions of sign propagation: I was reading the ALGOL-W manual and some of Hoare's contributions a few days ago.

        One of the peculiar things about the Pascal community is that the members almost always express the view "everything Wirth did is perfect!" until somebody suggests that some part of the language be upgraded to be compatible with some of his later ideas.

        1. Chris Gray 1

          Re: Type checking and compatibility

          Ok, I was wondering if that type of size change was what you had in mind. You need to also pay attention to the signed/unsigned attribute. If the starting value is signed, then the sign bit will be extended as needed. If the starting value is unsigned, padding 0 bits are inserted. You *do* need to be fully aware of when you are expanding numerics, but most code never has to worry about it. Whether or not you check for loss of value on a size reduction will depend on the "level" of your programming language, and then on what its rules are for specific situations. I've avoided that issue in Zed by only having low-level sized types. 'uint' and 'sint' are only ever 64 bits.
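
          A minimal C illustration of the two widening rules, using the fixed-width types from <stdint.h>:

          #include <stdint.h>
          #include <stdio.h>

          int main(void)
          {
              int8_t  s = -1;              /* bit pattern 0xFF */
              uint8_t u = 0xFF;            /* the same bit pattern */

              int32_t from_signed   = s;   /* sign-extended: 0xFFFFFFFF, i.e. -1  */
              int32_t from_unsigned = u;   /* zero-extended: 0x000000FF, i.e. 255 */

              printf("%d %d\n", (int)from_signed, (int)from_unsigned);   /* "-1 255" */
              return 0;
          }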

      3. Roland6 Silver badge

        Re: Type checking and compatibility

        >” The problem with not declaring a type for variables is that the variables can then be used in expressions, and then you don't really know what is going on without finding the variable's declaration and carefully evaluating the type resulting from the expression assigned to it.”

        This was one of the features of C we were able to surface in Living-C's debugging environment: a user debugging their source code got to see the type conversions, and thus the opportunity to make matters explicit if necessary. I would expect modern tools such as Visual Studio, running without the 640K memory limit, to be able to do similar.

        Okay we didn’t force code rewrites to make things explicit, but if you are writing stuff in C, we do expect you to be better than a BASIC programmer.

    2. thames Silver badge

      Re: Type checking and compatibility

      @MarkMLl said: "As I understand it Python v3 does the exact opposite: variables are initially untyped but an expression has a type."

      What many people think of as variables in Python are actually references to objects. This is a very important distinction because Python is very much an object oriented language.

      I can do x = 5 followed by x = 'a' because x is neither an integer nor a string; it is a reference to an object, and it is the object which may be integer or string (or whatever). Another way of looking at it is that you are effectively dealing with pointers rather than variables.

      However, if you try to do something like 5+'a' then you get an error which says TypeError: unsupported operand type(s) for +: 'int' and 'str'.

      Compiler errors in Python are generally related to syntax, e.g. SyntaxError: invalid syntax.

      Static languages such as C must figure out what machine code operations they must execute at compile time. This is why data types must be defined precisely. The machine code for adding two floating point numbers together is not the same as the one for adding two integers together.

      Dynamic languages such as Python defer such decisions until runtime.

      Python is however very memory safe because you don't allocate memory, you just create references to objects and the objects figure out for themselves how to allocate the proper amount of memory for their needs.

      1. Blue Pumpkin

        Re: Type checking and compatibility

        “ Dynamic languages such as Python defer such decisions until runtime.”

        And there be dragons …. Memory safe doesn’t help if the code falls in a heap.

        But of course it’s horses for courses, I much prefer a really strongly typed contract based approach for my nuclear reactor or anything I fly in, rather than some runtime error that may or may not occur.

        1. Tom 38

          Re: Type checking and compatibility

          Python is extremely strongly typed; it just doesn't validate any contracts around types at compile time (because there isn't one) or at run time (because it's already slow enough). If you extensively annotate your code with type hints, you can then use type checkers to enforce the contracts. We have an absolute monolith of Python code at work, some 12M lines, and we do not ever have runtime type errors.

          Essentially, if you want well typed dynamic code, you replace compilation with static analysis.

          Type hinting sounds painful and slow - why would I do all that extra work if I've chosen a dynamic language? - but it's actually a net benefit, as it allows LSPs and LLMs* to reason about the code much more as you're developing, and the correctness they introduce more than makes up for the productivity you'd otherwise lose debugging and fixing the type mismatches that you would definitely get without them.

          * I'm not a fan of LLM AIs for coding. They frequently produce gibberish. In the hands of a competent engineer though, they do seem to have some productivity benefits. Ask me again in two years.

        2. thames Silver badge

          Re: Type checking and compatibility

          There is no such thing as a one size fits all programming language. There's no such thing as a programming language which is better than all other languages at all fields of application. This is why we have multiple programming languages. You need to learn multiple programming languages and use each in places where its strengths reside.

          One of Python's great strengths is its ability to be combined with C so that you get the advantages of both, the rapid application development and concise code of Python, and the high performance of C in focused elements where it really matters.

          As a result of this there are a lot of Python libraries which are written in C (and I happen to maintain a few).

          The Rust fans want us to "just re-write all your existing software in Rust", which is going over like a lead balloon with people who have large existing code bases that are working fine with no errors having been reported in years. The last thing I want to do is to re-write lots of code and introduce new bugs which weren't in the old stuff.

          And on top of this Rust is heavily tied to LLVM, which doesn't have the wide variety of extensions that GCC has, which I use to access the CPU specific features which are necessary to get high performance. Apparently what I am supposed to do is re-write all those bits in assembly language. That's just great - apparently the answer to memory bugs is to re-write most of my software in assembly language, put a Rust wrapper around it, add in inter-op code so it can be used from a C program (the Python run time), and call it "more secure".

          What is really needed is something that is very similar to C but which can deal with a lot of the common memory management issues, which is what things like Fil-C are trying to do. I am following this with a great deal of interest to see if these ideas make it into standard C.

          1. Liam Proven (Written by Reg staff) Silver badge
            Trollface

            Re: Type checking and compatibility

            > There is no such thing as a one size fits all programming language. There's no such thing as a programming language which is better than all other languages at all fields of application.

            I dunno about that. I mean, there's Lisp.

            (Please note the icon, do.)

            The big snags are

            (a) a lot of mere mortals can't read it -- and I'm one of them -- and

            (b) that the syntax encourages brain-bending macros, which punish any other poor bugger trying to maintain it later.

            Which is why I wrote about Dylan:

            https://www.theregister.com/2025/06/26/opendylan_20251_released/

            Lisp, but readable.

            Which means it upset _both_ camps. The Lispians didn't like it because it made their magic superior-brain code readable by mortals, and stopped them doing unmaintainable stuff... and hoi polloi didn't like it 'cos it was weird and from Apple and didn't use curly braces everywhere.

            There is another answer to this. (Please imagine extra large trollface here.)

            Lisp is the universal programming language. The mere fact that lots of people can't read it merely indicates that they shouldn't be programming in the first place.

            I reckon a clever tool (*not* a euphemism for a Lisp developer, honest, would I?) could be contrived that could convert one to the other.

            Keep the Real Lisp for the Real Men, and let the wannabe script kiddies mess about in Dylan.

            :-D

            *Runs away and hides*

            1. ChoHag Silver badge
              Coat

              Re: Type checking and compatibility

              > > There is no such thing as a one size fits all programming language. There's no such thing as a programming language which is better than all other languages at all fields of application.

              > I dunno about that. I mean, there's Lisp.

              > (Please note the icon, do.)

              I can see right through your scheme.

            2. DarkwavePunk Silver badge

              Re: Type checking and compatibility

              I almost got a job at some company in Cambridge that was still making LISP machines in the early 2000s. I often wonder if I missed a trick, or dodged a bullet.

  6. Anonymous Coward
    Anonymous Coward

    All I know is...

    I want compilers to be memory-safe & do all the other stuff to minimize potential exploits, so that the end product's under-the-hood safety depends as little on the programmer as possible. I know the Dunning-Kruger effect exists, so some people don't even know enough to wonder what they don't know, and I also know that I don't know everything. Yes, I'm still responsible for the logic of my code being correct & restricted properly to form, function & fitness for purpose. Still, I can't be responsible for some code I didn't even write, running in some other program on the machine (or in a library/API my code uses), gaining some weird access to sensitive data still sitting in the RAM of one of my variables. It's a fucking computer, whose whole purpose is to handle data faster than any human can. It behooves us to move to compilers that do as much heavy lifting as possible to produce binaries that are as secure and coder-independent as possible. It's just common sense.

    1. Anonymous Coward
      Anonymous Coward

      Re: All I know is...

      Yeah, if your code targets x86_64 you can just pizlonate yourself a libc Sandwich (linked at "its author terms"), with libyoloc, ld-yoloc, and crt* at the bottom, libpizlo & fil_crt in the middle, and fil-C's libc(++) on top, and voilà, whatever else you Fil-C compile on top will be memory-safe and garbage collected by FUGC ... but between "1x and 4x" slower than normal Yolo-C (from djb's "graph", linked at "Notes by djb on using Fil-C").

      I imagine that hardware-assisted efforts (CHERI and OMA) could help with performance in future ...

  7. Anonymous Coward
    Anonymous Coward

    Interesting article, thanks!

    Thanks, Liam, a very interesting article! (I'd certainly like to see more articles about programming and development in The Reg.)

    Although I learned C a long time ago, the fact that you have to take care (oh so very much care!) of so much memory management yourself really put me off it (and, for a while, sadly, off programming in general), so I never really used it for anything "real" after graduating.

    It's sort of understandable, given that C was intended to be "just high level enough to be comprehensible, but (still) let you work with the hardware efficiently", but, as many other commenters have alluded to, it takes a special sort of coder to be able to work at that level proficiently and consistently enough not to inadvertently spill things around memory.

    I'm so glad that the sort of coding I do nowadays is abstracted far enough away from hardware-banging that I don't have to worry about that sort of thing. Although, as Bobby Tables reminds us, there can be other problems…

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Interesting article, thanks!

      > Thanks, Liam, a very interesting article!

      Why thank you!

      > (I'd certainly like to see more articles about programming and development in The Reg.)

      May I quietly point in the direction of our sister site?

      https://devclass.com/

      1. Dan 55 Silver badge

        Re: Interesting article, thanks!

        It would be nice to have comments on DevClass, lots of possibilities for geeky language debates, but I guess you don't want to take on more moderation.

        That said, it would only be a problem when Rust fanatics deliberately and maliciously conflate C and C++, their use cases, and their functionality. ;)

        1. Dave559

          Re: Interesting article, thanks!

          Yeah, I did have a look at DevClass when it first appeared, but it sort of unfortunately (and maybe wrongly?) gave the impression of being perhaps not very much more than a site of reformatted press releases, and veering dangerously close to Maximal Trendy Buzzword Compliance, rather than taking the time to elevate the "useful need to know" stuff from the sales-pitch fluff (unlike here), so I saw little to attract me, I'm afraid.

          And half the value of The Reg genuinely is the interesting, acerbic, and usually informative (and surprisingly troll-free) comments (beers all round, etc), so, as Dan 55 says, without that DevClass is considerably less useful. In addition, it's much easier to skim over the "what's new" in just one place and open potentially interesting articles in new tabs, than to have to go to two separate but related sites!

  8. DS999 Silver badge

    I just can't believe it took THIS long

    For such a compiler to be written. I was suggesting this as a fix to some of C's memory handling issues 20+ years ago, arguing that even if you throw away some percentage of performance and waste a bit of RAM it isn't an issue because we have cycles and bytes to spare. In 2025 that's true to a far greater degree!

    I guess it must have been a lot harder to make happen than I would have guessed. I remember using Purify to identify/fix memory issues in the 90s and thought if they were able to write that to operate directly on object files having similar technology built into the compiler would make the job much easier.

  9. snifferdog_the_second

    "C (and C++)"...

    ...Not the same thing. Used correctly, C++ is orders of magnitude safer than C.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: "C (and C++)"...

      > Used correctly

      Ay, there's the rub...

      1. Ken Hagan Gold badge

        Re: "C (and C++)"...

        It's not hard to get many of the benefits of C++. Basically, if you ever find yourself calling malloc() or free(), or using the new or delete operators, stop right there and learn how to do the same thing without them. I've managed to explain that in one sentence. You can review an existing codebase for flaws with four runs of fgrep. You can begin today by shoving your C codebase through a C++ compiler unmodified. (I think it is still true that semantic differences are all detectable by static analysis.)
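
        The four runs, roughly (illustrative patterns, not exhaustive - new and delete want word matching, so plain grep -w rather than fgrep):

        $ fgrep -rn 'malloc(' src/
        $ fgrep -rn 'free(' src/
        $ grep -rnw 'new' src/
        $ grep -rnw 'delete' src/

        Anything those turn up is a candidate for std::vector, std::string, or a smart pointer.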

        Compared with learning a new language and porting existing code before you see any benefits, this is a very low barrier to entry.

        1. Dan 55 Silver badge

          Re: "C (and C++)"...

          Obligatory watch for anyone who wants to C++ify a C codebase (Matt Godbolt, 1 hour 35 minute talk).

  10. pip25

    Vs Rust

    "Fil-C runs rather more slowly than usual for C code"

    Considering, as far as I'm aware, you can use Rust without such performance penalties, I wouldn't say that the pendulum is swinging back in the direction of C just yet.

    1. thames Silver badge

      Re: Vs Rust

      Rust doesn't address the issue of what to do about existing code bases. Re-writing massive amounts of code in another language and reintroducing whole classes of bugs that were found and fixed years ago in the existing one isn't realistic. This problem is what Fil-C and other similar projects are trying to address.

      I suspect that projects like Fil-C will pioneer and demonstrate concepts which will get incorporated into mainstream C compilers eventually.

      1. pip25

        Re: Vs Rust

        For existing, non-performance critical C applications, that is an absolutely fair point.

      2. DS999 Silver badge

        Re: Vs Rust

        pioneer and demonstrate concepts which will get incorporated into mainstream C compilers

        And get more eyes on it which will no doubt result in addressing many of the performance concerns of this initial version.

    2. Dan 55 Silver badge
      1. pip25

        Re: Vs Rust

        If you read on you'll find that with some optimizations the Rust version was made virtually identical to the C version in terms of speed.

  11. redwine

    tinydns

    I miss tinydns and loved the data format. zonefiles are horrific.
