Can Rust save the planet? Why, and why not

Here at a depleted AWS Re:invent in Las Vegas, Rust Foundation chairwoman Shane Miller and Tokio project lead Carl Lerche made the case for using Rust to minimize environmental impact, though they said its steep learning curve made the task challenging. Miller is also a senior engineering manager for AWS, and Lerche a principal …

  1. dajames Silver badge

    That learning curve

    Rust would be a lot easier to learn if it didn't insist on using terms we think we already understand to mean things that we don't think they mean.

    An obvious example is the use of "variable" for something you can't vary (you need a mutable variable for that -- as Verity explains).
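
    That first point can be sketched in a few lines of standard Rust (nothing here beyond `let` and `let mut`):

    ```rust
    fn main() {
        // An ordinary `let` binding is a "variable" that cannot vary.
        let x = 5;
        // x = 6; // error[E0384]: cannot assign twice to immutable variable `x`

        // Varying the value requires an explicitly mutable binding.
        let mut y = 5;
        y = 6;

        println!("{} {}", x, y);
    }
    ```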

    Another example might be the use of "enum" to mean something that is more akin to a discriminated union than the simple enumerated integral types of most other languages (rust is not alone in this, Kotlin's enums are similar).
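
    A sketch of that distinction, using an illustrative `Value` type (the names are made up for the example):

    ```rust
    // A Rust enum is closer to a discriminated union than a C-style enum:
    // each variant can carry its own payload, and `match` forces every
    // case to be handled.
    enum Value {
        Int(i64),
        Text(String),
        Missing,
    }

    fn describe(v: &Value) -> String {
        match v {
            Value::Int(n) => format!("int: {}", n),
            Value::Text(s) => format!("text: {}", s),
            Value::Missing => "missing".to_string(),
        }
    }

    fn main() {
        println!("{}", describe(&Value::Int(42)));
        println!("{}", describe(&Value::Missing));
    }
    ```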

    Those hoping to ascend the rust learning curve have first to break through the preconception barrier.

    1. Doctor Syntax Silver badge

      Re: That learning curve

      Verily, would deserve a 2nd upvote for reference to Verity.

    2. bombastic bob Silver badge
      Trollface

      Re: That learning curve

      Inigo Montoya might have something to say about that...

      That term does not mean what you think it means...

    3. Lorribot Silver badge

      Re: That learning curve

      I often say that IT/Technology is a department separated by the same language. Each sub-group uses the same words to mean entirely different things, or refers to the same things with different words.

      Seems that programming is similarly afflicted.

    4. Someone Else Silver badge

      Re: That learning curve

      An obvious example is the use of "variable" for something you can't vary (you need a mutable variable for that -- as Verity explains).

      Verity knows her poopadoodle, but I think it was Murphy who first stated the axiom:

      Constants aren't, and variables won't

      Seems Rust has baked that into the language.

      And a language that is based on a Murphy's law can't be good...can it?

      1. ssokolow

        Re: That learning curve

        To be fair to them, the people who use the terminology properly try to promote talking about a mutable or immutable binding to a variable.

        Similar to how the people who talk about C++ try to promote "zero-overhead abstraction" as a replacement for "zero-cost abstraction".

    5. fg_swe

      Much Easier: Sappeur

      If you know Java, C or C++, you might have a look at Sappeur. It reuses as many C++ concepts and terms as possible and generally adheres to the KISS principle.

      http://sappeur.ddnss.de/

      And yes, it consumes only 50% of the RAM of an equivalent Java program. Starts in milliseconds.

    6. HildyJ Silver badge

      Re: That learning curve

      The learning curve is a function of whether it's a learning curve or a relearning curve.

      As newcomers learn Rust in school, they will view other languages like C and C# as arcane and difficult to learn.

      1. kewlio

        Re: That learning curve

        Ugh? My kids reject Python in favour of Scratch. Good luck with teaching them Rust.

        1. fg_swe

          Re: That learning curve

          Arguably, your kids will be put off by the nondeterministic bugs of C much more than by the nagging of the Rust compiler before a program even compiles.

  2. Paul Crawford Silver badge

    It is quite fascinating to see rankings of different languages by the 3 key metrics shown, even though we know that different applications have aspects that favour one or another.

    Still, as a part-time C & python programmer I will probably stick with them as they cover most of what I need, and life is getting short for learning a new and unusual language that really only promises a bit more security for my C side. And there is always AppArmor as a bit of a backup.

    1. AMBxx Silver badge
      Thumb Up

      Your C skills will still be usable long after we've all forgotten Rust existed.

      Waiting for the next flavour of the day to arrive...

      1. J27

        People said that about COBOL and Fortran. At some point something knocks the current king off the throne. Will it be Rust? No idea, but it'll happen eventually. But then you'll be able to charge a fortune for your C skills to maintain those "ancient" C systems that the new kids don't know how to fix. Should get you to retirement; it worked for the COBOL guys.

        1. bombastic bob Silver badge
          Meh

          FORTRAN and COBOL (capitalized because that's what you do, it's their names) can be easily learned from a book in a few days. Yes, that's how I learned them. C takes a bit longer, mostly because of pointers and arrays and the various nuances of type casting. Once you "get" that, no problem.

          The difference between procedural and structured languages (in general) is why C basically took over as the common denominator for so many OTHER languages, though you COULD say it was really ALGOL wut dun it...

          So now you have this "more complicated to learn" structured lingo (Rust) with its promoters trying to take over as the NEW favored lingo, where an easier to learn and slightly more efficient decades-established universal programming language (C) has been learned by pretty much everyone, and it seems we're all expected to JUST CHANGE and "accept it" now...

          (I have to wonder if those who promoted Ada "felt" the same way...)

          1. fg_swe

            Wrong

            Rust and Sappeur do provide wholly new capabilities to safely execute multithreaded code without risking nasty memory bugs. C, C++ and most other languages were never designed to robustly handle multithreaded heap access.

            Sappeur uses a simple approach, the details of which fit on a page:

            http://sappeur.ddnss.de/manual.pdf

            (Section 9.2)

            1. Anonymous Coward

              Re: Wrong

              "...new capabilities to safely execute multithreaded code..." so these languages are designed to support programmers who don't understand the implications of multithreaded code?

              1. fg_swe

                No

                Non-trivial multi-threaded C or C++ programs need plenty of Mutexes to protect shared memory. If you share a variable by accident without mutex protection, chances are you get heap cancer. Best of luck finding the root cause of this cancer.

                The Rust and Sappeur compilers will force you to have proper Mutex protection.
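
                What that enforcement looks like in practice can be sketched with the standard library's `Arc` and `Mutex` (a toy counter, not a benchmark):

                ```rust
                use std::sync::{Arc, Mutex};
                use std::thread;

                fn main() {
                    // A plain integer shared across threads will not compile; the
                    // type system insists on Arc (shared ownership) plus Mutex
                    // (synchronised access).
                    let counter = Arc::new(Mutex::new(0u32));
                    let mut handles = Vec::new();
                    for _ in 0..8 {
                        let c = Arc::clone(&counter);
                        handles.push(thread::spawn(move || {
                            for _ in 0..1000 {
                                *c.lock().unwrap() += 1;
                            }
                        }));
                    }
                    for h in handles {
                        h.join().unwrap();
                    }
                    // Always exact; forgetting the Mutex is a compile error,
                    // not a heisenbug.
                    println!("{}", *counter.lock().unwrap());
                }
                ```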

                1. mpi

                  Re: No

                  And Golang allows me to write code with hundreds of thousands of concurrent threads of execution using no (or very few) Mutexes at all.

                  Now, Golang can be learned over a long weekend, its compiler is blazing fast, and the code is readable.

                  1. ssokolow

                    Re: No

                    Go's data race detector is a runtime thing, similar to LLVM's Thread Sanitizer. Rust guarantees that code is free of data races at compile time using the type system. You may or may not consider the latter necessary, but they are different classes of solution.

                    1. mpi

                      Re: No

                      Indeed they are. My point is that as long as I observe some very simple (and intuitive) rules, golang makes it really hard for me to stumble into situations where data races can even occur.

                      1. fg_swe

                        Multithreaded Memory Safety in Rust, Sappeur and Go

                        1.) Sappeur and Rust will force the software engineer to think about thread-shared data at compile time. Go does nothing of the kind.

                        2.) Go assures the integrity of the heap, just like Sappeur and Rust do. C++ does not.

                        3.) You can have nasty data races in Go at a low level. For example, you can create a global counter and attempt to update it from many threads. Result will be undefined. With Sappeur, you will get the accurate value, because the compiler forces you to create a "multithreaded" class* for the counter.

                        4.) Go will typically consume 2x the RAM of an equivalent C++, Sappeur or Rust program, assuming something non-trivial which performs heap allocations in a loop.

                        *each method of such a class is protected by mutexes
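
                        For what it's worth, the racy counter in point 3 is not even expressible in safe Rust: mutating a plain global needs `unsafe`, and the idiomatic safe version is an atomic. A toy sketch (8 threads, 1000 increments each, values chosen for the example):

                        ```rust
                        use std::sync::atomic::{AtomicU32, Ordering};
                        use std::thread;

                        // Safe Rust will not let ordinary code mutate a plain
                        // global from many threads; a shared counter has to be
                        // an atomic (or sit behind a Mutex).
                        static COUNTER: AtomicU32 = AtomicU32::new(0);

                        fn main() {
                            let handles: Vec<_> = (0..8)
                                .map(|_| {
                                    thread::spawn(|| {
                                        for _ in 0..1000 {
                                            COUNTER.fetch_add(1, Ordering::Relaxed);
                                        }
                                    })
                                })
                                .collect();
                            for h in handles {
                                h.join().unwrap();
                            }
                            // Exact, never a torn or lost update.
                            println!("{}", COUNTER.load(Ordering::Relaxed));
                        }
                        ```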

          2. Version 1.0 Silver badge
            Thumb Up

            I liked COBOL, the neat thing was that there was virtually no need to write any comments to explain what was being done - this has made life easy for programmers maintaining COBOL code.

            A quote that covers all the issues with Rust, Python, etc., saying that they are best; "There is no such thing as bad publicity except your own obituary." - Brendan Behan

          3. Ian Johnston Silver badge

            FORTRAN and COBOL (capitalized because that's what you do, it's their names) can be easily learned from a book in a few days.

            A colleague of mine used to say that it was impossible to find anybody who was "learning" FORTRAN because the transition from "haven't learned it" to "has learned it" was so fast.

          4. Dagg

            Yep, ALGOL was the start: a language that wasn't controlled by card column location.

            I remember using ALGOL-60 on a B6700 so much better than FORTRAN.

            ALGOL is the ancestor of languages like PASCAL, C, Ada etc

        2. katrinab Silver badge
          Alert

          I would say the following are "the next Cobol":

          Visual Basic

          Java

          Javascript

          and of course Excel

          Not a fan of any of these, but that's the way it is.

      2. DrXym Silver badge

        Your C skills would be improved by learning Rust even if it ceased to exist tomorrow. The things it beats you over the head about at compile time are transferable to C and C++.

        1. fg_swe

          The C and the C++ folks have tried to graft static checkers onto C and C++ programs in order to achieve the same goals as the Rust and Sappeur type systems.

          1. Yes Me Silver badge
            FAIL

            Missed bus in 1975....

            "tried to graft static checkers onto C and C++ "

            With the emphasis on tried. A great opportunity was lost in the late 1970s, by which time the requirements for a safe programming language were known and the world was ready for "high level assemblers" that were not hardware-specific. Sadly the world chose C, instead of picking one of the various safer systems programming languages that were proposed then.

            https://en.wikipedia.org/wiki/Mary_(programming_language)

            (The first thing about Mary, I'll tell you cause you asked

            Is she don't like when people are always livin' in the past

            You know, talkin' 'bout the good old days and how things might have been

            If some folks had been different how things might be better now for them

            That's what you learn when you've known Mary long as I have)

            1. fg_swe

              A Pity

              We have so many interesting projects in Europe, but American standards (almost) always win.

              To name a few:

              Pascal

              Modula 2

              Oberon

              Transputer

              Occam

              Eiffel

              The success of C shows this is not for the better. Rather, it is a breeding ground for criminals and warmongering.

          2. DrXym Silver badge

            Static analysis is certainly better than nothing but these tools are also noisy and easy to confuse. Code analysis in Visual Studio is definitely worth running at least once before release.

            I can even recall when tools like BoundsChecker or Purify arrived in the office with much fanfare, and the things were so bloated and slow that they actually crashed or caused the whole PC to die before even getting to the bug we were trying to find.

            Nothing beats the compiler just telling you point blank that the code is wrong.

    2. fg_swe

      Cyber Security & Memory Safety

      AppArmor can only help you defend other sections of your system, not the exploited process itself. For example, imagine a multi-threaded web server written in C. An attacker will use a memory access bug to inject his malware. Then the attacker has access to all user sessions processed by this Linux process. He might even gain access to cryptographic keys, if you do not use an HSM.

      See this presentation for details: http://sappeur.ddnss.de/Sappeur_Cyber_Security.pdf

      1. Someone Else Silver badge
        Facepalm

        Re: Cyber Security & Memory Safety

        Yes, of course, because every "multi threaded web server written in C" has oodles of "memory access bugs" just waiting for an "attacker" with average skillz (and matching intelligence) to "inject his malware".

        A job in Marketing awaits you...

        1. fg_swe

          Non Trivial C programs

          ...do indeed have exploitable memory bugs. That is what the CVE database tells us.

  3. martyn.hare
    Angel

    Dumping the cloud

    Would save more power than making everyone use Rust. Seriously. If YouTube, Netflix and all the other heavy data users were P2P and designed to use the most efficient routing possible, total server counts could be vastly reduced. Similarly, if we stopped encrypting non-confidential data and stuck to just signing it instead (for integrity), then ISPs could cache more things at the edge, further reducing burdens for remaining non-P2P services with many slow changing pages (e.g. mature parts of Wikipedia).

    Then we could all use our dirty PHP, poncey Python and janky Java while still saving the planet!

    1. Peter-Waterman1

      Re: Dumping the cloud

      The Cloud centralises computing and allows greater use of greener energy. In addition, the scale at which they operate is vastly more efficient than most on-prem data centres could achieve. If you look at the three clouds that everyone uses: AWS (Netflix) and Azure say they will be on 100% renewable energy by 2025 (currently above 60%), and Google (YouTube) is already on 100% renewable energy.

      1. bombastic bob Silver badge
        Facepalm

        Re: Dumping the cloud

        The Cloud centralises computing and allows greater use of greener energy

        so why aren't we all just using big-iron timeshare systems, then? THAT centralises EVERYTHING, right?

        I think you will find that this level of centralisation actually CREATES inefficiencies (such as a SHIT PILE of JAVASCRIPT on the clients to avoid clogging the cloud servers since they bill by the CPU usage) and we're actually experiencing WORSE overall efficiency as a direct result.

        Where a non-cloudy "your server" solution may have simpler PHP and minimal scripting and (typically) smaller bandwidth requirements, a CLOUDY one will (no doubt) waste bandwidth with monolithic CDN-fed javascript libraries [that are unfortunately 'common' {in a computing sense} and typically cached on user's browsers, even after you hit F5 on chrome] so that you can minimize CPU instances and related costs.

        yeah, it HAS spawned THE MESS, to be "too cloudy" and "too central". 'More of the same' will NOT fix it.

      2. Dagg
        Pint

        Re: Dumping the cloud

        The Cloud centralises computing and allows greater use of greener energy

        And also a single point of failure! I remember being in my local pub and the accountant crying into his beer because the cloud had gone down / offline / whatever and he couldn't get the payroll done.

  4. Ian Johnston Silver badge

    The biggest reason to doubt Rust seems to be that it's "run" by multiple teams of enthusiasts, many of whom are at each other's throats. Tantrums, walk-outs and other exciting internal politics abound. All good clean fun, no doubt, for hobbyists, but it doesn't encourage reliance.

    1. DrXym Silver badge

      This is utter nonsense.

    2. cornetman Silver badge

      Any references that we could look at? Rust seems to me relatively boring on that front.

      1. DrXym Silver badge

        Exactly. I think the original poster is somehow extrapolating that because a couple of community admins resigned last week, possibly for legit reasons over a core developer, it implies complete anarchy. The boards and message lists suggest different. It's actually kind of boring and polite.

    3. fg_swe

      Experience

      I have used the Rust compiler for small projects and never found a bug. The error messages and terms are a steep learning curve, though.

      1. Yes Me Silver badge

        Re: Experience

        A compile time error is probably a thousand times cheaper to fix than the intermittent side effect bug that it prevents. That's the whole point of safe languages.

        1. fg_swe

          Re: Experience

          I completely agree. I found lots of bugs in my own code, but no bug in the compiler. And of course, good things of a non trivial nature require learning.

          1. Peter Gathercole Silver badge

            Re: Experience

            As someone who worked on the languages queue at a UK support centre, I have found (or at least investigated and documented) compiler errors.

            The worst one was a bug that did not identify stack wraparound in the generated code. It generated code that crashed due to stack corruption a long time after the corruption actually occurred. Was absolute hell to identify, and even worse to try to explain to the development team that actually needed to understand it to fix the compiler (I documented the stack corruption and the sequence to make it happen, but did not have access to the source to find the actual cause).

            The worst part was that the company that reported the bug (and who were unable to compile their code so they could sell it on this platform) went bust before the fix was generated. I called up to give them the good news just days before they shut down.

  5. Filippo Silver badge

    I am more troubled than the author by the point that TypeScript appears to be 10x less efficient than JavaScript. As the author points out, there really is no good reason for this. TypeScript is compiled to JavaScript in a comparatively straightforward manner. If the benchmarks show that much of a difference, I feel that the most likely explanation by far is that the person who coded the TypeScript solution was simply not as skilled, or not as focused on performance, as the person who coded the JavaScript solution. But if that is the case, then the validity of the whole dataset is cast in doubt.

    I might be wrong, but I would definitely investigate that data point further, if I were one of the authors.

    1. Brewster's Angle Grinder Silver badge

      You could make the argument for C++ and C. It's not as dramatic, but there's no reason why the C++ performance should be noticeably different to C unless you've done something horrible.

      1. Draco
        Windows

        There are at least three reasons C vs C++ performance might differ

        C and C++ are not the same language, and there are a few corner cases where C code won't compile the same in C++, but ... I am digressing.

        1) The programming paradigm is different between C and C++. In C++, you are "encouraged" to use C++ features - like the STL (Standard Template Library) instead of managing your data structures with malloc(), realloc(), and free(), or streams instead of printf() and FILE. So, equivalent C and C++ programs will look different.

        2) C and C++ runtimes are different. C has no need for exception handling.

        3) Even if you compiled a C program as a C++ program, things like structs get default constructors and destructors in C++.

        1. Irony Deficient Silver badge

          C has no need for exception handling.

          It could be posited that setjmp() and longjmp() are C’s exception handling.

          1. Brewster's Angle Grinder Silver badge

            Re: C has no need for exception handling.

            IIRC, for a long time, exceptions in C++ were implemented as setjmp/longjmp, and a lot of the performance pain from exceptions came from that approach.

            I still remember the first version of g++ that supported exceptions (2.6?). I had them in Borland C++ and thought that now they were in both compilers I could start using them. I turned them on. Looked at how big my executable had become. Looked at how much performance had been hit. And turned them off for another fifteen years.

          2. Man inna barrel Bronze badge

            Re: C has no need for exception handling.

            Unlike proper exception handling, setjmp()/longjmp() do not clean up any memory allocated after the setjmp(), so memory leaks are likely. In C++, throwing an exception unwinds the stack, and calls destructors.
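
            The same unwind-runs-cleanup contrast can be sketched in Rust, where `Drop` plays the role a C++ destructor plays here (an analogy in a different language, not C++ itself):

            ```rust
            struct Guard(&'static str);

            impl Drop for Guard {
                fn drop(&mut self) {
                    // Runs during unwinding, like a C++ destructor; a bare
                    // longjmp would skip this entirely.
                    println!("cleaned up {}", self.0);
                }
            }

            fn main() {
                let result = std::panic::catch_unwind(|| {
                    let _g = Guard("buffer");
                    panic!("boom"); // unwinding drops `_g` on the way out
                });
                assert!(result.is_err());
                println!("recovered");
            }
            ```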

        2. Brewster's Angle Grinder Silver badge

          Re: There are at least three reasons C vs C++ performance might differ

          "C and C++ are not the same language and there are a few corner cases were C code won't compile the same in C++, but ... I am digressing."

          Yeah. But a little bit of casting [*shakes head at <tgmath.h>*] and a little bit of fiddling should enable a C++ compiler to compile any C program. And, from experience, reveal a few bugs in the C while banging your head against C's structural typing.

          It won't use the STL, it will use libc, and as you haven't used new, if you turn off exceptions you won't need libstdc++. (I haven't done that in a while but have in the past. I also once knew how to use placement new.) From memory, I don't think PODs get constructors, but empty constructors will get optimised away. And they are fundamentally the same underlying compiler.

          Which comes back to it not being really clear what those charts are measuring.

    2. msobkow Silver badge

      Any time you take your code to a higher level, you introduce inefficiencies. Typescript is NOT Javascript, so the comparison is irrelevant. You could claim that everything eventually runs machine code so it is all the same, too.

      1. Filippo Silver badge

        That could explain, I dunno, JavaScript running 10x slower than C. It could, maybe, explain TypeScript running 5% slower than JavaScript.

        That cannot explain TypeScript running 10x slower than JavaScript. The layer of abstraction is just too thin.

      2. J27

        You can quite literally take your JavaScript and add types to it to make it TypeScript. The transpiler then strips the types off after type checking. If the code isn't the same, there is either an issue with the transpiler or you didn't actually use the same code in the first place.

        Typescript isn't actually executed by the browser in any case. When I was considering adding TypeScript support to the codebase of a large application I work on regularly I tested the performance difference for several common operations and it came out within the margin of error (+/- 5%). I'm using babel to transpile the code so perhaps they used tsc or something else?

      3. DrXym Silver badge

        Typescript translates into mostly equivalent Javascript. Perhaps there is some runtime overhead but whenever I've debugged JS compiled from TS it seems pretty tight. Maybe that by being a typed language that encourages the use of interfaces and classes that somehow it changes how people write code that is less efficient.

        Some environments like deno also allow Typescript to be executed directly. I don't know if they compile it to JS on the fly or cache it somewhere, but I guess that would be less efficient. BTW Deno is written in Rust so things are converging.

        1. Filippo Silver badge

          > Maybe that by being a typed language that encourages the use of interfaces and classes that somehow it changes how people write code that is less efficient.

          That's why I find this data point concerning. Is the dataset actually comparing the efficiency of languages and compilers, or is it just comparing the skill of the coders who built the benchmarks?

          That difference between Java and C#, is it because C# is slower, or is it because the guy who wrote the Java benchmark knew how to avoid heap fragmentation, and the C# guy didn't? The difference between C# and F#, is it something in the IL generated by the compiler, or is it because the F# guy is doing random access on linked lists? Etc etc.

    3. nijam Silver badge

      > It is also odd to find TypeScript 10 times less efficient than JavaScript, considering that it compiles to JavaScript and similar code can be written in both.

      Well, maybe it produces poor quality JavaScript. Maybe it uses excessive resources to compile. And so on. The only thing that would be unlikely is for it to be more efficient than JavaScript.

      1. Tom 7 Silver badge

        Is TypeScript JIT? Mind you, you may not want to run it twice!

    4. thames

      They used the CLBG benchmarks. These are intended as a game (which is why they have "game" as part of the name), not serious benchmarks. As real benchmarks they're worthless when used by themselves.

      The most likely reason for TypeScript being slow is that the person who wrote the Javascript compiler was using the CLBG benchmarks and tuned the compiler to recognize those benchmarks and output an optimized result.

      Run the same benchmark through a TypeScript compiler and the resulting Javascript isn't recognized by the Javascript compiler, which therefore doesn't produce the optimized result.

      Similar things happen with the Javascript compilers in different browsers. Tweak the benchmark slightly and performance can fall off the cliff in some browsers.

      This sort of thing happens all the time, which is why these common synthetic benchmarks are generally considered to be worthless other than as an amusing game.

      The Python core development team reject "improvements" which are intended to help performance on these sorts of "pop" synthetic benchmarks. They'll tell you to come back when you've found something that makes Django or the like run faster, otherwise it's pointless.

  6. Paul Smith
    Coat

    Cool!

    Looks like it's time to brush up on my Pascal!

    Mine is the one with a Turbo in the pocket.

    1. Paul Crawford Silver badge

      Re: Cool!

      Steady, next you will be claiming it is a Python in your pocket.

      1. Hull
        Coat

        Re: Cool!

        Cut it out, Paul!

    2. LionelB Bronze badge

      Re: Cool!

      Last time I looked at Pascal that was a button on the front of my computer.

      1. Ken Shabby
        Coat

        Re: Cool!

        Mine is the one with the RPG in the pocket.

  7. Marco van de Voort

    IIRC that article is commonly considered junk, as it uses benchmark-game benchmarks, where some have been optimized for multithread use, and some not.

    It says more about which community wants to invest time in futile benchmark games than about anything related to energy.

    1. fg_swe

      Cynical View

      Could you be a little less cynical?

      Even though these benchmark games have their flaws, the general observations are correct:

      1.) Compiled Languages are more energy-efficient

      2.) Mark+sweep GC creates at least 2x more RAM demand than refcounted objects. (It is actually easy to understand why: you cannot run GC all the time, so you must accumulate garbage.)

      3.) C and C++ are indeed highly efficient in runtime and RAM consumption

      4.) Rust and similar languages such as Sappeur aim to provide similar time+space efficiencies as C and C++. They come close.

      5.) Strong typing means efficiency. Dynamic typing comes at very serious cost.

      (I do think the TypeScript benchmark is somehow using an inefficient algorithm)
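
      Observation 2 can be illustrated with Rust's `Rc`, a refcounted pointer whose storage is reclaimed deterministically the moment the last reference goes away (a toy sketch, not a memory benchmark):

      ```rust
      use std::rc::Rc;

      fn main() {
          let a = Rc::new(vec![1, 2, 3]);
          let b = Rc::clone(&a);
          assert_eq!(Rc::strong_count(&a), 2);

          // Dropping `b` decrements the count immediately; when it reaches
          // zero the allocation is freed on the spot, with no pause and no
          // accumulated garbage waiting for a collector.
          drop(b);
          assert_eq!(Rc::strong_count(&a), 1);
          println!("count after drop: {}", Rc::strong_count(&a));
      }
      ```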

    2. thames

      Yes, I had a look at the paper, and just as I suspected it uses the Computer Language Benchmarks Game (CLBG).

      As a game it may be fun, but as a set of benchmarks they're worthless. Some compilers have been optimized to score well on those benchmarks, and some haven't. Change the problem set to something different and you may get a completely different set of rankings.

      The rules of the CLBG require that your entry must follow the idiom of the original problem exactly, without being adapted for how another language may actually work. This means that languages which favour approaching problems in a different way are inherently disadvantaged.

      Many benchmarks that you find on the Internet are actually samples which were written specifically to show off features of a specific compiler. If you've just written a cool new feature into your compiler, then you need a benchmark which shows it off at its best. That has little relevance to anything which isn't very similar to that benchmark however.

      These benchmarks represent a very specific problem domain, one that is probably best suited to C, or in some cases Fortran. If you have a problem like that, then write it in C or Fortran, that's what those languages are intended for. If you are writing a server application for a web site or a line of business application, then write it in a language which is suited for that.

      There was a project at Google some years ago to write their own JIT compiler for Python, called Unladen Swallow. They used the CLBG benchmarks as their development benchmarks, and worked for months on the JIT compiler. When they were done they proudly announced how, according to CLBG benchmarks, their work was now 'x' times faster. The new JIT compiler was then rolled out to testing prior to deployment.

      It was rapidly kicked back. The JIT compiled version was not 'x' times faster, it was 'y' times slower. By optimizing it for CLBG they ended up de-optimizing it for real world problems.

      The big lesson learned from that was the need for more realistic benchmarks. These were created by using large chunks of actual applications as well as synthetic benchmarks, and this is the approach used by more successful Python JITs such as PyPy.

      Unfortunately for people trying to compare languages, realistic benchmarks are not readily portable across languages, and nobody else is motivated enough to write ones which are.

      The only realistic approach is to know several different programming languages and know which one is best suited to which application. Even better, learn how to use them together so that you can use C and python together in those situations where you need features from both.

  8. tiggity Silver badge

    Reductio ad absurdum

    Let's do everything in assembler; properly hand-crafted, it will be top of all the benchmarks.....

    More seriously, as people have said, the compiler used makes a huge difference (or the interpreter, for interpreted languages), and that's even the case for "efficient" languages such as C. Back in the day when I did real-time work we would build C code using different compilers and examine performance / memory use / size of the resultant output etc., as sometimes trade-offs were needed, e.g. lower memory use could trump higher "speed" so long as it was fast enough to meet the spec.

    Depending on the tasks involved, it was not always the same compiler that was "top" in a particular metric, as a particular compiler could be great at optimising some things but comparatively awful at others.

    1. nijam Silver badge

      Re: Reductio ad absurdum

      > ...everything in assembler...

      Assembler? You're lucky! In my day we had to carve hexadecimal on't cave walls. If our cave had walls.

      1. msobkow Silver badge

        Re: Reductio ad absurdum

        You had it easy. We had to program by placing rocks. If there is a rock in the slot, it's a 1/true; if there is no rock, it's a 0. Used to take forever to bootstrap the Enirock. :)

        1. Will Godfrey Silver badge
          Happy

          Re: Reductio ad absurdum

          Rocks! We could never afford those, we had to rely on grains of sand - a nightmare on a windy day!

        2. Anonymous Coward
          Anonymous Coward

          Re: Reductio ad absurdum

          Rocks? Luxury. We used to do our calculations with a few sticks and the shadows from the sun. The slackers used to love night-shift in that data center...

        3. EVP

          Re: Reductio ad absurdum

          Lucky you. We didn’t have 1s, but only 0s.

    2. Man inna barrel Bronze badge

      Re: Reductio ad absurdum

      >Lets do everything in assembler, properly hand crafted will be top of all the benchmarks.....

      I realise that was meant as a joke, but actually, hand-crafted assembler is generally only better than compiler-generated code in limited cases. I think developers at work have done some SSE assembler to optimise image processing, but the bulk of the software is C++. Using a more efficient language for parts of the software that have little impact on performance is probably premature optimisation. The coding standards at work appear to put more emphasis on code correctness than raw speed. Bugs found by customers cost real money.

      My own little bits of in-house code in Python are no doubt rather inefficient. But the code runs fast enough that I don't have to wait for results, and it saves me a lot of time, compared to crunching data by hand, or writing more efficient code in some other language. This could probably be said of a great deal of glue code and UI code. Get the job done right first. You can worry about making it faster after that.

  9. msobkow Silver badge

    I find it very amusing that Erlang is so far down that list, given how some people I used to work with went on about how it was the greatest language since sliced bread. Apparently Telcos don't care about power consumption. :)

    1. boblongii

      "Apparently Telcos don't care about power consumption"

      Most computing projects don't. If they did we would still be writing assembler. The reason we aren't is because time to produce the software is important. Very important.

      If you have to tell your audience that your language needs months with the help of another person to become productive then what you have built there is a concept language - looks neat, does a good turn of speed, but isn't going to ever be on the mass market.

      Hopefully someone will take Rust's Big Idea and apply it to a language with a sane syntax.

      1. fg_swe

        Re: "Apparently Telcos don't care about power consumption"

        Well, it could very well be that the age of insane energy consumption comes to an end. Apparently there are fuel shortages here and there, plus exploding cost for methane. Methane is what drives the electricity grid on cloudy days with little wind.

        As soon as energy is no longer near-free, economics might force us to use more efficient approaches.

        1. Someone Else Silver badge

          Re: "Apparently Telcos don't care about power consumption"

          [...] plus exploding cost for methane.

          I saw what you did there....

      2. Man inna barrel Bronze badge

        Re: "Apparently Telcos don't care about power consumption"

        Time to learn a new language does cost money. However, difficult-to-find memory errors occurring in production code probably cost more money. I presume this is a motivation for using Rust, despite the difficulties in learning the language, and the oddities of its compile-time object lifetime checks.

        I rather like Rust syntax, when compared to C++, which I am fairly familiar with. Rust has quite a few functional programming features built in at language level. In C++, these are largely bolted on to a procedural language core, which makes a functional programming style a bit clunky. I also like Rust's compile-time type inference, which makes code a bit neater than declaring everything.
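        A tiny sketch of the two points above, built-in functional style and type inference (the function names here are illustrative, not from any library):

        ```rust
        // Iterator adapters give C++-algorithm-style operations as first-class
        // syntax; they compile down to plain loops.
        fn total_above(readings: &[i32], threshold: i32) -> i32 {
            readings.iter().filter(|&&x| x > threshold).sum()
        }

        // Type inference: the element and collection types of `doubled` are
        // worked out by the compiler; only the signature needs annotations.
        fn doubled(readings: &[i32]) -> Vec<i32> {
            readings.iter().map(|x| x * 2).collect()
        }

        fn main() {
            let readings = vec![3, 1, 4, 1, 5, 9];
            assert_eq!(total_above(&readings, 2), 21);
            assert_eq!(doubled(&readings), vec![6, 2, 8, 2, 10, 18]);
        }
        ```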

    2. Man inna barrel Bronze badge

      As far as I know, Erlang is intended to construct complex systems out of little bits of code written in C, so the performance of the telecoms system would be largely dictated by the efficient code compiled from C. One of the main benefits that Erlang provides is safe and efficient concurrency using lightweight processes rather than threads. Another benefit is to handle errors in such a way that parts of a large system can fail, while the rest carry on, with maybe some loss of functionality, rather than actually crashing. Presumably, downtime due to system failure costs so much that the designers are prepared to spend a bit of CPU time and memory usage to keep things going when stuff goes wrong (which it will).

      1. ssokolow

        "out of little bits of code written in C"

        ...or Rust. Writing API wrappers around other languages' C extension APIs which enforce correct usage at compile time is a popular Rust pastime and there's one for Erlang NIFs named Rustler.

  10. karlkarl Silver badge

    I would be interested in knowing what kind of C++ was used.

    Was it:

    1) old-school raw owning pointer style

    2) unique_ptr and then raw non-owning pointer style

    3) shared/weak_ptr everywhere

    4) garbage collected (Boehm's or UE4's)

    5) full of javascript-style lambdas

    C++ is a multiparadigm language (as are many others), but this is quite critical to the measurement. Rust would be most similar to #2 (with raw pointer safety replaced by references and the safety of the borrow checker).

    1. DrXym Silver badge

      Rust is going to be 1), 2) and 3).

      The default is 1) since the compiler tracks object lifetimes and inserts the allocation/deallocation only if stuff lives on the heap. If you use Box<> then it's more like 2) since a Box is heap allocated. If you use Rc<> or Arc<> then it's more like 3) although Rc<> is not atomic and therefore cheaper than a shared_ptr if all you do is on a single thread.
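      A minimal sketch of those three styles side by side (function names like `boxed_sum` and `share` are illustrative only):

      ```rust
      use std::rc::Rc;
      use std::sync::Arc;

      // 2) Box<T>: a single-owner heap allocation, analogous to std::unique_ptr;
      // freed deterministically when the Box goes out of scope.
      fn boxed_sum(data: Box<Vec<i32>>) -> i32 {
          data.iter().sum()
      }

      // 3) Rc<T>: shared ownership with a non-atomic refcount, cheaper than
      // shared_ptr when everything stays on one thread.
      fn share(data: Vec<i32>) -> (Rc<Vec<i32>>, Rc<Vec<i32>>) {
          let first = Rc::new(data);
          let second = Rc::clone(&first); // bumps the count, no deep copy
          (first, second)
      }

      fn main() {
          // 1) Plain ownership: the compiler tracks the lifetime, no GC needed.
          let v = vec![1, 2, 3];
          assert_eq!(boxed_sum(Box::new(v)), 6);

          let (a, b) = share(vec![4, 5]);
          assert_eq!(Rc::strong_count(&a), 2);
          assert_eq!(*a, *b);

          // Arc<T> is the atomic (thread-safe) sibling, closest to shared_ptr.
          let c = Arc::new(42);
          assert_eq!(*Arc::clone(&c), 42);
      }
      ```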

      Performance should be roughly analogous to correctly written C or C++. Where I think Rust performance definitely exceeds C++ is for string & collection manipulation because it has first class slice support as well as super expressive iterators.

      1. ssokolow

        Exactly.

        Back around the time Rust was going 1.0, there was a blog post on Planet Mozilla I wish I could find again which was giving good examples of how C++ developers leave performance on the table compared to Rust's &str (string slice).

        Why? Simply because they don't want the maintainability headache of having to manually keep track of the lifetimes involved in std::string_view in large projects and adopt a "when in doubt, make a copy" philosophy instead.
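        To illustrate the point: a `&str` is just a borrowed pointer-plus-length view into existing UTF-8 data, so sub-strings cost nothing, and the compiler does the lifetime bookkeeping that std::string_view leaves to the programmer (a hedged sketch; `first_word` is a made-up helper):

        ```rust
        // Returns a view into `s` without allocating or copying anything,
        // unlike the "when in doubt, make a copy" std::string style.
        fn first_word(s: &str) -> &str {
            s.split_whitespace().next().unwrap_or("")
        }

        fn main() {
            let owned = String::from("hello world");
            let w = first_word(&owned); // borrows, no copy
            assert_eq!(w, "hello");
            // The borrow checker guarantees `owned` outlives `w`; with
            // string_view that invariant is the programmer's problem.
        }
        ```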

    2. bombastic bob Silver badge
      Devil

      Yes. I wondered something like that as well. Reliance on exceptions and other "object unwinding" kinds of inefficiencies may be a big part of their (unfair?) test.

      Good C++ code looks a LOT like good C code. (I say this a LOT)

  11. Rich 2 Silver badge

    I wish this madness would just stop

    I have no particular issue with Rust - never used it but keep promising myself I will one day - but let’s face it - the world runs on C. With a bit of C++ sprinkled around (*)

    Instead of constantly bashing C because it lets you shoot yourself in the foot, why not just learn how to bloody use it properly and stop writing buggy code. I can’t remember the last time I wrote some C that had a memory error. It’s not bloody rocket science - you just have to apply some common sense and a dose of discipline to what you’re doing. It annoys me how so many people complain about C because they are fuckwits who can’t learn how to use it.

    The article also mentions the difficulty Java (etc) people have with Rust. Well that’s because Java and Python and (god forbid) JavaScript abstract away the machine so much that the users of these languages often have no clue at all about what is going on under the hood. This is also why we have obscenely inefficient libraries written in these abhorrences and calls to big fat inefficient routines with no appreciation of how heavy they are. This is a fault of our education system rather than the languages, of course.

    (*) Of course there are still systems out there happily running Fortran and COBOL code, and while they are being replaced as time goes by, they are still there

    1. Paul Crawford Silver badge

      Re: I wish this madness would just stop

      'C' is the universal assembler, it lets you do all of the low level things you need to do in an OS, etc, including foot-shooting, without having to learn assembly for a given CPU. There are many ways to make safer C, such as enabling and responding to compiler warnings, using static tools such as Coverity, and dynamic tools such as Valgrind to check that things are going well in your memory-use department.

      But the world runs on cheap, and good programmers are not so cheap and take longer to build, test, document, and retest their code. How many companies actually give a fsck about that?

    2. DrXym Silver badge

      Re: I wish this madness would just stop

      Even kernel developers have their fair share of CVEs caused by aspects of the language. At what point do you ask yourself, if those people can't write code without being bitten by things caused by the language then what does it say of your average programming team?

      As for Java / Python and their issues with Rust (or C / C++), it is probably because they're high level languages where discarded objects go away to live on a farm to run and play and it never crosses a dev's mind what happens to them after that. Having to learn that stuff is the difficulty. In Rust it will beat you at compile time until you learn. In C/C++ it will beat you at runtime (NPEs, crashes, memory leaks) until you learn.
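      For instance, the classic dangling-reference pattern that bites C/C++ at runtime is a compile error in Rust (a minimal sketch; `sum_via_borrow` is an illustrative name):

      ```rust
      fn sum_via_borrow() -> i32 {
          let data = vec![1, 2, 3];
          let view = &data; // shared borrow of `data`

          // Uncommenting the next line is the use-after-free pattern C/C++
          // punishes at runtime; rustc rejects it at compile time with
          // "cannot move out of `data` because it is borrowed":
          // drop(data);

          view.iter().sum()
      }

      fn main() {
          assert_eq!(sum_via_borrow(), 6);
      }
      ```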

      1. Gordon 11

        Re: I wish this madness would just stop

        In Rust it will beat you at compile time until you learn. In C/C++ it will beat you at runtime (NPEs, crashes, memory leaks) until you learn.

        Compile time is mine. Runtime is yours....

        I know which I prefer when robustifying code (hint: not the latter).

        1. DrXym Silver badge

          Re: I wish this madness would just stop

          That makes no sense unless you fling your code out of the door never to see it again.

          If you have actual customers or users you have to support, then that runtime bug will arrive back on your desk and you could spend weeks trying to isolate and fix it. Especially if it's of the "it just crashed" variety.

          That is why code development is all about finding and fixing bugs early. Not only because the end product is more reliable (and you have happier customers) but because its easier to fix a bug long before it gets to be an end product.

          1. kewlio

            Re: I wish this madness would just stop

            Sadly, after several decades of commercial programming experience, I can tell you that nobody cares about bugs unless they impact a company's bottom line. Almost all software is, and can continue to be, riddled with bugs, but as long as they don't present a visible block on what the customer is trying to do, they can be postponed, ignored, added to a never-ending backlog of work, or whatever. Time to market is very important; development cost is very important. It's all very well having these ideals that we're going to do a better job of programming, but it's the person paying the bills who's going to decide. They don't care about your ideals, only the money. No development manager gets fired for delivering something early with a few issues. No development manager gets fired for delivering inefficient code. These two are just there to be considered for the next version. Delivering late, on the other hand? Yup, seen plenty of people get the chop for that.

            If a bug arrives back on my desk, there's a whole lot of things that have to happen before I start working on it, it will be prioritised according to the number of customers affected, additional features that are to be delivered (that will make more $$$), how important are the customers and so on. That's a lot of 'opportunities' to get out of fixing it.

            NB: This is not how I want things to be, but it is life.

            1. fg_swe

              Regulated Industries, Cyber Threats

              It must be noted that there are industries and application fields which operate under somewhat stricter regulations. A serious bug in ABS brake system software will kill someone sooner or later. Same for bugs in railway, aerospace and medical systems. These industries use extensive documentation and testing to weed out these bugs, standardized in DO-178, ISO 26262, and other norms.

              An emerging thing is cybernetic threats, which can have very severe consequences (losing a war, for example) for the nation, bank, company or person who uses a certain system. These users will in some cases have a very dim view of commercial software such as Windows or Linux.

              But I agree that every developer is at some point under commercial pressure to deliver "something working". If the compiler can help the developer to find as many bugs as possible(mostly due to the type system), this will be a powerful aid.

    3. fg_swe

      C and the Cyber War Domain

      If we use your terms for one second, the engineers building the Linux, Windows and HPUX kernels were "f-wits".

      In the next second we should realize that humans are not robots and we DO make mistakes now and then. Small mistakes should not mean an attacker can take over the process or the entire system (kernel exploit).

      See this http://sappeur.ddnss.de/Sappeur_Cyber_Security.pdf

    4. Zanzibar Rastapopulous

      Re: I wish this madness would just stop

      > "I can’t remember the last time I wrote some C that had a memory error."

      That's the problem, you don't even know if it did.

  12. Version 1.0 Silver badge
    Meh

    The language is not that important

    It's the ability to use it well. It's odd that assembler did not appear in the efficiency list; it's actually way more efficient than any language if the writer does a good job - and, like all languages, only if the writer does a good job.

    "The determined Real Programmer can write FORTRAN programs in any language." - Ed Post, 1982

    LOL, I remember writing FORTRAN programs in Pascal for years.

    1. bombastic bob Silver badge
      Devil

      Re: The language is not that important

      There have been a few FORTRAN to C translator efforts. Some of them are pretty good. But yeah writing FORTRAN-like C code is actually pretty straightforward, turning FORMAT statements into equivalent C format strings and using printf.

      The only thing to keep in mind is that FORTRAN functions and subroutines pass parameters by reference, so you'd use a pointer rather than a value to make it 100% equivalent. Reminds me of a bug some IDIOT who preceded me created by altering the value within the subroutine without realizing that it altered it in the calling function too, which caused a serious problem. THEN he didn't save his fixed source code, and the only source was the broken version. Months later I had to make a change and used his broken code as there was no other source. Then I had to find out what he did and fix it... and that person's name will remain on my S list indefinitely.

      1. Paul Crawford Silver badge

        Re: The language is not that important

        FORTRAN also supports the very weird concept of multiple entry points to a given subroutine. Most other languages only support multiple exit points (e.g. 'return' in C).

        Converting such a routine into C is far from simple and elegant, though you could have several dummy functions that call the main one with a goto jump value. Oh, I feel dirty just thinking about it!

        1. Gordon 11

          Re: The language is not that important

          FORTRAN also supports the very weird concept of multiple entry points to a given subroutine.

          This is not weird. PL/1 (or at least PRIME's SPL) allowed this.

          All that is required is to know what you are doing. If you don't, why are you "writing" code in the first place?

          1. that one in the corner

            Re: The language is not that important

            Leaving aside the "this is not weird (names some other language that isn't exactly in the top 10 best beloved)", just because someone considers a language feature weird, especially in comparison to a newer language and all that language's offspring, in no way calls for that style of response.

            Multiple entry points is definitely weird, compared to the "formulas" that you are "translating", let alone structured programming. *But*, precisely because you do know what you are doing, you may well be able to come up with a perfectly valid technical reason for using them (such as saving code space in the Good Old Days of kilobytes of core). And you then noted down that weird usage in your accompanying lab notes, didn't you?

            Being able to recognise something as "weird" is definitely a Good Thing: it indicates a breadth of experience *and* the understanding - including self-awareness - that this thing isn't commonly encountered. The true measure of the programmer is then their response (which, I really hope, doesn't include just ramming this weirdness into everything they do until it becomes the norm for them: self-awareness can go down as well as up).

        2. sw guy

          Re: The language is not that important

          Weird, maybe, but what about the following feature:

          Pass label as parameter for a function who then can decide to return there instead of just after call ?

        3. ssokolow

          Re: The language is not that important

          That's the tail echoes of the "unstructured GOTO" that Dijkstra was railing against in his famous "considered harmful" paper.

          Not as bad as at its height, when people would jump into any function at any point, but still worse than proper structured programming languages like C written around the "one entry, one or more exits" principle, making structured goto more like a beefed up break, continue, or try/catch.
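          The "beefed up break" idea can be seen concretely in Rust's labeled breaks, which allow a jump out of nested loops while still only targeting a well-defined block exit (a sketch; `find` is an illustrative name):

          ```rust
          // Search a 2-D grid; `break 'rows` exits the outer loop directly,
          // a structured goto rather than an arbitrary jump.
          fn find(grid: &[[i32; 3]], target: i32) -> Option<(usize, usize)> {
              let mut found = None;
              'rows: for (r, row) in grid.iter().enumerate() {
                  for (c, &cell) in row.iter().enumerate() {
                      if cell == target {
                          found = Some((r, c));
                          break 'rows; // jumps, but only to the loop's single exit
                      }
                  }
              }
              found
          }

          fn main() {
              let g = [[1, 2, 3], [4, 5, 6]];
              assert_eq!(find(&g, 5), Some((1, 1)));
              assert_eq!(find(&g, 9), None);
          }
          ```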

    2. Paul Crawford Silver badge

      Re: The language is not that important

      it's odd that Assembler did not appear in the efficiency list, it's actually way more efficient than any language if the writer does a good job

      With super-scalar processors, where your pipeline delay/blocking depends on other stuff that is running, it is REALLY hard to beat any decent optimising compiler for speed. Where assembly is justified (to me at least) is when you have to access very CPU-specific features, but that is really unusual outside of writing for an OS, or an embedded microcontroller. Even then, you are probably better off wrapping it in a simple function to call from your choice of C/C++/Rust/etc used for the main code.

  13. a_yank_lurker Silver badge

    Apples to Grapefruit

    The power consumption ratings of various languages are more than a bit dodgy. The dodginess is that they only look at a specific use case for all languages. Every good language has a set of uses where it is one of the best options. It is the balancing of execution time, code robustness, programming time, etc. that determines which is best. I would not use C#, Python, TypeScript, Java, etc. where one would use Rust, C, C++, or Go, and vice versa. The languages, all good, are designed to excel in different areas, and conversely they all suck in different areas. Every language design is a tradeoff between features and intended use cases.

  14. Lorribot Silver badge

    We all need to Stidy more

    "according to the stidy"

    I think we need another stidy to validate that paper.

  15. Lunatic Looking For Asylum

    Thrashing about wildly looking for straws to clutch...

    Standing on a stage and saying 'but look how we can save energy' smacks of desperation to me. Maybe they've realised that nobody's listening.

    1. fg_swe

      Re: Thrashing about wildly looking for straws to clutch...

      There are many use cases where energy consumption matters. Think of aerospace applications that have a tight space and cooling budget. Think of mini satellites with small solar panels. IoT sensors. In memory databases.

      Just because energy is still cheap and your accounting code can be done in Python doesn't mean much.

      1. Lunatic Looking For Asylum

        Re: Thrashing about wildly looking for straws to clutch...

        Where did I say energy consumption didn't matter ?

        In the examples you mention I'd hope that the programmers were aware of the power consumption issue and used the most appropriate languages and techniques to minimise that.

        The reason we're using so much energy is because we have a "throw more CPUs at it" mentality rather than a "can we write it better?" one.

        Rust (and any other language) isn't going to solve the problem - (almost) nobody tries to write efficient code anymore or revisits their old code to clean it up - if it works - leave it.

        All languages allow you to produce bad, inefficient results - e.g. pretty much anybody could write a bubble sort, and Rust is no exception - so using power consumption as a marketing message is spurious.
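        Indeed, Rust happily compiles an O(n^2) bubble sort; memory safety says nothing about algorithmic efficiency (a deliberately naive sketch):

        ```rust
        // A perfectly safe, perfectly inefficient O(n^2) sort.
        fn bubble_sort(v: &mut [i32]) {
            let n = v.len();
            for i in 0..n {
                // After pass i, the last i elements are already in place.
                for j in 0..n.saturating_sub(1 + i) {
                    if v[j] > v[j + 1] {
                        v.swap(j, j + 1);
                    }
                }
            }
        }

        fn main() {
            let mut v = vec![5, 1, 4, 2, 3];
            bubble_sort(&mut v);
            assert_eq!(v, vec![1, 2, 3, 4, 5]);
            // v.sort_unstable() would do the same job in O(n log n).
        }
        ```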

        1. fg_swe

          Re: Thrashing about wildly looking for straws to clutch...

          Imagine all Java developers switching to Rust. We can assume memory consumption would go down by 50%, based on experimental results so far.

          That would definitely be a reduction in energy consumed for manufacturing RAM and for operating RAM.

          1. mpi

            Re: Thrashing about wildly looking for straws to clutch...

            >We can assume memory consumption would go down by 50%, based on experimental results so far.

            IF the code using that memory is efficient, and IF it is possible to port the old code.

            And those are 2 BIG if's.

            Rust isn't inherently more memory efficient. I can write inefficient code in any language. Heck, I could write inefficient assembly.

            In fact, the more complicated the language, the easier it is to freck up and write something that looks okay, but has huge potential for improvement. Yes, rust can result in very efficient code. It also gives me all the complexity required to produce something that kinda works, but only as long as I throw $$$ worth of hardware at it to keep it ticking at scale.

            As to the second point: developer time matters. If I have 1,000,000 Java engineers, each spending six months learning Rust, that is 6,000,000 months, or 500,000 years, of time invested, and not a single line of code has been ported to Rust at that point. And there are billions of lines of enterprise-level Java out there that would need to be rewritten from scratch, and also tested, deployed and maintained. Who's going to do that? The answer is "no one".

            And for all that, what do we get? A single-digit improvement in an area that accounts for maybe 1% of global power consumption. Wow.

            A much better use of all these countless work-hours and mental resources, would be figuring out how to reduce individual traffic, improve public transport, and get people away from believing that it's a good idea to burn 3l of gasoline in an SUV to get 500ml of Milk from the corner store.

            1. fg_swe

              Re: Thrashing about wildly looking for straws to clutch...

              We always assume competent Java and competent Rust developers. Identical algorithms. Using standard libraries of each environment. Then the Java mark+sweep GC does generate a 2x RAM overhead, for very systematic reasons.

              I have done this myself for an application that processes CSV files.

              The RAM overhead could be pushed down by aggressive GC settings, but that meant Java runtime would no longer be competitive with Sappeur.

              1. mpi

                Re: Thrashing about wildly looking for straws to clutch...

                >We always assume competent Java and competent Rust developers.

                Even if every single developer was perfect at his job, there are overly optimistic deadlines, badly planned projects, code written during crunch time, requirements changing halfway through, decades-old legacy code to be interfaced with new systems, etc. etc.

                A language can only do so much.

                It's the quality of the code written in practice that matters most.

                So the best a language can do is help the developer to write good code.

                And in my opinion, the best way a language can achieve that, is by being easy to learn & easy to read.

  16. I code for the bacon

    Very confused researchers

    This has nothing to do with languages. They are comparing the performance of compiled programs, runtime environments and/or virtual machines. The garbage collection reasoning is especially poor: C/C++ programs can be designed to run, and do run, on garbage-collected runtimes. And if you use fixed memory buffers and other embedded-programming techniques, you can write complex Java programs that will never need garbage collection, and even compile them to native executables. A language is one thing; the runtime implementation of a program written in it is quite another, even if the language has only one.

    1. fg_swe

      Re: Very confused researchers

      For systemic reasons, Java cannot use memory as efficiently as C++, Rust or Sappeur. For example, you cannot allocate programmer-defined objects on the stack. Stack allocation is the most efficient allocation approach you can think of, because it is essentially just incrementing the stack pointer and calling the constructor. The memory is most likely already in the cache, which is also critical.

      Your idea of "allocating once and forever in Java" can be done for hard-realtime systems (I guess it was done for the Barracuda drone), but it totally defeats the idea of using the Java Standard Library and many popular programming patterns.
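      The stack-allocation point can be sketched in Rust, where programmer-defined types live on the stack by default and the heap is opt-in via Box/Vec/etc. (the `Point`/`midpoint` names are illustrative):

      ```rust
      // A programmer-defined type; values of it are plain stack data.
      #[derive(Clone, Copy, PartialEq, Debug)]
      struct Point {
          x: f64,
          y: f64,
      }

      // `a`, `b` and the result all live in registers or on the stack:
      // no allocator call, no GC bookkeeping, just a stack-pointer bump.
      fn midpoint(a: Point, b: Point) -> Point {
          Point { x: (a.x + b.x) / 2.0, y: (a.y + b.y) / 2.0 }
      }

      fn main() {
          let m = midpoint(Point { x: 0.0, y: 0.0 }, Point { x: 2.0, y: 4.0 });
          assert_eq!(m, Point { x: 1.0, y: 2.0 });
      }
      ```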

      1. ssokolow

        Re: Very confused researchers

        "Go Does Not Need a Java Style GC" by Erik Engheim is a good post to read for more details on how and why Java is still getting bitten by its 90s gamble on smarter GCs being a magic bullet and the decision to bake that deeply into the fundamentals of the language.

  17. Anonymous Coward
    Anonymous Coward

    Good to see Pascal still representing.

  18. Anonymous Coward
    Anonymous Coward

    Perl

    But, Perl's a singer.

  19. Scene it all

    LISP still places pretty well in those rankings. :)

    I have done a little work in languages with this 'ownership' principle. Though it takes more time to get things to compile, you save a LOT of time NOT debugging whole classes of mistakes.

  20. Anonymous Coward
    Facepalm

    Job creation scheme...

    1. Take the latest powerful CPU. Intel will do.

    2. Realise that you've already lost part of it because it's running its own embedded operating system, and hardly anyone knows why.

    3. Run an operating system on it based on concepts dating back to the 70s with more compatibility hacks and legacy code than you'd ever imagine.

    4. Run a virtual machine manager over the top of and down the side of this operating system.

    5. Run virtual machines in the virtual machine manager.

    6. Install the same operating system in each VM.

    7. Decide to write your highly scalable application in Python.

    8. Cripple any per-core threading capabilities the CPU designers worked very hard to implement and tune, because of the Python GIL.

    9. Decide to use Python's asyncio library - a truly awful piece of coding that brings the single-threaded Windows 3.1 cooperative task switching model to Python.

    10. Wonder why your application doesn't scale, despite running on the very latest hardware.

    11. ...but you'll never know because no-one understands more than one software layer any more.

    12. Go back to step 1 with more money.

    Modern IT is absurd, akin to trying to juggle lumps of raw liver underwater in the dark with both hands tied behind your back.

    1. fg_swe

      Re: Job creation scheme...

      I can assure you that when it matters, much more efficient approaches are used. For example, one major stock exchange uses C++ for the trading system. They employ expert developers and even Linux kernel experts.

    2. mpi

      Re: Job creation scheme...

      >Decide to use Python's asyncio library - a truly awful piece of coding that brings the single-threaded Windows 3.1 cooperative task switching model to Python.

      This is probably the most succinct, correct and useful description of asyncio I have ever read.

      Thank you, and have an upvote :-)

  21. Gordon 11

    There seems to be a suggestion that being easy is better than being efficient?

    Dragging everything down to the Highest Common Factor (usually much lower than the Lowest Common Multiple) is not a good model.

    There's a reason that "rocket science" is not easy. Try to make it "easy" and things tend to blow up in your face.

    Can we stop pandering to those who think that everyone should be able to program, and try to accept that it is a skill?

    To be learnt and practised by those who can do so.

    The rest can teach it (badly?).

    1. fg_swe

      Barking Up Wrong Tree

      Even expert software engineers will create severe bugs now and then. The evidence in the CVE database is very clear. The cost and security threats from these bugs can no longer be ignored. Memory-safe languages are a very important safety/security approach, along with firewalls, MMUs, sandboxing, strict input parsers and so on.

      The latest novel C exploit reports are about medical devices running VxWorks. They had an exploitable bug in the TCP stack, which means the device could be commandeered by simply sending "bad" IP packets to the device.

    2. mpi

      >There's a reason that "rocket science" is not easy. Try to make it "easy" and things tend to blow up in your face.

      The science isn't easy, but the tools to evaluate, explore and use it should be as easy as possible. This is true for every scientific and engineering field.

      Provided that the tools do not compromise correctness and efficiency beyond certain tolerances, which depend on the actual task at hand, but that goes without saying.

      Bottom Line:

      Programming languages are tools.

      Good tools are as easy as possible, and as difficult as necessary.

  22. sreynolds

    It doesn't matter which language you use...

    When you have hardware proving the randomness of SHA256, the planet is fucked no matter what language you use.

  23. TomPhan

    Does the energy saving include the days of online searching as you try to find out how to do something, or the wasted weeks when a different solution is thought of?

    Possibly we should go with languages which are very easy to learn and just make a very efficient compiler.

    1. fg_swe

      Did it ever occur to you that the rules and structure of a language limit its runtime efficiency?

      For example, Java needs 2x the RAM of an equivalent Sappeur program. No compiler can change that fact, because this follows from the mark+sweep GC approach.

      Compilers are not the same as unicorn horses.

  24. kewlio

    The percentages could be a bit misleading...

    Consider a cloud application written in Python, running in a C container, under a C kernel, talking to a C (Lua?) Redis cache, and a (C?) SQL database, accessed through a load balancer written in C, with everything running under Xen (in C). Now consider the same with Rust: that's the percentage I want to see, surely more useful than a straight language comparison.

    1. fg_swe

      Re: The percentages could be a bit misleading...

      All the application-level performance data we have so far suggests kernels and database servers could be written in a memory safe language with only moderate runtime penalties (in the order of 20% or less).

      Even before Unix became popular, there were successful lines of Algol mainframes, which used at least partial memory safety inside the kernel (ICL, Unisys, Moscow). According to Sir Tony Hoare, this worked rather efficiently.

      In the world of high security computing (government+mil) they already use memory safe languages.

      1. Anonymous Coward
        Anonymous Coward

        Re: The percentages could be a bit misleading...

        I think the point they are making is that the runtime with Rust isn't 120% of C - it's 120% x 120% x 120% x 120% x 120%... for all the layers in the stack.

        1. fg_swe

          Re: The percentages could be a bit misleading...

          Why should these numbers multiply?

          Assume a C program has a user space runtime of 5000ms, and a system call runtime of 1000ms. The equivalent system based on Sappeur would have 20% overhead: 6000ms in user space and 1200ms in the kernel.

          That is a total runtime of 6000ms vs 7200ms. A total overhead of 20%.
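          The arithmetic here can be checked in a few lines. A minimal sketch (my own, using the numbers above): if every layer in the stack carries the same 20% overhead, the layer times add rather than multiply, so the overall overhead stays at 20%.

          ```rust
          // If each layer runs (1 + overhead) times slower, the total is
          // still only (1 + overhead) times slower, because the per-layer
          // times add rather than multiply.
          fn total_overhead(layers_ms: &[f64], per_layer_overhead: f64) -> f64 {
              let base: f64 = layers_ms.iter().sum();
              let slowed: f64 = layers_ms
                  .iter()
                  .map(|t| t * (1.0 + per_layer_overhead))
                  .sum();
              slowed / base - 1.0
          }

          fn main() {
              // 5000 ms in user space + 1000 ms in system calls, 20% each.
              let overhead = total_overhead(&[5000.0, 1000.0], 0.20);
              // 6000 ms becomes 7200 ms: 20% overall, not 1.2 * 1.2 - 1 = 44%.
              assert!((overhead - 0.20).abs() < 1e-9);
              println!("overall overhead: {:.0}%", overhead * 100.0);
          }
          ```

          Overheads would only compound multiplicatively if a slower layer also forced extra work on the layers above it, which is not what a uniform per-layer slowdown implies.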

          1. ssokolow

            Re: The percentages could be a bit misleading...

            Not to mention that Rust has the "unsafe" keyword, intended for letting you do C-style raw pointer trickery in hot regions while still upholding that the rest of the program will be memory-safe as long as you wrap a correct abstraction around the small bit that needs the intensive auditing. Heck, that's how the Rust standard library is implemented.

            ...so comparing safe Rust and C is misleading for the same reason that you wouldn't compare a C++ program using OpenCV against a Python program which reimplements all of OpenCV in pure Python, rather than just using PyOpenCV.
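            The pattern described above can be sketched briefly. This is an illustrative toy (`sum_first_n` is my own invented example, not standard library code, though the standard library's slice methods use the same technique): the `unsafe` block is buried inside a safe function whose own check upholds the invariant for all callers.

            ```rust
            // A safe wrapper around an unsafe interior: the assert at the
            // top establishes the invariant that every `get_unchecked`
            // call below relies on, so callers can never trigger UB.
            fn sum_first_n(data: &[u32], n: usize) -> u32 {
                assert!(n <= data.len(), "asked for more elements than exist");
                let mut total = 0;
                for i in 0..n {
                    // SAFETY: i < n <= data.len(), checked above, so this
                    // access without a per-iteration bounds check is in range.
                    total += unsafe { *data.get_unchecked(i) };
                }
                total
            }

            fn main() {
                assert_eq!(sum_first_n(&[1, 2, 3, 4], 3), 6);
                println!("ok");
            }
            ```

            Only the body of `sum_first_n` needs the intensive auditing; everything calling it stays in fully safe Rust.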

            1. fg_swe

              Unsafe Code Parts: Great

              There exist valid reasons for using small parts of unsafe code in a larger memory safe system. For example, the Sappeur standard library will eventually call the POSIX API using the inline_cpp[[ ]] mechanism.

              By doing so, the amount of error-prone unsafe code will still be a small percentage of total code and we can assume we will (statistically speaking) have very few memory bugs. inline_cpp[[ ]] should only be used by experienced C++ developers and it should be reviewed by another seasoned C++ engineer. Unit tests should be created, and Valgrind should be run against them.

              So, it is not an all-or-nothing proposition, but rather an attempt to squeeze out the exploitable bugs related to memory safety.

  25. mpi

    Alright, so the way to save power in datacenters....

    ...is using a programming language that provides a marginally more efficient use of electricity...

    ...instead of reevaluating whether we really need to store, process and distribute all these exabytes of ROT Data, or run the gazillions of pointless (cr)apps, with layer upon layer of tracking bulls... on top?

    Sure, let's learn Rust, and then use this marginally more energy efficient language to develop the next super-needed fitness-tracker, daily-water-intake-tracker, cat-meme-generator, and to wrangle 10 Megafantastillion of photos showing people's food. Because our civilization desperately needs all this to function!!!

    That's how you save the planet </sarcasm>

    1. fg_swe

      Re: Alright, so the way to save power in datacenters....

      The energy consumption differential is very real. If you were an engineer, it would be of interest to you.

      Intel never cared about energy consumption, so they missed out on the mobile market. Mobile devices are very much constrained by battery.

      So in fact it is an engineering-economic thing. Very much like cyber security, which is also a very real problem.

      1. mpi

        Re: Alright, so the way to save power in datacenters....

        >The energy consumption differential is very real.

        Did I say it isn't?

        But "real" doesn't mean that the difference matters on a global scale.

        Datacenters and their infrastructure are ~1% of global energy consumption. Most of that is infrastructure we have already built as efficiently as possible: network components, OS kernels, FS drivers, etc.

        So we take the fraction of that 1% that is the actual application-code running on these datacenters, and we shave a few percent off that. What percentage of global consumption will that be? I don't know, but I assume it's not much.

        Meanwhile, new code is written, and new hardware spun up month after month, for more pointless apps, and to shuffle yet more ROT data around. And rockets are launched for space-tourism, the car is still widely accepted as being the ultimate mode of transportation, we still produce mountains of milk, meat and other energy-inefficient foodstuffs, and yes, we still burn coal as an energy source.

  26. JoeCool

    Conundrum Conundrum Conundrum

    I can see these people wanting to champion Rust, but just from looking at the discussion, a few things stand out

    1) Rust was designed to replace C, but it's being pushed to replace Java. So all the reasons that Java was/is popular remain.

    2) There is a high bar to writing quality code. With C, that bar is learned discipline in using the language and hardening of the libraries through experience. With Rust the bar is a steep learning curve and a more restrictive language. I don't see a clear advantage, although I have a personal preference for C++.

    3) If energy consumption really becomes a thing, I can guarantee that the biggest gains will come from re-writing apps in the current language, but more efficiently. The concepts of code quality seem to elude the management of most tech companies and the offshore slave labour they use.

    1. fg_swe

      No

      The main reason for the creation of Sappeur and Rust was to eliminate the nasty bugs which come from a lack of memory safety. Also in multithreaded programs.

      As cyber crime/war is now a very real thing, memory safety is an additional, very valuable security measure.
