Google putting its trust in Rust to weed out memory bugs in Android development

Google has signalled support for the Rust programming language in low-level system code to limit the prevalence of memory-based security vulnerabilities. The Android project has largely been built in two languages. Java (and more recently, JVM-compatible languages like Kotlin) have been favoured for higher-level parts of the …

  1. Dave 15

    Maybe that explains

    Have a new Nokia phone (and several very old ones). The new one runs Android One. It has a colossal amount of memory but runs like a sick dog or not at all (frequently I can't even put the passcode in when waking the device up!)

    Maybe this is because the Android programmers, or the Nokia programmers, or both, are such useless twats they can't program in a proper language on a proper operating system without fucking it up.

    Oh please, please, please can someone get Symbian back into circulation - my old phones with Symbian on them run a LOT longer, a LOT more reliably and, even with trivial memory and processors, a LOT faster. The morons who wanted to jump on the loud-mouthed, American-inspired modern phone bandwagons (Apple and Android) were wildly mistaken.

    1. Tom Chiverton 1 Silver badge

      Re: Maybe that explains

      More likely the CPU is crap, hence running "Android One" rather than the full fat thing.

      1. PerlyKing

        Re: Android One

        As far as I know Android One isn't some sort of cut-down version of the Android OS, but a program which guarantees a minimum duration of updates and near-stock Android UI. I think it was intended to cut down on "landfill Android" phones which were dirt cheap to buy but then never received any updates.

      2. JDPower666

        Re: Maybe that explains

        Android One is standard Android, not a cut down version. You're thinking of Android Go.

      3. Dan 55 Silver badge

        Re: Maybe that explains

        Android One phones have a HAL written by the handset manufacturer and Google pushes out software updates to everything else above that with very little intervention by the manufacturer. Presumably you're going to get the full-fat Google experience though - every single Google app bloating in size over time.

    2. adrian727

      Re: Maybe that explains

      Try the newer Nokia feature phones; they're using KaiOS, which should be what you wanted.

    3. Anonymous Coward
      Anonymous Coward

      Re: Maybe that explains

      Android One: 'someone' had the idea that RAM was expensive, so a lot of programmer time was spent reducing RAM consumption, which is why you'll find apps constantly unloading and reloading parts of themselves on that OS.

      It's a Google thing. I have software that has 2 activities and takes 30 seconds (2 minutes on older tablets) to recreate all its precalculation tables.

      When I switch from activity 1 to activity 2, Android will unload activity 1. When I switch back, I have to wait the 30 seconds for it to finish its reload/recalc. There is 6GB of RAM free on this device.

      It's dumb.

      The target for Android One devices was 128MB of RAM. RAM is a picture printed on a chip; more RAM is just a slightly denser picture. How much money do you think they actually saved with that?

    4. ThomH

      Re: Maybe that explains

      Ugh, Symbian.

      One font. GPU support left as an app-by-app problem, leaving the browser with three fixed levels of zoom (and, again, rendering everything in the single Nokia font). Not POSIX-compliant, with a weird branched dialect of C++ that looked very little like C++98, never mind having any hope of being pulled towards C++11 and later. All coupled to hacked-on touch-screen support.

      On my Nokia N8, with no third-party software installed: three completely different kinds of text scroll area, two of them direct manipulation, one that involved dragging a scroll bar. Many, many built-in parts of the OS not yet adapted for a virtual keyboard — the process for navigating to a particular URL in the browser was this: (1) open context menu; (2) find URL entry and select it, this brings up a completely different screen with a box for typing the URL; (3) this screen isn't virtual keyboard aware, so tapping on the box brings up the full-screen keyboard. Enter your URL here and tap to enter it into the previous text box; (4) on the previous screen, tap to use what you just entered as the URL; (5) now, finally, you're returned to the browser to see your page load.

      The week before the burning platforms memo I was at an official Nokia engineering event at which the sales pitch was for QtQuick, Nokia still owning Qt at the time and it being the intended isolation from Symbian's awfulness and the upward path to Maemo.

      The person they'd invited — a third-party developer with a successful app — more or less presented as 'Symbian isn't that bad because with some intense coding I was able to recreate UITableView and Symbian is cool because I finally got to stick it to those designers by having the excuse of platform inability not to do most of what they wanted'. Not a convincing sales pitch.

      I think the plan of killing Symbian and transitioning to Maemo via Qt was smart, it's just a shame that the unexpectedly-fast collapse of the market for feature phones in the wake of Android took away the opportunity to execute.

      1. Dan 55 Silver badge

        Re: Maybe that explains

        There was PIPS which gave you a POSIX-like environment but signals were a bit flaky.

        I thought everyone used Opera Mobile on Symbian anyway.

        You could use Qt on Symbian too, I think they got it to the stage where the same code would work on Symbian and Maemo which was pretty good.

        The collapse of the market wasn't that unexpected after the burning platform memo. It's difficult to say who did the most damage to their own business, Elop or Ratner.

  2. Anonymous Coward
    Anonymous Coward

    Garbage collection

    Background history: finalize in Java. When an object was no longer referenced, it was garbage collected and the finalize method called, so the object could clean up anything the garbage collector doesn't know about, e.g. memory in a graphics heap, external network connections, locked resources, etc. A very useful function. A very *essential* function for a garbage-collected / reference-counted environment.

    2010-2018: the garbage collector gets lazier and lazier; finalize cannot be relied upon, as it gets called less and less and later and later.

    2018: finalize gets deprecated without a suitable replacement. Various half-assed alternatives are promoted, which basically amount to the class re-creating a separate reference count and garbage collect, and forcing explicit calls to 'free' or 'close' the external resources associated with the object. Why have garbage collection, then, if you need your own reference counting? Give the class a 'FreeMeNow' destructor if you're not prepared to fix finalize, so at least we have one consistent reference count!

    So now you allocate a block and *maybe* it will be freed and maybe not, so how do you check for memory leaks? Well, there is a sort of way using WeakReferences. A WeakReference doesn't count as object usage, and it goes to null once the garbage collector determines the object is no longer used by anything except weak references. So you can track the objects that should be freed until their WeakReferences become null, at which point you know they've been garbage collected. Bingo - it involves some polling, but there is a way.

    Something like TrackLeaks(new WeakReference<String>(s), "Tracking s at line 2938");

    So Android 8 comes and goes and this works, within 2 minutes blocks that should be freed, are freed. Leaks are found and leaks are fixed as before.
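    As a sketch, the probe described above might look something like this in plain Java. TrackLeaks and the "S1Leak"-style labels are the commenter's own names; the method signatures, the polling interval and the rest of this reconstruction are assumptions, not real Android code:

    ```java
    import java.lang.ref.WeakReference;
    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch of the WeakReference-based leak probe: register a
    // WeakReference for each object that ought to become garbage, then poll
    // until the reference clears (collected) or a deadline passes (leak).
    public class LeakTracker {
        private static final List<WeakReference<?>> refs = new ArrayList<>();
        private static final List<String> labels = new ArrayList<>();

        // Register an object that *should* become garbage soon.
        public static synchronized void trackLeaks(WeakReference<?> ref, String label) {
            refs.add(ref);
            labels.add(label);
        }

        // Poll for up to timeoutMillis; returns the labels of objects the GC
        // has still not collected (i.e. suspected leaks).
        public static List<String> awaitCollection(long timeoutMillis) throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (System.currentTimeMillis() < deadline) {
                System.gc(); // only a hint; usually honoured on desktop JVMs
                synchronized (LeakTracker.class) {
                    for (int i = refs.size() - 1; i >= 0; i--) {
                        if (refs.get(i).get() == null) { // cleared == collected
                            refs.remove(i);
                            labels.remove(i);
                        }
                    }
                    if (refs.isEmpty()) return new ArrayList<>();
                }
                Thread.sleep(50);
            }
            synchronized (LeakTracker.class) {
                return new ArrayList<>(labels);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            // Built via StringBuilder so the String is not an interned literal
            // (interned literals are never collected).
            String s1 = new StringBuilder("stuff").toString();
            trackLeaks(new WeakReference<>(s1), "S1Leak");
            s1 = null; // drop the only strong reference
            System.out.println("suspected leaks: " + awaitCollection(5000));
        }
    }
    ```

    Note that `System.gc()` is only a request, which is exactly why this approach has to poll with a timeout rather than check once.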

    Now we get to Android 10, and I see blocks that hang around for hours. Let me give you some code so you can see how bizarre it is.

    {
        StringBuilder sb = new StringBuilder();
        sb.append("stuff");
        sb.append("stuff and more stuff");
        String s1 = sb.toString();
        TrackLeaks(new WeakReference<String>(s1), "S1Leak");
        // let's clone the string to show how annoying this is
        String s2 = new String(s1);
        TrackLeaks(new WeakReference<String>(s2), "S2Leak");
        return s2;
    }

    So, I've returned s2, and perhaps it leaks? No.... this code has been used for years, it does not leak, what's new is s1 leaks, not s2.

    It's like the garbage collector is doing ONE pass, very very rarely, and finds s1 is not suitable for garbage collection because it is used by s2*. It then determines that s2 can be garbage collected and stops. Done. Except it isn't done: s1 can now be collected, but it never gets another look.

    I see it's a "generational heap" now in Android 10; I suspect that because s1 stays around, it is moved to the long-term heap where it will hardly ever be examined for garbage collection!

    I'm not sure if that's the cause, but having spent days on it, code that ran happily on Android 8 now hits the 512MB limit on Android 10 and crashes. I am at a loss to explain these weird cases; that's my best guess.

    Honestly, I'm sick of it. They have ONE JOB: manage an 8GB RAM heap for software that can happily run in 800MB, and which forces itself to run in 400MB because of a crappy heap limit. I get at most 512MB and a crappy, unreliable garbage collector to work with. You give me an unstable, poorly designed OS that neither virtualizes the hardware like it should, nor delivers consistent behaviour across versions like it should.

    * Yes, this is correct. I know the documentation says it always takes a copy, but new String(s1) takes the string's *value* array when its length shows it is not a substring, so s2 would, I assume, be referencing s1's buffer in the above case. See the code of the String constructor.

    1. Anonymous Coward
      Anonymous Coward

      Re: Garbage collection

      finalize was deprecated because it was called after an object was scheduled for collection, but as it had a hard reference to the original object it could store that reference, thus making the object not collectable. That's bad. The replacement is not WeakReference, it's PhantomReference which will be placed on a ReferenceQueue after the object has been GCed. It has no hard reference to the object, so can't resurrect it.
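      A minimal, non-Android demo of the mechanics just described; the class and variable names are invented for illustration:

      ```java
      import java.lang.ref.PhantomReference;
      import java.lang.ref.Reference;
      import java.lang.ref.ReferenceQueue;

      // A PhantomReference is enqueued on its ReferenceQueue only after its
      // referent has been collected, and it cannot resurrect the object,
      // because get() always returns null for phantom references.
      public class PhantomDemo {
          public static void main(String[] args) throws InterruptedException {
              ReferenceQueue<Object> queue = new ReferenceQueue<>();
              Object resource = new Object();
              // Cleanup state (e.g. a native handle) would live in a subclass
              // of PhantomReference, never in the referent itself.
              PhantomReference<Object> ref = new PhantomReference<>(resource, queue);

              resource = null; // drop the last strong reference
              Reference<?> enqueued = null;
              long deadline = System.currentTimeMillis() + 5000;
              while (enqueued == null && System.currentTimeMillis() < deadline) {
                  System.gc(); // a hint; usually sufficient on HotSpot
                  enqueued = queue.remove(100); // wait up to 100 ms for enqueueing
              }
              System.out.println("enqueued after GC: " + (enqueued == ref));
              System.out.println("phantom get() is always null: " + (ref.get() == null));
          }
      }
      ```

      Since Java 9, `java.lang.ref.Cleaner` wraps exactly this PhantomReference-plus-ReferenceQueue pattern behind a simpler API.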

      WeakReference doesn't "go to null when the hard references go". It goes to null once the object has been GCed. It may still point to the object even if no other references to that object remain.

      GC is about memory cleanup, not resource cleanup. If you have enough memory, the GC won't run. When your application exits, it will exit leaving some objects uncollected. Relying on the GC for cleanup of *resources* (eg temp files) is absolutely the wrong approach.
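      For the resource point, the deterministic alternative in Java is try-with-resources: close() runs at the end of the block whether or not the GC ever does. A tiny sketch, with invented names:

      ```java
      // An AutoCloseable whose close() runs deterministically at the end of
      // the try block - no garbage collector involved.
      public class ResourceDemo {
          static class TempResource implements AutoCloseable {
              static int openCount = 0;
              TempResource() { openCount++; }
              @Override public void close() { openCount--; } // e.g. delete a temp file
          }

          public static void main(String[] args) {
              try (TempResource r = new TempResource()) {
                  System.out.println("open resources inside try: " + TempResource.openCount);
              } // r.close() runs here, even if the body throws
              System.out.println("open resources after try: " + TempResource.openCount);
          }
      }
      ```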

      1. Anonymous Coward
        Anonymous Coward

        Re: Garbage collection

        1. finalize worked, garbage collection worked. It was the move to lazy garbage collection that stopped it working.

        2. WeakReference is a way to probe for memory leaks, re-read my comments. That object should have a null weak reference and does not. It will not be freed.

        3. @"GC is about memory cleanup, not resource cleanup." It's about freeing up *objects* that are unused. finalize told the object it was unused, and thus anything it needed that was unknown to the garbage collector could go with the object. THE OBJECT WAS TOLD IT WAS UNUSED. You can duplicate that with a reference count, but then you're re-creating what the garbage collector is doing. Two ways of doing the same thing = opportunity for mismatch.

        What you have now is objects that *can* be released that are not released, and objects keeping a separate reference count to recreate the expensive garbage collect usage tracking.

        "Failed to allocate a 16 byte allocation with 0 free bytes and 0B until OOM, target footprint 537858840, growth limit 536870912"

        Welcome to Android 10: blocks not freed that can be freed. This is new; it breaks things that worked in Android 8. It fails to free blocks that can be freed because they are assumed to be permanent. Thus available memory is less, thus it leaks until it crashes.

        "Relying on the GC for cleanup of *resources* (eg temp files) is absolutely the wrong approach."

        I cannot even rely on the GC in Android 10 to do its job and garbage collect.

        What is the wrong approach is to FAIL to count references, and instead build a garbage collector that tries to figure out references by enumerating objects. Without proper reference counting, garbage collection becomes expensive and programmers start doing shit optimizations to reduce garbage collecting. In the process they leave less usable memory.

        THUS IT FAILS TO COLLECT GARBAGE PROMPTLY AND EFFICIENTLY.

        Can I rely on Android to collect garbage? No. I can rely on Android for f*ck all these days.

        As the garbage collects less garbage, and collects it later and lazier, so the available memory becomes less and less.

        If Google are not capable of doing a garbage collector, and I have to implement reference counting in my objects then give me a "freeme" method so I can get the object freed on return to the thread message pump, and I'll fooking garbage collect their objects for them.

        This incompetence isn't solved by you misrepresenting stuff and repeating falsehoods.

        GARBAGE COLLECTION SHOULD FREE GARBAGE PROMPTLY AND EFFICIENTLY. Doing otherwise reduces available memory.

        1. Anonymous Coward
          Anonymous Coward

          Let me put this a different way

          Let me put this a different way, so it's clear to Google and everyone else where this path leads and just how bad it is.

          You have a garbage collector that can free 10 objects a second. So I can use 10 objects a second and software will run forever.

          Then you try to make it faster by lazy garbage collecting. "The garbage collector is expensive, so let's make software run faster by not running garbage collection as often, or on all the possible things that can be garbage collected," you say. "After all, this object has been around a while already; let's not check it till an hour from now."

          So now Garbage Collection will only free 8 a second. So now my software, allocating 10 a second, will inevitably crash.

          It could run, but by failing to collect all the garbage, it inevitably will crash.

          You've done worse than reduce available memory, you've slowed the RATE at which objects can be allocated.

          The lazier your garbage collector is, the sooner the crash, and the more apps will hit the crash faster.

          I just happen to be doing something big and complicated with Android... so I hit that limit sooner than others.

          Yes, I know I should have gone with a proper operating system that can actually free memory. Hindsight is great. Tell me about it.

      2. Anonymous Coward
        Anonymous Coward

        "PhantomReference"?

        And what's next? "ZombieReference"? "BansheeReference"? "VampireReference"?

        When you have to resort to new attempts to make something work, it means it was ill-designed from the beginning.

        The true undead is Java. It should have been killed with a pole years ago. Instead it keeps on living by sucking the RAM of every machine so unlucky to meet it.

        1. Anonymous Coward
          Anonymous Coward

          Re: "PhantomReference"?

          It's a red herring.

          Optimal case for a garbage collector, is: "you stop using an object and immediately its memory is there for other uses."

          No wasted memory, and no blocks allocated unnecessarily causing expensive memory fragmentation.

          Having different stages when the object is still there but not yet cleaned up, is to simply codify the failure of the GC to do its job promptly.

          The bigger the delay between the two phases, the more memory is allocated unnecessarily and the more time it has to fragment.

          1. Anonymous Coward
            Anonymous Coward

            Re: "PhantomReference"?

            You're showing a very basic comprehension of garbage collection here, which is hugely complex when multiple threads are involved and you don't want to apply the "big lock". What you're describing is pretty much the opposite of a sensible strategy: the GC would be constantly analyzing paths trying to identify when references are cleared. This stuff isn't free, you know; it costs cycles.

            Almost every JVM release since I can remember has featured changes to the garbage collector; it's been completely replaced 3 or 4 times. All of these are moving towards zero locking, which I presume is very hard, as they've been trying to get there for 15 years or so.

            Forgive me, but it sounds like you have just enough knowledge to be dangerous. You've opened up java.lang.String and have a 50,000 foot idea of how a GC works, and you've made some assumptions which just aren't true - or, if they are true, they're not necessarily true, so they could change in the next release.

            I've made the same mistakes over the years, it happens - I was cleaning up some performance tricks I put in for Java 4 just last month as they were no longer a good idea. But you're trying to second guess the virtual machine and it's apparent that's not working out for you so well.

    2. Anonymous Coward
      Anonymous Coward

      Re: Garbage collection

      Some years ago I worked on a project where the system architect had decreed that all new code would be written in C# whenever possible (which meant Mono, on Linux), because the wonders of memory management were the only way the project could be productive enough to succeed.

      The first C# program I ran promptly leaked a huge amount of memory and crashed. The memory was allocated inside a lower-level library written in C and freed by the C# wrapper when IDisposable.Dispose was called, and somewhere there was a layer that forgot to Dispose something.

      There were similar problems with handles to database connections.

      There were Finalize methods as backstops in some cases, but since the memory that the C# collector knew about wasn't under pressure, collection wasn't triggered, so non-memory resources weren't freed.

      The C++ wrapper had a destructor that freed the memory when the object went out of scope, and it just worked. C++ does allow you to pass around raw pointers and lose track of ownership, though. Disciplined use of smart pointers can avoid that, but if you have masses of legacy code you don't want to rewrite, there's a good chance some of it gets it wrong, and the compiler, unlike Rust's, won't enforce your discipline.

      1. ovation1357

        Re: Garbage collection

        There's no one-size-fits-all solution, and there are certainly situations where one language is more suitable than another for a task. I really liked C# when I did some significant application development with it a few years back, and I was also impressed with the ease with which it could interface with C and C++ libraries, as well as giving the power to use 'unsafe' segments of code where necessary.

        If your first C# program crashed due to a bug in an underlying library, that's not necessarily a reason to dismiss the whole language. (And as an avid hater of most things Microsoft I really wanted to hate C#, but I couldn't.) Mono had a few limitations which meant that I never got my code working on Linux as well as on Windows, but it seemed a pretty decent effort.

        Managed languages are great when you want to be able to write some code using standard constructs that have been tested to death and manage their own memory; however, it's definitely not foolproof. I've seen terrible code written in many different languages - one could suggest that the more abstract and high-level the language, the more careless the developers become...

        But there's no way anyone would suggest writing kernel-level code in a language like C# or Java - that's currently C/C++ territory with smatterings of assembly thrown in for good measure - especially not when the current trend seems to be to take your beautifully type-safe language and then ignore that by using 'var' everywhere.

        I've not had any need to learn or use Rust as yet, but the idea of it looks very sensible: they've clearly identified some of the most common patterns of human error and come up with an elegant way of protecting against them without abstracting everything away and removing control from the developer.

        It sounds interesting and I'm watching this with great interest.

      2. StrangerHereMyself Silver badge

        Re: Garbage collection

        This has nothing to do with C# or GC, but with InterOp between C# and C code.

        If the C code had been written in C# there would've been no problems.

        Basically, every time you do InterOp between C / C++ and .NET you have a huge red flag waving at you. You really need to take a very close look at that code.

      3. Anonymous Coward
        Anonymous Coward

        Re: Garbage collection

        I'd suggest the 'Finalize' method backstop is more useful during development. You can always stress test it and force the GC.

        i.e.

        Finalize() { if (dbHandlesOpen > 0) assert("Hold up, someone forgot to close their database; fix this before you roll out this software"); ... }

        When you don't get explicitly called when the *garbage*collector* thinks you're unused, but *you* think you're still in use, you don't get the chance to catch the mismatch and report it, even in development code, even if you stress test it.

        Finalize is essential for that. You need to be told when the GC thinks you're unused. Even if you insist Finalize should only be used during development.
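        A development-time backstop along these lines can be built today with java.lang.ref.Cleaner (Java 9+) rather than finalize. A hedged sketch, with invented names, of flagging a handle that becomes unreachable without ever being closed:

        ```java
        import java.lang.ref.Cleaner;

        // If a DbHandle becomes unreachable without close() having been called,
        // the registered cleaning action fires and flags the bug - the modern
        // stand-in for the finalize()-based assert described above.
        public class LeakCanary {
            private static final Cleaner CLEANER = Cleaner.create();
            static volatile int leaksDetected = 0;

            static final class DbHandle implements AutoCloseable {
                // The state must not reference the handle itself, or the handle
                // can never become unreachable.
                private static final class State implements Runnable {
                    volatile boolean closed = false;
                    @Override public void run() {
                        if (!closed) {
                            leaksDetected++;
                            System.err.println("Hold up, someone forgot to close their database handle");
                        }
                    }
                }
                private final State state = new State();
                private final Cleaner.Cleanable cleanable = CLEANER.register(this, state);

                @Override public void close() {
                    state.closed = true;
                    cleanable.clean(); // runs the action now, exactly once
                }
            }

            public static void main(String[] args) throws InterruptedException {
                new DbHandle(); // dropped without close(): should be flagged
                long deadline = System.currentTimeMillis() + 5000;
                while (leaksDetected == 0 && System.currentTimeMillis() < deadline) {
                    System.gc(); // a hint, hence the polling loop
                    Thread.sleep(50);
                }
                System.out.println("leaks detected: " + leaksDetected);
            }
        }
        ```

        It still depends on the GC actually running, of course, which is the commenter's complaint: a backstop that fires hours late is barely a backstop.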

        F**k, I hate this. I cannot pretend to be nice about it. An OS with a garbage collector that won't collect garbage, and I'm supposed to be nice about it. The CORE purpose of the garbage collector.

        I'd blitz it sometimes and curse about the blocking GCs as I hit 512MB on a device with GBs of free memory, but at least it *would* garbage collect, even if it left it to the last minute and slowed software to a crawl - better late than never.

        Now I suspect it leaves it too late.

    3. Kevin McMurtrie Silver badge

      Re: Garbage collection

      Some JVMs have an option to apply extreme/unsafe String optimizations as a workaround for horrible code. You might only have one String object in your example.

      A strong case for GC is where many tasks are using COW or temporary data. Say you have a source that produces an enormous immutable catalog, and it periodically returns a new version. Any number of tasks grab that catalog and use it for as long as they need catalog consistency. To save memory, new catalogs may largely refer to the previous catalog structures. That's fine because it's all immutable. It can be done with reference counting but the overhead might be worse than GC.
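      A toy sketch of that versioned-catalog pattern (the names and the delta-chain representation are invented for illustration; a real implementation would use persistent tree maps rather than a linear chain):

      ```java
      import java.util.HashMap;
      import java.util.Map;

      // Each new catalog version stores only its delta and refers back to the
      // previous version, so concurrent readers keep whatever version they
      // grabbed while sharing all unchanged structure. Safe because everything
      // is immutable; the GC reclaims old versions once no reader holds them.
      public final class Catalog {
          private final Catalog previous;          // shared, immutable older structure
          private final Map<String, String> delta; // entries changed in this version

          private Catalog(Catalog previous, Map<String, String> delta) {
              this.previous = previous;
              this.delta = delta;
          }

          public static Catalog empty() { return new Catalog(null, Map.of()); }

          // Returns a new version; the old one is untouched and stays valid.
          public Catalog with(String key, String value) {
              Map<String, String> d = new HashMap<>();
              d.put(key, value);
              return new Catalog(this, d);
          }

          // Newest version of the key wins; older versions are consulted last.
          public String lookup(String key) {
              for (Catalog c = this; c != null; c = c.previous) {
                  if (c.delta.containsKey(key)) return c.delta.get(key);
              }
              return null;
          }

          public static void main(String[] args) {
              Catalog v1 = empty().with("widget", "blue");
              Catalog v2 = v1.with("widget", "red"); // v1 is unchanged
              System.out.println(v1.lookup("widget") + " / " + v2.lookup("widget"));
          }
      }
      ```

      Reference counting could reclaim the shared tails too, as the comment says, but every reader would then pay for count updates on every grab and release, which is the overhead being weighed against GC.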

  3. karlkarl Silver badge

    It is strange that if you mention zero initializing memory in a C or C++ project, everyone says that is inefficient and would laugh.

    And yet when in a Rust project, it is absolutely fine?

    I am fairly convinced that it is the mentality of a lot of C and C++ developers that needs to change, not the language.

    For one, we should have had a competent address sanitizer many decades ago.

    We should not be using raw pointers as much as we are. Citing that std::unique_ptr<T> cannot have a "weak / observer" counterpart because then it wouldn't have zero overhead is not a valid argument.

    C++ developers should also not be using raw C code without adequate bindings. Raw C is like the *-sys libraries in Rust: not the end result; it should have fatter safety bindings around it.

    Heck, I am also fairly convinced that iterators, .at(i) and [i] should have machinery around them to track invalidation of the underlying data, at least in a debug build. C++ needs more safety, not more features. You don't even need to change the language. Something like this: https://github.com/osen/sr1

    1. ThomH

      I think you might have started from a false premise; I'm only an application-level C++ developer, which admittedly gives me more leeway for not laughing, but always ensuring everything is initialised at the point of declaration is nowadays considered proper form. That's partly why class-member initialisers were added in C++11 (i.e. you can put the proper initial value for any class member directly at the point of declaration, instead of just making a mental note to try to remember to do so in the constructor).

      Similarly, raw pointers are generally frowned upon, really being used only where you want to supply ephemeral access and want to retain the option of `nullptr`.

      `std::shared_ptr` is the correct storage for anything that you might want to distribute a `std::weak_ptr` to. Though, yes, Boost adds `intrusive_ptr` for those that are concerned about the location of the reference count (i.e. it puts it in the object, for good locality).

      1. karlkarl Silver badge

        It seems that raw *owning* pointers are finally frowned upon (i.e. avoiding new and delete). Unfortunately there are too many instances where a raw pointer still gets passed around a program (usually from a unique_ptr<T>) and we have absolutely no idea of, or guarantees about, its lifespan.

        This opens up opportunity for error. And as C++ developers, we make these errors.

        Plus we have repeat offenders like Microsoft (i.e. their MSSQL API) just chucking out a raw pointer, and screw any kind of lifespan guarantee when the underlying connection object might get ripped out from under us.

        1. bazza Silver badge

          When I write C++ code I make all class constructors private, and give it a static create method that returns a shared_ptr.

          So far so good.

    2. ibmalone

      "It is strange that if you mention zero initializing memory in a C or C++ project, everyone says that is inefficient and would laugh."

      Not quite the same thing as this:

      "Additionally, Rust requires all variables be initialised before use"

      Initialising memory does have an overhead, though whether that matters depends on what else you're doing. In C you have a choice between malloc and calloc when allocating memory, which is a clear distinction. In Rust you are generally going to be dealing with objects such as vectors, and can do things like with_capacity https://doc.rust-lang.org/std/vec/struct.Vec.html#method.with_capacity to allocate without initialisation. Rust will generally prevent you getting hold of uninitialised memory directly, but you can do it if you really want.

      Initialising variables though, in C:

      int a;

      Is absolutely fine, but try to use a before assigning to it and you're into undefined behaviour territory (unless it's global scope, in which case a is initialised to 0). Rust insists on

      let a = 0;

      Or whatever value. To avoid relying on type inference, and to be able to change "a" later too, you're going to need:

      let mut a: i32 = 0;

      I'm not 100% convinced about initialisation as preventing bugs; generally the real bug is "forgot to assign something relevant to this declared variable later", which this doesn't really solve. It may mitigate bugs, by making them more reproducible and preventing information from the application's memory leaking out though. (And possibly mask a subtler class of bug where you're unknowingly relying on the initialised value rather than the value you meant to assign later, but that's one for testing.)

    3. DrXym

      "I am fairly convinced that it is the mentality of a lot of C and C++ developers that needs to change, not the language."

      It's like the argument that obesity is personal responsibility and yet it's still a public health problem. You will never change developers in any substantive way when the language itself is filled with attractive nuisances that are so much easier to use than the supposedly safe way.

      As far as Rust and memory initialisation is concerned, the language basically requires you set a variable to a valid value before you can start reading it. So that could be zero, an enum value or whatever. That's no more than should be expected of any language. If you were absolutely convinced that initialisation wasn't efficient, you could put the chunk of code in an unsafe block. At least then if stuff crashes you can just grep for "unsafe" and chances are it's that unsafe block that is the cause.

    4. Dan 55 Silver badge

      It is strange that if you mention zero initializing memory in a C or C++ project, everyone says that is inefficient and would laugh.

      And yet when in a Rust project, it is absolutely fine?

      If you do gain anything from zero initializing before using a block of memory/structure to store data, it's probably because there's a problem elsewhere in the code manipulating the memory/structure.

      I don't mean initializing flags/numbers to zero is wrong, if zero is a value they should have. That's something else.

      1. Anonymous Coward
        Anonymous Coward

        "because there's a problem elsewhere in the code manipulating the memory/structure."

        Probably, but it's better to catch it using known initialization values than to let it create an RCE with unknown ones...

        1. Dan 55 Silver badge

          Re: "because there's a problem elsewhere in the code manipulating the memory/structure."

          Validation of external tainted data before writing it to the memory/structure is the key to avoiding buffer overflows.

  4. Mike 137 Silver badge

    More attention needed, maybe?

    "According to Google, memory-safety bugs represent 70 per cent of all high-severity security vulnerabilities found in the Android Open Source Project."

    When I was learning C, we were taught to build our own protection round hazardous functions and processes - indeed way back before the flood I co-authored an article on defeating the buffer overflow in C by use of a validation wrapper.

    1. Snake Silver badge

      Re: More attention

      That's not a feature - having to both remember and properly implement your own protections around vulnerable functions.

      That's a systemic flaw. Expecting humans to be perfect under every iteration of the function, rather than baking the protections in automatically, is a sure-fire way to get bugs.

      Which our reality has just proven to be true.

      1. ibmalone

        Re: More attention

        Much pain could probably have been avoided by providing such functions as an extension to the standard libraries. Sure, many of the functions are the way they are due to efficiency reasons, but think how many bugs could have been prevented by providing a standard asprintf for example.

  5. a_yank_lurker

    Size and Complexity

    From what I have seen over the years, OSes and applications become bulkier as they are asked to do more over time. C and C++ were fine languages for their eras, particularly when the overall code bases were smaller. As the code got bulkier it got more complex, as there are more possible interactions that might be difficult to find, trace, test, etc. This makes a language like C more dangerous, as it becomes more difficult for someone to properly track all the interactions and manage memory correctly, particularly for some obscure interactions. This is not a criticism of C or C++, but a criticism of language developers being very slow to realize that something like Rust or Go was needed a long time ago. Whether Rust has the answer or Go has a better solution, I am not expert enough to know. But this has been needed for a long time.

    1. DrXym

      Re: Size and Complexity

      Go is a kind of middle ground language. It is compiled like Rust, C & C++ but it has garbage collection and runtime overheads like a higher level language. So that might be a bottleneck for systems programming.

      If performance isn't a major consideration then it might be easier to write something in Go than in other languages. For example, if I had to make some kind of web server executable then I think I'd rather do it in Go than Java, because Go basically has everything I'd need in the standard library.

      On the flip side, if performance is an issue then Go could be a problem. For example, there is a popular time series database called InfluxDB which is on version 2.x and written in Go. But the next version of it will be written in Rust so it can use a more federated, scalable, performant architecture.

    2. StrangerHereMyself Silver badge

      Re: Size and Complexity

      A couple of months ago I looked into KolibriOS, an operating system written entirely in assembler, and it got me thinking about why OSes are so large, cumbersome and slow these days.

      If they can fit an entire operating system, including TCP/IP stack, USB HID stack, windowing system and utilities into 1.44MB (yes folks, that's megabytes), we must be doing something seriously wrong these days.

  6. red floyd

    But if they put Rust into Android, how will a process suspend itself for a certain time period?

    Because, as Neil Young reminded us all, Rust never sleeps!

    [I'll show myself to the door now...]

    1. yogidude

      I knew this comment would be here before I found it. Well, a strong hunch anyway. Upticked all the same.

    2. StrangerHereMyself Silver badge

      I see no reason for it. (Pink Floyd, Dark Side of the Moon background talk.)

      For Rust never sleeping, that is.

  7. Bossington
    Coat

    Please Sir! My Android has gone all rusty

    So old code is good and new code is bad?! I usually find it's the other way round

    1. matjaggard

      Re: Please Sir! My Android has gone all rusty

      I want to both upvote and downvote your post. I also have the same feeling and experience, and get frustrated with those who seem to like changing nothing - but I think it's fair to say that on a system as widely used as Android, most bugs in old code have probably been found already.

    2. DrXym

      Re: Please Sir! My Android has gone all rusty

      Not really what they're saying. They're saying old code has had the bugs fixed that Rust would have prevented out of the box so there is little incentive to rewrite it. But there is lots of incentive to write new code in Rust because it stops it from suffering the same buggy lifecycle as the old code.

      And generally speaking that's a good practice in code. Don't rewrite for the sake of it, but if you're writing new code then use the best tool for the job.

  8. DS999 Silver badge

    If uninitialized variables really are 3-5% of errors

    Why didn't they change their code to initialize all variables when they are declared long ago? It would be a simple one-time change to the code, plus some massaging when they add new code. I have to think a company with as many PhDs as Google could figure out a script to modify the source appropriately, or better yet modify the compiler they use to do it directly.

    Just set everything to 0 if another value isn't explicitly defined. Heck, that's already true for global variables in C; the only place you need to make changes is variables declared inside functions or code blocks, and change malloc() calls to calloc() with a #define in some generic Google header that's inserted in every project.
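    For contrast, Rust closes this hole at compile time: reading an uninitialized local simply will not build, and zeroed state has to be asked for explicitly. A minimal sketch (the `Config` type is invented for illustration):

    ```rust
    // Rust rejects reads of uninitialized locals at compile time, so the
    // "zero everything just in case" workaround isn't needed; when zeroed
    // state is what you want, you say so explicitly.
    #[derive(Default, Debug, PartialEq)]
    struct Config {
        retries: u32,   // Default gives 0
        verbose: bool,  // Default gives false
    }

    fn main() {
        let buf = vec![0u8; 16];     // zero-filled, the calloc equivalent
        let cfg = Config::default(); // every field zeroed explicitly
        assert!(buf.iter().all(|&b| b == 0));
        assert_eq!(cfg, Config { retries: 0, verbose: false });
        println!("{cfg:?}");
    }
    ```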

    1. _andrew

      Re: If uninitialized variables really are 3-5% of errors

      "Why didn't they change their code to initialize all variables when they are declared long ago?"

      Because to a first-order approximation, the code in question is not "their" code. Most of these systems are enormous agglomerations of third-party open source libraries. Probably 80 or 90%. Sure, if you cared to look into that library you might fix it, but then you have a change against the up-stream that you have to track, or try to persuade the maintainer, if there is one, to wake up and accept it. Or publish it as a fork. Or merge your change back in when upstream does release a new version that changes something else. Ob xkcd: https://xkcd.com/2347/

      Best not to look.

  9. Elledan

    Too bad

    Real shame that Rust increases the possibility of logic and type errors due to its use of weak typing, similar to Python's weak type system.

    Not to mention its more abstract syntax, a complex alternative to OOP that seems to throw all beginners for a loop, and its violation of basically every tenet of the Steelman Requirements that underlie truly safe languages like Ada.

    Basically, Rust will never be certified by the DoD, whereas Ada has been since the 1980s and C++ for over a decade now.

    1. Anonymous Coward
      Anonymous Coward

      Re: Too bad

      Interesting thought - why not Ada rather than Rust?

    2. Phil Lord

      Re: Too bad

      I can see that you have some problems with Rust, but it's a shame that you don't say what they are, rather than alluding to other languages. What part of the Steelman requirements is it that you think Rust needs?

      And the idea that Rust has a weak type system similar to Python sits poorly with me. Rust's type system is really nothing like Python. What is the weakness in Rust's type system that you dislike? Rather than just referring to Python?

      1. Anonymous Coward
        Anonymous Coward

        Re: Too bad

        Recent gripe is the lack of a do-while loop, I mean, what is that shit about?

        I looked up the history of it and there's some numpty claiming you don't see any do-while loops so it doesn't matter, but I have never, ever, ever seen a genuinely deliberate infinite loop and yet it has one of those.

        The syntax also has lots of stuff bolted on that really makes it unreadable - closures especially have this tendency. Combined with braces it makes for a Matrix effect, which while cool in cheesy cinema is a bit shit if you have to work with it.

        Bring back Ada I say! (Although that needs a do-while as well as I recall).

        1. Phil Lord

          Re: Too bad

          Rust is a little unusual in its loops, but `loop` and `break` generalize both `while` and `do while`.
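          A do-while is then just a `loop` with the test at the bottom; a minimal sketch:

          ```rust
          // The C idiom `do { body } while (cond);` written with Rust's
          // `loop`: the body runs at least once before the test.
          fn main() {
              let mut n = 1u32;
              let mut steps = 0;
              loop {
                  n *= 2;      // body runs first...
                  steps += 1;
                  if n >= 10 { // ...then the exit condition is checked
                      break;
                  }
              }
              assert_eq!((n, steps), (16, 4));
              println!("n = {n} after {steps} doublings");
          }
          ```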

          The syntax is occasionally overly complex - like most C-syntax languages the paren matching can get a bit much, but it's okay to my mind. Then again, I like Lisp, so perhaps I am odd here.

    3. Robert Grant

      Re: Too bad

      Can you be specific about what you mean by "weak typing"? Python is strongly and dynamically typed; I thought Rust was strongly and statically typed.

      1. DrXym

        Re: Too bad

        It is. The grandparent poster has been repeatedly told this and still hasn't learned. Therefore take anything else they say with a massive grain of salt.

    4. DrXym

      Re: Too bad

      Why do you keep spouting this nonsense? You repeatedly keep saying it despite being corrected.

      Rust has strong typing just like C and C++. You can either declare the type or you can let it be inferred. A bit like "auto" works in C++ but better since it doesn't have gotchas when dealing with references.

      STRONGLY TYPED.

      And the "complex alternative" to OOP is called composition and traits. I think most people who have ever dealt with interfaces in another language will figure it out.
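      Both points fit in a few lines: the types below are inferred but fully static, and a trait works much like an interface (the `Shape`/`Circle` names are invented for illustration):

      ```rust
      // A trait used the way an interface would be in Java or C++:
      // misusing `area` on a non-Shape is a compile error, not a
      // runtime surprise, and every inferred type is still static.
      trait Shape {
          fn area(&self) -> f64;
      }

      struct Circle { r: f64 }

      impl Shape for Circle {
          fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
      }

      fn main() {
          let c = Circle { r: 2.0 }; // inferred as Circle
          let a = c.area();          // inferred as f64
          assert!((a - 12.566370614359172).abs() < 1e-9);
          println!("area = {a}");
      }
      ```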

      And maybe you're super into your Ada but if you're going to talk about Rust, maybe learn something about it?

    5. StrangerHereMyself Silver badge

      Re: Too bad

      The U.S. DoD doesn't require Ada anymore and hasn't for almost 20 years. They kind of figured out that few developers want to use Ada and they couldn't afford to lose hip developers.
