After four years, Rust-based Redox OS is nearly self-hosting

The Redox OS, written in Rust and currently under development, is only "a few months of work away" from self-hosting, meaning that the Rustc compiler would run on Redox itself, according to its creator Jeremy Soller. Soller, who is also a principal engineer at the Linux hardware company System76, based in Denver, USA, says …

  1. katrinab Silver badge

    3 seconds boot time?

    On my i7 3700 with SATA SSD, Debian takes about 1 second to boot, FreeBSD takes about 10 seconds. I'm not in the habit of doing that very often, so I don't really care. FreeBSD is way faster when it does boot.

    Windows Server 2019 takes about 2 seconds to boot on the same hardware. I *am* in the habit of doing that very often. It is much better than Server 2008, which took about 30 minutes, but my main concern is that it is something I have to do at least once a month.

    1. james_smith

      Re: 3 seconds boot time?

      Debian on my i7 based Dell takes over two minutes to boot, although that seems to be down to some SystemD process hanging - and leaving nothing useful in the logs for me to diagnose the cause...

      1. Anonymous Coward
        Anonymous Coward

        Re: 3 seconds boot time?

        Pffft. I use busybox and I have a shell on the UART in 2200000 microseconds.

      2. TJ1
        Thumb Up

        Re: 3 seconds boot time?

        Many times slow boot (or more accurately, time to reach "") is due to waiting for the "" which will only be reached on most laptops once the/a WiFi network is found and connected.

        This is usually a side effect of configuring a Network Manager (WiFi) connection to be available to all system users which causes it to be brought up before desktop log-in is reached.

        However, for the more general case systemd provides useful tools for identifying where boot-time delays occurred:

        systemd-analyze critical-chain

        systemd-analyze blame

        By default these assume "--system" but with "--user" the user session start-up can be analysed separately.

        "critical-chain" is the most useful when one service is delaying others, such as when waiting for a network connection to become available. The numbers show the @when and the +duration of each unit. E.g. on my laptop it takes +5.649s for the WiFi connection to be established:

        @11.614s
        └─ @11.614s
          └─kerneloops.service @11.579s +34ms
            └─ @11.570s
              └─NetworkManager-wait-online.service @5.919s +5.649s
                └─NetworkManager.service @5.419s +403ms
                  └─dbus.service @5.158s
                    └─ @5.091s


        As always

        man systemd-analyze

        details many more useful reports and visualisations of the boot process and how to interpret the output.

        1. ibmalone Silver badge

          Re: 3 seconds boot time?

          And despite that my RHEL7 system waits three minutes on boot and eventually times out without mounting NFS despite figuring out the network-online dependency and having the network online, while systemd-free RHEL6 managed okay. *kicks box*

        2. Anonymous Coward
          Anonymous Coward

          Re: 3 seconds boot time?

          I ran systemd-analyze blame on the console and this popped out:

          > systemd-analyze blame

          Lennart Poettering


        3. bombastic bob Silver badge

          Re: 3 seconds boot time?

          NOT having SystemD would improve that. Devuan comes up really fast, booting into a GUI window manager (not gdm, I forget what it's called, it's lightweight). Even when I had it connecting wirelessly, it was still pretty fast. But ethernet is a bit faster, I think. That box has an SSD on it.

          Most of the boot time on my BSD boxen is due to all of the daemons I load. I never bother timing it and they all have spinny drives. I've never really minded, since they run for WEEKS (and months) without booting.

          if BOOT TIME is all you're concerned about, a dedicated RTOS is probably going to be the fastest. Whoopee.

        4. Siv

          Re: 3 seconds boot time?

          Excellent comment, I hadn't realised that you can run diagnostics like that. My system takes 21 seconds to get to the GUI, and I have discovered that 19.582 seconds of that is vboxdrv.service. That makes me wonder whether I should uninstall Virtual Box (I assume that is what vboxdrv.service belongs to) and maybe boot to the GUI in a few seconds. I use VMWare Player for running Windows 10 as I still support Windows 10 clients, but all I have in Virtual Box are old Windows 7 and Windows Vista VMs which I haven't needed to access for years, as no-one is running Vista any more, and I suspect that after January's end of life I will have no-one running Windows 7.


          1. CBM

            Re: 3 seconds boot time?

            I see vbox services on the critical path for graphical target too, but I realised that the time quoted seems to put it 10s after I had actually logged in, so it may not be on the critical path for actually using your PC.

      3. Andy Landy

        Re: 3 seconds boot time?

        may or may not be the issue you are seeing, but there was a recent discussion on the kernel mailing list about boot blocking due to lack of entropy

        lwn has a summary here:

    2. sabroni Silver badge

      Re: 3 seconds boot time?

      "a time he says is "not fast enough"."

      1. katrinab Silver badge

        Re: 3 seconds boot time?

        Obsession with boot times is what led to s****md.

        1. bombastic bob Silver badge

          Re: 3 seconds boot time?

          obsession with boot times might cripple it entirely, leading to NOTHING REAL GETTING DONE.

          FUNCTIONALITY FIRST - and THEN tweak it for performance!

    3. Anonymous Coward
      Anonymous Coward

      Re: 3 seconds boot time?

      Not counting the intolerable startup time of the server hardware?

      4-7 mins of POST/BIOS shenanigans before it even gets to the OS really puts the difference in boot times in perspective. At least it isn't as bad as my Cisco switch stack, which clocks over 22 min on a restart. We had a critical problem caused by the core switch firmware that required a restart and firmware update during work hours a few years back. Two core restarts, then the storage arrays, then the virtual servers, then getting the workloads unpaused. By the time the critical chain was back up more than 40 mins had passed. Redundant hardware is great, but downtime still happens in the real world, and in many cases is quite expensive.

      My next rack refresh is going to the company that gets me from power on to login prompt in 60 seconds. Why is fast boot a feature for embedded and desktop hardware and not a BIOS/UEFI option for every run-of-the-mill server? I'd like one that lets me choose whether the hardware does a full hardware diagnostic at startup or a fast boot. If I shut the machine down cleanly (like when the UPS is running low after a series of weekend power cuts to our site) I should be able to quick boot safely at startup.

      My site isn't running a power plant or the DOD. And frankly, the guys in the air traffic control system would probably love something that started in a reasonable amount of time AND did the full system checks, and would pay a princely sum for it.

      1. IGotOut Silver badge

        Re: 3 seconds boot time?

        Many switches I've come across bring everything up long before the management interfaces start to respond.

        And if you want faster boot times over Cisco, there's a huge Chinese company you may want to check out.

  2. james_smith

    I'm tempted to buy one of System 76's laptops just to give this a go, as long as I can have Linux installed and bootable on the optional additional drive. The whole Rust-based OS idea is great, and following Verity Stob's article on Rust (plus the discussion in the comments) it's definitely the language I'm going to look into rather than Google's Go and Fuchsia OS.

    1. NetBlackOps Bronze badge

      Rust was designed for safe(r) systems level programming, so that's a plus. I'll give it a try. Besides, I've been meaning to return to OS design.

    2. Hyper72

      Getting into Rust

      Rust takes a bit longer than most languages to fully understand, and code often takes longer to get to compile, especially in the beginning until you become more experienced. There will be some frustration at first, but push through it. In my experience, once the code compiles it tends to have fewer runtime errors, especially the hard-to-find memory-related issues you would normally see in C or C++ and in multi-threaded code.

      All the other goodness aside, I really love the Result and Option types; they're elegant and make sure return values are never forgotten - they have to be ignored explicitly, and that has to be coded from the outset instead of left till later (and forgotten). I've spent decades reviewing code full of forgotten return values...
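
      A minimal Rust sketch of that point (the function name is made up for illustration): `Result` is marked `#[must_use]`, so a silently dropped error draws a compiler warning, and ignoring one has to be written out explicitly.

      ```rust
      use std::num::ParseIntError;

      // Result is #[must_use]: callers must handle the error, or opt
      // out visibly with `let _ =`. Dropping it silently is a warning.
      fn parse_port(s: &str) -> Result<u16, ParseIntError> {
          s.parse::<u16>()
      }

      fn main() {
          // Ignoring the result must be spelled out:
          let _ = parse_port("not a port");

          // Handling it is a match (or the ? operator in a fallible fn):
          match parse_port("8080") {
              Ok(p) => println!("port {}", p),
              Err(e) => eprintln!("bad port: {}", e),
          }
      }
      ```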

    3. Anonymous Coward
      Anonymous Coward

      I'm tempted...

      Just to avoid the "everything is a file". "Everything is a link" seems much more logical and consistent.

      1. Robert Grant Silver badge

        Re: I'm tempted...

        It's also a lot more vague. "Everything has a location and a default handler" is pretty generic.

        1. Fruit and Nutcase Silver badge

          Re: I'm tempted...

          Is there a 404 error code?

        2. sabroni Silver badge

          Re: It's also a lot more vague. "Everything has a location and a default handler" is pretty generic.

          Rather than being explicitly wrong and saying everything's a file? The more generic the better, surely?

          1. Robert Grant Silver badge

            Re: It's also a lot more vague. "Everything has a location and a default handler" is pretty generic.

            Rather than being explicitly wrong and saying everything's a file?

            They could be totally correct and say everything has the empty set of properties, as a minimum.

            The balance between usefulness and correctness isn't always "as correct as possible".

      2. Anonymous Coward Silver badge

        Re: I'm tempted...

        It's not "everything is a link". It's more "everything has a URL". URL stands for Uniform Resource Locator, so it's saying that "everything is a resource" which doesn't sound too far out.

      3. bombastic bob Silver badge

        Re: I'm tempted...

        " "Everything is a link" seems much more logical and consistent."

        I agree 100% (and then some). Having coded for windows as well as for POSIX systems, I totally _LOVE_ the "everything is a file" principle.

        But like so many "smarter than thou" (millennial) types, he has to go and CHANGE things (like making every UI into 2D FLATTY when 3D Skeuomorphic was PERFECT, 'nuff on that). What he forgot is that Microshaft (with windows) _ALREADY_ does this, which means that something using a serial port vs a socket vs a pipe vs a console must CODE EACH CODE PATH DIFFERENTLY in the winders world. In the POSIX world, it's generally the SAME CODE for all of them [with a few exceptions while setting it up, as needed].

        I call the POSIX way "simpler" and MUCH easier to develop for. It's why (I believe) we're STILL using the UNIX model for so many "non windows" operating systems, for over 4 decades. It was SO well thought out.

        1. david 12

          Re: I'm tempted...

          Dunno. The file-open stuff I use is not entirely generic either on my Linux system or my OSx system or my Windows system, but it's still file-open. On all platforms, it's the configuration of the Serial-Port vs USB vs Socket that is different, the file-open code differs just in the name of the different 'files'.
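
          That uniformity can be sketched in Rust (the paths and function here are illustrative, not from any comment above): the same open/read sequence works whether the path names a regular file, a FIFO, or a device node.

          ```rust
          use std::fs::File;
          use std::io::{self, Read};

          // On POSIX the identical code path reads from a regular file,
          // a pipe, or a character device - only the pathname differs.
          fn read_up_to_64(path: &str) -> io::Result<Vec<u8>> {
              let mut f = File::open(path)?;
              let mut buf = vec![0u8; 64];
              let n = f.read(&mut buf)?;
              buf.truncate(n);
              Ok(buf)
          }

          fn main() -> io::Result<()> {
              // /etc/hostname is just an example; a FIFO or a serial
              // device node would go through the same two calls.
              let bytes = read_up_to_64("/etc/hostname")?;
              println!("read {} bytes", bytes.len());
              Ok(())
          }
          ```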

      4. Anonymous Coward
        Anonymous Coward

        Re: I'm tempted...

        As long as it behaves like a file, the url scheme is just semantics.

        dd if=/dev/disks/disk1s1 of=d1s1.bak


        dd if=disks:/1/s1/ of=d1s1.bak

        Who cares whether the prefix is /dev/disks/ or disks://, or whatever, as long as it works transparently?

  3. Marco van de Voort

    And now just put it next to Singularity ?

    Nice. Done. Great. And now just put it next to Singularity!

  4. Anonymous Coward
    Anonymous Coward

    Smells a bit like a future case of Second System Syndrome to me... Best of luck to them, though, and the hellish battles they must be enjoying with the borrow checker.

  5. Steve Davies 3 Silver badge

    Does it have an Orange coloured Crash Screen?

    I'll get me coat. It is Friday and well past beer time (Harvey's Best naturally)

    1. 2+2=5 Silver badge

      Re: Does it have an Orange coloured Crash Screen?

      BSOD - Brown Screen of Death?

  6. Elledan Silver badge

    Ada/SPARK is laughing at your 'safe programming language'.

    Show me an OS written in Ada/SPARK and I'll take it seriously. Not this warmed-over, basically-modern-C++-language with worse syntax and a much weaker type system courtesy of inferred typing instead of strong typing.

    I guess they expect that this will replace Linux any day now, right? I guess Linus et al better start rethinking their disdain of non-C languages lest they get left behind in the Rust-revolution.

    1. Anonymous Coward
      Anonymous Coward

      Show me an OS written in Ada/SPARK

      Why don't you show us one?

      1. Tomato42

        his point is that there isn't one...

        1. Anonymous Coward
          Anonymous Coward

          One might exist if he spent his time writing it instead of whining that other people aren't doing it for him.

    2. John Gamble

      Show me an OS written in Ada/SPARK and I'll take it seriously.

      I'm not certain where you get the idea that the lack of an OS written in a language is proof that an OS cannot be written in that language?

      Ada's and SPARK's syntax was terrible, so I'm not clear where you get "worse syntax". Ada was proof that an extremely strong type system can be a hindrance, not an advantage, but you also seem to think that Rust's type system is weaker than C's?

      1. Electronics'R'Us Silver badge


        Whatever we might not like about Ada, it can enforce designs that are properly verifiable (as is required in safety critical applications).

        As to a truly solid OS, look at the (expensive but does what it says on the tin) Green Hills Integrity OS.

        1. Anonymous Coward
          Anonymous Coward


          I have programmed on GHS Integrity653, but I'd go for Wind River VxWorks653 myself.

          Proven correct in so many scenarios: DO-178C (Aerospace), ISO 26262 (Automotive), IEC 61508 (Industrial), IEC 62304 (Medical), Nuclear, etc.

          7 (seven) missions to Mars

          A proper hypervisor (if required) for multiple real-time guest OS instances, which can be at differing DAL levels to allow consolidation of multiple discrete systems onto one physical target.

          Support for many libraries, such as Boost, plus languages such as C, C++, Ada, Rust and Python.

    3. Robert Grant Silver badge

      courtesy of inferred typing instead of strong typing

      What a weird false dichotomy.

    4. Anonymous Coward
      Anonymous Coward

      Maybe Ada/SPARK isn't well suited for writing a kernel.

      Rust was designed to be blazing fast from the start. It can compete with C++ and even C in many scenarios, while being much safer.

      Not only does Rust track references and prevent double-frees, use-after-frees etc; it also prevents data races, making multi-threading much easier and less error-prone. Bounds checks and integer overflow checks are inserted at runtime, unless the compiler can optimize them away.

      Just because types are inferred, the type system isn't weak. In fact, Rust's type system is much, much stronger than most. Type inference just means that you don't always have to write the type down; the compiler still checks it, and shows an error when you're assigning a &T to a &mut T, for example.

      Type inference is not the same as type coercion, which Rust never does implicitly.
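
      A small sketch of that inference-vs-coercion distinction (variable names are just illustrative): types are inferred but still fully checked, and conversions are always explicit.

      ```rust
      fn main() {
          let v = vec![1u8, 2, 3]; // inferred as Vec<u8> from the literal
          let first = &v[0];       // inferred as &u8, still type-checked

          // This would NOT compile - inference never coerces a shared
          // borrow (&u8) into a mutable one (&mut u8):
          // let m: &mut u8 = &v[0]; // error[E0308]: mismatched types

          // Numeric types never coerce implicitly either; the
          // conversion must be written out:
          let x: u16 = 300;
          let y = u8::try_from(x).unwrap_or(u8::MAX); // saturates to 255

          println!("{} {} {}", first, x, y); // prints "1 300 255"
      }
      ```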

      1. david 12

        Competing with C and C++ for speed isn't a high bar. C and C++ get reasonable speed only on the back of decades of compiler optimisation: C wasn't intrinsically designed to be a fast language (quite the reverse).

        Remember that when CS students used to tell you that C was 'fast' and 'close to the machine' they were comparing it to Lisp.

    5. Rich 2 Silver badge

      Weak types?

      I know very little about Rust but from what I have read, my understanding is that it has a strong type system. 'Weak' is not the same as 'inferred'. Modern C++ can also be written (indeed, if you follow the best practice rules, it SHOULD be written) using inferred types. The underlying type system is exactly the same 'strong' mechanism though.

    6. bombastic bob Silver badge

      'Rust Revolution'

      "lest they get left behind in the Rust-revolution."

      just like C and Java got left behind in the "C-pound" revolution, yeah. Heh. Last I looked, C++ was neck and neck with Python, both around twice the popularity of C-pound, "after all these years" and the ZILLIONS of dollars and developer time being thrown at it.

      I've looked at rust a little bit. I don't see it as being all that "superior" to C language coding (and it's probably NOT, in my opinion). It might be "safer" in SOME cases, for poorly managed/written code, but I don't see it being 'fit for purpose' inside of a kernel.

      Just reading about how its memory allocation works makes me think of the worst Java bloatware (say IntelliJ or the Android build process in general) that I've ever seen. ANY form of garbage collection does NOT belong in the internals of an OS's memory, and non-relocatable memory blocks don't, either. And 'smart pointers' could easily be implemented with C or C++ and reference counting, kinda like COM in Windows. Nothing special here. I've been doing things _like_ that for DECADES (like when COM aka OLE 2 was invented back in the 90's).

      I can't imagine allocating buffers for the network stack using any method OTHER than what is done inside of Linux or FreeBSD's kernel [they are very similar]. Zero copy buffers also. So in short you'll need "raw pointers" for those which basically GOES AROUND the definition of "safety" for pointers...

      And there goes your entire reason for using Rust in the first place, other than "for the lulz".

      Having done a lot inside of kernels (for Linux _and_ for FreeBSD, as well as some inside of Windows) I'd just like to say I prefer using a language that was originally designed for EXACTLY that purpose (note history of C language and UNIX), than trying to make a high level language (one NOT designed for kernel processing) do the same job, better.

      Rust sounds like it might be a good choice for web services running in userland. I think it should stay there.

  7. David Given

    "This leads to absurd situations like the hard disk containing the root filesystem / contains a folder named dev with device files including sda which contains the root filesystem."

    But that's not how it works?

    "In contrast to "Everything is a file", Redox does not enforce a common tree node for all kinds of resources. Instead resources are distinguished by protocol."

    But protocols are also hierarchical, and you need a namespace tree, i.e. a file system, for managing them. Otherwise you end up with a situation where a single protocol contains unrelated resources, like file: containing /usr and /home; or multiple instances of a protocol managing different resources with no indication that they're the same kind of protocol, like AmigaOS-style volumes, which are just as awkward in a different way.

    I mean, I only know the two paragraphs of soundbite quoted, but they do raise questions.

    1. Brewster's Angle Grinder Silver badge

      "...a situation where a single protocol contains unrelated resources, like file: containing /usr and /home;..."

      As I read it (disclaimer: I've not used the OS) that's exactly what he means. You seem determined to view it as a hierarchy. But if I give you the Cartesian coordinates (3,4) would you insist they're hierarchical and that the y value is subservient to the x value? Likewise, (I'm guessing) the protocol is part of a co-equal tuple describing a resource - i.e. behaving exactly like an url.

    2. Anonymous Coward
      Anonymous Coward

        It could be done as files without the weird ownership model that unix has.

        etc, with /files-and-that containing what on linux would be /, but dev split into /disks-n-stuff, networks-or-whatever and so on.

  8. Doctor Syntax Silver badge

    "This leads to absurd situations like the hard disk containing the root filesystem / contains a folder named dev with device files including sda which contains the root filesystem."

    I suppose one way round that would be to mount dev on /..

    Back before the internet became commonplace, the Newcastle Connection had the network at /..

    1. Peter Gathercole Silver badge

      Newcastle Connection, or Unix United

      Well, strictly speaking /../ was a super root with the other systems on the network seen as directories, so you would access files on system-B as /../system-B/usr/something, with the full file tree being available. You did need a common user and group space, of course, so the permissions would work properly.

      I only saw the Newcastle Connection running using a Cambridge Ring in Claremont Tower, so I do not know how well it coped with multiple network types, but I think there was a serial network driver as well.

      The interesting thing about the Newcastle Connection was that it was a library-only implementation, meaning that you did not have to make kernel changes to implement it, as long as you had a suitable network device in the kernel. You just needed to re-define the file handling library calls like open, close et al., and ensure that you linked against the modified libraries in order to use it. If you had access to the source code of the libraries and commands linked against it (or even the pre-linked .o files and suitable libraries) you could add the facility to pretty much any implementation. I saw it on Bell Labs Edition 7, BSD, Xenix/11 (Microsoft's port of Edition 7 for the PDP11), UniPlus, and I think Durham had it running against Ultrix/11.

      I needed some software for my PDP11, and I was helped by Dr. Lindsey Marshall, who copied it from one system on the Cambridge Ring to a tape drive on another, using a command like:

      tar -cvf /../system-B/dev/rmt0 *

      Because it honoured all device semantics, you could access devices on another system as devices, something that NFS took years to manage. Of course, AT&T's RFS, available in R&D Unix, could also do the same, something I had fun playing around with later when I was working at AT&T.

      The Newcastle Connection was a very elegant system, but when I saw it, it was pretty much restricted to UNIX. There was a project at Newcastle Polytechnic to write some client support for CP/M, using a serial device as the network, but I don't think that project was ever completed, although I was asked to add a serial driver to the PDP11 I looked after.

      1. caleb racey

        Re: Newcastle Connection, or Unix United

        Sounds interesting, if you don't already know about it I'm sure Newcastle's computing history group would love to hear from you. Details at

  9. Anonymous Coward
    Anonymous Coward

    He's completely missed the point of everything being a file in unix

    I.e. that:

    A) the various filesystem objects and devices can all be accessed using standard tools in the shell

    B) in code, almost everything can be accessed with an integer file descriptor and be multiplexed with other descriptors pointing at completely different objects.

    It's a 1970s abstraction and it's stood the test of time.

    Ok, not everything can be accessed as a file, eg network interfaces, but sockets are still integer descriptors.

    Contrast this with the dog's dinner that is Windows handles - a different one for every subsystem and not interchangeable, so you have the absurd situation of not being able to multiplex a socket with stdin, and so having to multithread even the most trivial applications.

    The fact that he doesn't get this doesn't fill me with confidence about his OS internals or usability.

    1. Anonymous Coward
      Anonymous Coward

      Re: He's completely missed the point of everything being a file in unix

      And what OS have you written from scratch, that your opinion is more valid than that of someone who actually has?

      It's a serious question, to make sure it's not just some more peanut gallery commenting.

      1. Claverhouse Silver badge

        Re: He's completely missed the point of everything being a file in unix

        I'm not criticizing any OS, but one scarcely needs to have written one to critique the design and functions.

        Any more than one needs to have built a car by hand to critique the design flaws and handling of a bought car.

      2. Tomato42

        Re: He's completely missed the point of everything being a file in unix

        how many songs have you written to criticise the music you don't like!?

      3. Aitor 1

        Re: He's completely missed the point of everything being a file in unix

        Writing an OS from scratch only helps you understand how hard it is... it doesn't make the OS great or anything.

        So the question should be "what great OS have you written?"

        1. bombastic bob Silver badge

          Re: He's completely missed the point of everything being a file in unix

          working on kernel code in multiple OSs can give you the same *kind* of insight as someone who wrote one from scratch. You get to see how different architectures work, how easy they are to maintain, etc..

          I've done that, by the way. Already wrote what I think in another thread.

          A quick summary: The "safety" aspect of Rust is essentially UN-DONE by using 'raw pointers' for things that MUST use 'raw pointers' for performance reasons. This ESPECIALLY includes the network stack and zero copy buffers... and when you use "raw" pointers, you essentially bypass the "safety" part. So there ya have it. No real advantage, plenty of DISadvantages, using Rust for a kernel.

      4. Anonymous Coward
        Anonymous Coward

        Re: He's completely missed the point of everything being a file in unix

        I've used, admined and programmed on most versions of unix, I've used and coded on Tandem NonStop OS and some versions of Windows and been a user of VMS and Mac over the last 30 years. I think I have a vague idea of what makes a good OS interface.

        If he thinks that "everything is a file" is a bad idea then he clearly doesn't understand the idea, because compared to the way other OSs present their subsystems it's a work of absolute genius.

    2. Frumious Bandersnatch

      Re: He's completely missed the point of everything being a file in unix

      The fact that he doesn't get it doesn't fill me with confidence about his OS internals or usability

      A bit early for the "X is good. You don't even understand X" line, I think.

      I suspect that the actual situation is a bit more nuanced.

    3. Anonymous Coward
      Anonymous Coward

      Re: He's completely missed the point of everything being a file in unix

      Windows has handles for things that in *nix simply do not exist (as they are implemented outside the OS) - can you treat a window or an edit box like a file in Linux?

      On the other hand, many objects in Windows can be used as files as well. Moreover, I prefer to get/set data using a structured API with clear error information rather than a generic one returning generic errors. Unix in 1970 took shortcuts to simplify development; you may think it was great design, but it was just lack of time and resources.

      1. Anonymous Coward
        Anonymous Coward

        Re: He's completely missed the point of everything being a file in unix

        Compromises. You cannot have a tool that does it all. The "this is better" crowd always forgets that you can have, for example, a large hammer or a sharp knife. Why consider one better/worse than the other?

        Hence why arguing about Linux and "which is best" is often pointless. Sometimes you need different tools. Kudos to Rust for trying to fill that gap, while also not just being Windows.

      2. Wicked Witch

        Re: He's completely missed the point of everything being a file in unix

        Plan 9 fixed that, but unfortunately it never really caught on.

      3. bombastic bob Silver badge

        Re: He's completely missed the point of everything being a file in unix

        "can you treat a window or an edit box like a file in Linux?"

        a file would not be fit-for-purpose for a UI element, just like it's not fit-for-purpose for a single keystroke. however, the connection to the X server is DEFINITELY a file underneath the hood, either a pipe or a socket (really in the POSIX world it could be a serial port and the library would still work).

        Yeah I've done low-level X coding. writing my own toolkit even. But my project doesn't "make ink" in El Reg I guess because it's not "sexy" enough, doesn't use "new language of the month", isn't controversial, etc.. [and I keep having to adapt to the moving targets caused by OTHER toolkit/WM makers, who can't just keep system settings as it was, for example, and must change and change again to adopt their OWN way of telling you what colors to use...]

        window identifiers are like handles. that's just for events, though, to designate 'who gets it'. Processing events, drawing, etc. is up to your code to perform. And it's VERY low level.

      4. Anonymous Coward
        Anonymous Coward

        Re: He's completely missed the point of everything being a file in unix

        "can you treat a window or an edit box like a file in Linux?"

        The edit box is not part of the OS, so why would you be able to? Unlike Windows, the GUI is not part of the kernel; it's in the X server. But you CAN get the socket to the X server and multiplex on it waiting for events if you need to do multiplexing (or you can just use some higher level API call, eg XNextEvent()).

        In Windows it would be a dogs dinner of various handles and lots of threads just to complicate things even more. No sane OS uses threads as a necessary requirement for multiplexing apart from Windows.

        Nuff said.

    4. Doctor Syntax Silver badge

      Re: He's completely missed the point of everything being a file in unix

      AIUI he hasn't. He's just taken "everything is a file" and changed it to "everything is a URL". It extends "everything" to include stuff not on the physical computer. Whether or not this is a good idea is debatable. It assumes that the computer is on-line and if it isn't there must be a lot of URL equivalents of "file not found".

    5. Aloso

      Re: He's completely missed the point of everything being a file in unix

      Well, Redox uses a different model: not everything is a file, but everything is a URL. This is neither like Unix nor like Windows, and I guess it has many of the same advantages as "everything is a file". We shouldn't judge a system we don't fully understand.

    6. richardcox13

      Re: He's completely missed the point of everything being a file in unix

      > Interchangeable so you have the absurd situation of not being able to multiplex a socket with stdin so having to

      As they're all handles in Windows, this is exactly what you can do. When you need it, the kernel will even dispatch IO completions across a size limited thread pool for you. (Not possible with the BSD like helper library, but any useful program moves past that pretty quickly.)

      1. bombastic bob Silver badge

        Re: He's completely missed the point of everything being a file in unix

        I have a program that uses a serial port for basic communications, designed for use with things like Arduino. It runs on windows as well as POSIX systems. Serial I/O on windows is unnecessarily complicated and requires using threads to manage it. Serial I/O on POSIX systems is relatively consistent and does what you expect: you send something and wait for a reply, timing out if you don't get it. I've done a lot of OTHER things that are very similar. In windows, the "un-abstracted" way in which you perform I/O is *PATHETIC*.

        using a URL, and depending on the protocol, has the potential of requiring "different methods" downstream. This is where a model like this falls apart. You should not have to know about communication protocols to/from a device (example, is it USB or built-in hardware) for communicating to it, unless it being a USB device is particularly important (for example), and that's where /dev entries and ioctl operations come in in the POSIX world...

  10. YetAnotherJoeBlow Bronze badge

    redox

    I'm probably missing something, but the last time I looked I found C code, I think in relibc and one other spot I can't remember. I also noticed that when it's time to do "the fun stuff" everything is prefixed with unsafe. So in the end it is still unsafe, correct? Like I said though, I'm probably missing something.

    1. Tomato42

      Re: redox

      well, it does make 3rd party review easier, as then you need to look at only the parts that are marked unsafe

      but yes, any kind of hardware access needs unsafe keyword
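      For anyone unfamiliar with the keyword, here is a minimal sketch of what that looks like. The "register" is just a local variable standing in for a real MMIO address, since the compiler cannot prove anything about a raw pointer the platform hands you:

```rust
// Raw-pointer access is only permitted inside `unsafe` blocks;
// `write_volatile`/`read_volatile` also stop the optimiser from
// eliding or reordering the accesses, as a driver would need.
fn write_register(reg: *mut u32, value: u32) {
    unsafe { reg.write_volatile(value) }
}

fn read_register(reg: *const u32) -> u32 {
    unsafe { reg.read_volatile() }
}

fn main() {
    // Stand-in for a memory-mapped device register.
    let mut fake_mmio: u32 = 0;
    let reg = &mut fake_mmio as *mut u32;
    write_register(reg, 0xDEAD_BEEF);
    println!("register = {:#010x}", read_register(reg));
}
```

      This is the review benefit mentioned above: an auditor only has to scrutinise the two `unsafe` blocks, while everything else is checked by the compiler.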

      1. Anonymous Coward
        Anonymous Coward

        Re: redox

        Oh, I remember coding in a language with safe/unsafe options back in the 1970s ( actually I think it was user/system rather than safe/unsafe, but the same effect ). Originally intended for use in chemical plants, IIRC, so it was quite important to know which bits were 'unsafe'.

    2. DrXym Silver badge

      Re: redox

      relibc is a C POSIX library. It's there so Redox can run ports of software written in other languages like C. Other Rust software wouldn't use it directly, though I suppose it could come in indirectly if there was a crate that depended on a C library which depended on relibc.

      I expect the safe/unsafe situation in the rest of Redox largely depends on context - something that is interacting with hardware, low-level structures or gnarly scheduling stuff might be unsafe, but the remainder, the majority, is going to be safe. It would be interesting to count the relative amounts of safe & unsafe code, but let's remember that all C code is unsafe.

  11. David 132 Silver badge

    "Buggy driver"?

    BSD is preferred but "a single buggy driver can crash the system".

    So just don't let the Amish use it then. Problem solved.

    1. Evil Harry
      Thumb Up

      Re: "Buggy driver"?

      "So just don't let the Amish use it then. Problem solved."


  12. Anonymous Coward
    Anonymous Coward

    Get over your Filesystem operating systems

    Was the file system an abstraction too far? I think so. The drawers in a filing cabinet might be a natural way for humans to file physical information, but this doesn't lend itself well to the computing world. Spinning rust never knew about "directories". About time this filesystem thing was done away with. It's all about data and code; the two are separate, not one. Protocols are an interesting way of looking at data that is being shifted about. Time to stop overthinking some things.

    Rust still isn't there - it's too far away from the silicon IMHO.

    1. pogul

      Re: Get over your Filesystem operating systems

      "Everything is a file" *might possibly* be an abstraction too far, though I'm a Unix fan in general.

      The filesystem though... are you serious? It's possibly the simplest abstraction I can imagine for organising data. You have leaf nodes (files) and ways of grouping files (directories). A simple graph.
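      To show just how small that abstraction is, here is a toy Rust rendering of it (the type and names are purely illustrative):

```rust
// The "simple graph": files are leaves, directories are grouping nodes.
enum Node {
    File(&'static str),           // leaf node holding data
    Dir(&'static str, Vec<Node>), // grouping node holding children
}

// Walking the whole structure is a four-line recursion.
fn count_files(node: &Node) -> usize {
    match node {
        Node::File(_) => 1,
        Node::Dir(_, children) => children.iter().map(count_files).sum(),
    }
}

fn main() {
    let home = Node::Dir(
        "home",
        vec![
            Node::File("notes.txt"),
            Node::Dir("src", vec![Node::File("main.rs"), Node::File("lib.rs")]),
        ],
    );
    println!("{} files", count_files(&home)); // 3 files
}
```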

      I had a boss who thought he was very forward thinking and clever by suggesting the department wiki should be all about search - none of this old fashioned file/folder business. So what happens? Chaos - stuff went in, but it was basically a black hole.

      When Amazon came up with the genius of S3 (everything goes in a bucket), wasn't one of the first things that happened that users were forced to create a fake file/folder system over the top?

      If anything, we need a richer abstraction - versioning and tagging on the standard system calls perhaps.

      1. lee7

        Re: Get over your Filesystem operating systems

        Everything is either a leaf or a node. I have a bit of a mania for this. My home dir has directories only; no files (other than hidden ones). I try to adhere to the idea that a directory should generally contain either files or other directories - it makes for better organisation, to my mind. conf.d directories are great, but should include the main config file, rather than have that outside. I believe that better fs organisation would make for an easier-to-navigate system. Linux /etc should contain only directories - one for each subsystem.

        Whenever I see ~ full of random files, I sigh. But then, I don't like the desktop analogy, either. Don't get me started on icons all over the screen. I guess I must be a bit of an old curmudgeon.

        1. Anonymous Coward

          Re: Get over your Filesystem operating systems

          For a moment I thought you meant "my pc has no files, it's all Directories", then I re-read it. XD

        2. Jamie Jones Silver badge

          Re: Get over your Filesystem operating systems

          I tend to prefer to organise things that way too, though I'm not anal about it in situations where mixing things is more suitable.

          However, somewhat related, and I bet you agree: BRING BACK "DOTDIR"

          Remember the days when you could define the environment variable "DOTDIR", and all programs that created crud would create it within $DOTDIR instead of $HOME/.xxx?

          I hate all the .shite that accumulates in the top-level directory... Why did DOTDIR go out of fashion? I'm guessing it was mainly used for shared accounts, so became less relevant over time.

          1. TimMaher Silver badge

            Re: Get over your Filesystem operating systems

            Ah yes @JJ.

            Working on DX10 for a while, I used to structure directory trees in Morse code, as you could have empty names (code only, not shell or UI. There was no UI.).

            So you could create a structure called “...dashdashdash...” and it was fine.

            Worked as well as 505.

            Happy days... Sigh.

      2. Trixr

        Re: Get over your Filesystem operating systems

        So if you created your OS structure like LDAP (including its API), you satisfy exactly that kind of "richer abstraction". Your hierarchy, a well-defined location for everything, and, with the objects and their classes, various attributes that define the methods used for accessing them.

        LDAP isn't a file system, obviously, it's a protocol. I can certainly imagine a low level OS protocol that acts as the "directory" for all the other OS protocols. I mean, they specifically reference EHCI for USB devices.

        And for everyone bleating on that they're saying this OS will be "fileless", I don't know where they're getting that from. It wasn't implied. It said that file system methods would not be used to access system resources unless they are of type "file".

    2. Pascal Monett Silver badge

      Re: Get over your Filesystem operating systems

      So, what OS have you written that doesn't need files? I'm really curious to take a look.

      Before going back to something that actually works, that is.

      1. Steve Graham

        Re: Get over your Filesystem operating systems

        "So, what OS have you written that doesn't need files?"

        Both iOS and Android attempted this. They pretended that there were no files, just apps and their data. Of course, files still existed in reality, and over time have made their presence felt.

    3. martinusher Silver badge

      Re: Get over your Filesystem operating systems

      The quotes in this article don't make a whole lot of sense. First of all is something I really don't like, and that's the naming of directories as 'folders' (complete with cutesy icons of filing cabinets on a GUI desktop). They're not 'folders', they're directories, and furthermore, a directory is just a file with file information in it. Then there's the "what's the purpose of /dev/null?". It's one of those things where, if you have to ask, you probably won't understand the answer. The purpose, like everything in UNIX, is uniformity and regularity.

      UNIX is a relatively old operating system, so there will be 50 years of evolution and entropy to manage. The character/block device construct is obvious in 1970 but leads to contrivance and workarounds in 2020. The network abstraction makes a lot of sense, but at the same time the drivers are still constrained to the device driver model (though the resource locators work exactly as they were originally devised for filesystems, both local and remote). The problem I have with network protocols is that they are constantly abused -- people don't really appreciate the difference between stream- and datagram-oriented protocols, so they build structures of immense inefficiency and poor behaviour. (Putting DIY framing on top of a stream protocol is a pet peeve of mine; the fact that the entire Webverse is built on this is irrelevant, it just shows how most programmers have absolutely no idea what's going on under the hood.)

      (Personally, I blame companies like Apple and Microsoft for this mess. They flooded the market with 1960s architecture minicomputers in the 1980s and their 'operating systems' never really kept up with the capabilities of the processors as they developed.)

      1. Jamie Jones Silver badge

        Re: Get over your Filesystem operating systems

        "The character/block device construct is obvious in 1970 but leads to contrivance and workarounds in 2020."

        FreeBSD got rid of block devices about 20 years ago.

      2. Anonymous Coward
        Anonymous Coward

        Re: Get over your Filesystem operating systems

        "putting DiY framing on top of a stream protocol is a pet peeve of mine, the fact that the entire Webverse is built on this is irrelevant, it just shows how most programmers have absolutely no idea what's going on under the hood"

        A touch of arrogance, perhaps? I imagine most network programmers such as myself have a very good idea what's going on underneath, and we aren't really concerned about the stream side of TCP - what we're interested in is the virtual connection side and guaranteed in-order RX of data. Neither of which you get with UDP. But hey, if you want to reinvent the wheel and do all that with UDP just so your code can be "pure" with 1 frame = 1 UDP packet, then knock yourself out.

    4. Charles 9 Silver badge

      Re: Get over your Filesystem operating systems

      "It's all about data and code, the two are separate."

      Oh? What about a compiler, especially a JIT compiler, which CAN'T work in a strict Harvard architecture?

      1. Anonymous Coward
        Anonymous Coward

        Re: Get over your Filesystem operating systems

        I only see benefits there. Just in time is way too late, unless you like lesser languages.

    5. Doctor Syntax Silver badge

      Re: Get over your Filesystem operating systems

      The whole of S/W development (and that includes micro-code in processors) is about abstracting the real bits in the computer into some form that's easier for the user to grasp. If that were not the case we'd still be writing programs in machine code.

      1. Anonymous Coward
        Anonymous Coward

        Re: Get over your Filesystem operating systems

        I beg to differ. The whole point of S/W is to make a profit. If you could visualize software like bridges, then the engineering would be rated utter shite at best.

        What next, filesystems that are more complicated than databases and serve you up snapshots of things of long ago? I could not think of a more flimsy foundation upon which to serve up code for execution.

  13. lee7

    what problem is it solving?

    I don't recall the last time I saw a system crash in Linux, nor why having /dev (with a device for the root partition) mounted under / is an absurdity. Most recent major problems seem to be in the cpu, rather than the os. As for complaining about "everything is a file", it's as silly as the systemd arrogance of "lowercase command names are old-fashioned". Oh, I hate NetworkManager.

    1. detuur

      Re: what problem is it solving?

      Redox seems to be a solution in search of a problem.

      As neat a project as it is, I really don't see a purpose for it. It's a playground for its developers to have fun writing an OS. To call it a toy OS would be unfair considering the amount of work gone into it, but I struggle to give it any more purpose than TempleOS (at least that one had a self-hosted compiler).

    2. werdsmith Silver badge

      Re: what problem is it solving?

      I'm always given hope whenever I see that there is a possible alternative to linux on the horizon.

      Linux was once hailed as the great hope but has unfortunately now sunk into a pit of shitness that nothing ever gets out of.

    3. Aloso

      Re: what problem is it solving?

      My Linux machine at home crashes from time to time, but I'm too lazy to invest time to find the issue.

      Linux contains over 27 million lines of code, excluding comments. The presence of countless bugs is inevitable in such a codebase, even with experienced programmers and careful code reviews.

      On average, software has between 15 and 50 bugs per 1000 lines of code. Just imagine how many bugs Linux probably has. This is the reason why microkernels exist (and languages that try to prevent the majority of bugs related to memory unsafety, which, BTW, are often exploitable).
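      The arithmetic is simple enough to spell out, taking the comment's figures at face value (real defect densities vary widely):

```rust
// Back-of-the-envelope: defects ≈ lines × (defects per 1000 lines) / 1000.
fn estimated_bugs(lines: u64, defects_per_kloc: u64) -> u64 {
    lines / 1000 * defects_per_kloc
}

fn main() {
    let linux_lines = 27_000_000; // kernel size quoted above
    println!("low estimate:  {}", estimated_bugs(linux_lines, 15)); // 405000
    println!("high estimate: {}", estimated_bugs(linux_lines, 50)); // 1350000
}
```

      Even the optimistic end is hundreds of thousands of latent defects, which is the argument for keeping the trusted codebase of a microkernel small.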

  14. Blackjack Silver badge

    In twenty years...

    Redox OS will have legacy issues too, if it still exists of course.

    There is no OS that has lasted more than a decade out of beta and doesn't have legacy issues.

    1. james_smith

      Re: In twenty years...

      A microkernel tries to sidestep the legacy driver issue by having them in userspace, and therefore much more isolated from the internal structures of the kernel itself. One problem with monolithic kernels is that subsystems and drivers get deeply enmeshed with each other, leading to problems with refactoring or replacing conceptually self-contained bits of code.

      1. Charles 9 Silver badge

        Re: In twenty years...

        How does a microkernel balance the needs of control with performance, especially for throughput-sensitive functions like high-speed, low-latency networking?

        1. Doctor Syntax Silver badge

          Re: In twenty years...

          It depends on what balance you want to achieve. Traditionally performance requirements have dominated. The result seems to have been at the expense of security. Is that the right balance now? As H/W gets faster should the balance change? Could we reduce the performance penalty, at least in user-facing systems, by cutting down on UI bloat?

          1. Charles 9 Silver badge

            Re: In twenty years...

            "As H/W gets faster should the balance change?"

            No, because increased speed forces a focus on performance. Physics gets in the way: a little problem called the speed of electricity (measured as a fraction of c). To put things in perspective, a photon, given only 1 nanosecond, can travel at most 30cm. Electricity's limit is going to be somewhat lower than that, and at this point we're still only talking theoretically. And BTW, the performance demands I'm talking about have nothing to do with UI. These are to-the-metal, pure hardware issues here. How else can you keep a 40Gbps (or faster) link fed without choking somewhere from sheer physics?
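            The 30cm figure checks out (in vacuum; signals in copper or fibre are slower still, roughly 0.6-0.7c):

```rust
// distance = c × t, expressed in centimetres.
const C_M_PER_S: f64 = 299_792_458.0; // speed of light in vacuum

fn light_distance_cm(seconds: f64) -> f64 {
    C_M_PER_S * seconds * 100.0
}

fn main() {
    // One nanosecond of travel: just under 30 cm.
    println!("1 ns: {:.2} cm", light_distance_cm(1e-9));
}
```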

  15. Lee D Silver badge

    1) My ZX Spectrum boots in under a second without anything approaching even 1% of NVMe speeds. Boot time is not an indicator of performance. I used to be able to boot old PCs just as fast from hard disk (if you ignore the BIOS memory check), it doesn't mean anything. And even then, it "needs" NVMe to boot that fast in the first place.

    2) Microkernels suffer from poor performance because of the sharing of data and simultaneous data access between all the different subsystems. Memory gets very contended and the system just can't perform as well. This is quite literally the MINIX vs Linux argument all over again. So it might boot on a sixpence, but it could be dog-slow after that.

    3) Kernels, and drivers (which are literally stated as one of the largest areas of Linux) require direct memory interaction to function properly. This is an unsafe operation. A bug in a driver or kernel at the wrong place means crashes. BSD is god-knows-how-old and you're still complaining about crashes in its drivers. How long before a Rust OS that has to separate all the safe vs unsafe parts out and put all the *same* kind of checks around the unsafe parts to make them "logically" safe (but not guaranteed safe to operate on) will be able to compete with hardware support, speed and everything else?

    4) The "as a file" scheme is not a problem. Why is having a device show inside the device tree which is hosted on the device a problem? Of course it has to be... it's a device itself, therefore it's in the device tree. The device tree, however, is not located inside the device (the root / is not on your hard drive... it's a virtual root in RAM. Of course you can overlay mount over it, almost everyone does, but then /dev/ is a virtual device that's not on your hard drive, and that's what holds the actual drive). There is nothing stopping you having a virtual root, with /storage which contains all the drives and /devices that contains all your devices and thus eliminate any nesting in seconds. You don't because... why would you? It causes no problems and in some circumstances can come in real handy.

    It's another ReactOS / MINIX, from what I can see. And self-hosting is a milestone, sure, but it shouldn't be that hard. If you have an OS with drivers, GUI, booting, etc. then self-hosting is really not very much a step at all. Horrible to write the initial bootstrap compiler but there are projects for that. First, you write a micro compiler, simple enough to do by hand and very feature-limited on purpose, then you write a mini compiler in the micro language, then you throw something like tcc at the mini compiler, which gives you a full compiler but without all the bells and whistles, from which you can then make anything else (including gcc).

    If you have a full compiler project already written, in a C-like language, that you have control over, it's just bootstrapping to greater and greater functionality (and likely memory safety! The first micro compiler won't be very memory safe at all but it won't matter because you'll be writing precisely one program in it that you want to interact at the lowest levels). I think it's an actual problem that it's taken that long to bootstrap a working Rust compiler.

    You've gained nothing security-wise, for unspecified performance, on a niche system, when you could have just Rust-ised the entire userland of one of the common OS and then started Rust-ising the driver layer of said OS. You'll get to the same cliff-edge where you lose all memory safety anyway, but you're not re-inventing the wheel and could smarten up the kernel/user divide of working OS along the way.

    1. Do Not Fold Spindle Mutilate

      Do you have links for further reading on your ideas?

      Your comments interest me and I would like to learn more about them. If you could, do you have links to articles which expand on your comments? Specifically (1) microkernel problems, e.g. MINIX vs Linux, and (2) drivers needing direct memory interaction.

      I took an operating system course many decades ago, when the machine was an IBM 370 that was just being replaced by an Amdahl. I am now a retired former DBA.

      Thank you.

    2. Doctor Syntax Silver badge

      Whether micro-kernels are the way to achieve it or not (and possibly they might be) I think there's a need from the security PoV to move away from the idea of an all-powerful root user. The MO of a lot of exploits is privilege escalation.

      The functions of root need to be split. One function would be to allocate user IDs. Another would be to allocate storage space. A third would install applications. None of them would have the ability to read or write anything other than what they need to do. If an application needs to access file space it should do so in a space allocated for that particular application or at most a class of applications.

      Instead of calling a common kernel service, an office application, for instance, would call an office data storage service, with some means of checking that the client was a registered office storage client in addition to the check that the user had access rights according to user, group or public settings* - and nobody else would get in.

      The storage service might even be able to check file format. It might provide versioning. What it wouldn't do would be to provide read access to some malware trying to exfiltrate data or hold the data to ransom. Such malware would not only have to impersonate the user but also the application.

      This division of responsibilities might have a performance impact but that would be the cost of security. As things are we currently see security being sold cheap in terms of convenience and performance. It needs to be given a higher value.

      * Or some other ACL

      1. Anonymous Coward
        Anonymous Coward

        "If an application needs to access file space it should do so in a space allocated for that particular application or at most a class of applications."

        The problem becomes, at some point, everything has to be done at once and, for whatever reason, you need a skeleton key: a last resort. Kinda like how the guy with the red key gets locked in a room that only accepts blue keys and the guy with the blue key gets locked in a room that only accepts red keys. Or in your case, someone's trying to install an application but needs more space to do it, but the user who cleans up dead space isn't available or is for whatever reason locked out. Too many locks, you run the risk of a lock-IN.

        Furthermore, what if the application in question requires absolute speed (which is increasingly demanded to avoid getting complaints or--worse--getting beaten to the punch--think a trading floor).

  16. 2+2=5 Silver badge

    Everything is a file

    The question is not what is 'wrong' with Unix/Linux but what is 'right' with Redox?

    Why should I use Redox instead of RISC OS, for example? Or Smalltalk? [*]

    Don't get me wrong - I like the idea of an OS written in a more robust language but why not just re-implement BSD, for example? If its main problem is buggy drivers then using Rust to re-write them would seem to be the obvious solution?

    [*] Okay, you have to go quite a long way back to find other examples of self-hosted language+compiler+operating system. :-) Maybe a more recent analogy would be Elementary OS which uses Vala for the desktop and main apps but is Linux underneath.

  17. Pascal Monett Silver badge

    So, booting in three seconds is not fast enough ?

    If I may, 3 seconds is largely enough, since the time between you pressing the button, taking your seat, grabbing your coffee and getting your mouse ready is going to be at least 3 seconds. And that does not take into account putting your glasses on.

    Come on, anything less than 5 seconds is perfectly functional. Of course, I have a Windows history of needing to wait for more than a minute with Windows 95, to several minutes in a Vista corporate environment, so I've probably been beaten into submission on that point, I'll give you that.

    Just remember one thing: better is the enemy of good. Don't go ruining something just because you want to shave another second off your boot time. Most people boot their computers once a day, if that, so saving one second is not really a heavy priority.

    Then again, saving 33% can be viewed as a priority I guess, but that's just how percentages can screw you.

  18. John Gamble

    What Needs the Re-Write?

    Re-writing critical libraries in a safer language is a good idea -- I would be fascinated by a BSD or Linux system with lib* replacements written in Rust.

    I'm less sanguine about a whole new operating system with a new interface and the learning curve that comes with it. At a time when we're still making cynical jokes about desktop Linux, getting a new OS into the mix seems to be a task with uncertain benefits, unless one is targeting an entirely new device.

    1. Anonymous Coward
      Anonymous Coward

      Re: What Needs the Re-Write?

      If it can build on Rust, and live up to the potential of being safer / more secure than existing options - and, critically, manages to stay that way (no bloat, no "features" you can't remove if unneeded, etc.) - then I guess it could be an interesting option for embedded / industrial / control systems, especially at higher SILs?

      1. Anonymous Coward
        Anonymous Coward

        Re: What Needs the Re-Write?

        "Re-writing critical libraries in a safer language is a good idea -- I would be fascinated by a BSD or Linux system with lib* replacements written in Rust."

        Yes, and even then, only once OpenSSL has been rewritten in Rust.

        Re-writing a whole OS in 2019, /shake head ...

  19. ibmalone Silver badge

    In contrast to "Everything is a file", Redox does not enforce a common tree node for all kinds of resources. Instead resources are distinguished by protocol. This way USB devices don't end up in a "filesystem", but a protocol-based scheme like EHCI. Real files are accessible through a scheme called file, which is widely used and specified in RFC 1630 and RFC 1738.

    It's certainly a time-saver: I hate it already.
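    For the curious, the scheme idea from the quoted documentation can be caricatured in a few lines. This is a toy dispatcher of my own, not Redox's actual API:

```rust
// Resources are named "scheme:path"; the scheme, not a position in a
// filesystem tree, decides which handler a request is routed to.
fn split_scheme(url: &str) -> Option<(&str, &str)> {
    let idx = url.find(':')?; // no colon → not a URL
    Some((&url[..idx], &url[idx + 1..]))
}

fn main() {
    for url in ["file:/home/user/notes.txt", "tcp:93.184.216.34:80", "null:"] {
        match split_scheme(url) {
            Some((scheme, rest)) => println!("{} handler gets {:?}", scheme, rest),
            None => println!("not a URL: {}", url),
        }
    }
}
```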

  20. Henry Wertz 1 Gold badge

    In case you wondered "WTF is rust"?

    In case you wondered, "WTF is Rust?" In very short, it is like a modernized C, especially adding safety features (to avoid buffer overflows, out-of-bounds access, etc.). I'm quite sure it adds other features as well (otherwise it could probably be added to C with a header or library), but the safety is what they tout.

    I'm pretty sure it's intentionally kept similar enough to C so that existing (well-behaved) C code can be ported very easily to Rust (of course, "badly behaved" code might run but would probably access RAM in ways that Rust would deem illegal. But it'd also shoot out compiler warnings and such in gcc already, and probably have security flaws.)

    1. Lee D Silver badge

      Re: In case you wondered "WTF is rust"?

      And one of the very first features it had to acquire to be useful in some areas (and which it has FAQs about because they are used that often) is unsafe memory access, just like C has.

      How do you take a block of RAM which, say, is a memory-mapped graphics framebuffer, network buffer, hard disk access buffer, etc. which the hardware tells you where it is (e.g. PCI layer or pointer to it), and then utilise those underlying bytes of arbitrary RAM in a type-safe way without having to treat the underlying bytes as completely untrustable, potentially-out-of-range or -nonsensical bytes which you then have to sanitise to make them useful and hope you never made any mistake? You can't.

      From the Rust language book:

      "Another reason Rust has an unsafe alter ego is that the underlying computer hardware is inherently unsafe. If Rust didn’t let you do unsafe operations, you couldn’t do certain tasks. Rust needs to allow you to do low-level systems programming, such as directly interacting with the operating system or even writing your own operating system."
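      In miniature, that boundary looks something like this: the hardware (here just a byte array standing in for a DMA buffer) hands you untyped bytes, and pulling a typed value out of them is an unsafe operation in Rust too. A sketch, not anyone's actual driver code:

```rust
// Read a little-endian u32 out of a raw byte buffer, as a driver would
// when parsing a descriptor that hardware wrote into memory.
fn read_u32_le(buf: &[u8], offset: usize) -> u32 {
    assert!(offset + 4 <= buf.len()); // the bounds check safe Rust insists on
    unsafe {
        // Reinterpret four arbitrary bytes as a u32; the compiler cannot
        // verify that the hardware put anything sensible there.
        u32::from_le((buf.as_ptr().add(offset) as *const u32).read_unaligned())
    }
}

fn main() {
    let dma_buffer = [0x78, 0x56, 0x34, 0x12, 0xFF];
    println!("{:#x}", read_u32_le(&dma_buffer, 0)); // 0x12345678
}
```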

      Until you get "memory-safe hardware", you've got the boundary of safety at exactly the same place that it is for every other language or OS - right at the critical point of having to interpret hardware-provided information in the deepest, most privileged rings of the processor/kernel, to do so in an incredibly performant manner, and to never make a mistake.

      Windows blue-screens or Linux kernel panics not because an application crashes (those days are hopefully long gone), but because graphics drivers, network drivers, etc. all have to be well-programmed and able to take everything you throw at them and get it right every time or the underlying hardware or software crashes at the highest levels of the kernel in an unrecoverable manner. And yet, the drivers also cannot possibly affect the performance of the hardware, or the hardware you're selling then looks like trash compared to the competition and nobody will use your driver or hardware.

      1. Charles 9 Silver badge

        Re: In case you wondered "WTF is rust"?

        And it should be noted that it's not always the driver at fault. No software in the universe can solve for when the actual hardware suffers a real physical glitch (common example being a drive controller failure--sudden and usually permanent). Hardware like this is untrustable--physics gets in the way at the metal level.

  21. C.Carr

    That's really quite impressive, actually.

  22. sajattack

    My cthulhu statue is front page news lmao.

  23. Kevin McMurtrie Silver badge


    The real question is whether or not enough people can work on this project to maintain it. Rust could fall out of fashion, and then Redox is done.

    I get the same feeling about Rust that I get about Go, like somebody created them to prevent coding anti-patterns that somehow became convention. Some highly opinionated idiot throws a million different types of declared exceptions and then does a catch/re-throw on every f'ing line of code, but nobody wants to deal with that crap so they declare that they throw the root exception object. Another opinionated idiot hates this and demands that a coded return value is tested on every f'ing line of code, but nobody wants to do that either and errors are missed. Meanwhile, the offshore contractors flunked their Data Structures courses so everything is an unsafe cast. Rust and Go are born. Now nobody can have nice things.

  24. FrogsAndChips

    if Rust continues to grow in popularity, [...] perhaps Redox will become more prominent

    2020, the Year of Redox on the Desktop?

  25. horse of a different color

    Rust vim

    I'm disappointed they didn't call their Rust-compiled vim rum.

  26. John Smith 19 Gold badge

    "Everything is a URL" sounds like WAP to me

    That's how the Wireless Application Protocol was designed: to allow phone-specific websites to run both on the fairly low-MIPS phones at the turn of the century and on the more powerful ones running major UIs (for the time).

    As you will have noticed it did not set the world ablaze.

    Let's face it 50 years of C/C++ have given us quite a good idea of what to do to write secure code.

    But no PHB wants to invest the time, money and effort to do so.

    Because almost no customer wants to pay for it when they can have "the new shiny", whatever "new shiny" is this year.

    1. Zippy´s Sausage Factory

      Re: "Everything is a URL" sounds like WAP to me

      C and C++ have also given us quite a good idea of what not to do to write secure code, as well. I think I started a nervous tic when I saw "everything is a URL"...

      1. John Smith 19 Gold badge

        "C and C++ have also given us quite a good idea of what not to do to write secure code"


        That was sort of my point. By showing people what not to do.

        I'd like "buffer overflow attack" to be something you only see in a dictionary of retro IT terms in my lifetime.

        But I don't think I'll live that long.
