"in our copious free time"
That's a good one :)
We all know that the Rust language has become much more popular. By Slashdata's count, Rust users have nearly tripled in the past 24 months. Mark Russinovich, Microsoft Azure's CTO, tweeted that "it's time to halt starting any new projects in C/C++ and use Rust for those scenarios where a non-GC language is required. For the …
This post has been deleted by its author
Ok I'll bite.
I need to point out that anyone lumping C together with C++ is off the rails to begin with. C++ was made to make things possible that were not realistically possible in C; it accomplishes this by significantly extending - and to a smaller extent changing - the syntax and semantics of the C language. These are two different languages that solve two different problems; anyone writing "C/C++" must have missed this point - and if you miss that point, I'm not really sure how any opinion that person may have on languages, especially those languages, could be relevant.
C++ solves a number of real world problems; and we can always have an argument about how well it does that - clearly with the legacy of C compatibility the syntax of C++ may not be as it would have been, had the language started from a clean slate with not a care for compatibility and adoption.
Rust set out to solve some problems too, and that's great. One of the most notable problems, in my view - one that C++ solves to a high degree and which Rust doesn't even attempt to solve - is reliable error handling. Rust encourages a coding style where errors are ignored, because passing back errors via return values is tedious and leads to boilerplate code. Yes, Rust tries to help you remember this with compiler enforcement, but nobody likes boilerplate code, and the language encourages you to circumvent this mechanism.
In contrast C++ solves this with exceptions (which is by no means something C++ invented), which again comes with its own set of requirements for competently written code. C++ offers all necessary mechanisms to safely handle errors in large scale applications without the use of boilerplate code - and I personally find that to be a huge advantage over languages that do not (such as Rust and many others).
This is not to say that you can't write good software in Rust; of course you can. Lots of great software is written in C too. And I'm sure Rust is a slightly better C than C for many uses - and that's great. I'm not trying to detract from Rust here.
But honestly, replacing C++ with Rust for large scale applications that need to work in the real world? Sure you can do it. Given enough investment anything is possible - I have to say I don't see this happening on a large scale for business that actually need their software to work all the time and every time.
Meanwhile, there is a reason that kernels and similar close-to-the-hardware stuff (drivers, for example) are usually written in C and/or assembler.
There is no one size fits all for software development. Anybody who tries to tell you otherwise is deluded.
Horses for courses & all that.
Rust encourages a coding style where errors are ignored [citation needed]
I think most would argue the exact opposite: many errors are impossible to ignore in Rust, and warnings are clear and can't be silenced with //ignoreme statements.
There are many who argue exceptions _are_ ignoring errors, plus gotos. And there are many C++ frameworks that don't use them and have C-style error handling.
It sounds like you are only looking at memory management coding mistakes.
While very important, there are a lot of types of runtime recoverable error.
How does Rust handle "can't open socket", "corrupted input", "hardware vanished", "insufficient contiguous memory available" and the like?
Idiomatic C++ uses exceptions for this, automatically cleaning up everything until it reaches a level in the stack that knows how to handle it.
> How does Rust handle "can't open socket", "corrupted input", "hardware vanished", "insufficient contiguous memory available" and the like?
Like it handles all errors: by returning a Result.
Results need to be handled; there is no way around it. If you force one open anyway, the application panics when an error actually happens.
Yes, there is boilerplate in passing Results up the call chain, and `?` handles it: it gets the Ok value out of a Result or bubbles the problem up the call stack. I love it: it is simple, obvious, and you always know where and what can actually happen.
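A minimal sketch of what `?` does (the function name here is made up for illustration):

```rust
use std::num::ParseIntError;

// `?` either unwraps the Ok value or returns the Err to the caller,
// so the error path is explicit but nearly free of boilerplate.
fn double_it(input: &str) -> Result<i32, ParseIntError> {
    let n: i32 = input.trim().parse()?; // bubbles the ParseIntError up on failure
    Ok(n * 2)
}

fn main() {
    assert_eq!(double_it("21"), Ok(42));
    assert!(double_it("not a number").is_err());
}
```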
> Idiomatic C++ uses exceptions for this, automatically cleaning up everything until it reaches a level in the stack that knows how to handle it.
... except when your code is not exception safe: Things explode in a nice display of color -- sometimes.
Or uses Qt or something that has banned the use of exceptions.
Okay, I'll bite RE error handling. Rust explicitly DIScourages a coding style where errors are ignored. You are forced to handle errors to get the "Ok" value out of a Result. And most boilerplate code from handling errors is sugared by the "?" operator: https://doc.rust-lang.org/rust-by-example/std/result/question_mark.html
In contrast with C++ exceptions, I think something worth mentioning is that Rust's error handling has no need for stack unwinding. "Unwinding" in the context of calling destructors during error handling is already handled by the Rust compiler's Drop magic. https://doc.rust-lang.org/reference/destructors.html#drop-scopes
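To illustrate the point, a small sketch with a made-up Guard type: Drop runs deterministically when a value goes out of scope, in reverse order of creation, with no throw/catch machinery involved.

```rust
use std::cell::RefCell;

// A value with a destructor; `log` records the order in which Drop runs.
struct Guard<'a> {
    name: &'static str,
    log: &'a RefCell<Vec<&'static str>>,
}

impl<'a> Drop for Guard<'a> {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn main() {
    let log = RefCell::new(Vec::new());
    {
        let _outer = Guard { name: "outer", log: &log };
        {
            let _inner = Guard { name: "inner", log: &log };
        } // _inner dropped at the end of its scope
    } // _outer dropped here
    // Destructors ran deterministically, in reverse order of creation:
    assert_eq!(*log.borrow(), ["inner", "outer"]);
}
```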
Err... unwinding is for much more than memory deallocation!
If you are used to an exception-based error-handling model, you know it is extremely powerful, but it means taking many actions other than calling destructors during unwinding. Various external entities (objects within the system, and external systems outside it) will require being told to stop doing something, to release resources or cancel reservations. Many housekeeping and tracing systems have to be informed about the unwind for logging, debugging and understanding performance.
If you don't understand the power of, and need for, stack unwinding and its handlers then you should never be allowed to raise an exception in the first place!
I am well aware! I am just talking specifically in the context of unwinding the stack to throw an error. Rust does not need to do this, is my point. (It does however do this for panics, which are intended to be unrecoverable errors. And you of course can capture a backtrace manually for logging purposes, too.)
Calling Mark a non-programmer is provably false.
Refer to https://learn.microsoft.com/en-us/archive/blogs/markrussinovich/vista-multimedia-playback-and-network-throughput, and his work on Sysinternals and malware decompilation.
His focus has been on low level interactions, programming around drivers and kernels - it is definitely programming.
I would argue that knowing the system internals of Windows and of malware is a great reason to listen to him about security and the memory safety of C/C++-based programs and operating systems.
Uhhh....Well the original sentence did suffer from ambiguous references, but with sufficient parsing, I think the direct object of the sentence was the article's author, Steven J. Vaughan-Nichols, not Mark Russinovich (who is the "Microsoft programmer" badflorist was referring to).
The attraction of Rust and the nanny-style memory safety is clear. Rather than taking 3+ years to develop a really good C programmer skilled in protecting memory bounds, just spend a few months teaching the new kid with crayons Rust syntax and turn him loose -- what could possibly go wrong? Having written in C and C++ since you still included <string.h> in C++ programs, I can say there is no substitute for experienced programmers.
While I'm all for anything that helps eliminate buffer overruns and idiots who fail to use the field-width modifier when reading with "%s" or "%[]" and scanf(), I've yet to see the latest and greatest next "new" language ever pan out. I hope Rust is all it's advertised to be, doesn't have a significant performance penalty over C and doesn't produce a 5M "Hello world", but in the end there will always be C and C++ available if there is no rust-arm-none-eabi....
I don't know if I agree with Mark on this, but I don't think it's his financial interest. He already made a packet when he sold Sysinternals to Microsoft. It's why Bryce Cogswell retired, I assume: when the "you have to work for us for x years, then you can cash out" point of the contract hit, Bryce said "I'm out". I assume he just wanted to enjoy the money and do other things, but Mark didn't.
I didn't agree with Mark's comments about engineers being against cloud because they know it will take away their jobs. That's not the only reason we dislike it. And I feel that comment was self-serving.
I'm also assuming it will be easy to write insecure code in Rust. Does he forget that there are so many free courses now, with so many people wanting to learn stuff just because? Because of this, and maybe because it's not fully part of their career, people rush through, so code will be buggy. I'm currently dabbling in Python and watched one guide where the "trainer" exploited how exceptions worked just to get her code to work for the video. I worked that out; others might not, and will write code in the same poor way, then eventually head into the industry at a small company that can't afford to check, and write a small or large app for them full of holes.
That's one way insecurity happens, management forcing you to rush also doesn't help.
Author: "Rust was written for a world with containers and the cloud..."
Was it now? With nearly a year's experience with Rust, I can tell you there is nothing in the std that is for the cloud or containers. I've never seen one language standard that does have such things. std::AWSConnect()? Nope, and it would be crazy if it did.
I double-checked the link below to make sure I wasn't missing an entire section of the standard (because I've done that with other things :-/). I am not.
https://doc.rust-lang.org/std/index.html
And people like me learning are just as bad.
In one of the problem sets that I got working, if I remember right, I did an if statement one way, then the else a completely different way, and only realised after it had passed the checks. I thought it was amusing that I could get the code working but do it two different ways within one section. I should have changed it but just left it.
And there's my point. There's a good way of creating insecure code: let people like me write it. I bet I could cock it up just as well in Rust.
They know the Windows kernel is a huge security problem. They already had a memory safe OS in their research dept, but never made it a product.
https://www.microsoft.com/en-us/research/project/singularity/?from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fprojects%2Fsingularity
"Seems to work perfectly fine for hundreds of millions of people."
You seem to be confused. Four and a quarter billion flies all agree that great steaming piles of shit are nice places to raise their kids. I'm not planning on emulating them any time soon. Are you?
Vast numbers of users have never equalled good code. All that vast numbers of users, and the dollars they bring, signify is that whoever is marketing the software has scratched an itch with the GreatUnwashed. In other words, it has checked the boxes to satisfy the lowest common denominator. See Ubuntu and Red Hat as other examples.
"And by 'cognizant' I assume you mean 'Linux snobs'."
Don't be daft. Most "Linux snobs" use an Ubuntu or RedHat derivative and have no clue about this kind of thing. What I mean is people who have extensive knowledge of the subject thanks to long-term personal experience. You know, the actual experts that the Rust fanbois are poo-pooing.
We're past due for a revolution in operating systems. We're still using designs from the 70s (Unix/POSIX) and 90s (Windows NT). Sure, people have extended them with things like microkernels, but we haven't seen any ground-up OSes get major usage. Maybe Google Fuchsia will gain some traction. It's the most likely candidate in the modern OS category, and although Google isn't my favourite company, it is at least open source.
"We're past due for a revolution in operating systems."
Why? Most cars have had 4 wheels, a steering wheel, internal motive power and a braking system for well over 100 years. Is it time for a revolution in how autos work? (Any reply referencing Elon's dope-pipe dreams will be summarily laughed at).
"We're still using designs from the 70s (Unix/POSIX) and 90s (Windows NT)."
I think you'll find that UNIX is from a '60s design, and NT leans heavily on VAX/VMS from the 1970s.
C++'s most powerful feature is that it can directly consume C headers. It is not a mathematical superset of C but in practice 99.999% of stuff just compiles and works (or can be tweaked) and that is all that matters.
Rust will beat C++ once it can consume C headers directly without needing bindings. FFI, like JNI, is time consuming and error prone. The bindings created tend to rot because they cover the entire API, so they are very fragile to breakage in a tiny area you might never even use. Crates.io is a lazy solution to an unnecessary problem that can instead be avoided by just using a homogeneous language in your projects.
This is simply no good. Legacy code will not be rewritten, people will continue to use C for libraries because they can be wrapped by all languages.
If C++ *needs* to be replaced, then CppFront, Carbon, heck even Objective-C++ would be where I would place my bets. In the industry, changes need to be evolutionary rather than wasting our time rewriting stuff that has already been solved.
Mark Russinovich knows this, he is just being a twat.
> TypeScript's most powerful feature is that it can directly consume JavaScript code.
Doesn't sound nearly as impressive when you take your bias out of it, does it?
> The bindings created tend to rot because they cover the entire API so are very fragile to breakage in a tiny area you might never even use.
I'm not really sure what you meant by this, but generated bindings are created at build time and 99.999% of the time never included in source control. Ergo, they can easily adapt to environmental differences and changes to the code they're binding to, and regenerate in response to these changes automagically.
I would recommend reading the bindgen user guide to learn more: https://rust-lang.github.io/rust-bindgen/
It wouldn't exist if it couldn't EMIT JavaScript.
The fact that it can consume EcmaScript (possibly version 6 or greater) and transpile* that to EcmaScript (probably version 5 or earlier) simply demonstrates what a total waste of time it is.
-A.
*Just what is the point in a completely unnecessary and extremely tedious "build" process when the target language is just as capable and expressive** as the source?
** Rather more expressive, I would say.
Would you say that C++ is a "total waste of time" as well then?
There are many benefits from using TypeScript in a project over JavaScript. Type safety is no joke. A whole class of runtime errors in JavaScript is eliminated by TypeScript. That's half of the point in it.
> Would you say that C++ is a "total waste of time" as well then?
Certainly not. It brings an entire new paradigm. Transpiling from one high-level functional and object-orientated language to another very similar one doesn't. You can get almost all the same benefit from using a linter. In fact, I've seen a JavaScript linter highlight defects in transpiled TypeScript.
Typescript attempts to eliminate problems that I simply don't have.
Whether it is sensible to start from C, designed as a portable assembler, when constructing what purports to be a high-level object-oriented language is, however, debatable.
-A.
Being able to consume C code *is* C++'s most important feature.
Well, it is and it isn't.
C++ would never have taken off had it not been readily compatible with the huge existing C codebase so, yes, that's immensely important to the fact that C++ exists today ... but it's hardly the point of the language.
The point of C++ is that it provides a higher-level abstraction than C, and greater type safety, and that it is therefore a much safer programming language. Its tragedy is that, in order to gain widespread acceptance, it had to be written in such a way that the vast codebase of existing C software -- much of which is poorly structured and difficult to maintain -- can be compiled as C++ code without first having to be rewritten in a safer C++ idiom.
C++ doesn't even get to tag legacy code as "unsafe", as Rust does, which would at least focus the attention on the fact that C++ doesn't have to be so.
So you would know that binding generators can't understand the lifetimes of data. RAII and GC are both very unsafe across the boundary, and manual pinning is slow and error prone.
Likewise function callbacks with void * user data. How do you get that working safely? I'm going to assume your answer is lots of manual hours spent?
Lifetime of objects is outside the scope of binding generators (I think that's your point, though). All they do is expose callable, `unsafe`, FFI functions that you can call in your Rust code. It doesn't automatically convert pointers into references or anything, that's impossible and up to you to do.
Generally, the idiomatic way to handle all of this is to create a "-sys" crate. This crate will be your raw FFI bindings. Then, you create a new crate where you create safe abstractions around the unsafe bindings.
For example, say we have a C library that exposes 3 functions: a function that mallocs an object and returns a void* pointer to it (`init_it`), a function that frees that object (`free_it`), and a function that... I don't know... sends the object as an email to Bill Gates (`email_it`). I'll generate bindings (or manually write them myself, which in this case would be 25 lines of code or so - bindings to the functions and a bit of code in the build script that compiles & statically links the C library) to those 3 functions in my "-sys" crate.
Then, I can build a safe Rust API around these C functions. I'll create a struct whose constructor calls `init_it`, and stores the returned void* pointer in a struct field. Then, I can implement a method that calls the `email_it` function. Finally, I can implement the `Drop` trait on the struct, which defines what to do when the object's lifetime ends. In this case, it will just call `free_it`.
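A sketch of that wrapper pattern (the C functions `init_it` and `free_it` are the hypothetical ones from the example above, stubbed here in Rust so the sketch is self-contained; a real "-sys" crate would declare them as `extern "C"`):

```rust
// Stand-ins for the hypothetical C functions `init_it` and `free_it`;
// in a real "-sys" crate these would be unsafe extern "C" declarations.
unsafe fn init_it() -> *mut i32 {
    Box::into_raw(Box::new(0)) // plays the role of the C-side malloc
}
unsafe fn free_it(p: *mut i32) {
    unsafe { drop(Box::from_raw(p)) } // plays the role of the C-side free
}

// Safe wrapper: owns the raw pointer and frees it exactly once, on Drop.
struct Thing {
    ptr: *mut i32,
}

impl Thing {
    fn new() -> Thing {
        Thing { ptr: unsafe { init_it() } }
    }
}

impl Drop for Thing {
    fn drop(&mut self) {
        unsafe { free_it(self.ptr) }
    }
}

fn main() {
    let t = Thing::new(); // calls init_it
    assert!(!t.ptr.is_null());
} // Drop calls free_it automatically; no manual cleanup at call sites
```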
This is basically how almost all of Rust's standard library interfaces with the operating system. File is just a struct that wraps a HANDLE on Windows, or a file descriptor on UNIX, and its Drop implementation closes the handle or fd.
No one out there is bothering to spend loads of manual hours doing this unless there's a lot of demand for it. The open source community is nice for this, and we have a lot of these safe "C library interfacing" crates already available. For example, OpenSSL, Tesseract OCR, libpng, etc. all have safe Rust libraries that will even compile and link these libraries for you.
Anyway, handling lifetimes isn't a problem exclusive to generated bindings. If you use a C library in a C project, you are still going to have to handle lifetimes. And when it comes to RAII, luckily, in Rust, you can just use the cxx crate to wrap unique_ptr and friends. I'm sure there are similar libraries to help with interfacing with GC'd languages.
> I'm sure there are similar libraries to help with interfacing with GC'd languages.
Quite a few, actually:
https://www.hobofan.com/rust-interop/
https://areweextendingyet.github.io/
Using Rust to write compiled extensions for GCed languages without having to manually get the `unsafe` stuff right for the C FFI in between is quite popular.
That's an odd way to look at it.
"C++ doesn't have a mechanism for teaching the compiler to watch your back. Therefore, it's only worthwhile to write code in a language that does if you don't have to annotate the stuff coming in from C++ at the boundary."
It reduces down to "Rust depends on OS APIs that don't encode lifetime information, which get wrapped by Rust standard library APIs. Therefore, writing code which depends on the Rust standard library is a waste of time."
"Sure I do! Most of my work is FFI."
Annoyingly posts can't be edited with whatever web browser theregister thinks I am using.
The fact that most of your work is FFI does suggest to me that you might be better served by using C++ directly in your projects. Then you could focus on making software rather than bindings with FFI.
Well, I would estimate maybe ~10% of my time in these projects is related to bindings. I find that Rust saves me a ton of time when actually developing the software, so the trade-off of having to do a little extra work with bindings breaks even.
C bindings are a breeze; I've never encountered any issues with bindgen for C libraries. It's really, very, surprisingly good. Sometimes it takes a little tweaking to get it how you like it (e.g. choosing your desired style of how enum "bindings" are generated), but these are just a single switch in the build script when configuring the generated bindings.
C++ bindings - not so easy. It has no stable ABI, so that's where the "little extra work" comes in. This usually just involves exposing stuff I need under `extern "C"` functions. This is a really silly idea for projects where you're going to be interfacing with A LOT of C++ code, and you may as well either just use C++ directly, use some new kid on the block like Carbon, or give the Rust cxx crate a try, which can do a lot of heavy lifting for you.
It's worth mentioning that none of this would be easy without Cargo. In build.rs, you can use the cc crate to compile and statically link C code into your program in a criminally small number of lines of code. I'm currently working on a project that does this with C, C++ and Objective-C code, so my src/ folder is pretty diverse. To quantify, the amount of Rust in this project is 71%.
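For reference, the build.rs pattern described above looks roughly like this (the C file path is made up, and the `cc` crate must be listed under [build-dependencies] in Cargo.toml):

```rust
// build.rs - compiles a C source file and statically links it into the crate.
fn main() {
    cc::Build::new()
        .file("src/native/helper.c") // hypothetical path
        .compile("helper");          // emits libhelper.a and tells Cargo to link it
}
```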
That's by design. The bindings are how you trick people into doing the effort to annotate the C or C++ API with the information the Rust compiler needs to verify memory-safe operation at compile time, same as with a language like Python. (eg. PyOpenCV, PyQt/PySide, etc.)
Sure, there are various ways to do that sort of thing with just C or C++ (eg. splint supports that sort of thing) but they never really caught on.
[programming...] It's just that it has never, ever been easy.
And that is the point. Programming is hard. Period.
You should use the language suited to the job at hand, not get into the whole this-or-that-is-better fight. The hype is that we "should replace" a language; that is the wrong premise. It is mentioned that a language must apparently be "sexy". Well, 25 years ago suddenly everything just had to be done in Java, regardless of whether it made sense or not. We see the same hype today.
Again, we should use the programming language suited for the job at hand. And, never ever forget, you need a good programmer because programming is hard!
Rust has lots of target architectures. Not sure what you are looking for. But I can imagine no-one makes a chip without C support, so no language will likely match C in that regard for a while.
Rust programs are generally larger due to how generics are duplicated for each concrete type, so Rust may not make sense where memory is limited.
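That duplication is monomorphization: the compiler emits a separate copy of a generic function for each concrete type it is used with, trading binary size for speed. A small sketch:

```rust
// One generic source function...
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut best = items[0];
    for &x in &items[1..] {
        if x > best {
            best = x;
        }
    }
    best
}

fn main() {
    // ...but two instantiations, so two separate copies of the machine
    // code end up in the binary: largest::<i32> and largest::<f64>.
    assert_eq!(largest(&[3, 7, 2]), 7);
    assert_eq!(largest(&[1.5, 0.5]), 1.5);
}
```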
>>Not sure what you are looking for
They are looking for a "qualified compiler"
Which is very different from "Oh, here's one I downloaded yesterday from GitHub...", or "here's a compiler I built yesterday from source", or "Here is the gcc that came with my distro, that's got to be OK, hasn't it?"
...I know of one project where they do not trust their compiler. They check the generated binary code against source, MANUALLY. They also do lots of automated static checking.
The project is rock solid and has almost 600 flying examples. No crash due to software yet, unlike some others in the same business.
Safety: Ferrous Systems have been developing Ferrocene for this: https://ferrous-systems.com/ferrocene/
Microcontrollers: I'm no expert on this, but Rust is gradually improving its support here. Supported targets are explained in detail here: https://doc.rust-lang.org/nightly/rustc/platform-support.html
I think it's also worth mentioning that a GCC frontend for Rust is in the works, too: https://github.com/Rust-GCC/gccrs
This post has been deleted by its author
C was designed and implemented for real programmers: competent people who know what they are doing and take the trouble to do it properly. As opposed to the two disasters of modern coding: the contractor fixated on "productivity" and getting paid; and the "free software" types who rely on the "thousand eyes" of users and other programmers to fix their bugs.
Rust merely makes the latter two evils slightly less bad.
Totally agree
I write C code on a daily basis. Yes, I sometimes find a bug and I correct it. But I can’t remember the last time I had a bug caused by memory corruption, bad pointers or running off the end of an array. These are schoolboy errors - if you’re competent and disciplined enough, you just don’t do this sort of thing.
C seems to get a lot of flak from people who are clearly not competent enough to use it.
Ah, I'm guessing you must not be human, considering you simply don't ever make mistakes, right? When you make mistakes, they're "bugs", but when others make mistakes, it's a "schoolboy error", and they're "not competent" enough to use C? Was Heartbleed a "schoolboy error"? Are the OpenSSL maintainers "not competent" enough to use C...?
Co-pilot has gone too far, it's gained sentience, a God complex, and has started posting in El Reg comments sections! We're done for.
Heartbleed is indeed a very good illustration of the point.
It happened because of mistakes in the hand-coded implementation of a poorly specified binary protocol between programs. This was a mistake on several levels.
First, the mistake itself - a coding error of the sort many recognise as "well, those things can be tricky to get properly right".
The second level of mistake was to not use some tool specifically designed to do this job for the programmer and get it right on their behalf. There were and are tools for this - things like Google Protocol Buffers and ASN.1, both of which predate the heartbeat extension. ASN.1 would have been the ideal choice.
I'll caveat this by allowing that good ASN.1 tools cost money, a reason why the authors of RFC 6520 may have specified their heartbeat protocol in verbose English, defining a bit-bashed binary message set and thus setting the actual programmers up for a big chance of failure.
But I'll then uncaveat that by pointing out that TLS etc. also handles certificates, which are ASN.1 encoded data structures; they were already going to be using ASN.1 in one form or other, and could easily have used it for the heartbeat protocol too. But they didn't.
Had they defined a complete ASN.1 schema for the heartbeat protocol - complete with constraints - and then trusted the tools to correctly implement the protocol, the bug would likely never have occurred.
It's not like there weren't precedents for doing it better, either. DBus predated RFC 6520 by quite a few years, and they went about solving their problem by creating their own protocol implementation technology; with DBus, you write a schema and use DBus tools to build the message serialisers in a language of your choice.
So it's not like there were no other projects around at the time that were helping solve security-critical interfacing / protocol problems by using tools, not programmers, to implement them.
Really, the authors of RFC 6520 had no good reason to define it as they did, and one of the problems with the IETF in general is that it allows and accepts such poor standards in the first place. I know there is often a lot of dismissal of ITU-originated standards by the IETF community ("too big, too complicated" is often the kind of thing that is said), but Heartbleed is a very good example of why thorough and helpful standards and tools really, really matter.
The difference in attitude is, I believe, because members of the ITU standards bodies have a vested interest in things being done right - the telcos don't want to lose money - whereas IETF standards are often created and accepted with no direct consequence either way for those doing so (instead, it's the poor blighters who've gone and based money-critical systems on such things who pay).
Rust and C/C++
This applies to the Rust vs C/C++ debate. For anything that is going to be shared, choices of tooling are something one has to do altruistically, for the benefit of the project overall and for one's fellow developer, and not ever because of personal preferences. Saying "I can and will do this in C" is increasingly becoming a "yah boo sucks to you" gesture to all other programmers involved now and in the future.
After all, what would one C++ developer think of another if, in the modern era, that other had written a massive pile of complex C++ and had not once used a unique_ptr or shared_ptr, sticking to plain old raw pointers throughout? Quite. Sure, it may be "perfect", and well done them, but it's very unhelpful to someone else.
The deliberate avoidance of things that are good is just perverse and unhelpful, unless there is a real benefit (like speed). What's interesting about the involvement of Rust in the Linux kernel is that the resultant driver is reputed to be pretty damned good. So the speed reason is likely receding as a justification not to use it.
Heartbleed occurred because the developer added the code on a Friday evening after an exhausting workweek and never bothered to look at it again.
Personally I think that's pretty daft, but these things happen in real life. People have a life beyond their open-source pet projects.
It's great to have something like Rust to cover your back.
These are schoolboy errors - if you’re competent and disciplined enough, you just don’t do this sort of thing.
EVERYBODY makes schoolboy errors from time to time. That's why we continue to improve programming languages so that errors are harder to make and easier to detect. Both C++ and Rust are much better than C in this regard.
There are also programmers that get paid no matter what, and closed source programmers who know no one outside the company will see their code.
I find a lot of open-source code to be of decent quality, not because 1000 eyes have fixed the bugs (most projects have one or a few core committers), but because 1000 eyes will look at your creation and offer you a job if they like it.
Just recently we learned that the TCP stacks of an RTOS would make things like medication pumps in hospitals exploitable. Some of these exploits would not work in Rust or Sappeur.
https://support2.windriver.com/index.php?page=cve&on=view&id=CVE-2019-12255
"doctors thought he would recover, but then he mysteriously died in intensive care"
This post has been deleted by its author
Depends what you're working on. If it's a hosted application, then the runtime is more important than it is for a free-standing application.
Most of my code is free-standing, and it never touches the runtime library (other than the bits that initialise the RAM and make the call into main() ).
This post has been deleted by its author
>limited and archaic systems that, generally speaking, no longer exist.
Billions of small scale, 16/32 bit embedded processors exist. They're not going anywhere. Lack of supply is a big issue.
>lack of type safety and monumentally poor runtime library
Those who understand C understand the library and its limits. It was appropriate for its time.
Your issue is with history.
You sound like the moaning Rust eggheads, except they moan much better:
https://thephd.dev/binary-banshees-digital-demons-abi-c-c++-help-me-god-please
I prefer how Bryan Cantrill put it:
> When developing for embedded systems — and especially for the flotilla of microcontrollers that surround a host CPU on the kinds of servers we’re building at Oxide — memory use is critical. Historically, C has been the best fit for these applications just because it so lean: by providing essentially nothing other than the portable assembler that is the language itself, it avoids the implicit assumptions (and girth) of a complicated runtime. But the nothing that C provides reflects history more than minimalism; it is not an elegant nothing, but rather an ill-considered nothing that leaves those who build embedded systems building effectively everything themselves — and in a language that does little to help them write correct software.
-- http://dtrace.org/blogs/bmc/2020/10/11/rust-after-the-honeymoon/
Competent people still make mistakes and small tyops.
C requires CONSTANT VIGILANCE!!! as the language won't help you if you do something silly by accident.
C++ fixes most of that by strong typing, RAII, exceptions and 'smart' pointers but allows you to do something silly if you want.
Rust claims to fix issues, but it seems to be mostly fixing problems that C++ already fixed in 2011.
Back when I mostly wrote C code I set up my compilers/IDEs to report all warnings as errors, and to terminate the compilation. This is where I agree with the Rust philosophy: eliminate as many (potential) errors as possible at the earliest possible moment.
Nonetheless I always stepped through my C code in the debugger to check that it was doing what I expected.
I do the same today with my C# and JavaScript code.
Static analysis is great, but it can never replace observing what a real program is actually doing in the real world.
-A.
Every non trivial C project will have bugs that can be found using PC Lint, PolySpace, Coverity etc.
Rust has strong static analysis baked into the compiler.
You say you can replace that with step by step debugging.
Extremely weak reasoning, because some errors only manifest on "evil" input, or they won't damage memory badly enough at first for you to notice in your debugging session.
Wherever it matters (aerospace, auto, trains, medical), static checking is now mandatory. It catches lots of index and arithmetic errors in the code of experienced software engineers.
Maybe you are in accounting ?
Someone should let Microsoft, Google, Apple, etc. in on the secret. They just can't seem to get their memory-safety CVE numbers down despite spending tons of money and effort on the problem.
(Setting facetiousness aside for a moment, I think the problem is that, no matter how good individual programmers are, writing memory-safe code is functionally impossible when you combine them in groups.)
What utter nonsense. Programmers then were the same as programmers now; some good, some bad, some lucky and some clumsy.
C was designed to solve a set of problems that had no other easy solution in the early 1970's. It was easy to learn and easy to use and you could get something working quickly so it became popular.
Rust was designed to solve problems that didn't exist in the 1970's. It is not as easy to learn or use as C and it takes a little longer to get something working, so it will never be as popular as C, but if the problems it solves are ones that concern you, then it will repay the extra effort required to learn to use it well.
Personally, I find I can solve most of the problems that matter to me most easily using Python, but when the occasion demands, I will and have used most other languages
Unfortunately, there are regressions in this community, such as using C++ in the JSF*.
Other projects, such as JÄGER90, still use Ada. Ada was once championed by the Pentagon.
seL4 is also sponsored by German, Australian and US governments, but still not used to the full possible extent.
*but I guess it is befitting to the high wing loading, another cr4p decision
It is perfectly possible to write memory safe code in C++.
One thing that seems to get forgotten, though, is that while some other languages (e.g. Rust) and the GC-based languages claim memory safety (which is largely true), what they DON'T provide is general resource (file handle, device handle, etc.) usage safety. C++ does provide this, and it does it using the same general mechanisms used to provide memory safety.
Exactly. I cannot remember the last time my C++ code had a GPF or more subtle memory access bug. We've had RAII for a long time now, it works very well and is easy to learn. You can turn pretty much any concept that should have a limited lifespan into an object by writing a simple RAII wrapper, and it will happen automatically when the stack unwinds...even if there is an exception.
I am growing weary of The Register pushing political opinions on Rust vs C and C++.
And do you think that you switching to Rust will fix the bug in a library you wish to consume? Because I daresay whoever wrote that library would rather just fix the bug than completely re-write it in Rust.
I also have recently had to deal with a read-access violation in a GPU driver for a 10-year-old GPU. If you use a library that has a bug, there is nothing you can do except work around it. That will never change.
I managed one the other day by parallelising some unit tests, which are completely independent and were running without hiccup serially.
The reason, as I eventually discovered? One of the functions under test declares a local variable that ends up being ~480 KB; meanwhile the particular parallel-for on my particular OS provides a much smaller stack to things dispatched in that manner.
In practice one of the threads was stomping on the memory area of the next before it had started up, so the actual crash occurred when a thread I don't own mysteriously appeared to start up with a single-item stack trace, being a crash inside an object with this = 0.
So, ummm, either I'm arguing that it is possible to hit subtle memory access bugs even with modern RAII code, or I'm revealing that I am a 'bad' programmer.
I assume you are a good programmer and you just discovered one of the many memory safety issues of C++. Mr Stroustrup is not fully truthful in his claims, as far as I understand it.
Memory safety requires quite a few things, and it requires a compiler to enforce it. Those pieces are incomplete in C++ at this point.
But very nice to see Mr Stroustrup sees a need to be memory safe, too. This validates my work and that of the Rust guys.
Could you elaborate on what you mean? Rust wraps handles in safe abstractions that even perform cleanup (e.g. closing handles) when they go out of scope.
https://doc.rust-lang.org/std/os/unix/io/trait.AsRawFd.html#implementors
https://doc.rust-lang.org/std/os/windows/io/trait.AsRawHandle.html#implementors
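A minimal sketch of that scope-based cleanup, using only the standard library (the scratch-file path is made up for the example): the OS file handle obtained from `File::create` is released by `Drop` at the end of its scope, with no explicit close() call.

```rust
use std::fs::File;
use std::io::{Read, Write};
use std::path::Path;

// Write then read back a file; the write handle is closed by Drop
// when the inner scope ends, so the subsequent open sees the data.
fn write_then_read(path: &Path, contents: &str) -> std::io::Result<String> {
    {
        let mut f = File::create(path)?;
        f.write_all(contents.as_bytes())?;
    } // `f` dropped here: the OS handle is released deterministically

    let mut s = String::new();
    File::open(path)?.read_to_string(&mut s)?;
    Ok(s)
}

fn main() -> std::io::Result<()> {
    // Hypothetical scratch path, made unique-ish per process for the demo.
    let path = std::env::temp_dir().join(format!("raii_demo_{}.txt", std::process::id()));
    let s = write_then_read(&path, "hello")?;
    assert_eq!(s, "hello");
    std::fs::remove_file(&path)?;
    println!("read back: {s}");
    Ok(())
}
```

One caveat worth knowing: errors on close are swallowed by `Drop`, so code that needs to observe them should call `sync_all()` before the handle goes out of scope.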
Of course I can't know exactly what rich2 meant, but I upvoted the post because for me RAII in C++ provides a very clean way to initialize and deinitialize "hardware". And that's not only file descriptors or the like, but communicating with a device that hangs off a specific serial port, network address etc. Initialization goes in the constructor, which can allocate memory (well, I don't new/malloc, but use automatic vars or make_shared), open sockets etc., and fail any way it likes; throwing from a ctor is basically the RAII way to signal errors. The important things are to avoid anything that could throw or allocate additional memory in destructors, and to catch only by const reference (otherwise the exception gets copied, which is a memory allocation).
Sticking to this strategy, afaict you can safely feed the dogs, park all motors and turn off the big freaking laser (basically, keep hardware safe) before the process* exits in any situation short of the operating system forcefully killing the process, or dying on its own.
*i'm aware of threads, newer c++ has appropriate tools to keep that topic being irrelevant here. The process is what matters in this context, because that is what the operating system handles.
More accurately, it only allows you to omit them in `unsafe` blocks, which are what the standard library uses to implement them, thus reducing the amount of code that needs to be audited for memory safety by an order of magnitude or more, because Rust is heavily focused on removing the need to reason globally when evaluating a codebase's correctness.
(Is this module correct? Did I design the API to enforce that correctness? ...then the compiler won't let someone else break that without using `unsafe`... and if they do, it's likely that running the code under Miri or UBSan will report that they had to violate a language invariant and invoke undefined behaviour to do it... one such example being to circumvent things that correspond to emitting the LLVM IR noalias annotations also produced by the C restrict keyword.)
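As a tiny illustration of that "audit only the `unsafe` island" idea (the function and its invariant are invented for the example): the safe signature establishes the bounds invariant, so the one `unsafe` line is sound and callers never write `unsafe` themselves.

```rust
// A safe API whose construction guarantees the inner `unsafe` block's
// precondition, confining the memory-safety audit to this one function.
fn first_half(data: &[u8]) -> &[u8] {
    let mid = data.len() / 2;
    // SAFETY: `mid <= data.len()` by construction, so the range is in bounds.
    unsafe { data.get_unchecked(..mid) }
}

fn main() {
    assert_eq!(first_half(&[1, 2, 3, 4]), &[1, 2]);
    assert_eq!(first_half(&[]), &[] as &[u8]);
    println!("first_half behaves for empty and non-empty input");
}
```

In real code you would just write `&data[..mid]`; the point is only that when `unsafe` is genuinely needed, the surrounding safe API is what keeps the rest of the codebase out of the audit.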
That's probably the biggest problem: you CAN write a perfect C language program. It's just in real life this almost never happens and memory corruption occurs all the time in most larger C programs.
The same holds for C++, although to a lesser degree, since RAII is commonly used there.
Aside from a few things that haven't yet been spec'd out, like placement new, Rust is almost literally "C++, with the compiler enforcing best practices", even if it is also more of "an ML-family language in a C++ trench coat".
Rust uses the same RAII resource management techniques as C++... it just uses destructive move by default, doesn't have copy/move constructors because humans have demonstrated an inability to get those right reliably, and adds compile-time pointer validity tracking based on a model analogous to reader-writer locking.
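The RAII-plus-destructive-move point above can be sketched in a few lines (the `Guard` type is invented for the example): destructors run in reverse declaration order when scope ends, exactly as in C++, and a moved-from binding simply ceases to exist rather than lingering in a "valid but unspecified" state.

```rust
use std::cell::RefCell;

// A Drop type that records when it is destroyed, to observe RAII ordering.
struct Guard<'a> {
    name: &'static str,
    log: &'a RefCell<Vec<&'static str>>,
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn drop_order() -> Vec<&'static str> {
    let log = RefCell::new(Vec::new());
    {
        let _outer = Guard { name: "outer", log: &log };
        let _inner = Guard { name: "inner", log: &log };
        // let moved = _inner;  // after this move, any later use of
        //                      // `_inner` would be a compile error:
        //                      // destructive move, no move constructor.
    } // locals drop in reverse declaration order: "inner", then "outer"
    log.into_inner()
}

fn main() {
    assert_eq!(drop_order(), vec!["inner", "outer"]);
    println!("drop order: {:?}", drop_order());
}
```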
IF you were dumb enough to retire C/C++ then people would ask "why has this project's budget gone from 500 hours of programming and testing to 500*10**6 hours?" Oh, we had to rewrite all the C and C++ the project needs, to avoid having to hire in C/C++ programmers later to fix compatibility problems and the vanishingly small number of errors in the code, and we translated it all into RUST not knowing what the fuck we were doing. Oh, and the giggle-factor budget (where we have to pay these programmers to lie down and recover from going "Kerching" and then laughing non-stop for two weeks) has grown exponentially too.
So basically you either learn to live with C/C++ or you go bust. Your choice. A choice that has been made many times in computing history. That is why you will find Fortran doing the leg work for anything that needs to run on Fucking Massive Iron or whatever they call supercomputers these days.
I've been looking at device drivers written in RUST and it doesn't look like it makes life easier either.
You are presuming that the coders who write rust don't/can't write C/C++; I'm not sure that's realistic. I doubt many, if any, rust coders started with rust.
Rust and C integrate nicely, you only rewrite what you get benefit from rewriting.
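The integration really is that direct: Rust can declare and call an existing C function without wrappers. A minimal sketch, calling libc's `abs` by hand (real projects usually generate such declarations with a tool like bindgen; the assumption here is a platform where C `int` is 32 bits):

```rust
// Foreign declaration: tells rustc the symbol's signature, linked from libc.
extern "C" {
    fn abs(input: i32) -> i32;
}

// Safe Rust wrapper around the foreign call.
fn c_abs(x: i32) -> i32 {
    // SAFETY: libc's abs is well-defined for any i32 except i32::MIN,
    // which callers of this sketch are assumed to avoid. Every foreign
    // call is `unsafe` because the compiler cannot verify the C side.
    unsafe { abs(x) }
}

fn main() {
    assert_eq!(c_abs(-42), 42);
    println!("abs(-42) = {}", c_abs(-42));
}
```

Going the other direction, a `#[no_mangle] extern "C" fn` lets C call into Rust, which is how incremental rewrites keep both halves linked in one binary.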
There is evidence that rust is a fast language to write code in. I read an article recently by people who had ported to rust (nb the coders were competent in more than one language and rust was new to them) and what pleased them most was the speed with which code could be added.
I find that speed of writing code is directly proportional to how far forward you push errors. If its pre compile time or compile time it's almost always faster to write such code than code that can only be tested when it's compiled and running.
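One concrete instance of "pushing errors forward": encoding absence in the return type, so the mistake of forgetting the empty case is a compile error rather than a runtime null dereference. (The function below is invented for the illustration.)

```rust
// The Option return type forces every caller to handle "not found";
// `find_even(&xs) + 1` simply does not compile.
fn find_even(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().find(|x| x % 2 == 0)
}

fn main() {
    assert_eq!(find_even(&[1, 2, 3]), Some(2));
    assert_eq!(find_even(&[1, 3]), None);
    // The caller must unwrap a case explicitly, e.g.:
    match find_even(&[1, 3]) {
        Some(n) => println!("found {n}"),
        None => println!("no even number"),
    }
}
```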
There is evidence that rust is a fast language to write code in. So far. And I was suggesting it's not the writing that's the problem, it's the rewriting. There are millions (billions maybe) of active lines of C++ code, and if an error crops up in something that has not failed for 15 years (like the Linux speedup bug yesterday) and no one has coded in C or C++ for 15 years, will they even be able to understand the code well enough to fix it? I'd bet it would take 40 years or more of legally imposed everything-must-be-written-in-RUST to get us back to now. Then there would still be code errors, unless someone writes microcode in RUST....
It's also worth remembering that changing code to RUST is fast - that's because all the bloody work has already been done and you're mostly just translating.
There is evidence that rust is a fast language to write code in
What evidence is that, exactly?
Given that almost all Rust programming will be brand new code, and most C/C++ programming will be modifying or extending existing C/C++ code, it stands to reason that the Rust code will be faster. It is always much faster to write new code without having to worry about the limitations and restrictions of existing code.
New languages are also more likely to be adopted by smarter programmers, because poor programmers are more likely to stick with what they know.
The only way to establish in a semi-controlled experiment that writing code in Rust is faster would be having programmers that have measured equal ability to write C/C++ code compete, some in Rust and some in C/C++.
Rust may be a very good language but it is also a disruptive language.
1. Learning a new language takes time. Rust is no different.
2. Developing a broad enough community of programmers in a language takes time.
3. Rewriting code which works well in another language takes time.
Let's see Microsoft rewrite Windows in Rust. They can start off small, bits here and there. And they can publish white papers explaining the problems they encountered and how they overcame them. Then demand that all new drivers for Windows be rewritten in Rust. Let's see Visual Studio for Rust so anyone coding GUI apps can start producing them using Rust.
It will indeed take years. COBOL is still here; C and C++ will still be here. But let's see the disruption in Microsoft first.
-> But nobody has suggested rewriting old stuff.
I disagree. It has been suggested a lot. Do a search for 'linux kernel rewrite in rust' or words to that effect. You will see proposals, working code snippets, etc. If there is a "move" to rust, then it will eventually become a "requirement" to submit code in rust.
Disagree all you like, but the Russinovich tweet behind this discussion says:
it's time to halt starting any new projects in C/C++ and use Rust for those scenarios where a non-GC language is required.
So suggesting that MS rewrite their old projects in Rust because of what's going on in linux kernel discussions is a bit of a goalpost shift.
Very well. Let me adjust my comment. All new parts of forthcoming Windows OS should be written in Rust. I think that should be clear, because obviously MS do not rewrite XP or W7 anyway. The point about new drivers also holds.
It's all well and good for Russinovich to make a tweet. So what? If there is a C or C++ programmer, are they supposed to down tools to learn rust before they write new code? That's easy for somebody else to say.
As I wrote, rust is a disruptive programming language. Let Microsoft take the lead. Let them go through the disruption first. Russinovich has a voice at MS. If he is unable to convince MS then why should I be convinced?
then it will eventually become a "requirement" to submit code in rust
Doubtful.
The world used C and/or assembler. Then C++ came along and showed a better way. Then Java was where it was at.
Now everybody's going woohoo over Rust, the new "look at this awesomeness" language. But ten or fifteen years from now it'll be something completely different, and all the Rust code will be yet more legacy stuff to have to deal with. It will have become (awful pun alert) rusty.
Meanwhile C will still be chugging away in the background, being regularly insulted and derided, but never quite going away.
Just because you have a fancy new programming paradigm that doesn't mean that it makes sense to go your own way on all the bits and pieces that surround the source code.
So, we've been at this game for a while and have an existing build system ready to go; does all the obvious, keeping generated files well away from sources, pulling in sub-dependencies, generating sufficient OSS compliance docs etc. Multiple languages and targets, the usual. Loads of relevant support libraries neatly curated, all version controlled, waiting to give their all to our next project.
Rust does sound interesting; let's start reading about how to get it added into our setup so we can assess how well it'll work for real. There are books on Rust and rustc online, looks good.
Section 1.1 Installation: "If you prefer not to use rustup for some reason, please see the Other Rust Installation Methods page for more options." White hair equals suspicious mind, what could "some reason" mean? Follow the link: "Offline installation. rustup downloads components from the internet on demand. If you need to install Rust without access to the internet, rustup is not suitable." OR, access to the Internet aside, you're in a Serious Business and want some modicum of control over what is being used.
Section 1.3: "Cargo is Rust's build system and package manager". Um, they've created yet another build system of their own, catering specifically to Rust?
Section 7: We are now using new terminology for compilation units? Including calling any old sub-module without a main() a library? "Most of the time when Rustaceans say “crate”, they mean library crate, and they use “crate” interchangeably with the general programming concept of a “library”."
Gritting teeth, carry on reading 'cos that is the job. Get to the FAQ:
Q: "Will Cargo work with C code (or other languages)?" A: "Yes!" "Our solution: Cargo allows a package to specify a script (written in Rust) to run before invoking rustc". Go that route and before you start into your Rust journey you need to know enough Rust to duplicate at least enough of the existing build system to cope with "Building a bundled C library" and "Finding a C library on the host system" - taking into account their methods of avoiding unnecessary rebuilds...
Maybe the next FAQ will ease our woes:
Q: "Can Cargo be used inside of make (or ninja, or ...)" A: "Indeed." ... "We still have some work to do on those fronts, but using Cargo in the context of conventional scripts is something we designed for from the beginning and will continue to prioritize." That is it. No link to any helpful hints.
Look, this is just a load of rants from a crusty, but if Rust wanted to be taken seriously as a language that we should migrate to, it should really help with migration. It is easier - a LOT easier - to drop a COBOL compiler into our build.
And I've not even touched on crates.io
> Rustup needing internet access
You do pick the specific version of the compiler you want, and they promise not to replace the packages behind your back. So the situation is pretty similar to any other project, just that rustup downloads the packages for you -- instead of you downloading them yourself.
> Building an existing C library
Yep, you need a build.rs to build non-Rust code with Cargo. There are lots of examples out there: check crates.io for any crate ending in -sys. Those will contain an example build script for some existing library. If you are lucky you might find that somebody else has already gone through the trouble for you.
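For the "building a bundled C library" case the OP mentions, a build.rs typically delegates to the widely used cc crate (declared under [build-dependencies] in Cargo.toml). This is only a configuration sketch; the vendor/ path and library name are hypothetical:

```rust
// build.rs (runs before rustc; not a standalone program)
fn main() {
    cc::Build::new()
        .file("vendor/foo.c")   // hypothetical bundled C source
        .include("vendor")      // hypothetical header directory
        .compile("foo");        // builds libfoo.a and tells Cargo to link it

    // Cargo tracks this file for rebuild-avoidance, addressing the
    // "avoiding unnecessary rebuilds" concern above.
    println!("cargo:rerun-if-changed=vendor/foo.c");
}
```

The cc crate handles finding the host C compiler and passing target-appropriate flags; for the "finding a C library on the host system" case, -sys crates commonly use pkg-config instead.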
Note that there is no real way around Cargo when using Rust: Cargo is the "official" interface to building Rust code. Calling the compiler directly is not supported. You cannot drop the Rust compiler into your build system (and get support for that combination!); you have to drop the Rust build system into your build system.
In the setup you propose I would not bother to switch the overall build system to cargo, but use cargo in the existing build system. Depending on the build system you use there are projects to help with that. E.g. corrosion-rs/corrosion on github helps to integrate cargo into an existing CMake project.
There is no problem with calling the compiler directly. Rustc is available, documented and usable.
If you have a small amount of Rust and a complex existing build system, there is nothing to stop you going that route.
This is a completely ridiculous thing for someone promoting such a new language to say, for one simple reason. How many libraries and frameworks for building real world applications exist for Rust right now? Whether I want to write GUI applications, scientific simulations, games, networked services or whatever else, there are huge amounts of mature, well tested C and C++ code with large user bases that have evolved and stabilised over years, providing functions that other programmers need.
Yes, they could and probably will be migrated to Rust, this will be a good thing. But walking in the door with a new language and declaring yourself 100% ready to consign the rest of programming to history is a stupid mistake, and one that has happened many times before, usually with complete failure because the evangelist who declared the revolution completely ignored the vast amounts of work done and experience gained with the "legacy" languages they thought they could easily replace.
"Now, both C and C++ are very flexible, but we're a long way from single processor/single core computers!"
Whilst that statement is true insofar as it's written, it's misleading to imply that just because *computers* are now almost never built around a single processor/core, there's no longer a need for languages targeted at such architectures.
Unless, that is, your view of the world of coding is narrow enough to believe that it begins and ends with desktop systems, and the billions of embedded processor systems out there (many of which most assuredly are still single processor/core setups, sometimes barely any more capable than a home computer from the early 80's) just somehow automagically do whatever it is they're supposed to do by guesswork/pure blind luck, and not as is actually the case, by having been programmed to do so...
Even in the world of embedded control units, multiprocessing is already well established, even if most units are probably single core. Language support for multicore definitely makes sense.
In addition, memory protection units are considered a necessity in automotive, because of the lack of C memory safety.
It definitely makes sense if you're working on a multicore/processor architecture, yes.
However, as I originally stated, and as you've just agreed with here, the majority of embedded systems are single core/processor in nature, hence why I took exception to the seemingly quite blinkered statement in the article that the world has moved on from such types of programmable systems hence the need to be adopting shiny new languages like Rust...
The counter-argument is that Rust, in order to be able to determine memory safety statically, makes it sufficiently harder for average developers to use it correctly that they immediately resort to "unsafe". I only have anecdotal evidence of that - the number of "how do I do that in Rust" discussions that seem to recur on Stack Overflow and the like without reaching a conclusion, and the extent to which proponents of Rust lament the way people fail to properly understand it and approach it with the right mindset: an appeal to true belief is always a warning sign in my experience.
I also fear there's simply too much of it - apart from the language itself with its pointlessly "improved" syntax, there's the build system, the packaging system and, of course, the macro system. It's a lot to learn - and most of it adds no significant obvious value. The language is also going to have to co-exist for some considerable time with others. If it appears in the Linux kernel it's still mostly going to be working with data structures whose lifetime is controlled externally and so beyond the reach of static analysis.
I'm happy to be proved wrong, but I'm afraid I start from a position of skepticism.
Edit: BTW, I see that posts are still inexplicably being moderated. Assuming the moderator permits, I'd just like to say cheerio to fellow commentards who have been incredibly informative and (mostly) good-humoured over the last 10 years. I've greatly benefited from your wisdom and will still occasionally visit, but I'm afraid the lack of spontaneity means participation has become unnecessarily tedious.
I see few crates (libraries in rust) that actually use unsafe blocks. Those that do tend to have a good reason for it: E.g. interacting with code in other languages via C bindings or hairy data structures.
The idea is to build safe abstractions around that unsafe code so that normal programmers do not have to deal with all the hardship. That is very much the same approach good C++ libraries around existing C code use as well: They hide away as much of the complexity of using the C interfaces in exactly the right way by providing simpler to use and safer APIs using C++ features missing in C.
You do have valid points about needless syntax "inventions".
BUT - unsafe code sections must be minimized+reviewed by fiat of the system architect and management.
Each unsafe section must be justified by a valid reason or be rewritten.
That would be the approach of a skilled and sane engineering organization. Rookies and dysfunctional organizations are not the fault of the language.
I don't code anymore, but my tiny distro now has two Rust compilers. For Mozilla products, fwiw...
One user comment
"That Mozilla's two most popular applications target different versions of Rust is telling. One would expect an organisation of their size to have standards to cover which version may be used in production code."
And dev commenting on this...
"Also, this demonstrates that Rust has something of a backward compatibility issue, making it a poor choice to develop in.
The name seems fitting, as Rust code quickly develops bitrot. I can't wait until we need to keep a third version of the Rust compiler on hand for the kernel."
Except for certainly not liking this, what do I know... ^ ^
It's because some projects (Firefox and, yes, Linux included) don't want to wait for certain language features to get added to the "stuff you can use in stable channel will be forward-compatible until the end of time except in situations like 'your code only compiled because it was unsound and shouldn't have compiled in the first place'" promise that came into effect with Rust v1.0.
(eg. Firefox was using SIMD intrinsics in Rust code years before the Rust devs committed to an API for them.)
You can keep track of the Rust-for-Linux progress on getting them stabilized or replaced at https://github.com/Rust-for-Linux/linux/issues/2
Memory safety isn't just about "corruption", "leaks" and so on; it is sometimes critical to know how long an allocation will take, and to be able to precisely determine the behaviour when memory resources are low. Rust is only able to help with a subset of these cases (as is C++), so there is still a lot of responsibility on the programmer to "do the right thing".
Does Rust allow memory allocation to be controlled in the same way as C++ does (user-provided allocators and the like)?
A lot of safety systems prohibit the use of dynamic memory (or at least limit it to startup), which can be a problem with C++, which has "hidden" dynamic memory allocation (for example, when throwing some types of exception).
> Memory safety isn't just about "corruption", "leaks" and so it;
You can leak memory in safe Rust: That is fine and not a safety issue at all.
> Does Rust allow memory allocation to be controlled in the same way as C++ does (user-provided allocators and the like)?
That is still in the works: Linus rightfully insisted that this is needed to use Rust in the kernel. By default, Rust aborts when running out of memory.
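One piece of allocation control that is already stable is replacing the global allocator via `#[global_allocator]` (per-collection allocators are the part still in flux). A sketch that wraps the system allocator to count bytes handed out; the type and counter names are made up for the example:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Running total of bytes requested through the global allocator.
static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

// An allocator that delegates to the system one but records sizes.
struct Counting;

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

// Every heap allocation in the program now goes through `Counting`.
#[global_allocator]
static GLOBAL: Counting = Counting;

// Measure how many bytes a 1 KiB Vec costs through our allocator.
fn bytes_for_kb_vec() -> usize {
    let before = ALLOCATED.load(Ordering::Relaxed);
    let _v = vec![0u8; 1024];
    ALLOCATED.load(Ordering::Relaxed) - before
}

fn main() {
    assert!(bytes_for_kb_vec() >= 1024);
    println!("tracked at least 1024 bytes of allocation");
}
```

The same hook is how embedded targets plug in a static-arena allocator, which is one way to satisfy "dynamic memory only at startup" policies.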
> A lot of safety systems prohibit the use of dynamic memory
Rust comes with three standard libraries nowadays: core, alloc and std. Core holds all the core stuff and never allocates memory. The alloc crate adds memory allocation and the data structures that need it. The actual standard library (which core and alloc were split out of) is basically a combination of those two plus a few extras.
It is perfectly possible to write code using core only -- that is widely used in embedded development. There is an ever-growing library ecosystem that is "no-std" and avoids alloc, or allows opting out of alloc use.
More specifically, those "few extras" that `std` adds over `alloc` are APIs that depend on having userland things like a filesystem (`std::fs` stuff like Rust's analogue to opendir(3)/readdir(3)), a network stack (`std::net` stuff like Rust's analogues to connect(2)/listen(2)), etc.
Thus, for a microcontroller that can support a heap allocator but has no OS, you can implement a backend for `alloc` without pulling in those things, but being able to do so doesn't mandate that it be done for bare-metal platforms with no heap.
This is more about the ecosystem than the language itself.
The amount of "crates" typically used, and the stupid simplicity of a lot of them.
Sure, it's stupid to reinvent the wheel, and it's good practice to make use of good and established libraries, but the usage by Rust folks is more akin to that of the node.js folks. Some of these libraries are only a few lines long!
I guess that is what happens when you make it easy to depend on other peoples code.
With C++ its such a hassle that many people end up rewriting the world... or they go for one of the big frameworks like Qt that have everything and the kitchen sink built in to avoid having to deal with dependencies too much.
Upvoted, even if in the industry I'm in it is basically irrelevant whether it takes you one minute or half a day to add a library. What matters are the hours and days every additional library may generate in clearing it for use, tracking safety issues, and configuration management, over the decade(s) the application is likely going to exist.
When I'm on a new (usually bare-metal embedded) platform using C/C++ the first things to get integrated into the build tree are boost, pcre and a decent JSON library such as RapidJSON. That usually covers 75% of my requirements apart from protocol stacks. (Think lwIP/mbedTLS, Lely CANopen etc.)
For embedded Linux once I've got buildroot or yocto to behave the options are much broader of course.
There exist ones that are only a few lines long, but they're typically written by newcomers coming from an ecosystem like NPM and, on average, people are generally more averse to actually using them and more concerned with supply chain attacks than in the NPM world.
(Source: Among other things, I've been hanging out in /r/rust/ since around 2013 and observing the reactions to announcements of that sort of crate.)
That said, one reason you would see more smaller crates is that, in Rust, the crate is the primary unit of parallel compilation, not the file, so, within reason, smaller, more numerous crates allow compilation to be parallelized more effectively.
Write a program in Rust (probably better in Perl IMHO) that converts C to Rust - you can call it crust - I won't trademark it.
Then run the Linux kernel code (and the GNU stuff) through crust and fork a new distro, let's call it crustux and see how well it goes.
Let's face it, most of the code at this level is idiomatic so it should be a fairly simple task to do the convert.
It'll be useful as well, it will show up all the errors in C that are in the kernel and hopefully lead to a better kernel so those of us who are not bright enough* to master Rust and love the simplicity of C can be left in our own little C kernel space in peace.
* me included because I have tried Rust a couple of times and keep falling over when I hit str and String and borrows and traits and structs with strings....
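For what it's worth, the `str`/`String` stumbling block in that footnote usually comes down to one distinction, sketched here: `String` owns heap data, `&str` is a borrowed view of it, and functions conventionally take `&str` in and return `String` out.

```rust
// Borrow a string slice in, return a newly owned String out.
fn shout(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let owned: String = String::from("hello"); // owns heap data
    let borrowed: &str = &owned;               // cheap borrow, no copy
    let loud = shout(borrowed);                // &str in, String out
    assert_eq!(loud, "HELLO");
    assert_eq!(owned, "hello"); // still usable: we only borrowed it
    println!("{owned} -> {loud}");
}
```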
The Linux kernel is in GNU's extended C dialect, and it builds its own fanciness on top of that, so it's not the C99 that the c2rust tool knows how to ingest.
(Yes, such a tool exists. Its purpose is to produce `unsafe`-heavy Rust code that continues to pass all the same test suites, so you can take what it spits out and focus on the refactoring that can't be reliably done mechanically.)
The problem with any "new, improved" language is that the creator not only invents numerous round wheels but also tends to assume that their creation will be the one and only language used everywhere and anywhere for ever and ever. So the basic syntax of Rust could be made the same as C but the result wouldn't look new so the language has to have a syntax that's nearly but not quite the same because otherwise it wouldn't be new.
A lot of the language features come from integrating the linker and package management with the language itself. It's easy enough to enforce types in C: most programmers don't use generic types anyway, they use synthetic types similar to those Rust has, and by turning on warnings and using lint most type SNAFUs can be managed (I tell people to "beware of implicit casts" -- just because you can fit bit pattern 'A' into 'B' doesn't imply equivalency of value). At the cost of becoming a bit unwieldy, Hungarian notation (effectively systemized in C++ as name mangling) can be used to manage typing and turn lukewarm warnings into hard errors. Module linkage doesn't belong in a language definition, nor does package management -- and I think there's a special place in Hell for those programmers who think it's OK to structure code using the directory tree of a development computer; you structure things logically so you can find them, not by the development system.
Anyway, that's my 2c's worth, such as it is. If I can get a Rust compiler to compile a module I can link with other components I'll try using it -- I've always been focused on embedded, so "Hello World" really doesn't cut it (I need to know a lot more about the system than an ELF file). YMMV.
I could have done the work in C or C++* but I wanted something different - purely for the sake of it, I'm comfortable with them so why not try something else.
I started with Go (where else?) but didn't like the enforced style and funny brace rules, then went to Rust and spent a lot of time reading the book and trying different things and tutorials. I felt quite good and happy with how it was going, then went to write my first bit of code for the project and started getting warnings and compile failures I'd never had before, so it was back to the tutorials and experimenting.
In the end I got fed up and found D (https://dlang.org), and it's really quite nice. There are a couple of minor issues with it, but the community is very helpful and it's not been that difficult to get up and running. It certainly fits my brain wiring better than Rust.
I had my first module written after about four hours and while not complete, it's workable for what I want at the moment and will grow as the project grows.
And there's no evangelism either, no denigrating other languages or telling you how D's going to take over the world.
It's just quietly good.
This is where I think Rust is really getting it wrong: it's being rammed down our throats, but it just ain't a better mousetrap.
*I used to like C++, but C++ as it was in the early days. It's now a bit of a dog's breakfast, with C++11 or C++17 etc. flavours, new ones coming every few years, and each new one getting more arcane (and still not trapping IDIV0 exceptions). And (or && if you will) constantly changing the spec says to me that they haven't got it right.
I learned C++, but in the end I was working mostly with C libraries and therefore ended up coding in a sort of C+-. Wrappers were out of the question because they take too long to write and subsequently cost too much, so I take advantage of some of the STL to reduce documentation time, and of classes for compartmentalisation. It works and it gets the job done without project overruns.
If I had to do my time all over again then perhaps I would have stuck with C, but the idea of learning the new syntax of the latest and greatest over and over again is just horrific, as sooner or later I would find myself convoluting the languages to such an extent that I would not be good at any of them. "Do one thing and do it well", as they say. Having said that, I have found myself having to learn some JavaScript, as I am writing a web interface in C++ using CGI.
If you look at CVEs by programming language then yes, C is at the top, but it's been around a very long time. Even when you count total CVEs, ignoring the years, Java and PHP are ahead of C++, which has been around twice as long as the other two.
My point is, people are going to find interesting ways to shoot themselves in the foot.
For C++ at least:
"We can now achieve guaranteed perfect type and memory safety in ISO C++," Bjarne Stroustrup, creator of C++, told The Register this week.
It's there if you want it, or you can be unsafe....just like Rust.
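For the avoidance of doubt, Rust's escape hatch is spelled `unsafe`, and it's just as available as Stroustrup's safe subset is optional. A toy sketch of my own:

```rust
// Rust's opt-in unsafety: dereferencing a raw pointer, which the
// compiler refuses to do outside an `unsafe` block.
pub fn peek(x: &i32) -> i32 {
    let p: *const i32 = x; // making a raw pointer is safe...
    unsafe { *p }          // ...dereferencing it is not, so you must opt in
}
```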
The whole claim that 70-80% of bugs are caused by memory safety and it's all on C and C++ is complete horse shit. It's a made-up-on-the-spot statistic. You can point to actual CVEs by language, and it shows that all the managed languages are just as bad in terms of serious CVEs.
There is a scene in an episode of MASH where Major Burns is lecturing on the strategy of Rommel, and Hawkeye slyly notes, "So now we're taking advice from the losers?"
At what point does anybody from Microsoft think they can advise anybody on anything about secure programming, design, not being a complete dip shit, and not making really, really shitty software?
I took it as a general OK that Rust was likely pretty good. Now that MS thinks it's good, I have to re-evaluate my preconceptions.
Everything MS touches turns to shit.
It's worth bearing in mind that Rommel did wonders in North Africa because he was able to read the Allies' mail -- an American diplomat in Cairo was sending comprehensive reports back home using a compromised coding system. Once that was dealt with, things reversed pretty quickly. Sometimes genius gets a helping hand.
>Everything MS touches turns to shit.
Corrodes, more like it.
Surely this should read "with great power comes great responsibility*"?
When writing in a computer "language" with "known issues", surely it is the responsibility of the programmer to verify that they cater for those "known issues"?
Not much different to developing a web site and catering for all the "known methods" of malicious code injection, etc, etc
* Sorry - bit of a Stan Lee / Spider-Man fan...
> In 1793 the following statement appeared in a volume issued by the French National Convention as mentioned previously:
>
>> Ils doivent envisager qu’une grande responsabilité est la suite inséparable d’un grand pouvoir.
>>
>> English translation: They must consider that great responsibility follows inseparably from great power.
>
> [...] The transcript of Lamb’s words in 1817 used quotation marks to enclose the maxim indicating that the expression was already in circulation. [...]
-- https://quoteinvestigator.com/2015/07/23/great-power/
I initially learnt Rust by recoding some personal pet algorithms to see if I could make them as fast as my optimised-over-many-iterations C99. Almost made them as fast; got close enough. It took longer than I expected, because new terms for old concepts slow me down. I'd prefer libraries to be called libraries, for example. That's not unique to Rust, but with each language I try, I live in hope they're sensible with terminology.
Second project was small contract-work-for-a-friend to split up a C tool that had got too big into separate C tools. The majority was just moving code around, so I finished with time to spare. They asked me to rewrite one smaller chunk in Rust so that they could compare in house and it went OK - I found (well, the Rust compiler found) two edge case memory issues while converting over. Arguably neither were dangerous as they both required an impossible state to be reached but it was interesting. Today's impossible state might be tomorrow's feature.
The feedback I heard at the time from their internal team was that they liked Rust and would consider using it for new stuff but only because the developers were already keen, not because they suffered from client-losing memory bugs (their test regimen was impressive). I know since that they're building the next version in Rust and the downside they have is that some libraries they depend upon in C/C++ don't have Rust counterparts (yet) so that makes for a more complex build and development cycle.
I don't think Russinovich's Tweet has helped anyone. I'd rather he hadn't tweeted that. Opinions are like bum holes. Everyone has one but you should take care getting it out in public.
> The feedback I heard at the time from their internal team was that they liked Rust and would consider using it for new stuff but only because the developers were already keen, not because they suffered from client-losing memory bugs
I heard recently from my San Francisco-based overlords that they are now regretting having adopted Angular* because it's currently hard to recruit anyone** if you're not doing React.
Code is code, FFS. If you can write code then the choice of language, let alone framework du jour is at best a secondary issue.
-A.
* Personally I regret their having adopted Angular because it's a pile of shit.
** Difficult in San Francisco, that is. So far as I can tell the only career consideration there is what looks good for the next five minutes on your CV. I mean "resumé".
I don't see how Rust can really be considered any safer than C++ when it is still possible to have runtime errors when indexing into an array out-of-bounds.
That is the most common safety error in software.
You can avoid it by using iterators but that's also true of C++.
Of course, Rust programmers will say that a panic is safe compared to undefined behaviour, but it's still an undesirable runtime bug.
And the author does not consider any of the downsides to using Rust which in many cases will outweigh the benefits of easier safety. You can also achieve safety easily in modern C++ with static analysis without having to give up backwards compatibility.
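To make the indexing point concrete, a sketch (my own toy functions): `v[i]` is bounds-checked and panics when out of range, while the iterator and `get` forms sidestep the index entirely, much like the C++ iterator argument:

```rust
// Indexing past the end panics: a defined, immediate abort rather than
// undefined behaviour, but still a runtime failure.
pub fn sum_indexed(v: &[i32]) -> i32 {
    let mut total = 0;
    for i in 0..v.len() {
        total += v[i]; // bounds-checked on every access
    }
    total
}

// The iterator form can't go out of bounds: there is no index at all.
pub fn sum_iter(v: &[i32]) -> i32 {
    v.iter().sum()
}

// The non-panicking alternative to v[i]: callers must handle the miss.
pub fn try_get(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied()
}
```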
You are confusing safety with program correctness. Safety is for security purposes: a buffer overflow can allow an attacker to read or write to arbitrary memory. A real world example of this is the Heartbleed vulnerability, where a buffer overflow from unchecked array indexing opened up a security vulnerability that could allow attackers to read sensitive data such as private keys, session cookies, passwords, etc.
A memory safe program will immediately terminate on an out of bounds index error. C and C++ programs potentially "soldier on", which enables exploits. In most cases, the exploit can then commandeer the process and read its full memory. Or it can perform arbitrary modifications of program behavior.
Worst of all, in the C or C++ case the user will not be aware of the intrusion.
I think it's a bit irresponsible to look at memory issues in programs and assume that the one solution is to use a 'memory safe' language. Many things can cause these issues to show up more frequently, such as having a very complex program or trying to get a job done quickly, which means there are other solutions too, like not making a stupidly complicated program, and not rushing to patch up other issues in the program.
I also think that the data being presented is unfair. Microsoft and Google are pretty notorious for writing bad code. The Chromium browser is probably one of the top 5 most complicated programs that exist, so it's insane to say it speaks for all C and C++ code.
I personally use a lot of suckless.org software, the majority of which is written in C, and the programs are incredibly reliable, despite being written in an "unsafe" language. Is it really so much of a language issue if these exceptions exist?
Of course I'm not against using higher level languages. Some languages are better suited to some jobs than others, and can make programs less of a pain for the devs and more stable for the users. Still, Rust is not the only language with built-in safety. Just take a look at Go. Go is fast, has garbage collection, type safety, a package and module system for library usage, forces the programmer into good habits like error handling, and is developed by great programmers who even worked on Unix and Plan 9. For microcontrollers, there's TinyGo.
It's just obnoxious seeing all these rust evangelists act as if there's no other solution to the problems that rust solves, and that rust itself doesn't have problems.
That's fair.
The big reason I don't see Go, for example, as a viable alternative is that garbage collectors are solitary creatures and CPython already has one. My preference for QWidget GUIs, without having to write something memory-unsafe like C++, basically means PyQt, PySide, or QtJambi; I'm not a fan of Java, and I like to write components I know I'll be able to reuse across all my projects.
(Well, that and I don't like how limited Go's support for metaprogramming and DSLs has been compared to Python, Rust, etc. and how readily you find yourself using structural rather than nominal typing. Serde, Clap, Rayon, and the typestate and newtype patterns FTW.)
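Since I name-dropped the newtype pattern, here's the two-minute sketch of why I rate it (toy types of my own invention): two values that are both a `u32` underneath become un-mixable at compile time.

```rust
// Newtype pattern: wrap a plain integer so that user IDs and group IDs
// are distinct types, even though both are just a u32 underneath.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct UserId(pub u32);

#[derive(Debug, Clone, Copy, PartialEq)]
pub struct GroupId(pub u32);

pub fn describe(u: UserId, g: GroupId) -> String {
    // Passing a GroupId where a UserId is expected is now a compile
    // error, not a latent bug.
    format!("user {} in group {}", u.0, g.0)
}
```

The wrappers cost nothing at runtime; they exist purely so the compiler can catch argument mix-ups for you.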
I am sure I read this piece a couple of years ago, but the replacement back then was Python. I usually agree with what Linus has to say, and the philosophy behind Linux has always been "do one thing and do it well". That has worked well over the decades, while the bloatware that is Windows grinds to a halt with every update until you replace your hardware to cope with layer upon layer upon layer of convoluted junk that only one tiny department in MS truly understands, itself built on another layer they have only a passing acquaintance with.
I see the problem with a lack of C developers, but I really don't think that we should look towards Windows for a solution.