VI ... always ... though it's the vim incarnation these days ....
RUST never sleeps
The Linux kernel is 33 years old. Its creator, Linus Torvalds, still enjoys an argument or two but is baffled why the debate over Rust has attracted so much heat. "I'm not sure why Rust has been such a contentious area," Torvalds said during an on-stage chat this week with Dirk Hohndel, Verizon's Head of Open Source. "It …
I'll use all of them as required (e.g. for quick edits to a file in a remote session, vi is often most efficient), and I'll make sure novices' default git commit editor is set to nano (because life is too short), but Emacs remains my go-to precisely for its ease of use as a windowed editor with advanced region selection (rectangle select is particularly handy at times) and keyboard commands for things many other editors don't have.
It's a bell curve, dude...and you're at the hump.
Newbies start out with nano (because it's easy), then spend ages learning vi/emacs (because it's company time and using it looks badass), then when they're experienced and just want the job done (because they want to get to the pub), they go back to nano.
You just can't match the face-melting pace you can achieve with nano, and it's entirely because of its extremely stripped-back feature set...which in itself is a feature.
Nano is the Porsche GT3 of text editors, blazing fast, simple and pure...vi/emacs are like the Subaru WRX of text editors...the performance is there but so many buttons and knobs have been added to the dashboard, that by the time you've got it "just the way you like it" your friends have done 8 laps each in their Nano editor and gone home...and if you get the fastest lap, nobody cares because they've moved on to the pub and they didn't hang around to see it.
Since reading is hard:
emacs: nano does not exist as a gui application
vi: use anywhere, for when you have to edit something and would rather do it in ten seconds than start installing nano every time.
nano: life is too short to teach anything more complex to someone to whom you also have to explain version control
"than start installing nano every time"
Nano is installed by default on all distros that matter that you would use in production. What are you talking about?
"emacs: nano does not exist as a gui application"
Who the fuck uses the GUI version of emacs? That's like wearing wellies in the shower.
"nano: life is too short to teach anything more complex"
Precisely, why use something complicated if it is unnecessary?
"to someone to whom you also have to explain version control"
If you're in version control territory, you're probably also in IDE territory...in which case IDEs exist that are better than all three and it's no contest.
All you need in a text editor is the ability to delete a whole line, jump to a specific line, some basic syntax highlighting and the ability to save and close easily. For everything else, there are IDEs.
If you're in the territory that demands a more advanced text editor, what you really need is a basic IDE...Zed is turning out to be a pretty damned good solution for this...I really dig Zed as a fuss-free, distraction-free, simple IDE...it gives off the vibes that the early versions of Sublime did...until it went off the rails and started trying to compete with Atom and later VSCode...what Sublime should have done is gone the opposite direction and made things slicker and less complicated.
There are plenty of solutions out there that don't know what they are or should be that piss me off because they're everywhere...like Notepad++ for example...aeons ago Notepad++ was a pretty good tool, it did what it said on the tin...but modern versions of it make it more like a shitty IDE than a solid advanced text editor...you're better off with a basic IDE than garbage like Notepad++.
"All you need in a text editor is the ability to ::mercy snip::"
No.
That might be all YOU need in a text editor; many of us, probably the vast majority, have vastly different needs.
As a side note, I have never met an IDE that allows me to do anything better or faster than from vi. Scripting is your friend.
Absolutely this. Good programmers are lazy in a very inefficient and cognitively dissonant way. For example, I will build a completely new tool just to avoid doing something I can't be arsed with in case at some point I have to do it again. I've got custom built tools up the wazoo for this sort of thing.
Le Dev: Man, writing these scripts will take ages. I guess I'll build a tool to build the scripts then.
One step backwards to achieve two steps forward...but next time round, it'll be two steps forward, but no steps backwards...we hope.
Agree -- Nano or Joe or any of a dozen other simple, straightforward, text editors.
Unless ... you are addicted to one or more of the features of emacs or vi.
In my case, it's emacs org mode. Outlining, simple spreadsheets, tables, links -- pretty much everything you could want in a Personal Information Manager. And if you aren't completely happy with some feature, you can change it yourself. Good news, bad news joke there. The good news -- Yes you can easily modify how emacs works. The bad news -- emacs is written in lisp.
I went from Vi to emacs, but was then bouncing around and guesting on so many systems that didn't have emacs installed I reverted to Vi and forgot emacs. Later I did much the same with Vim. I never ended up working on a Unix system without Vi installed.
Emacs was better but Vi was universal.
"keen to take on old C code, Rust will inevitably triumph."
Didn't downvote, but by the time all us crusty C programmers have shuffled our way into retirement, it'll be "the hot new thing" designed to fix all that was wrong with Rust, and people will be arguing over that instead. Remember C++ was going to take over the world? Then... I don't remember the names - Swift? C-hash? Some others, and now Rust. It's a churn that slowly turns, relentless and unending.
<sarcasm>I say all this modern crap is far too inefficient and bloated. We ought to go back to basics and do the job properly and write stuff in assembler, because back then machines booted in seconds and didn't crash every five minutes. Mine's the one with the 6502 datasheets in the pocket.</sarcasm>
I'm not going to say that you're wrong. Rust was designed to fix the current major issues with using C/C++ and to provide some new, modern approaches to difficult problems in C. But once C is mostly, slowly, replaced by Rust, I'm sure that if new classes of problems need a whole new approach and architecture that Rust is too simple a language for, then certainly new programming languages will be proposed.
Rust's aim - replacing C with same-ish performance, for the same roles, fixing the worst classes of problems and adding some nice modern approaches to things that were bolted onto C later - is probably what you'd want from a Rust replacement too, and it's going to take quite some time and usage for us to figure out what the next class of issues is, as well as to bolt onto Rust features it was never designed for.
It's not so much the language that's bloated, it's the reliance on libraries to do anything interesting where things go wrong, and the inevitable lack of organisation that can result.
The Doom 3 and FreeSpace 2 source code are well worth studying. Positively elegant, in fact. Very much the exceptions.
Rust and C are no different as regards good/bad organisation being applied to library usage. Rust advocates who know more than I do say that certain issues with memory management are mitigated by Rust. That's generally a good thing. Have I a project ready to use as a learning exercise for it? Not right away... but it's on the radar to try.
@Binraider "it's the reliance on libraries to do anything interesting where things go wrong"
What, is the C standard library not good enough for some people, that they need to rely on third party libraries? Or are developers too lazy to write their own libraries? ;)
While I was only joking, the production engine side of our software is written in ANSI C (it only uses the C standard library). It has to be ANSI C as it is compiled for multiple* OSs and hardware, including mainframes. Now, I'm not sure I would call what the production engine does interesting, but it is large and does do a lot.
* It was 7 when I retired a couple of years ago.
I know there was a joke alert there. The standard library is of course a very, very good thing, and probably implements functions better than you or I could write them ourselves.
It's the hell that ensues when you spread out into the Windows (or other) APIs, the never-ending uncertainty of which module is going to change and break tomorrow. In the very, very worst cases I've seen the rules for handling maths change on a minor version increment of the .NET framework, which was supposedly only there to do UI elements. The maths engine was totally separate code sat in Fortran '77. Nonetheless, from a user's point of view, the resulting change looked like the answers changed overnight.
This of course creates trust issues, both in your program, and in .net.
Such is the nonsense that I'm a total advocate of text-based IO all over again. I don't need the shiny, it's easier to code, and it sure as hell is half a dozen library calls I can forget about entirely. That's a good thing!!
"which module is going to change and break tomorrow"
C's upgrades are a lesson in compatibility. I think they have (maybe) finally removed gets() after warning about it for an eternity. But, so long as you remain with the standard libraries and nothing system specific, today's compiler will cope with code written decades ago. Hell, it'll even work (with warnings; might need a switch turned on) with the ancient K&R dialect.
I, on the other hand, fear server updates as there's rarely been one that hasn't bumped PHP to a later version...and broken stuff.
And, of course, there's the Python 2/3 debacle.
Let's hope Rust will follow in C's footsteps and not go breaking things because <excuses>.
Python, C#, Java; they've all introduced breaking changes and broken old code. When I'm choosing a language for a project, "longevity" is always a serious concern. It's becoming very difficult to pick a language one can be confident will be backward compatible in 10 years' time. Python, C# and Java have all caused problems with their changes. Rust isn't stable yet, though it must surely be getting close. ISO Rust, anybody? Actually, cementing it as a standard is probably a necessity, as it provides a baseline one can be certain of.
I think this is why C has dominated some arenas so long - it's a lowest common denominator. However, Rust is certainly stirring things up. There's a risk now with C/C++ that there won't be anyone left to maintain it in, perhaps, as little as a decade. With existing C/C++ code bases, one is almost obliged to start converting it to Rust whilst there's still C/C++ developers around to understand the old code base, just in case Rust does end up winning.
We have been here before, though with fewer consequences than today. Assembler once ruled, and then C came along. We don't look back on the assembler -> C transition as a big mistake. Trouble is, we've a lot more C code hanging around today that we need to preserve somehow than we'd had assembler back in the 1970s.
I took the 1993 C code for the Kit-Kat on-screen kitchen clock, unpacked it into a dir on my Devuan Linux system, and did make (or was it nmake, or imake, or ...) on it.
The compiler threw many purple warnings on the screen, but made sensible assumptions, and in the end, I had an apparently-working program.
Yeah, not so sure C/C++ and developers for those languages are going to just disappear in 5 to 10 years.
From my (very limited) experience and from what I have seen, there are a fair number of youngsters eager to learn C and, to some degree, assembly too, especially on the embedded side of things ... some of the Rust stuff I've seen is just using a C library anyway...
C and C++ are mostly the languages that most robotics platforms are written in, and they ain't going away any time soon.
"one is almost obliged to start converting it to Rust whilst there's still C/C++ developers around to understand the old code base, just in case Rust does end up winning."
Nah. If Rust is truly better than C for kernel (driver) work, Rust will take over as kernel coders spread the news.
It's not happening, ergo ...
> reliance on libraries to do anything interesting
The Rust standard library provides abstractions around most of the C standard library, and for anything else C relies on libraries "to do anything interesting" to precisely the *same* extent that Rust does.
https://wiki.alopex.li/LetsBeRealAboutDependencies
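To make that concrete, a minimal sketch of the kind of abstraction meant, assuming nothing beyond the Rust standard library (the file path is just an example): where C reaches for the fopen/fread/fclose dance, one std call covers the lot.

    use std::fs;

    fn main() -> std::io::Result<()> {
        // One standard-library call covers open/read/close and UTF-8 checking.
        let text = fs::read_to_string("/etc/hostname")?;
        println!("{}", text.trim());
        Ok(())
    }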
Quote
"I say all this modern crap is far too inefficient and bloated. We ought to go back to basics and do the job properly and write stuff in assembler, because back then machines booted in seconds and didn't crash every five minutes. "
Assembler!!!!! should be writing the code in binary and then inputting it via bit switches.... if you can't do that, ya not a real programmer.
I'm old >>>>>
should be writing the code in binary and then inputting it via bit switches.... if you can't do that, ya not a real programmer.
In my day we had to upload the code to our henge by chanting while dancing round with no clothes on and sacrificing a virgin. And they're not easy to find in Glastonbury during feasting season!
I mean yes. But I value the old kit for a different reason. Whilst I’m quite happy to write multithreaded C code, my old brain isn’t quite so capable at handling multiple concurrent threads. And these blasted new computers with their AIs and enough memory to handle a thousand simultaneous tabs and windows. How am I supposed to remember which one I was working in?
Better a nice simple computer with only a dozen windows at most, and an elegant architecture. 68K for the win!
/shoutingatclouds
The trouble is that there’s some deep seated hardware problems that exist in all CPUs because they have to run C and support SMP. All our operating systems are written in C around the assumption of an SMP hardware environment for multi core devices.
And this is a huge problem, so large that few even realise it's there. All the recent CPU flaws that have been damaging involve the complexities and difficulties of having caches and pipelines in an SMP environment, which have become unavoidably necessary for those devices to run fast. Meltdown, Spectre, and their like are all due to mistakes made in the combination of caches, pipelines and the faked SMP inherent in having multiple memory controllers for multiple cores and CPUs. This is what allows one program running in that environment to exploit flaws in the SMP implementation to extract data from another. It's also very difficult to write correct code for parallelism in an SMP environment.
If one ditched the need for hardware SMP and every core had its own memory system and cache, and they existed on an inter core network that was exposed only as an API, we’d be able to shake off these hardware problems and make progress with device speed. But none of our existing C would run.
A language like Rust, with its total knowledge of which function owns what data, is very well suited to such a hardware environment. Borrows and lending become transfers across the CPU inter-core network, and the functions can run in parallel.
We’ve had architectures like this before. Transputer back in the 1980s, 1990s and Cell from IBM for PS3 were non-SMP architectures. And surprise surprise there is indeed a CSP implementation for Rust… And writing parallel code based on CSP instead of SMP is a whole lot easier, and far simpler to make it reliable. Go also has a CSP implementation, in fact that’s its entire reson d’etre.
With CSP one can even prove correctness (no lock ups) mathematically if you wish.
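As a minimal sketch of that style, using nothing but Rust's standard channels (no particular CSP crate assumed): ownership of the data moves with the message, which is the "borrows become transfers" idea in miniature.

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        // Each "core" owns its data; the only sharing is an explicit message.
        let (tx, rx) = mpsc::channel::<Vec<u32>>();

        let worker = thread::spawn(move || {
            let data = vec![1, 2, 3];
            tx.send(data).unwrap(); // ownership of `data` travels with the send
            // `data` is no longer usable here; the compiler enforces it
        });

        let received = rx.recv().unwrap();
        println!("sum = {}", received.iter().sum::<u32>());
        worker.join().unwrap();
    }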
The crazy part today is that Go implements a CSP environment on top of a faked SMP environment (faked by today's CPUs), whilst the underlying hardware of multiple cores, multiple memory controllers and high-speed inter-core connections like QPI has far more in common with Transputer architectures than it does with the notional everything-on-one-bus architecture that most programmers have been clinging to, 30 years after such hardware architectures became obviously obsolete. It's a testament to the ingenuity of CPU manufacturers that they have been able to hide that obsolescence this far whilst preserving software compatibility, but we're now paying a high price for the mistakes that now seem impossible to avoid in the hardware implementations.
Biggest Bet In The World
There will have to be a big hardware crunch at some point, and software survives that crunch if it's written in languages like Rust and Go that can better roll with that hardware change. Those are the stakes actually in play in the Rust vs C debate, though almost no one really appreciates it. But even if they don't see it and don't bet to win, that is the price people will be obliged to pay sooner or later.
There’s been some good articles on this topic here on El Reg.
D came and mostly went in the first decade of the 2000s. It is still clinging on with a few die-hard self-flagellators.
There is even a SafeD with the same goals as Rust.
E is older than D and is a subset of Joule.
There is an F, which is Fortran-derived, and also Microsoft's F#.
There is more than one G language, not including GO(lang).
Language H is based on Cobol.
....
The classic argument: it's the universities' fault. They teach the kids and that is what dictates absolutely everything to do with the whole industry apparently.
It's an old argument, and one that doesn't make a lot of sense most of the time it is used. If it were all just left up to recent university graduates and what they learned at university, then what do you think would happen? It wouldn't be what we see happening now.
They teach the kids and that is what dictates absolutely everything to do with the whole industry apparently.
They said that about Pascal over four decades ago - "the language of the future" - where is it now?
The universities dropped it because they wanted to teach OS design in C -- because they got a free OS, and free OS source, in C.
And that was explicitly the reason industry gave for dropping Pascal -- because they were getting new graduates trained in C, not Pascal.
Could someone who knows Linus ask if, when it's time for him to depart to the aforementioned repository*, we, the Linux userbase, may have a whip-round/crowdfund for his body to be cryo-preserved, with the hope that sometime in the future he'll be defrosted and re-animated to continue where he left off?
*when Linus enters the aforementioned repository, would that then become an instance of a "git" repository?
That's actually a sound prediction/observation and very true in general. As the great philosopher of science Thomas Kuhn once observed: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
It's not only true for scientific paradigms but also for many other trends too (adoption of a modern language).
I'm not entirely sure if it will work out this way but I largely agree with this. I foresee governments actually mandating the use of Rust (or other memory safe languages) in their systems, especially the military.
I mean, tens of thousands of systems have been written in Java and C# in the last 30 years and there've never been large scale memory related errors in them allowing miscreants to take them over. Memory safe languages therefore deliver on their promise!
> and with a lack of new younger developers/contributors keen to take on old C code
But will those younger developers/contributors be keen to learn Rust…
Looking back at the early 1980s, Ada effectively sat in the same place as Rust today; the masses of programmers chose C and VB, perceiving Ada as being too hard to learn and too demanding with all its compile time strictures.
Perhaps people are beginning to wake up to the need for programming to become more engineering than art…
Speaking of hard truths:
More new programmers learn, and become proficient in, C than in Rust, by orders of magnitude.
Further hard truths: C will still be a core language of many projects in 50 years. Whether Rust will still be around in 1/5th of that timeframe, is debatable.
I have C programs and emacs functions from the early 1980s that still run without modification. And not just in vintage environments -- with modern compilers and on modern machines.
I change Makefiles occasionally and don't have the same expectations there, but there might be a few still around and useful from that era.
I've written code in other languages that has been irretrievably obsolete in less than a year. IDEs rarely remain consistent for more than a few years. I'll use them, but still fall back to C and plain Makefile build specifications if I expect to run something in the future.
My understanding is that for "better" Rust support, some things have to change, and that means the C devs have to accommodate. And I think that's the bigger issue: having to make changes to code purely to aid Rust support. There's a "don't fix what isn't broke" sort of mentality about it.... but since Rust dev has already come into the kernel and is "blessed", ultimately work will have to be put on the C side's plate to accommodate it. So the "war" (if we can call it that) is over who is responsible for accommodating changes for Rust.
So... I think this "war" will continue for a bit. I think once enough changes are made to the C code on Rust's behalf, the war will ultimately end.
Who knows, maybe someday, Linux becomes the successful all Rust OS (vs something like Redox)?
An argument I saw somewhere else (possibly in the comments elsewhere on the Register) was that there isn't a firm, fixed standard for Rust yet. If that's the case, then Rust has the potential to be a moving target for code compatibility. However, I'm saying this with no knowledge of Rust, and haven't touched C in the last 20 years.... Other, wiser people will probably be able to correct me!
So, from my very rough understanding of the debate, one of the big issues is that to create proper Rust function bindings, knowledge about the parameters in terms of type (of course) and also the borrowing/ownership aspect is needed. This means that the function needs to be defined in more detail than it is at present.
Some of the maintainers of those functions have neither the time nor the will to properly define them, while the documentation often simply doesn't exist: you just have to look at the code and infer it. The detractors also don't want those function signatures or semantics set in stone: if they change them, then the Rust bindings will be wrong and it will all come crashing down.
I have sympathy for both sides. On the one hand, many of these existing maintainers are busy, overworked and often underappreciated. They don't necessarily see the benefit to them, it is extra work to document these functions, and they don't want the fallout to come to them just because they change the interface. On the other hand, this is just the kind of attitude that can be the source of some of the memory leak/lifecycle issues that kill programs: poorly documented object lifecycles, or modifications/enhancements from an inexperienced contributor getting past review because they misunderstood some aspect that is not necessarily that obvious. Rust tries to make some of these aspects more explicit, and that means extra but necessary work. The functions could have documentation, which might benefit both C and Rust, but we all know how out-of-date (and therefore deceptive) API documentation can become over time.
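A hedged sketch of the problem (queue_buffer is invented for illustration, not a real kernel API): the C prototype cannot say whether the callee merely borrows the buffer or stashes the pointer for later, so the binding author has to decide and encode an answer.

    // Invented C function standing in for a kernel API.
    extern "C" {
        fn queue_buffer(buf: *mut u8, len: usize) -> i32;
    }

    // A safe wrapper must commit to a contract. Here we *assume* the
    // callee only borrows the buffer for the duration of the call:
    fn queue(buf: &mut [u8]) -> Result<(), i32> {
        let rc = unsafe { queue_buffer(buf.as_mut_ptr(), buf.len()) };
        if rc == 0 { Ok(()) } else { Err(rc) }
    }

If the C side actually keeps the pointer, this wrapper is wrong, and nothing in the C prototype would have told you so.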
This is pretty much my understanding of it as well. Rust is not simply C with some stuff added, so code has to be written to interface between the two. Every time some C code is changed, Rust code which uses it may change, even if the function call interface is the same. The question is who is made to be responsible for those changes.
There was some other large C project which had some Rust dramatics a while ago, where the Rust enthusiast had promised that his Rust bit of the project wouldn't affect anyone else. He was allowed to introduce Rust on those terms. However, he found the maintenance of his Rust code a major burden and so was trying to push that work off onto everyone else, under the guise of "he just needed them to provide information", when in reality he wanted them to maintain the Rust code so he could go on to write more new Rust code instead of spending all his time chasing a moving target. I think he wrote a small program to automatically re-write the Rust interfaces, but he wanted everyone else to be responsible for writing the necessary inputs to it, which meant they needed to do a lot of extra work when Rust had been introduced on the understanding there would be none.
There was massive push-back from everyone else, and the Rust enthusiast ended up bailing out of the entire project in a huff while blaming everyone else. Unfortunately I can't recall the name of the project at this time, although I think it was covered in El Reg.
One of the major problems with trying to use Rust in a C program is apparently that while Rust does offer interface options for working with C, you lose most of the supposed advantages of Rust if you make use of them, including many of the memory safety features. And once you do that, why bother with Rust?
I suspect that what is really needed, rather than a completely new language, is for someone to create a new variant of C, without a lot of the baggage associated with C++, that just focuses on the major security pain points in C as experienced in the Linux kernel. This would allow for a gradual re-write of the kernel without the dislocation of a completely new language.
After the BitKeeper fiasco Torvalds went away for a while and came back with Git. Perhaps he could do the same with C.
The problem is that, fundamentally, anything which solves the problems being faced will have the same effect Rust does, because what Rust is asking for that C doesn't provide is machine-checkable metadata about lifetimes and invariants.
It's like an assembly-language project pushing back against C for wanting each value in memory to have an explicit type associated with it instead of just using ADD or FADD or what have you, and complaining that the C code should be responsible for magically guessing what type a pointer's destination is.
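For a tiny illustration of what that machine-checkable metadata looks like in Rust:

    // This signature promises the returned reference lives no longer than
    // the input one; the compiler rejects any caller or body that breaks it.
    fn first_word<'a>(s: &'a str) -> &'a str {
        s.split_whitespace().next().unwrap_or("")
    }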
I have mixed views on standards (and trademarks).
1985: First version of C++
1990: First version of Python
1992: First version of C++ STL
1995: First version of Java
1998: First standardized version of C++ (inc STL)
1999: Boost library first version
2008: First version of Python 3 (breaking backward compatibility)
2013: First version of Rust
Python is widely used, and does not have a standard.
So why does Rust need a standard?
And why does C++ have one?
Having a C++ standard helped set expectations for the variety of compilers on the market (and I guess that's the key difference between C++ and the others).
Having the Boost libraries really helped by providing "widely known libraries", which C++ lacked compared to Java. Without them, IMHO, it's likely C++ would have disappeared and been replaced by Java.
IMHO, Oracle buying Sun, and therefore Java, killed the language (slowly but surely) because of trademark issues, and helped new alternatives (Kotlin, Scala).
Considering there is one, and only one, Rust compiler (maybe I am mistaken), having a standard does not really matter, as long as the core team does not make a mess of it (for a lesser extent of "mess", the move from Python 2 to Python 3 has been an issue in the corporate world).
In the corporate world, stability does matter, clarity on licensing does matter, and that's what you get with ISO specs and "open source" software (or software licensed to similar effect).
The way I see it (and many will disagree):
* C++: standard, lots of people working on it, lots of libraries -> ok (at least ok now, I went through the version 2 to version 3 move in a large corporation :D)
* Python: not standard, but very well defined and mature, lots of people working on it -> ok
* Java: standard well published, language sounds a bit at a standstill -> ok, though licensing can be considered an issue
* Go: standard well published, super simple, lots of libraries
...
* Rust: motivated community (+), has been moving a lot (-), complex learning curve (-), lots (too many) libraries, documentation (~) could be better.
My point is that it is hard to make the call to start using Rust for new projects, or to "transform an organisation to Rust". In the corporate world, people like safety... not memory safety, but a high probability of projects being delivered and of minimising the negative impact of bugs (as opposed to guaranteeing eradication). It's all about value for money.
Sure, there will be some projects where Rust is seen as a good bet. E.g. in crypto trading many firms use it, because "it has C++ performance" and "guarantees memory safety" (mostly because people are "not old school" and don't understand what a memory pointer is <-- waiting for the thumbs down!), and, more importantly, has some libraries (like networking) that make life easy (because they don't even know about the websocket libraries in Boost).
Point being that everybody will find good things in any language, and bad things in any other.
> a very comprehensive regression/conformance test suite.
Without a standard / language specification to test against there can be no conformance test suite for Rust; all you can have is regression tests.
What Rust has achieved by not having a reference standard is a single "reference" compiler: if the reference compiler can compile your code, then your source code is valid Rust.
And Linux has "Whatever GCC does is correct. If your other compiler does something different, it's wrong". Linux isn't written in ANSI C, it's written in GNU C and depends on GNU extensions.
...and Linux isn't an exception. It's very commonplace for C and C++ projects to have one compiler (maybe two) they support per platform and, if you want to go beyond that, you're on your own.
Yes, it is very common for a project to have a single toolset, or two, to cover their target platforms. With community projects, where lots of people are volunteering their time etc., it makes sense to adopt a toolset that is widely available for free and is integrated with GitHub.
However, when writing applications to run on Linux, projects are free to use compilers other than GNU's. The Linux caveat is basically the same everywhere: if you can't recreate your problem using our toolset, we won't investigate.
And there are other compilers for Rust in various states of development, from the specifically-for-re-bootstrapping mrustc (which is already usable but assumes you already got it to pass borrow-checking under rustc) to the intended-for-general-use Rust-GCC, under the paradigm of "If this compiler and rustc diverge, unless it's agreed to be a bug in rustc, this compiler is wrong".
In real-world terms, how is that different from what Linux experiences from being written in GNU C?
Rust "has been moving a lot" in the same way that C89 gave way to C99 gave way to C11 gave way to C17 gave way to C23. Existing language elements continue to work as new ones get added.
Aside from the exceptions made for soundness-breaking compiler bugs and the odd hiccup along the lines of "I hung a method `foo` off this standard library type using a custom trait and now there's an official `foo` and the compiler wants me to disambiguate the call", today's Rust compiler will build anything back to Rust v1.0 in 2015 so long as you didn't use the experimental APIs you can only access in nightly compiler builds which need to be explicitly flagged on with `#![feature(...)]`.
You can write 2015 Rust and compile it in a modern Rust compiler... it'll just look inelegant compared to 2024 Rust.
For example, originally you used a macro to early-return, so you'd write something like `try!(try!(try!(dom.find("span[foo]")).attr("bar")).find("baz"))`, and now, thanks to the "try operator", you write `dom.find("span[foo]")?.attr("bar")?.find("baz")?`.
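A minimal, self-contained sketch (the functions are invented for illustration) of what the ? operator does, which is all try! did:

    // Returns the first character, or an error for an empty string.
    fn first_char(s: &str) -> Result<char, String> {
        s.chars().next().ok_or_else(|| "empty string".to_string())
    }

    fn shout_first(s: &str) -> Result<char, String> {
        let c = first_char(s)?; // on Err, return it to the caller right here
        Ok(c.to_ascii_uppercase())
    }

    fn main() {
        println!("{:?}", shout_first("rust")); // Ok('R')
        println!("{:?}", shout_first(""));     // Err("empty string")
    }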
Some of the friction was caused by where this line was drawn, since it wouldn't be solely the Rust integration team who would resolve breakage. From what I read, arguments boiled over when maintainers who chose not to use Rust were expected to now maintain interfaces.
In the kernel, internal ABIs/APIs are not stable, so that extra workload on top of already overworked maintainers, who do not see it as something they chose, was seen as problematic.
Because certain information is missing from the C code - such as documentation/metadata about correct invocation - that is not only necessary for interoperability with Rust (or anything else that checks correctness, for that matter), but also something that should independently already be there for those developing against the APIs in C. It is an omission in the C code in and of itself; the Rust integration just made it harder to ignore the problem.
The problem right now is that Rust is nowhere near ready.
A huge amount of absolutely necessary stuff is still "experimental", there's no attempt to standardise, even memory layout is unstable.
It even uses the C ABIs when Rust loads Rust, because it only supports static linking.
That's not to say it cannot ever become ready, only that right now it's still a toy.
...since Rust dev has already come into the kernel and is "blessed"
The Sermon on the Mount, take 2
When Jesus[Linus] saw the crowds, He went up on the mountain and sat down. His disciples came to Him, and He began to teach them, saying:
Blessed are the C coders,
for theirs is the kingdom of Linux.
Blessed are the Rust devs,
for they will inherit the Kernel.
The "changes" being requested are basically "Please document how this API is supposed to work in enough detail that the Rust compiler can enforce it on Rust code".
To put Ted Ts'o's tantrum in more measured words, he's concerned that having Rust in the kernel will be a de facto push for C developers to either stabilize kernel-internal APIs or take responsibility for learning Rust so the "you fix whichever downstream consumers your change breaks" policy can be extended accordingly.
(Were I in his situation, I'd have politely made my cooperation conditional on extracting a binding promise that Rust in the kernel is an exception to that rule.)
I mean it's not exactly rocket science to pick up the language. It's easy to use. There is plenty of code to start from as examples.
Equally, I'm sure there are some Rust devs who can write C code to get the bindings they want/need to support Rust in the kernel.
I've not done any kernel development in nearly 20 years. However, in that time I've got married and had kids and have to work. I don't have time to work on kernel development. In 20 years time when I retire I'll probably be willing to return. I may even do so once my kids are older.
I'm not a teacher, but I could teach the C standard library in an afternoon, and if they were competent programmers I could teach them everything else they need to know prior to that.
Half the morning would be pointers - because people just don't use them. (The reason C is used? Pointers. The reason Rust isn't suitable for kernels? Pointers. The most dangerous thing you use? Pointers. The only way that hardware knows how to interact with you? Pointers - it would take an entire architecture change to modify that, e.g. having the entire PCIe bus and every device communicate with well-formed JSON or XML.)
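For what it's worth, that pointer-shaped conversation with hardware looks much the same in any systems language; a sketch in Rust terms, with an invented register address:

    // Sketch only: 0x1000_0000 is a made-up address standing in for a
    // memory-mapped status register on some hypothetical device.
    const STATUS_REG: *const u32 = 0x1000_0000 as *const u32;

    fn read_status() -> u32 {
        // volatile stops the compiler optimising the hardware access away;
        // the dereference itself has to be marked unsafe.
        unsafe { core::ptr::read_volatile(STATUS_REG) }
    }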
It'd take me about another day and a half, though, on how to set up a decent IDE and C compiler set for cross-compiling.
And it would take you months to teach how to properly work out-of-tree and submit kernel-level code to maintainers, by what I see on the LKML.
If a programmer can't pick up C, even with its anachronistic quirks, in an afternoon, they shouldn't be anywhere near an OS kernel.
If you honestly want to solve the memory safety problems, then hardware has to be designed to be memory-safe (i.e. not rely on software interpretation of large blobs of unstructured memory ranges).
C appears to be simple; unfortunately, computers are not. And you have to know an awful lot about what's actually going on inside the CPU and the machine to avoid writing even simple applications very, very badly. Rust forcibly protects you from a lot of the pitfalls inherent in the underlying machine's complexity, whilst letting you exploit the machine's benefits (e.g. memory allocation, threads, etc).
I think the problem going on in Linux at the moment is because it's a mix of Rust and C. If it were to be converted to all Rust, there'd be no difficulties related to who does what to inter-language bindings.
We've been here before. Once, applications were written in assembler, or maybe Fortran and a few other historic beasts. Then along came C, and assembler died off. We don't look back at that transition with regret, even though there were the same debates then as there are now. The only difference is that we have way more code written in C today than we had written in assembler in the 1970s.
But, back then, there was an "urgency" to get on board with C because it was clearly the future, and there was clearly limited time left when it came to ready availability of willing talent in assembler.
The dilemma today is that if Rust is going to win, one had better get started on converting important C source code bases to Rust now, whilst it's still possible. By defensively starting that conversion, one is also perpetuating the trend. By delaying it, one could end up with a moribund project. The future will be comprised of those projects that bet correctly, at the right time, on what the future is going to look like. History tells us that ultimately the easiest-to-use technology is the one that wins. And when it comes to C vs Rust, Rust is the one that is "easiest" to get right, just as C was "easier" than assembler.
"I think the problem going on in Linux at the moment is because it's a mix of Rust and C."
If you mean "Linux, the kernel", no. There are fewer than a handful of drivers written in Rust, and to date most have not been integrated into the kernel.
If you mean "Linux, my favorite distro", also no. Most problems with distros are caused by the maintainers including inappropriate code.
People tend to get tribal about everything from editors to favourite/hated cell phone brands (not seeing how most for-profit companies are similar); it's pathetic.
I learned long ago to use both vi and emacs; they're just tools. Sometimes I use what's available; sometimes my employer requires I use a specific IDE/tooling. I've done C and C++ for decades and now enjoy Rust; it has revived a joy of creating and programming I thought I had lost, and it shows that even old dogs can learn new tricks. I take pride in being flexible and giving new ideas a chance.
As for the Linux kernel, the Rust people have stated many times that if anyone needs to make changes to interfaces impacting Rust bits they're happy to be involved and help everywhere they can. And the AGX driver has shown the benefits and suitability of Rust for modules in spectacular fashion - how often does anyone write a huge, complex driver in record time that has NO memory faults in production for thousands of users?
"Rust people have stated many times that if anyone needs to make changes to interfaces impacting Rust bits they're happy to be involved and help everywhere they can."
But that's the problem.
It should be that they are required to do the work to maintain the interfaces, not to "help" wherever they "can". The wording you quoted is very wishy-washy and doesn't commit; it doesn't put the responsibility on the Rust devs to do any work at all on the interfaces. It's aspirational wording, not binding-commitment wording. It just devolves to pushing the responsibility back onto the C developers to maintain the interfaces, with the Rust devs choosing whether or not they will assist.
No, that isn't the problem.
"Requiring", "responsibility" -- this is the language of formal contracts. It's not the right thing within a single project which progress with social norms.
In any project there is a degree of obligation between upstream and downstream code. If you use someone else's API, then you have to fix your code if the API changes; that is your responsibility. But whoever owns the API has a responsibility to think about the impact they have on you when changing that API.
The Rust people are building safe abstractions to large parts of the kernel. Which means that they are downstream of many parts of the kernel. Of course, maintaining those abstractions will be done by the people who know Rust; but that doesn't mean it can have no impact on the C development at all. But this is not different from when someone adds a new bit to the kernel in C.
Software dependency graphs may be unidirectional, but the social dependencies they introduce are not.
> "Requiring", "responsibility" -- this is the language of formal contracts. It's not the right thing within a single project which progress with social norms.
Well, the Rust evangelists seem to be imposing exactly that sort of requirement on the C developers to support their efforts: "We must have Rust, and you must maintain the compatibility layer with Rust".
The C developers have no obligation to maintain compatible interfaces with someone's pet project (which is all "Rust in the kernel" is right now).
> It's not the right thing within a single project, which progresses by social norms.
Yes it is the right language within a project run by real humans, who have real time limits.
> In any project there is a degree of obligation between upstream and downstream code
If up/downstream code were agreed upon by BOTH parties, yes. This is not the case here. Here you have a bunch of people showing up one day, putting their new stuff into the project, and expecting everyone else to just accommodate it.
> The Rust people are building safe abstractions to large parts of the kernel.
It remains to be seen if any of the alleged benefits of Rust will materialize for the Linux kernel.
> the Rust people have stated many times that if anyone needs to make changes to interfaces impacting Rust bits they're happy to be involved
Yeah, and that's exactly where the friction is coming from. Because, no, it's not enough that they would be "happy to be involved."
THEY want THEIR language in there. They are the ones making the demand. So they are the ones WHO ARE REQUIRED to make it work, and they are the ones WHO ARE REQUIRED TO MAINTAIN IT. No one else has to deal with their problems, and no one else has to spend time and resources to solve their problems for them.
That is, if an interface changes in a way that impacts Rust code, the Rust developers are the ones who need to make sure THEIR code works with it, not the other way around.
This isn't complicated, or an outrageous point of view: if someone develops a plugin for my application, and later I change some part of the API, it is THEIR responsibility to update their plugin to deal with that. I am not required to maintain their plugin for them.
I will give them deprecation warnings. I will use semver-semantics correctly. I will even manage a mailing list for plugin developers, telling them in advance "hey guys, fair warning, you use X and X is going to change in update Y, so you might wanna update, or your shit is gonna break."
But if they then demand that I put in extra hours and free work to maintain THEIR project for them, then I'm afraid all they're gonna get from me is this.
> This is like vi vs Emacs with 'religious overtones,’ project chief laughs
That would imply that the editor wars didn't have religious overtones.... and my memory of the late-80s Usenet editor wars is that they were very much religious.
I think the writer of the sub-head may be a little young; likely "sweet summer child" applies!
I also prefer nano, but probably only because I have not put in the effort to learn vim, which most terminal junkies seem to prefer. I'm not a developer by trade, but when I need to edit or create a config file or simple script from the terminal, nano just seems very intuitive to me. I could likely be more productive in these situations if I was a vim wizard, but the amount of time I spend using nano probably works out to only a few hours total per month. It's hard to justify learning vim for this.
I do wonder if you're posting this just to cause conflict, but 'usability nightmares' - sure, they can definitely seem like that to someone new to them. 'Suck ass' - objectively not, they're both highly functional and are more widely available than nano.
Expecting nano is a very Linux centric viewpoint. vi or its derivatives on the other hand are available on pretty much every platform, are highly functional, and work in constrained environments, such as single user mode, without expecting the existence of arrow keys, or in some cases a character oriented terminal (although, if you're switching vi into ex mode to do that, or using ed, it's *even more* of a pain in the arse than standard vi, but it will do the job on a very slow line oriented connection).
I will grant that under Windows for basic text editing I just use Notepad. For everything more sophisticated than cut and paste, I use vim - it even validates XML!
A quick visit to the nano home page shows that it's pretty functional but available for Linux and, er, More Linux. There is a random Github Windows port, not linked off the GNU home page. On the other hand vi is built into pretty much every Unix for historic reasons (the Single Unix Specification), and the rather more functional vim has links to downloads or instructions for everything from Unix to Windows, DOS, QNX, and even AmigaOS.
I prioritise learning transferable skills, and vi/vim is usable on pretty much any platform I care to think of. Nano, not so much.
Choose the editor of your choice (I used to love the X2 editor which, I note, is also available by default for more platforms than Nano), but learning enough ed and vi to do some very basic editing of files is time well spent. One day in the future you may be stuck on an unfamiliar system, or without even a full terminal, and it'll be valuable knowledge.
I'm straining to remember what we actually used for text editing four decades ago. In the DEC RSX environment it was something called TECO, which was, as I recall, easy enough to use but had a rather terrifying potential for destroying your file with a single keystroke error. On PC clones it was edlin -- not especially fun, but usable. Before that was the era of punched cards, edited by punching a new card, which didn't seem all that onerous at the time. vi was around on Unix systems, but I, at least, found it to be unusable. Don't remember what the alternative was, but it was less obtuse. At least so it seemed to me.
EVE - the Extensible VAX Editor. I liked it a lot. It was extensible because it was written in an accessible language of its own, the Text Processing Utility (TPU). This meant that one could write arbitrarily complex or specialised processing routines right inside the editor.
> it's insane that I have to install nano just to edit a damn configuration file.
That's the whole point.
Whatever the unix-like install - a 30-year-old SunOS server, a 20-year-old Solaris, a 40-year-old AIX, a current-gen Linux or whatever, a full-on user environment or a sparse terminal-only (ssh) command line - vi is the one editor that is pretty much guaranteed to be there.
Vi/Vim etc. aren't better (or worse) than other editors, they are the ones that are there.
Therefore a basic ability to use vi means that if you need to update/edit/create a configuration file on some random box you've ssh'ed into, you can just get on and do it, rather than having to install nano (assuming you even have the permissions to do so - you very well might not, because of either change control or because you just don't have root and are doing non-root administration tasks) or whatever, just to create a 30-line script or update a couple of values in a conf file of some sort.
The best tool for a job is the one you have. And Vi is pretty much the lowest common denominator (well, ok, maybe that'd be ed) text editor.
the lowest common denominator (well, ok, maybe that'd be ed) text editor.
When you were stuck in single user mode on your Sun after rebooting with a muffed config file or boot file, ed was your salvation.
For those uninitiated in the mysteries of ed, see Michael W Lucas' Ed Mastery (Tilted Windmill Press). For those who are phobic about seeing feminine third-person pronouns etc., he also offers the Manly McManface Edition for a few dollars more.
Exactly. I have Windows and macOS using colleagues who spend almost their entire time in GUI applications despite our target platform being Linux. They picked up the bare essentials of using vi (nvi in our case as vim sometimes mucks about with primary clipboard selection) in a few minutes.
In the beginning was the WORD *
(clearly Microsoft's Marketing Department has a LOOONG reach [or are they pipped by Channel 4?])
cf. the bit in Terry Pratchett's / Neil Gaiman's GOOD OMENS where the Devil's representative sent a Microsoft EULA down to the contracts department with a sticky note saying LEARN.
* the star is not to indicate that WordStar was there earlier [even if it was] **
** in Latin it is PRINCIPIO ERAT VERBUM and in Koine Greek it comes out as EN ARCHE LOGON
- relevant to IT as it sounds like "You have to login first", perhaps even more relevant if misheard as ANARCHY LOGON
Decades ago I lived in St Andrews (Scotland).
An American guy was looking at the inscription in the stone arch* over the entrance to one of the university quadrangles, which read
"IN PRINCIPIO ERAT VERBUM"
I could see he was a bit puzzled, but eventually he announced triumphantly to the wife & kid who were with him
"It means the principal has the last word"
I thought it a bad mistake, as the USA prides itself on being a mega Christian / bible-loving nation (he was a white American, so I'm guessing he would have claimed to be a Christian).
I guessed he was there for the golf rather than the culture.
Even worse, there was an additional hint, as from his angle he could see the sign on the nearest building read "Faculty of Divinity".
* An old university; it predates US independence by over 350 years, hence the stone building material on that old part of the uni.
(WordStar predated Word. I used it on CP/M a couple of years before MS-DOS/Q-DOS/whatever was even thought of.) (The truly modern feature of Word when it came out is that, while it was functionally similar to programs like WordStar and WordPerfect, it was way more bloated.....nothing much has changed in 40-plus years.....)
And how do you explain git, then? I get that, for someone who is massively talented - not me - redoing a Unix system from the ground up can be done.
But git seems to have mutated far from previous source control systems, both in its concepts and its storage mechanism. IIRC, he did it on the side when the dude who had loaned a commercial source control system to the kernel team balked at some of the Linux devs deciding to copy-cat it to "make it free" (probably inspired by St Stallman).
That's not just "someone who got lucky".
I mean he could have cloned Rational ClearCase, FFS. Where would we be then?
How do you explain git?
Well, it has always seemed to me that configuration management of a large system -- especially one with contributors scattered all over the planet -- is an incredibly complex task. I assume that Torvalds put together a system that he felt supported ALL the needs of that kind of effort including as many as possible of the weird edge cases. No surprise that what he came up with is complicated and obtuse. At least I find it to be complicated and obtuse.
But what I don't understand very well is why folks whose needs are far simpler insist on using git. I mean, you don't need $10,000 worth of tools and diagnostic equipment to deal with a burnt-out headlight or a flat tyre. I can only conclude that they are so smart that git seems simple to them, or that they are too dumb to figure out that there are simpler answers to the problem that work fine for their needs.
>But what I don't understand very well is why folks whose needs are far simpler insist on using git.
Probably because, with cloud etc., we've made doing something simple like the C "Hello World" programme into something that requires $10,000 worth of tools to compile and deploy on someone else's virtual computer, which your toolset builds and tears down every time you compile your program...
The idea that you can compile something using only the libraries etc. on your own computer that results in an executable on your local system is so last century...
> But what I don't understand very well is why folks whose needs are far simpler insist on using git.
Because it is neither "complicated" nor "obtuse" when used for simple use cases.
`git` is the perfect example of a well-executed tool: its handling is simple for the simple use cases, and only gets more complex as the use case itself grows in complexity.
Example: single developer. All you realistically need are git add/commit/push. If you want to be a bit more fancy, git branch/checkout/merge. That's a handful of commands, with no need for any of their special bells and whistles, covering the vast majority of source control requirements already.
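For example, a typical single-developer session needs nothing beyond (file and branch names invented):

    git add main.c
    git commit -m "fix off-by-one in parser"
    git push
    # feeling fancy: do the work on a branch, then fold it back in
    git branch fix-parser
    git checkout fix-parser
    git checkout master
    git merge fix-parser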
Good architecture has not always outperformed good implementation; rather, the contrary is true. I never buy frameworks that promise to clean up behind bad code, pushing a coding culture of use-and-don't-care about allocated resources. Otherwise we would have converted from C long ago to the memory-safe languages that showed up long before Rust. C is a simple, beautiful language with its own use case, and it has proven its soundness in implementing the most sophisticated products: OSes, network stacks and primary userland services. It can even implement OOP requirements with just what it has: pointers and structures. X Windows is one of those impressive implementations. So neither Rust nor any other language will save anyone from bad implementation or lazy coding. And if Rust is only being taken up for memory safety, then better to re-write C compilers to enforce memory safety under the hood.
Rust doesn't require a framework or runtime in order to guarantee memory safety. In the context of the Linux kernel, I don't think the Rust runtime is even in use.
I'd really love it if you could explain how to guarantee memory safety in a perfectly valid C program which dereferences a pointer received from stdin. I'd love it even more if you could provide a program written in Rust that possesses the same functionality and actually compiles. It's quite a challenge, and maybe you'd learn something in the process!
I live in C and have issues with the Rust/Cargo ecosystem (and I don't want to join their cult), but the language design and safety guarantees won by allowing the compiler to reject badly-written code are not among them.
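For the curious, a hedged sketch of what that challenge looks like from the Rust side: it can be written, but only by explicitly opting in to the dangerous part, which is rather the point.

    use std::io::stdin;

    fn main() {
        let mut line = String::new();
        stdin().read_line(&mut line).expect("read failed");
        // Treat whatever number arrived on stdin as an address
        // (a terrible idea, which is the point of the exercise).
        let addr: usize = line.trim().parse().expect("not a number");
        let p = addr as *const u8;
        // Safe Rust refuses to compile a bare `*p`; the dereference must
        // be wrapped in `unsafe`, so the hazard is at least visible.
        let value = unsafe { *p };
        println!("{value}");
    }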
The same way good C code with good memory management is written, the C compiler could be rewritten to have memory safeguards without changing the language itself. The same malloc and free could be reimplemented to virtualize memory usage and make it safe. Good C programmers have used their own libraries to give us robust systems in the past. The conversation around Rust looks cult-like, about making it formal by paradigm. But a bad coder with Rust could do, or leave behind, worse things than a good coder in C who is focused on those memory management issues.
You can't have your cake and eat it too. Making a C compiler produce memory safe code would require changing the language because C encompasses both bad C and good C. If the compiler refuses to compile bad-but-valid C, you've not made a C compiler.
Virtualization wouldn't prevent use-after-free errors or invalid casts. It would also cripple C with a massive bloated runtime when Rust doesn't need virtualized memory management in order to have more memory safety than C would have in this scenario.
I'm convinced that there's no such thing as a Rust programmer who isn't focused on memory management to the same degree as a good C programmer. Rust isn't memory-safe because it provides some magical sauce that could be yanked out and applied to C, it's safe because the language was designed to allow the compiler to smack you on the head and tell you that you've done something stupid and that you need to do a less-shitty job if you actually want it to output a working program. When passing a value into a function without passing it as a reference, that value is consumed by the function and can no longer be used in the caller. Only a single mutable reference to a value can exist at any one time. When working with functions that output Options, you can't get the successful result unless you provide error code for handling an unsuccessful result. You have to annotate lifetimes if the compiler isn't able to implicitly calculate them. Bad Rust programmers have to deal with these constraints just as much as any other Rust programmers, and you can't get rid of them by declaring the code unsafe.
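A minimal sketch of the first two of those rules; the commented-out lines are the ones the compiler smacks you for:

    fn consume(v: Vec<i32>) -> usize {
        v.len() // `v` is dropped here; the caller gave it away
    }

    fn main() {
        let v = vec![1, 2, 3];
        let n = consume(v);      // passed by value: ownership moves
        // println!("{:?}", v);  // error[E0382]: borrow of moved value
        let mut s = String::from("hi");
        let r1 = &mut s;
        // let r2 = &mut s;      // error[E0499]: second mutable borrow
        r1.push('!');
        println!("{n} {s}");
    }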
I dunno, maybe you could argue that Rust programmers have a weakness in trusting their compiler to prevent bugs in one error class, blinding them to the sorts that crop up in business logic.
Nothing prevents writing a C compiler that virtualizes memory and implements features to avoid memory violations both at compile time and at runtime. C compilers were built for old hardware with a direct binding to assembly, but that need is long past. If not, how do you think Rust's ownership paradigm is implemented? In compiler construction, once syntax analysis is complete, what you do with the semantic implementation is open to every possible implementation under the sun.
Oh, for sure! C, when done carefully, can be just as safe as Rust. I use a resource management and string handling library that I've developed and tested for ten years. My biggest concern with memory has been resource leaks coming from circular dependency graphs. I refuse to turn it into a mark-and-sweep garbage collector because that'd basically be admitting that I'm too dumb to design relationship structures that are both suited for purpose and acyclical.
I was largely just trying to point out that there isn't a way to guarantee memory safety in unmodified C when arbitrary pointer dereferences are possible. It might be possible to strip C down and rebuild it as a memory safe language by design, but that wouldn't be the legendarily backwards-compatible C that we all know and love. Plus, it probably wouldn't have any of the other neat features that Rust has that have nothing to do with its safety (closures, traits (far better than class-based inheritance IMO), variably-named namespaces, the whole match syntax thing, variable shadowing).
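A quick sketch of a couple of those conveniences, namely match and shadowing (contrived example, of course):

    fn main() {
        let input = " 42 ";
        let input = input.trim();      // shadowing: same name, new value
        match input.parse::<i32>() {
            Ok(n) if n > 0 => println!("positive: {}", n),
            Ok(n) => println!("non-positive: {}", n),
            Err(e) => println!("not a number: {}", e),
        }
        let add_one = |x: i32| x + 1;  // and a closure, for good measure
        println!("{}", add_one(41));
    }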
Eh, maybe if it could do shared libraries I'd be down for it. That's the only thing keeping me from using Rust in my day-to-day work. Well, that and I'm not particularly keen on programming languages that try to be package managers. (Let Gentoo do its thing, Rust.)
Here you go: http://blog.asleson.org/2021/02/23/how-to-writing-a-c-shared-library-in-rust/
And rustc is not a package manager! That’s cargo’s job. My guess is that most users are happy to have it bundled. But if you’re not one of them, nothing prevents you from downloading stuff manually and putting naked rustc commands in a makefile.
Program development is going to be taken over by AI, so all you human C, C++, Rust developers should be retraining yourselves for other jobs. The language used by AI won't matter since AI will be able to write code that contains no errors, buffer overflows, or problems whatsoever.
Rust declares things in the reverse order from most other languages.
Example: why put the parameter type after the name, and the function return type after the function name?
The Pascal/Ada families of languages generally have the types after the parameters. eg
function add_one (i : integer) : integer; begin add_one := i+1; end
{ After 40 years I am guessing this Pascal is about right }
Algol 60 is probably the source of C's syntax.
> The Pascal/Ada families of languages generally have the types after the parameters.
It's a characteristic of 'designed' rather than 'grown' languages. Meta-information is suffixed rather than prefixed because the meta-information is of secondary importance to the user. You specify the logical function of a variable first, with a meaningful name, and then suffix the type information.
Type information is more important to the compiler, so when you're constructing a language without prior thought, the type information winds up first by default.
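To make that concrete, here's the same add_one from the Pascal example above in Rust's suffix style -- a trivial sketch, names first and types trailing throughout:

    fn add_one(i: i32) -> i32 {
        i + 1
    }

    fn main() {
        let result: i32 = add_one(41); // logical name, then the annotation
        println!("{}", result);
    }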
I wholeheartedly agree. I hate the Rust syntax even though I love it as a language. The -> to specify the return type is also obnoxious to write every time.
I honestly believe they came up with the syntax specifically to distinguish themselves from C, not because it's better. I mean, millions and millions of programmers know the C syntax already. Why create another one just to be different?
It's because Rust is a language of the ML lineage (ISWIM (1966) → ML (1973) → Standard ML (1983) & Caml (1985) → OCaml (1996) → Rust (2015)) which slapped on a liberal coat of C++ paint to make itself more appealing to mainstream developers. All the syntactic elements you don't recognize are from OCaml, which the Rust compiler was originally written in before it became self-hosting.
(Yes, even that weird `'a` syntax for explicit lifetimes. It's OCaml's equivalent to C++'s <T> because, on an abstract level, lifetimes are generic type parameters. "foo is generic over all lifetimes 'a and all values T".)
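A minimal sketch of what that looks like in practice: the lifetime sits in the same angle brackets an ordinary type parameter would.

    // "longest is generic over the lifetime 'a"
    fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
        if x.len() >= y.len() { x } else { y }
    }

    fn main() {
        println!("{}", longest("kernel", "userspace"));
    }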
Using -> to denote a return type is common in the world of academic/functional languages, which got it (and many other quirks C lacks) from mathematical notation. The whole "postfix types" thing, combined with that fixed `fn` keyword, exists to avoid needing the lexer hack and to avoid needing some kind of placeholder like `auto` when you want type inference to be the default choice.
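A tiny sketch of that last point: because declarations always start with `fn` or `let`, the annotation slot can simply be left empty when inference does the job, with no placeholder keyword.

    fn main() {
        let x = 21 * 2;       // no `auto` needed; the type is inferred
        let y: i64 = 21 * 2;  // the suffix slot is there when you want it
        println!("{} {}", x, y);
    }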
There are basically two camps in this: those programmers with hubris who think they can program perfect C and those pragmatists (like me) who know that deadlines, sleep deprivation, managers shouting in your ear and such will result in sloppy and bug-ridden code (which may still work in most cases).
For me the attraction of Rust is that it's more or less "maintenance free." If it compiles, it's guaranteed to be free of memory snafus, something that a C compiler will never, ever give you. This means that even if you're dead tired and hack something into a critical part of Linux at three o'clock in the morning, there's no chance you'll end up with a bug that will bite you years later (like Heartbleed).
"there's no chance you'll end up with a bug that will bit you years later"
Oh, the hubris.
You're forgetting that bugs come in two forms: bad code and bad algorithms. Rust may do a lot to protect you from bad code doing weird things, but it can't save you from a shitty algorithm that is syntactically valid but arrives at the wrong result.
That's the kind of bug that can turn up and bite years later.
Is a car without ABS, traction control, airbags, driver aids and the like inherently unsafe? Well, if you're an inexperienced driver, then perhaps.
This is what gets me about the whole C vs Rust debate. Yeah, sure, Rust is like a car with all those features. But a car without those features is fine in expert hands. Look at F1 cars; they have none of those features.
So, yes, C and C++ don't hold your hand and wipe your nose for you and you need to be a better programmer.
*better programmers* are more expensive, even though 91.2% of programmers think they are better than average.
While Linux has managed to gorge itself into an impressive obesity, taking on the challenge of becoming truly floor-bending requires more tools.
Rust provides the sort of framework required to get those tools shovelling into the great maw.
Good analogy
I would go one step further and say Rust is like a motorbike with airbags, seatbelts, automatic braking, AI lane assist, roll cage, bull bar ... and a bevy of other perfectly reasonable-sounding “safety features”, such that the sum of the parts is a dangerously unusable contraption that doesn’t belong on the road.
What I find mystifying is this push to add Rust to the Linux kernel, which predictably amounts to opening a can of worms, as anybody with a modicum of knowledge of human nature is bound to realize immediately.
Why don't the Rust enthusiasts just devote their efforts to build a better, modern kernel in Rust from scratch? Quite frankly, their efforts to have a presence in the Linux kernel seem to be motivated more by a yearning to be up there with the cool people than a desire to develop more secure, reliable software.
I honestly don’t think it’s a good idea to require multiple compilers/languages to build a kernel. That’s going to be confusing for any human, whether or not they like (or can) program in both (or more) languages. It’s just extra, unnecessary cognitive load.
I recall that, back in the day, Windows 95 used to take multiple compilers, assemblers and other tools to build. Admittedly that’s a full OS and not just a kernel, but even so, it’s a bit OTT.
As others have suggested, a new kernel written in Rust seems like an okay idea, and user-space tools in Rust should certainly be hunky-dory.
As a side note: I can see there are certain higher-level aspects of a kernel where memory safety is a good thing, BUT plenty of kernel code is doing fairly unsafe and "hacky" things, which is exactly what C is good at. I wonder if Rust would just get in the way (or simply require lots of "unsafe" sections, which would rather defeat the purpose).
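To illustrate the shape of that trade-off, here's a minimal sketch of a kernel-style raw-memory poke in Rust (hypothetical register address; it compiles, but actually running it would fault):

    fn main() {
        let reg: *mut u32 = 0x1000 as *mut u32; // made-up MMIO address
        unsafe {
            // The "hacky" volatile write still happens; it's just fenced
            // off inside an explicit unsafe block.
            reg.write_volatile(1);
        }
    }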
This is the reason that Rust is not required to build the kernel, I think. You can still build a working version without it. It's also one of the reasons that there is work on GCC to make it compile Rust.
As Linus says, they have been at this for two years, which isn't really very long. There will be a long road till it becomes a required part of the build.
Rust was more or less created as "OCaml with less weird syntax and a more low-level-friendly memory management strategy". The original Rust compiler before it became self-hosting was written in OCaml and any syntactic element you don't recognize from C++ is 99% guaranteed to be from OCaml.
(Which is funny, because, in the early days of Rust, people who didn't recognize that were making jokes about Rust being a conspiracy to trick C++ programmers into writing Haskell.)
I keep getting this nagging feeling that people for some reason deliberately don't 'get' it. C is a bit like assembler in that it's neither 'memory safe' nor 'memory unsafe'; it's just a way of conveniently writing machine code. Unlike many -- most -- languages, it can be used to make code outside of a pre-existing environment, which is why it was originally developed alongside the operating system that was written in it. It's actually not unique in that respect; similar languages were developed for other OS work. It's just that Unix and C became widely known and used due to their foothold in education, and through that propagated to the nascent PC industry ("and the rest was history").
Since most people never use a language outside of an existing OS environment, they tend to think that the environment is an intrinsic part of the language: that C, for example, needs its standard library. This isn't true, just as you don't need the crt0 bit that sets up a C program's environment and calls 'main'. That is a convention for a specific type of environment and has absolutely nothing to do with the language. When you talk about memory safety, it's typically the libraries that are the problem. (Your code can also be an issue, but then the whole point of Unix/Linux isn't to protect the system from the programmer -- there are better languages for that job -- though if they are going to shoot themselves in the foot then it will help them do so in the most efficient way possible.) Rewriting the libraries to be safer might be a good idea, but typically if there's a problem it's not a code issue but a design issue, and generations of programmers have found out the hard way that recoding a bad design to make it work better just ends up with another bad design.
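The same point has a direct analogue in Rust, incidentally -- a minimal freestanding sketch (assuming a bare-metal-style target; built as a plain binary, the linker would still want an entry point):

    #![no_std]   // drop the standard library...
    #![no_main]  // ...and the usual startup glue around main

    use core::panic::PanicInfo;

    // With no runtime environment, even the panic plumbing is yours
    // to supply:
    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }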
I'm quite prepared to use Rust for utilities etc., but this dichotomy whereby such-and-such exists to replace something merely because that something is old is utterly bogus, and if you don't understand why, then prepare for a very long and interesting learning curve.
I agree. When I read "C" I always think "high-level assembly."
Before the advent of the Internet I would've said that C was sufficient. These days, however, I believe we need a programming language like Rust, since we can't afford to use non-memory-safe languages anymore.
You know I watched it yesterday and one thing that struck me was how nice & chill Linus sounded.
Maybe he still goes off on rants on occasions that deserve it, but this is a far cry from the guy whose tantrums were once the subject of geek lore.
Good on him. Still runs a tight ship too.
I do wish they'd dug a bit deeper into the alloc and mem and kernel-specific stuff that they hint makes it hard for the two systems to interop easily (as opposed to Ted Ts'o's outburst). I think somewhere else something was mentioned about interface stability.
Changelog had an interview with the creator of the Ladybird browser. He says that when they considered using Rust there were two issues. First, he claims Rust makes it relatively hard to work with other languages. Second, and less relevant to an OS, he said that it is difficult to map something like the DOM onto a language that isn't all that object-oriented. Ladybird, primarily C++-based, is now thinking of using Go (which strikes me as odd, as it is garbage-collected, but OK).
www.listennotes.com/podcasts/changelog-master/why-we-need-ladybird-JVDCWIk0a9x/
> he said that it is difficult to map something like the DOM onto a language that isn't all that object-oriented
That's a fair criticism. The experimental Servo browser engine that various Firefox enhancements were taken from (eg. the multi-threaded CSS engine) still just reuses Firefox's SpiderMonkey JavaScript engine and lets SpiderMonkey handle the DOM. (Granted, probably for the same "let's not prematurely make work for ourselves" reason that rustc is built on LLVM.)
.... and had written Linux in Rust, Linux would have remained a personal project, and gone nowhere.
Rust is just too complicated and difficult a language to learn easily. It's even got object oriented stuff in it; who needs OOP in a kernel? C's simplicity is what propelled Linux's development in the early years and likely still does.
What happens with difficult, complicated programming languages is that everybody learns just a part of them, and gets by with just those parts. Trouble is, different people learn different parts, making it difficult to maintain other people's code. Just look at C++ as an example (which intentionally gets more difficult and complicated every three years).
Rust in Linux is a bad idea. If a new language must be introduced, it should be a simple one, possibly with Rust's ideas applied to C, rather than to difficult, complicated languages.
Besides which, the language isn't even standardised, yet. Using it in Linux is creating maintenance time-bombs for the future.
> It's even got object oriented stuff in it; who needs OOP in a kernel?
Let's look at Rust's "OOP":
1. Encapsulation: Rust has "pub" and stuff not marked "pub" will be inaccessible outside the module that defines it. In Rust, a module is a file or a `mod foo { ... }` block within a file. ...sounds not that different from C's "static" to me.
2. Objects: Rust has `struct`, just like in C, and it has "data-bearing enums" (a concept from functional languages) which are just tagged unions with first-class language support. Method syntax is literally syntactic sugar for free functions with the "parent" type as the first argument and they can be called as such. In C++ terms, Rust structs are always POD (Plain Old Data) with no vtable snuck in.
3. Inheritance: Nope. Rust doesn't have the implementation inheritance you're thinking of. It does have "interface inheritance", but that's just the ability for an interface definition to say "to implement the Error interface, you must also implement the interface for Debug printk".
4. Dynamic dispatch: Same as C... if you take a function pointer or specifically create a vtable, you get dynamic dispatch.
5. Polymorphism: Rust supports generics, so you can define "List of [insert type here]" and have concrete instances generated at compile time for the types you use, and it supports "trait objects", where you can ask to create a fat pointer instead of a normal pointer, with the fat pointer being a (*struct_instance, *interface_vtable) struct under the hood. (eg. for "List of [Debug-printable thing]") It also supports macros, as C does.
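A minimal sketch of points 2 and 5 taken together (toy types, obviously): method syntax desugars to a free function, and a trait object is just a fat pointer pairing the data with a vtable.

    trait Speak {
        fn speak(&self) -> &'static str;
    }

    struct Dog;

    impl Speak for Dog {
        fn speak(&self) -> &'static str { "woof" }
    }

    fn main() {
        let d = Dog;
        // The method call and its desugared free-function form:
        assert_eq!(d.speak(), Speak::speak(&d));
        // A trait object: (*struct_instance, *vtable) under the hood.
        let obj: &dyn Speak = &d;
        println!("{}", obj.speak());
    }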
> Besides which, the language isn't even standardised, yet. Using it in Linux is creating maintenance time-bombs for the future.
Linux is written in GCC's extended GNU C dialect, which is no more standardized than Rust, and the Rust team have a policy called the v1.0 Stability Promise which is akin to Linux's "don't break userspace" rule.
The only reason the Rust for Linux efforts currently require a specific compiler version is that they're using API-unstable experimental language features (which require a nightly compiler and a special #![feature(...)] annotation to access) which they're working with the Rust team to shake any design faults out of and get stabilized... at which point they fall under the v1.0 stability promise. (Basically, the effort is in a state akin to when Android required a bunch of out-of-tree patches which eventually got upstreamed.)
That's part of the reason they don't WANT their efforts to extend beyond the odd driver here or there yet.
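For the curious, opting into one of those unstable features looks like this on a nightly toolchain (allocator_api is a real unstable gate; whether any given kernel branch uses that exact one, I won't swear to):

    // Nightly-only: a crate explicitly opting into an unstable feature.
    #![feature(allocator_api)]

    fn main() {}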
> .... and it supports "trait objects", where you can ask to create a fat pointer instead of a normal pointer, with the fat pointer being a (*struct_instance, *interface_vtable) struct under the hood.
Do we really want things like "trait objects" and "fat pointers" in an operating system? The code is difficult enough to debug as it is, without introducing opaque abstractions. I think you're making my point for me.
Besides, even if the OOP in Rust is lightweight at the moment, there's no guarantee that it will stay that way. The language isn't even standardised yet. What's to stop some clever person adding C++-style OOP to Rust "because it makes programming so much easier"?
> Linux is written in GCC's extended GNU C dialect, which is no more standardized than Rust, and the Rust team have a policy called the v1.0 Stability Promise which is akin to Linux's "don't break userspace" rule.
Is it really? I wasn't aware of this. I believe Linux has been built using Intel's C compiler. Can you give me an example of the use of non-standard "GNU C" in, say, Linux 6.6.52? That is, a file name, line number, and a short description of the quirk? I'd be most interested to see this.
> Do we really want things like "trait objects" and "fat pointers" in an operating system? The code is difficult enough to debug as it is, without introducing opaque abstractions. I think you're making my point for me.
My points were that:
1. You can already do this in C and, from what I remember, the Linux kernel already has these sorts of things where they're deemed appropriate... just with extra clutter because the language doesn't know them natively. (Sort of like the syntactic bloat GLib-based libraries like GTK see to implement C++-style OOP in C.)
2. Rust's design is pretty good at making dynamic dispatch and trait objects feel unappealing unless you actually need them. I think the only fat pointers that get used regularly are slices, which are (*data, len) fat pointers to subsets of strings or arrays. (They're great for zero-copy parsing; see the sketch after this list.)
3. Unlike C++, Rust's support for polymorphism doesn't require you to be on your guard about whether the definitions of data structures have been modified. It's just a helper so your vtables and function pointers can play more nicely with any generic container types you might implement.
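Here's a minimal sketch of that slice point, parsing a key/value pair without copying anything -- both "fields" are just fat pointers into the original buffer:

    fn main() {
        let line = "key=value";
        if let Some(idx) = line.find('=') {
            let key: &str = &line[..idx];       // fat pointer into `line`
            let value: &str = &line[idx + 1..]; // no copy made
            println!("{} -> {}", key, value);
        }
    }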
> Besides, even if the OOP in Rust is light weight at the moment, there's no guarantee that it will stay that way. The language isn't even standardised yet. What's to stop some clever person adding C++-style OOP to Rust "because it makes programming so much easier"?
1. Over the last decade, the RFC process has demonstrated that the Rust team aren't swayed by "because it makes programming so much easier". Heck, one of the things they've actively been focusing on is a proposal's effect on the language's complexity budget.
2. People have been clamouring for it for a decade now. The Rust team has stood fast on "There's no single design that's objectively better than the others, and all of them make trade-offs. We'll revisit this discussion when one of the implementations of this (via procedural macros) on crates.io has seen runaway success". (And none of them have. It turns out that classical inheritance is one of those things that people are loud about, but which sees underwhelmingly little in-the-wild use compared to other things implemented as procedural macros.)
3. Why did Rust add async/await keywords when you could do it using the futures crate? Not because the futures crate was a pain to use... but because it was demonstrated that it needed to be a first-class language construct for the borrow checker to behave in non-hair-tearing ways when you wanted to hold borrows across await points. (See the sketch after this list.)
4. The whole reason Rust doesn't have an official, toolchain-blessed executor for the async/await keywords is that they want the language to remain suitable for both embedded devices and big-scale servers.
5. The Rust team have strong opinions that features added to the language must be orthogonal and harmonious. That makes adding C++-style OOP highly unlikely because it would overlap too much with the existing design.
6. Standardization does nothing to prevent a language from adding footgun-oriented programming features... it just means they have to live within the undefined/implementation-defined portions of the spec, or be stuff that wouldn't compile in a purely standard-following implementation.
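The sketch promised in point 3, for anyone curious -- hypothetical functions, no executor wired up, and a 2018-or-later edition assumed; the point is simply that the borrow-held-across-an-await shape compiles cleanly with first-class async/await:

    async fn other_work() {}

    async fn sum_first(data: &[String]) -> usize {
        let first = &data[0]; // borrow taken here...
        other_work().await;   // ...held across the await point...
        first.len()           // ...and still usable afterwards
    }

    fn main() {
        // Construct the future without running it; a real program would
        // hand it to an executor of its choosing.
        let _fut = sum_first(&[]);
    }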
> Is it really? I wasn't aware of this. I believe Linux has been built using Intel's C compiler. Can you give me an example of the use of non-standard "GNU C" in, say, Linux 6.6.52? That is, a file name, line number, and a short description of the quirk? I'd be most interested to see this.
I'll do one better. Here's a blog post that does a run-down of the GNU extensions Linux depends on, including example code snippets:
https://maskray.me/blog/2024-05-12-exploring-gnu-extensions-in-linux-kernel
Both LLVM Clang and ICC have implemented a bunch of those extensions with the intent of being able to compile the Linux kernel. (Getting those extensions implemented was the lion's share of what the project to get the Linux kernel building on LLVM Clang was doing... though there was also patching done to retire use of ones the kernel devs were less fond of than they used to be. For example, the kernel was using GNU C's pre-C99-style Variable Length Array support and removing that, though desirable in its own right, also avoided the need to implement them in Clang. I think the project was called LLVMLinux but it's been a while.)