The language might be super-safe, not so sure about the installer
According to https://www.rust-lang.org/learn/get-started the official way to install Rust on a Mac is:
$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
The Rust programming language celebrated its fifth birthday on Friday and says the future looks bright. Long beloved by those who care about such things – since its 1.0 release in 2015, Rust has been voted the "Most Loved" programming language four years running in Stack Overflow's annual developer survey – the language has …
This doesn't install Rust; it installs rustup, which installs Rust. Everything else sits underneath that. You can install it in other ways if you like, but this approach is nice because it's non-root and gets updated at the same time as Rust.
If someone hacks rustup.rs, they could put anything they like up there and people would run it. On the other hand, if someone hacks a .deb or RPM server, they could put random binaries up there, but they couldn't sign them.
It's not, perhaps, ideal, although as it's non-root, it's a little less of a disaster.
Most installation processes require you to give over all power on your computer to the installation process. This is as safe or unsafe as your trust in the organisation at the other end.
Although, actually, the Rust installation suggested above is probably less dangerous than many since it does not require admin access.
Rust has some nice ideas, but don't get carried away. It's still the bitter fruit of the C programming tree. "Immutable variables" that are actually mutable in all but the most theoretical computer science sense are annoying. And the whole "borrow" thing will drive you nuts.
But at least it has a proper Boolean type so you can't shoot yourself in the foot with things like "if i = 0 { ..."
C has had a boolean type for over twenty years.
Yes, but it's merely yet another unsigned int storage type(*) and the expressions in if() or while() can be any scalar type, whereas languages with genuine booleans treat them as a type in their own right, separate from integers, floats and pointers.
I guess you're one of those hippies who can't accept that the world didn't buy into the LSD soaked world of Lisp and its obsession with parentheses.
(fun arg1 arg2) has no more parentheses than fun(arg1, arg2), and there are Lisp variants that use Algol function calling syntax or that drop the parentheses when unnecessary. Lisp is most appreciated by computer scientists who know that semantics are what counts and that syntax is for fashion victims.
(*) C11 standard 6.2.5.6
Syntax is what makes code readable for humans, you know, the people who have to write and maintain it. So it's kind of important. Not that C and especially recent versions of C++ are shining examples of the art, but they're a lot better than Lisp, which is loved by ivory tower academics who don't have to write bet-the-company solutions that others will need to maintain in the future.
Lisp which is loved by ivory tower academics who don't have to write bet-the-company solutions that others will need to maintain in the future.
Right, so you think Walmart is run by ivory tower academics? And the Mail Online(*)?
(*) Ack, spit!
Having been burned by other people's thread safety failures, and irritated beyond belief by Java pop-ups for null pointer exceptions in "enterprise" software, I was over the moon to hear of Rust. I've now been learning/using it for a year. I *like* the DO NOT PASS GO borrow checker; it's precisely what programming languages *should be*: a tool that prevents the user (wave) from whacking their own thumb. I like to do things the 'correct' way, and take pride in my work, but sometimes I'm tired or distracted and type something incorrectly (ah, shit, I meant == not = in that if statement....), and I can't bloody get away with it! I *do* prefer that my code doesn't run if it's incorrect. Let hardware drivers do weird and wonderful ("unsafe") things, test them properly, and allow application developers to build useful stuff on top without tearing big security holes in the machine... yes please.
I am not surprised at all that people get carried away
In what sense are these immutable variables not immutable? The value is bound to the variable and is read-only.
Borrowing is simply C++-style references with lifetime checks, plus a guarantee that no more than one mutable reference exists at any given time. Yes, it may drive someone nuts, because C++ doesn't care if you pass around a reference to something that no longer exists, whereas Rust really does. It is trying to protect you from bugs like use-after-free and data races. Most of the time it is implicit, so it's not really a big deal.
I consider myself a “long term” C person - 30+ years, and I’ve never got along with the 0 == x thing. It just doesn’t read right and makes my brain turn itself inside out. Much easier to just write the thing correctly (x == 0) in the first place. I can’t remember the last time I accidentally used a = instead of a ==. Probably about 29 years ago
I prefer if(!x) rather than if(x==0)
though I don't mind doing if((expression)==variable), which is an alternative to the '0==x' thing
/me points out that if you write your C or C++ code in such a way as to assume that NOT you, or a FORGETFUL you, will have to MAINTAIN it, a lot of programming problems just "go away" on their own.
Coincidentally, I tried Rust for the first time today to write a simple line-following program for my kids' Lego Mindstorms. I liked it for two reasons. 1) The pattern matching made writing a state machine very concise. 2) I wrote it out, fixed all the compile errors, uploaded it to the device and it worked the first time. That never happened with C or any other language, where I'd end up fiddling around with several mistakes before getting to the part of testing the actual program logic. It definitely felt like the compiler was helping out a lot more.
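For a flavour of point 1, here's a toy sketch of a two-sensor line-follower state machine built from an enum and `match` — the names are made up for illustration, this is not the actual Mindstorms API:

```rust
// States of a toy line follower (illustrative, not any real Lego API).
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    Forward,
    TurnLeft,
    TurnRight,
}

// One tick of the controller: each sensor reports whether it sees the line.
fn next_state(left_on_line: bool, right_on_line: bool) -> State {
    // The compiler checks this match is exhaustive -- miss a case and
    // the program simply doesn't compile.
    match (left_on_line, right_on_line) {
        (true, true) | (false, false) => State::Forward,
        (true, false) => State::TurnLeft,  // drifted right, steer back left
        (false, true) => State::TurnRight, // drifted left, steer back right
    }
}

fn main() {
    assert_eq!(next_state(true, true), State::Forward);
    assert_eq!(next_state(false, true), State::TurnRight);
    println!("controller ok");
}
```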
From a C++ & Ada perspective, Rust fails in multiple areas:
* It weakens strong typing by adding type inference (much like the often abused 'auto' keyword in C++).
* It prefers obscure symbol series over clear phrasing in English words.
* It allows for many ways to accomplish the same thing.
* Its syntax is non-obvious and does nothing to prevent logic errors.
* Its crusade against OOP, replacing it with an alternative that is much harder to learn and use correctly.
The fact that the Rust developers were at no point inspired by anything in Ada/SPARK, which is unquestionably at this point the pinnacle of safe and reliable programming, should speak volumes. Maybe reading the Steelman requirements before Mozilla's devs embarked on throwing out the baby with the bathwater might have been helpful.
Not trying to have an argument, but this is a bit sweeping and slightly unfair IMO, just to provide my view:
1. At compile time, not run time, so it doesn't allow ambiguity or undefined inference; I don't think that's a weakening
2. English is the best and all programmers should know it...? I will take symbols with good documentation, and I don't care what symbols (ASCII or not) we're using
3. Fair enough, but it does provide backward compatibility, the only other option is breaking changes which I would frown upon (or not? Not sure it's the only option, just all I can think of if the language is to add features)
4. "Obvious" is subjective
5. https://doc.rust-lang.org/stable/book/ch17-03-oo-design-patterns.html
As far as I can see, it's incorporated pretty good things from multiple paradigms. The only thing missing from OOP is inheritance, which *is* nice to use, but default Traits provide similar-ish behaviour
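A quick sketch of that similar-ish behaviour: a default trait method is shared by every implementor unless overridden, much like an inherited base-class method (the types here are invented for illustration):

```rust
// Default trait methods give an inheritance-ish code-sharing story.
trait Shape {
    fn area(&self) -> f64;
    // Default method: "inherited" by every implementor unless overridden.
    fn describe(&self) -> String {
        format!("a shape with area {:.1}", self.area())
    }
}

struct Square {
    side: f64,
}

impl Shape for Square {
    fn area(&self) -> f64 {
        self.side * self.side
    }
    // `describe` comes for free from the trait's default.
}

fn main() {
    let s = Square { side: 3.0 };
    println!("{}", s.describe());
}
```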
Hmmm... Ada as the pinnacle of safe and reliable programming? [yet hardly anyone uses it]
I wasn't aware that Rust represented a crusade against OOP. I think the OOP-obsessed have done that to THEMSELVES, by making *EVERYTHING* into "an object" and forcing everyone ELSE into "the java way" or something (as is '.Net' throughout). I don't even like seeing this in Python and Perl. It's "bass-ackwards". And THAT is what's wrong with OOP-obsession. But I digress...
True "Object Oriented Programming" seeks to abstract the physical data storage from how it is used, and so "an object" becomes "a data storage unit and the methods that work with that data". Any requirements BEYOND that were added later, somewhat artificially; as an example, polymorphism is more like a derivative of OOP and not OOP itself.
I'm not familiar enough with Rust's syntax to comment much, but it was initially confusing to me when I read through some Rust source. "Steep learning curve" is _NEVER_ a good thing for a new programming lingo.
And I'm also a bit skeptical when "safety" is claimed. From what I understand, you have to turn MOST of that "safety" OFF to work with 3rd party libs and operating systems directly.
As for efficiency, it's a fair bet that "cumbersome" things happen under the hood, and once people figure out where those are, they'll be writing "unsafe" workarounds to BYPASS it. As an example, an STL class that uses vectors or allocators. 'Nuff said yeah.
So based on things _LIKE_ STL and how it fails to be universally "a good thing" [especially for microcontrollers and tight loops in time-sensitive applications], I also suspect that the internals of Rust [the ones we don't see but that do all of that "safety" stuff] are JUST AS BAD, because they were created with the SAME _KINDS_ of THINKING behind them.
^^^ let's hope I'm wrong. Honestly I'd rather be wrong on this. But I bet I'm *NOT*. Sadly.
I wish Rust could become a money maker for Mozilla in some form. Not because I don't like that many things are just given away for free. It's just that I consider Mozilla to be a force for good in tech and anything that could help them diversify their income streams and gives them staying power is a plus.
Richard P. Brent, Paul Zimmermann: Modern Computer Arithmetic, Cambridge University Press, 2010.
Some write-up by myself is in Part 4, "Fast Arithmetic", of the book Matters Computational (Springer, 2010).
I can give more references if you are interested.
Yup, I meant finding the exact integer square root of an integer with 17000 decimal digits. I just tried it using the Newton-Raphson algorithm as alluded to earlier, and my program takes even longer, mainly, I suspect, due to the amount of time my long division algorithm takes. (I'm not a number theorist or computer scientist, by the way, but a mathematical logician; I've just stumbled upon an amazing sequence and am trying to prove it is infinite...)
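For reference, the Newton-Raphson integer square root is only a few lines. A sketch in Rust using `u128` for self-containment; a real 17000-digit run would swap in a bignum type (e.g. GMP through a binding), but the algorithm is identical:

```rust
// Integer square root by Newton-Raphson iteration, entirely in integers.
fn isqrt(n: u128) -> u128 {
    if n < 2 {
        return n;
    }
    // Start from a power of two >= the true root, then iterate
    // x <- (x + n/x) / 2, which decreases monotonically until it
    // reaches floor(sqrt(n)).
    let bits = 128 - n.leading_zeros();
    let mut x = 1u128 << ((bits + 1) / 2);
    loop {
        let y = (x + n / x) / 2;
        if y >= x {
            return x;
        }
        x = y;
    }
}

fn main() {
    assert_eq!(isqrt(0), 0);
    assert_eq!(isqrt(15), 3);
    assert_eq!(isqrt(16), 4);
    assert_eq!(isqrt(u128::MAX), (1u128 << 64) - 1);
    println!("isqrt ok");
}
```

Each iteration costs one long division, which is exactly where the poster's time is going — fast bignum division (divide-and-conquer on top of fast multiplication) is what the cited book covers.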
Sounds veeery slow, but in general Rust program speed will be very close to (though slightly slower than) C: 1) there are additional runtime checks for some operations, and 2) precisely because C will let you do weird and wonderful things with your memory, etc. A properly written C library will get you there the fastest, but if that doesn't exist and you want to write one, it's probably *safer* in Rust.
As you say, the most likely path to success will be changing your algorithm, rather than the same one written in another language. As a mathematician you're probably the most qualified to say, but I'd try to work out if you can get any of your computations done in parallel, then you can use multi threading to physically crank the numbers with less elapsed real world time.
Wikipedia has a list of arbitrary-precision arithmetic software. For the indicated problem (real numbers), I'd suggest looking at MPFR.
For the speed claims I made, computing Pi to 10^6 decimal digits using Pari/gp takes 400 millisecs on my (pretty decent) system.
For general info (another poster mentioned GMP).
MPFR is a wrapper on that which provides "correct" rounding modes. More relevant here: it provides an amazing selection of mathematical (and other) functions that put it miles ahead of all the other arbitrary precision libraries I've looked at.
Rust does very few runtime checks -- the majority are done at compile time, and the runtime ones are normally explicit at development time (that is, you know you will be getting a runtime check). In all cases, if you *really* care about performance you can drop the runtime checks and do most of the things you can do in C, sacrificing the safety guarantees in those areas.
I would also say that Rust has fewer undefined behaviours than C, so the Rust version can be optimized to run faster than the C one. Of course, it might not, but I think the assumption that C is going to be faster is not true.
Could be right; my understanding was that there is a very small runtime with Rust (as with C), and I have seen references to it around Trait implementations, array bounds checking, memory ('dropping') etc -- but I might need to switch to a Rust forum for detailed information on whether that is actually different from/bigger than what comes out of C for trivial cases. Overall I would recommend it purely for safety, and if any performance sacrifice exists, meh.
Yes, almost all benefits are realised at compile time, but it's all assembly somewhere, and C doesn't check that stuff -- it just does it...
Trait implementations have overhead if you cannot determine what the trait implementation is at compile time -- you end up with a virtual method call. However, if you use traits as a generic bound, you can work out the implementation at compile time, so no overhead. Array bounds checking, yes, also, although if you use iterators as the `for` loop does, it does not bounds check the array. For some uses of arrays in C, Rust uses tuples, which again do not bounds check at runtime -- they are type safe at compile time. Memory dropping is worked out at compile time in most cases. You can explicitly use reference counting where it's too complex to work out where dropping should occur, and yes, this has some overhead.
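The dispatch difference can be shown in a few lines (the types here are invented for illustration):

```rust
trait Greet {
    fn hello(&self) -> &'static str;
}

struct English;
struct French;

impl Greet for English {
    fn hello(&self) -> &'static str { "hello" }
}
impl Greet for French {
    fn hello(&self) -> &'static str { "bonjour" }
}

// Generic bound: monomorphized per concrete type, so the call is resolved
// at compile time -- no dispatch overhead, much like a C++ template.
fn greet_static<T: Greet>(g: &T) -> &'static str {
    g.hello()
}

// Trait object: the concrete type isn't known at compile time, so the
// call goes through a vtable at runtime -- a virtual method call.
fn greet_dynamic(g: &dyn Greet) -> &'static str {
    g.hello()
}

fn main() {
    assert_eq!(greet_static(&English), "hello");
    assert_eq!(greet_dynamic(&French), "bonjour");
    println!("dispatch ok");
}
```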
Finally, all of this you can turn off with unsafe code. And, of course, all of the above is true on the Rust side; in practice, even when Rust thinks it is array-bounds checking, LLVM may be optimizing it away.
What I am saying here is not that Rust is faster or that C is faster. I am saying that you cannot reason it out; you have to build it and check.
"if you can get any of your computations done in parallel"
multi-thread parallelism is good in SOME cases, but not all. I took a look at how Rust manages this and I wasn't too keen on it. Since I've done this myself in C and C++ using pthreads and windows threading, I know what's going on under the hood. [I once wrote a cooperative thread library for Windows 3.x the 16-bit OS decades ago, similar to fibers - I've been doing this kind of thing a LOOooong time!]
In short, every time you create a thread you have to initialize it within the OS, which takes a bit of time. In the case of a quick sort, half the process in the initial part of the algorithm [the pivot] must be single-threaded, and the follow-up can be multi-threaded (in my multi-thread demo, qsort with 4 threads is about 2.5 times faster than single-threaded, YMMV).
Typically I would use work units and a mini-scheduler to limit thread count, and since I've written this sort of thing before, adapting an existing one is pretty straightforward. The 2 main features, "spawn a thread" and "wait for all work units on this thingy to finish" need to be built into it like an object, but it's not that hard to do. The pthreads 'join a thread' is very useful for this purpose. With Winders it's just a bit more difficult though [not much just a bit]. No spinning on 'yield()' either, you have to go into true 'wait state' and NOT do rapid polling, or the CPU will read 100% all of the time when it's really NOT. [I've had people try and argue with me over this, too, yet it's easily observed when you do it wrong - but then again, does Rust do it RIGHT?].
When the overhead of creating multiple parallel threads is ABSTRACTED, then it won't be obvious to the coder that his threaded solution becomes LESS EFFICIENT. And it easily CAN.
And anything that's trivially multi-threaded doesn't need "all that overhead" that I'd expect from the internals of Rust. Those could easily be coded in plain-old 'C'. (DFT comes to mind)
Still, having threading built into the lingo is interesting. Perhaps in the next evolution of C++ it'll be there, too.
C++ has threads since version C++11 . Rather than create threads ad-hoc, the more efficient way is to manage a thread pool, fixed to the number of available cores. The problem with efficiency remains because of synchronization overhead, see also Amdahl's law (not to mention whole new category of bugs). Although of course there are better alternatives to explicit synchronization, e.g. message passing (for C++ example see seastar library - it looks ugly, but is also very efficient). Message passing is one of the reasons to try Go because channels are quite a good abstraction.
yeah C++11 - I think I remember looking at that, STL support for threads (googled just now, saw it - yep). But I'm not an STL fan for a number of reasons and since I've been doing it "the old way" before threads were 'cool', I just continue to do it the way I always have. And, like I mentioned before, I sometimes run into "some standard library" doing things the WRONG way, and end up using my own code to fix/workaround their bugs/features.
For trivial things, using the pthread lib in Linux or 'CreateThread()' in Windows is pretty straightforward. Most simple threading things would just translate the STL thread stuff into that inline with 'no difference' in performance. So if I saw it in someone else's code I'd just leave it as-is. But there are cases, especially when background tasks are being done, etc., or the "thread pool" you mentioned (something I might use in a 'work unit' manager object), when waiting in idle state, where Linux, FreeBSD, and Windows all have the same problem of spinning the CPU at 100% if you don't explicitly enter some kind of wait state within the kernel, and that usually involves a synchronization object or an explicit time delay.
And, last I looked, Windows 10's TIFKAM/UWP system is guilty of having this very problem. You can see it during application start when two TIFKAM/UWP applications are trying to exchange info for some reason. Maybe this has been fixed, but last I checked it wasn't... (and so my point about rolling my own solution that uses my own experiences to avoid such problems and avoid arguing about validity of pull requests that would fix it but for some reason do not get approval).
It might be interesting to try and repro this problem with two Rust applications.
STL threads are not the same as what I remember reading about Rust though -- having loops that automatically split into threads depending on your specifications (as one example). That's kinda what I meant. I do not recall any feature in C++ that could do that directly.
Having a C++ lingo feature that uses the STL objects for an alternative for() would be interesting. Not sure if any of the newer compiler features might be shoehorned into doing this. But, in short, if the for() loop with threading support automatically used STL threads to spawn a separate thread for each iteration AND manage a thread pool, then that is closer to what I was thinking of.
Messaging is less desirable unless you have a hyper-efficient message queue set up. It can be done, but it would be less ideal than using standard synchronization objects. In some cases, setting up a message polling loop (like you do for Windows old-school programming) is a good idea, but I'd guess that for the vast majority of uses, it's excess overhead and would perform poorly by comparison.
I actually set something up like this in my X11 toolkit, to handle X11 events asynchronously from a queue - I re-prioritized Expose events [for painting] and combined them when possible to speed up draw operations. Also set it up for user-type messages to bypass the X server and be posted directly to the queue - but it's pretty big code and too complex for most cases, yeah. It's not something I'd really want to have to do again, either.
(for IPC a generic messaging system makes sense, but not thread-to-thread.)
Serious question.
The docs boast about doing things that you could easily do with unique pointers in C++. Boring.
It handles ArrayLists easily. So does C++.
It does not have the legacy pointer arithmetic that was always a bad idea in C. OK, but boring.
It does not have proper garbage collection as found in Lisp, Java, .Net. Pretty important for most applications. Rules it out for me.
How does it do the hard stuff? Like stuff with real pointers like Linked Lists? I never waded through the hype far enough to find out.
What does Rust do?
Well, it provides a clean, high-level language which is comfortable to use, while providing a single powerful language feature that makes a very large class of bugs extremely unlikely in normal use (ie double free, buffer overflow, uninitialized memory, and memory leak).
It does not have garbage collection any more; it did at one point, but it now uses "lifetimes" to be able to work out when it should automatically free things without the programmer having to do anything. Looked at another way, it has a static analysis garbage collector -- so like a Java style GC without the runtime overhead. For when that isn't enough, it has reference counting, but used only for those bits of code that need it. No application needs GC; they need automated memory management, and Rust has that.
How does it do "hard stuff" like Linked Lists with pointers? It uses pointers. The safety guarantee is, as I say, just a language tool. Most of the time you use this tool, and that's fine, but you can write "unsafe blocks" which let you outside the box, which includes raw pointers, even pointer arithmetic.
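To make that concrete, here's a singly linked list in entirely safe Rust — `Box` as the owning pointer, `Option` standing in for NULL. (A sketch: the genuinely hard case is a *doubly* linked list, which needs `Rc`/`RefCell` or unsafe code, which is why `std::collections::LinkedList` uses raw pointers internally.)

```rust
// A singly linked list in safe Rust: Box is an owning pointer,
// Option<Box<Node>> plays the role of a nullable next pointer.
struct Node {
    value: i32,
    next: Option<Box<Node>>,
}

struct List {
    head: Option<Box<Node>>,
}

impl List {
    fn new() -> List {
        List { head: None }
    }

    fn push_front(&mut self, value: i32) {
        // take() moves the old head out, leaving None behind -- ownership
        // of the rest of the list transfers into the new node.
        self.head = Some(Box::new(Node { value, next: self.head.take() }));
    }

    fn sum(&self) -> i32 {
        let mut total = 0;
        let mut cur = &self.head;
        while let Some(node) = cur {
            total += node.value;
            cur = &node.next; // walk the "pointer" chain
        }
        total
    }
}

fn main() {
    let mut list = List::new();
    for v in [3, 2, 1] {
        list.push_front(v);
    }
    println!("sum = {}", list.sum());
}
```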