Eh?
'... it may be time to dump "unsafe legacy languages" and shift to more modern, safer ones.'
No thought as to the capabilities of the developers they employ?
Microsoft Security Response Center (MSRC) is waxing lyrical about the risks inherent in C and C++ coding, arguing it may be time to dump "unsafe legacy languages" and shift to more modern, safer ones. The Redmond-based biz has long been a C++ shop when it comes to the programming that matters most to the company – the Windows …
Even with the best coders in the world, there will always be bugs. The problem with C and C++ is that they make these kinds of error really easy to make.
The idea behind languages like Rust is that you accept that mistakes happen, and any performance hit you take is a worthwhile trade-off for less debugging and a reduced risk of these catastrophic bugs.
That's strange because as I remember C and C++ are the major languages for other operating systems which have far fewer security problems than Windows.
Definitely not convinced that the language is to blame..
A system is never going to be more secure than its underlying code and OS model... Bad code/model = bad security.
Microsoft has the most CVEs, but many more products listed. There are many CVEs there that would be listed multiple times, for each different Windows OS.
What I see? There are a lot more vulnerabilities in Linux, macOS or Android than there are in stuff made by Microsoft. But then I knew that.
There's a lot of misconception out there amongst the fanboys.
"Scroll down to the bottom and move the pointer over the biggest culprets in the chart. What do you notice?"
Are you looking for this page? https://www.cvedetails.com/top-50-products.php
The worst Microsoft product doesn't even appear until #9 (I think if IE was added up properly instead of split across two entries that would be #9 instead of Windows Server 2008).
The idea that Microsoft code is somehow more vulnerable than their competitors' is a hangover from almost 20 years ago before they got really serious about security.
A simple count of the number of flaws is not useful - a common, easily used vulnerability counts for as much as an obscure (but possibly more dangerous) one.
Also, the number of copies of a given piece of software containing a flaw can result in a 'less dangerous' flaw causing widespread damage while an obscure flaw may never be exploited.
> That's strange because as I remember C and C++ are the major languages for other operating systems which have far fewer security problems than Windows.
You weren't supposed to bring that up. :-)
Dear Microsoft,
Your operating system is the only operating system that I know of that requires a heavyweight program running constantly, with escalated privileges, for the sole purpose of preventing your own operating system from destroying itself. And/or any other instance of your operating system that it can find in the net neighborhood. I'm talking about things like Bitdefender, Malwarebytes, Avast, etc.
And for the past two decades you have done exactly nothing to mitigate this shitshow, except talk about user experience, write memos and publish whitepapers.
Oh, yeah, almost forgot. The Ribbon.
The day when Windows users won't be required to run an anti-virus program to prevent your operating system from becoming a festering Petri dish, we can start talking about C or C++. Until then, look inwards.
Funny how I don't have to run Avast or Malwarebytes on Linux.
except talk about user experience
Funny you should mention that, because I just spent 40 minutes getting an image to print as something other than blank pages from my wife's Windows-10 tablet to a wireless printer. I generally avoid Windows, but circumstances made that impossible this morning. My impression -- only reinforced by the morning's miserable experience -- is that Microsoft's UI has deteriorated badly since the Windows 9x and XP days. Probably too much pretty. Definitely too little useful information/discoverability. I count myself lucky that it was only 40 minutes, not 4 hours.
> [ ... ] we also don't know that Linux boxes aren't festering [ ... ]
You need quite a bit more understanding of the inherently open attack vectors on Windows as opposed to Linux.
On Windows, the Windows registry and the lack of effective privilege separation are open attack vectors waiting to be exploited. And they are the two most exploited ones. That does not exist on Linux or UNIX. Linux/UNIX don't have a registry, and privilege separation is effective.
Or, prove your point with facts: URL to a list of known Linux viruses. Do you have one?
This is what Wikipedia says:
As of 2018 there had not yet been a single widespread Linux virus or malware infection of the type that is common on Microsoft Windows; this is attributable generally to the malware's lack of root access and fast updates to most Linux vulnerabilities.
Come back when the Windows registry isn't a Petri dish waiting to be infected by design.
You obviously missed the tongue firmly planted in my cheek when I was typing that reply, though I was making a not so subtle point about the argument you made when you were trying to quieten down the anonymous Windows coward who was being inconvenient and not getting infected.
I'll risk my luck a little more and point out that, by your own infallible logic, there's no way there could be a list of *nix viruses because no-one running *nix runs AV and thus *they don't know whether they have viruses*.
Or, to paraphrase your good self: How do they know? They don't run any anti-virus, so how would they know?
Well, you have a self serving system there.
If you look at how AV works, most of them look for the signature of already known viruses and trojans. This requires you to know what to look for.
As there is no common UNIX or UNIX-like malware, there are no signatures in the AV databases, so running something like clamav will not see Linux viruses because it has no fingerprints to look for.
Conversely, there is lots of Microsoft malware, and many virus signatures. So clamav will detect Windows viruses, but not Linux ones.
There will be (and possibly already are) Linux viruses, but at this time the execution model of Linux vs. Windows means that pre-compiled executables are significantly less likely to be run on Linux than on Windows, and even if they are, they are less likely to be able to infect the whole system without relying on other vulnerabilities. That would not prevent user-mode self-proliferating viruses, though, especially as scripting languages become more capable and desktop GUI shells become more like an OS in their own right.
The real fix is to prevent tools auto-executing code when, say, an email is opened, which is why a lot of mail systems won't even process attachments - that was one way (particularly on Windows) that malware triggered code execution.
I think that we will see user-mode, scripted, self-modifying malware on Linux which infects the GUI startup mechanisms at some point (which would make it much less likely to be discovered by signature-based AV systems), but I don't think that virus writers have caught on to this method of attack yet.
> I think that we will see user-mode, scripted, self-modifying malware on Linux which infect the GUI startup mechanisms [ ... ]
No, we won't. There is a fundamental difference between the Windows execution model and the Linux/UNIX execution model.
This difference is by design, it is baked into the OS, and unless Windows adopts the Linux/UNIX execution model, Windows is screwed forever.
I'll give you a simple example: on Linux/UNIX systems, it is not possible to load a shared library (*.so) into memory and execute it with or without escalated/root privileges. On Linux/UNIX systems, shared libraries are not executable, and can only be loaded from within a separate and independent execution context -- i.e. a running program. Privilege separation applies to the running program.
On Windows, it is possible to load a *.dll into memory and execute it. It's a common attack vector: poisoned URL downloads a *.dll that appoints itself as Administrator, and then takes over the entire OS. No user interaction required. As long as that is possible, there is no point in discussing Windows security, because there is none.
And I haven't even mentioned hardened versions of Linux such as SELinux, which is the default on distros such as RHEL and Fedora.
Ubuntu - and I think SuSE - use AppArmor.
Personally, I am a fan of SELinux because I believe it is more effective than AppArmor - in spite of the fact that configuring SELinux in enforcing mode can be a major PITA.
You are wrong about this: shared libraries work in much the same way on Windows and Linux. You cannot load a DLL without an executable like rundll.exe (a big culprit in spreading malware), and privilege escalation is not possible without exploiting some kind of vulnerability, of which traditionally there are quite a few in Windows. The issue with Windows is all the outdated code and backward compatibility; no new language is going to change that.
>I'll give you a simple example: on Linux/UNIX systems, it is not possible to load a shared library (*.so) into memory and execute it with or without escalated/root privileges. On Linux/UNIX systems, shared libraries are not executable, and can only be loaded from within a separate and independent execution context -- i.e. a running program. Privilege separation applies to the running program.
I don't think you understand what I am suggesting. If you are trying to start, say, a crypto-mining operation, then if you can get the startup of the GUI to run some scripted code through Perl, Python et al., you don't need to actually infect the operating system proper.
For example, if you were able to drop an executable script somewhere under a user's home directory, suitably disguised, and add it to the GUI startup (all it takes is a single line dropped into one of the startup files, which are normally writable by the user), then if the code was in some capable interpreted language like Perl or Python, you could get an infection that was able to open up network connections, run local code, and/or send out attacks on other systems without actually compromising the OS.
It's true that the scope of the infection is limited (and probably relatively easy to disinfect), but it would still be able to consume resources, dig into other aspects of the user's environment, and act as a vector for further infection in an environment. And as a scripted language, it could be readily changed on each attempt to make signature recognition more complicated for the AV writers.
Modern scripting languages are just so capable, it is no longer necessary to drop compiled code onto a system to be able to do complicated things.
"Or, to paraphase your good self: How do they know? They don't run any anti-virus, so how would they know?"
The same way it was originally realised windows had viruses and so needed protection - shit starts to go wrong. Given the number of linux kernels installed out there and the fact that STILL no antiviruses are routinely required I think that tells you all you need to know.
Really? Do you think that malware is on Windows because it is the buggiest OS? What would happen if everyone suddenly started using Linux as their main desktop? (Besides a huge shit storm.)
That being said, I agree that AVs were a burden on computer resources (not really now, when you have 10+ cores), but there's no amount of "secure" code that will prevent user idiocy. Also, most people would not like a dictatorial OS that restricts you in every way possible "just for your own good".
>The idea behind languages like Rust is that you accept that mistakes happen, and any performance hit you take is a worthwhile trade-off for less debugging and a reduced risk of these catastrophic bugs.
I disagree with that comment because it feels like you're mischaracterizing what Rust is trying to accomplish. Maybe I am just misunderstanding what you are trying to say. The philosophy behind Rust is that, with the old way of doing things, these mistakes are guaranteed to happen because there is no compiler-level enforcement of resource ownership.
Rust is a language with strong functional influences and one very key feature: variables have ownership. Assign the value of one variable to another and the old variable can't use it any more (at least for owning types such as strings; simple Copy types like integers are just duplicated).
e.g. (in actual Rust, using a String because plain integers are copied rather than moved):
let a = String::from("5");
let b = a;              // ownership of the string moves from a to b
println!("{}", b);      // prints 5
println!("{}", a);      // compile error: a gave up ownership of its value
That one single concept makes entire classes of programmer error impossible to commit, because if you make a mistake the compiler itself will detect it. This also makes the resulting binaries faster, because the code has been proven correct at compile time and so a variety of runtime checks are no longer needed.
The downside is that the average programmer will have a brief but steep learning curve because they will need to completely rethink how they approach most problems. But IMO the benefits to long term code viability are just so utterly overwhelming that it's worth the effort.
Also, Rust is interoperable with C, so libraries from one can be used in the other.
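To make that interop point concrete, here's a minimal sketch of calling a plain C function from Rust - the C library's strlen, assuming the usual C runtime is linked, which it is for any ordinary Rust binary:
use std::os::raw::c_char;

extern "C" {
    // declare the C function's signature; the linker resolves it from libc
    fn strlen(s: *const c_char) -> usize;
}

fn main() {
    let msg = b"hello\0"; // NUL-terminated, as C expects
    let len = unsafe { strlen(msg.as_ptr() as *const c_char) };
    println!("{}", len); // prints 5
}
Going the other way, a Rust function marked extern "C" and #[no_mangle] can be called from C, which is what makes piecemeal migration of an existing C/C++ codebase plausible.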
Except the whole point of Rust is that there is no performance hit.
A combination of language features and compile time checking make the accidental zapping of random bits of memory much more difficult. There are no extra runtime checks so performance is close to native C. You can still access and update arbitrary memory locations but you have to mean it.
"the problem with c and c++ is they make these kinds of error really easy to make."
W R O N G ! ! ! ! !
When you know what you're doing, C and C++ are probably the BEST possible programming languages to use, which is why they've been around since the 1970's.
So, it's worth the "performance hit" - that's an excuse that Micro-shaft has been using since Windows Vista.
NO EXCUSE I say. Let them DiE ON THE VINE with that kind of WRONG thinking.
I knew the moment they referred to other-than-C/C++ as "modern" that it was just another page out of Arthur C. Clarke's "Superiority", where THE WHIZ KIDS have come along, and it's THEIR TURN NOW, and so they must ABANDON that which is tried/true/reliable for the PREVIOUS TWO GENERATIONS, because *THEY* must *ASSERT* *THEIR* *IMPORTANCE* and *THROW* *ALL* *OF* *THAT* *AWAY* because THEY know best, THEY are smarter, THEY are younger, THEY are IN CHARGE, and it's THEIR TURN NOW.
Expected+predictable FAIL to follow.
(at least they're not using JAVASCRIPT like NodeJS or similar for anything IMPORTANT like maybe their DEVELOPMENT TOOLS... No, wait...)
... there will always be bugs. The problem with C and C++ is that they make these kinds of error really easy to make.
That's true of C, certainly.
One of the design goals of modern C++ is to make it really hard to make errors of the type being discussed here -- and that is a goal that is very largely met when the language is used in a modern, safe, idiom. Of course, you can write Fortran (or C) in any language, but if modern C++ is used correctly it should be a very safe language.
The trouble is that a lot of C++ programmers use the language as "C with bells on", and fail to reap the advantages.
"The idea behind languages like rust is you accept that mistakes happen and any performance hit you take is a worthwhile tradeoff for less debugging and reduce risk of these catastrophic bugs."
I hope that's not the idea behind Rust. There is no reason why correctly written code should run slower than buggy code. The code should not be bug fixing itself at run time, it should be correct at compile time.
<<There is no reason why correctly written code should run slower than buggy code>>
You miss the fact that there is no way to tell apart "correct" from "buggy" code, much less automatically. The best you can hope for is to avoid operations that could let bugs become attack vectors.
And yes, as Linus said, every bug is at least a denial of service vulnerability, but let's not dive into that.
PS: please, the "but there's this and that code that has been verified" people can save themselves from commenting, as there is no one able to assess that the verification is correct - see Turing, Gödel, etc...
Twould be Kolibri OS then. Blimey, the application ecosystem is a bit sparse...
Chris Sawyer programmed Rollercoaster Tycoon 2 in 2002 in MS Macro Assembler V6 *
But hand written assembler is far more readable than what my latest GCC spits out at -O3
Wonderful game by a programmer Genius.
*https://www.quora.com/Why-was-Roller-Coaster-Tycoon-written-in-assembly
I remember, at Xerox, operating systems written in Smalltalk-76 and Interlisp. Great user interfaces for the day and when something broke the debugger was helpful - or you could just blow away the debug window and continue. The Smalltalk-76 footprint was a megabyte or so and the interlisp footprint was about 10 megabytes. This was with editors and printing etc.
Of course the big board computer with 64 kilobytes of RAM and 2 8" floppies booted in less than a second.
I think that if you were to profile the startup of any modern OS these days (with the exception of those that are designed as lightweight OSs), the startup time is not due to starting the OS kernel, but to the identification of system resources and the startup of services.
I occasionally boot an Intel port of Edition 7 UNIX in a VM, and it's noticeable how little is actually running. Just init, cron (which also performs the actions of the sync daemon), your shell (and a ps), lpd (if you're running it), and any gettys for any other terminals.
Something like this starts in a flash! I suspect that the system you're describing was similarly lightweight.
Just look at a modern Linux or Windows. There's literally dozens of processes, all of which need to be started, most of which you don't know what they do (especially with obscure, ambiguous naming conventions used by Windows!)
I'm sure that all of these things are needed, but I sometimes wish for the more simple systems of the past.
No thought as to the capabilities of the developers they employ?
...writes the developer who's never made a mistake.
Exactly - time pressure, tiredness or that Dunning-Kruger thing makes this inevitable.
Combine that with a nascent software industry that has far more programming jobs than programmers (and if you change that to 'engineers', even fewer) and it's a dead cert.
Unfortunately, ideals don't often work; wanting good software doesn't mean you'll get good software.
"Microsoft has the luxury of being able to employ the cream of the crop..."
Ha! That's crazy! Gates was the senior "architect" for 25 years; look where that landed them. That said, Gates was clearly a better programmer than the Force-The-Updates-NOW! and Just-Make-It-Flat goon running the current undertaking (that, and when Gates left they kinda went into a permanent down cycle - credit to Ballmer there).
Microsoft is currently so dense that when they sneeze they shatter diamonds. They just can't help but break shit.
Code injections, buffer overflows, accessing memory beyond the end of your pointers and so on are all developer errors.
If you hit your thumb while hammering in a nail, or a nail goes in at a bad angle, or it gets bent, or goes shooting off to the side rather than into the timber, do you blame the hammer, or the person who is using the hammer badly?
When you provide your workforce with hammers and screwdrivers, and the screwdriver users only occasionally fuck up a screw and the hammer users regularly fuck screws up, then your answer isn't "who do you blame, the hammer or the person using the hammer badly" it's "why are you consistently using this tool that evidence shows doesn't produce good results?".
You can say it's not the tools fault all you like but the data shows there aren't enough devs who are competent enough with the tool they've been given to do a good job. Moaning that the tool is perfectly servicable does nothing to help fix the problem. It's just sticking your fingers in your ears and going "la la la, can't hear you!".
It might well be that your hammer is very old and/or crappy and made of soft steel, and so the head has rounded over and mushroomed and nobody thought to file or grind it flat again.
My good quality CrVa hammer has a FLAT head because I keep it that way.
BTW you can drill pilot holes for nails in hard woods or where you cannot afford to bend one. Been there, done that.
And I'm just a home woodworker. But my father was an engineer and taught me the proper respect for a good tool.
No thought as to the capabilities of the developers they employ?
I personally believe that is the root of the problem. Take a six-months to one-year course in "programming" and suddenly you can make some good money.
I've made my share of mistakes but usually managed to catch them. This new crop that I've dealt with off and on lately seem to ignore errors and logical thinking.
Goodness me, there's a lot of MS bashers / Rust deniers out there today!
It Really is Time to Move On.
Setting aside concerns about staff heritage, these days one would really struggle to justify writing a new OS from scratch in C/C++, as opposed to choosing Rust. Ultimately the only thing stopping it is the opinions of old dogs not willing to learn new tricks. Other concerns, such as the stability of the language, are merely temporary barriers, not fundamental no-gos.
The fact that MS is thinking of going the way of Rust is interesting; if they do it wholesale, their kernel (and whatever else they write in it) is going to become very solid indeed. That would start making things like Linux and the BSDs look positively antiquated. Whilst those communities would be spending a lot of time making sure there's no memory mis-use in their code (and there's likely shed loads), Microsoft would be concentrating on eradicating functional bugs.
Round the bazaars there has been some loose talk of re-doing Linux in Rust. Because the C interop isn't too bad, you'd be able to do it bit by bit; there's no need to do it in one big bang. There's also a bunch doing a fresh OS, called Redox, which looks pretty good.
The Next Generation of Programmers
The real killer will be if universities, at least those still teaching systems languages like C/C++, dump them and pick up Rust instead. This has happened before - Java killed off a lot of C/C++ tuition. That happened simply because it was easier to teach Java, not especially because Java was superior or anything like that. Rust might just finish C/C++ off in the educational sector. In a few years' time the supply of graduates who even know what C/C++ is could dwindle to zero.
Companies (who have a hard time recruiting already) will be faced either with very lengthy and expensive training to get newbies up to speed in C/C++, or with the quicker-to-learn Rust instead. Most of what you learn with C/C++ is not the syntax and libraries, it's avoiding all the ghastly pitfalls littering the language, ready to trap the novice programmer. With most of those eliminated in Rust, you're left with just learning the syntax and libraries. That's far easier and quicker.
Develop, or Die.
So for all those dyed-in-the-wool C/C++ stick-in-the-muds, it's probably time to start worrying about becoming obsolete. You have to ask yourself, what's better? To be a leading light in the adoption of a better and more sustainable language? Or to grudgingly learn it when it's become unavoidable, and get paid the same as a fresh-faced graduate straight out of college? All that valuable and remunerative experience of how to avoid pitfalls in C/C++ is going to count for sweet F.A. if Rust takes off.
Either show the money in which direction it should be going, or try and catch up when the money has made its own mind up.
So your entire comment boils down to "old bad". Ain't nothing wrong with C for system programming. Rust is absolutely pointless. Any and all amount of "safe" code goes to the shitter the moment you try to dereference pointers. Which is an inherent part of the way operating systems work. Use Rust where it is applicable. It definitely is not in this area.
No my comment boils down to old is expensive, and at grave risk of becoming a dead end.
Your view that it's not appropriate as a systems language will come as a shock to the team writing Redox. What's impressive about that project is how quickly they've done it - from nothing to a kernel + GUI in about 3 years, I reckon. It took Linux a lot longer, and that simply borrowed an existing X server. It's taken Google a lot longer than that to get Fuchsia as far as they have.
You only need unsafe code in a few corners of the kernel, and none at all higher up.
I remember using C on a system that did not have memory allocation malloc(). Programs were far more reliable with static structures and arrays. Systems would run for 12 months or more and never crash or need a reboot.
Pointers and memory allocation are the devil.
I remember using C on a system that did not have memory allocation malloc(). Programs were far more reliable with static structures and arrays.
It's common for very memory-constrained systems not to support dynamic memory; they still have bugs -- many of them in the things programmers have to do to work around the lack of dynamic memory support!
When you have to implement a solution that cries out for dynamic memory, on a system that can easily support it, in a language that doesn't (such as Fortran on a mainframe, in the "good" old days) you end up implementing your own memory management in arrays ... and the scope for getting that wrong is far greater than the scope for messing up language-supported dynamic memory handling -- even in C!
[To see how to do it well I recommend "Fortran Techniques" by A.C Day, Cambridge University Press, 1972. A great book for people trying to do 1970s programming in a 1950s language.]
Not ironic at all. Perhaps you meant "coincidentally" or "parenthetically"?
I agree 100% that languages designed for writing operating systems in are not a great choice for the general ledger, nor are those that try to look like languages intended for writing operating systems.
But today's programmers-sorry-software-architects run screaming from languages that do not resemble "C", I've found.
Or you could drop the Redmond option from your list and move on.
Rust isn't a Microsoft thing. It's a Mozilla thing. Plus, last time I checked C was pretty universal, the same problems exist in non windows worlds too.
C isn't going away until long after COBOL and Fortran are cold, dead and buried ... and that isn't going to happen until long after my Granddaughter is retired.
This is no reason not to consider alternatives though.
Yes, thank you, I know rust is mozilla. I was discussing the advisor, not the advice. Please, do try to learn to read for content ... unless it will get in the way of your near-constant ire and/or bile of course. Ta.
The rest of all y'all ... I know what rust is, and where I would use it. But that wasn't what I was talking about, I was merely pointing out that the poster I was replying to had another option. The world has a place for good C coders, and pays well for them, so why take career advice from an outfit that clearly has issues when it comes to the profession?
Yes, never learn anything new, never improve, just be a stick in the mud.
Are you saying the C/C++ of today is identical to the C/C++ of yesteryear?
Even over the 4 years I was at university new features were added to these languages. It is still being improved and evolving, it isn't a static 'dead' language that is fixed and never changing. Just like the English language that changes, expands, every year, or humanity itself evolves. The perfect example is C++, it is an evolution, an extension of, K&R C.
I don't believe I said that. But if you want to know my opinion, the C++ language is certainly changing in positive ways, but it's still not safe and never will be - not for new code and certainly not for existing code. And if you were rewriting old code with the intention of making it safe (i.e. stable, reliable and resistant to exploitation), why not rewrite it in a language with that express purpose?
I'm told that it's something of a brain @~%$, to begin with
A lot of Rust is basically C improved by nearly 5 decades of experience, and it's been rapidly improving itself as things are tried and either work or get dropped. (The earlier versions looked like they were going to rival PERL for greatest use of random punctuation characters, but most of that got stripped out as the language was refined.) The biggest leap of faith was the decision to not have a required runtime system, which I think was what made Rust a plausible replacement for C/C++, rather than just another safe language like ML or C#.
The borrow checker (and memory lifetimes) is what seems to cause problems for programmers new to it, and that has been improving as well. Rust pointer types follow linear logic, so if you've met that before life isn't too hard (although mistakes still happen). If you don't know linear logic, you may have a bit of a struggle until it clicks.
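For anyone who hasn't met the borrow checker, a minimal sketch of the kind of thing it rejects - a reference that outlives the value it borrows, which a C compiler would happily accept and leave dangling:
fn main() {
    let r;
    {
        let x = 5;
        r = &x;            // r borrows x
    }                      // x is dropped here, while still borrowed
    println!("{}", r);     // compile error: `x` does not live long enough
}
The equivalent C - stashing a pointer to a dead stack variable - is exactly the sort of bug that only shows up at run time, if you're lucky.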
also significant and quite possibly related was the decision to strip out garbage collection and go the full monty on manual memory management. my understanding is that it aims to codify tried and tested safe memory practices such as RAII by baking them into the language semantics (‘borrow’ for example).
it is NOT a diss on C/C++ as such. rather you could consider it an homage, where it borrows the ideas that drove the success of those languages (including tasking the programmer with understanding system language complexities). but tries to use the 50 yrs since K&R to simplify where possible, trim where iffy (I’m mostly thinking inheritance here). but, mostly, aggressively force good memory practice at the language/compiler, rather than linting level.
I don’t see the world ready to ditch C/C++ yet. but one attribute of a highly skilled programmer is the ability to learn new languages/concepts and be open to innovation, rather than reflexively insist that all must remain in stasis. this was true 30 yrs ago, when I was told MVS+COBOL would rule forever. 20 yrs ago when it was client server. 10 yrs ago when Java/J2EE was the one language to rule them all.
C has had a tremendous run. It may still. But discounting that it can ever be improved on is a mug’s game. Even if Rust itself does not turn out to be promised land.
i’ve dabbled in C and enjoyed it. wish I had time with Rust.
(p.s. may I respectfully suggest dropping “super lang” and the like? our industry is so full of hype that it, unless meant sarcastically it triggers skepticism)
"this was true 30 yrs ago, when I was told MVS+COBOL would rule forever.
45 years ago. MVS (zOS)+COBOL still runs government/big business. It's not going to go away any time soon.
"20 yrs ago when it was client server.
50 years ago. We're still client/server, and it's not going away. Maybe the names will change, but the concept isn't going to any time soon.
"10 yrs ago when Java/J2EE was the one language to rule them all."
20 years ago. That was just a lie. We all knew it, but it pretty much signaled the rise of marketing becoming in charge of engineering, which is why we're in the shithole we're in today. Marketing minds making engineering decisions pretty much guarantees a clusterfuck.
Rust really isn't hard to learn if you know C/C++. The syntax and structure are C-like, and from a complexity standpoint the language sits somewhere between C and C++ - it doesn't have classes, it has structs with function bindings.
The biggest thing you'll have to get used to is the compiler will force you to write safe code. C/C++ compilers really don't care if your code is safe whereas Rust will check every lifetime and make sure that it is. Fortunately it tends to have friendly / helpful errors but it can still be frustrating at the start.
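By way of illustration, a minimal sketch of what "structs with function bindings" means in practice - an impl block rather than a class (the Counter type here is invented for the example):
struct Counter {
    n: u32,
}

impl Counter {
    fn new() -> Counter { Counter { n: 0 } }   // associated function
    fn bump(&mut self) { self.n += 1; }        // method bound to the struct
}

fn main() {
    let mut c = Counter::new();
    c.bump();
    println!("{}", c.n); // prints 1
}
Note that if c weren't declared mut, the call to bump() would be a compile error - a small taste of the compiler making you spell out your intent.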
Thanks. You and everyone else espousing RUST, not to mention the MS guy, have convinced me to give it a few tire kicks and a test drive. I've been looking at new, to me, languages lately to give my brain a work out, in a constructive way. This might be it! I feel that itch at the back of my head...
"it looks like its time for me to hit the books and look into rust."
Don't be in too big of a hurry to jump on Micro-shaft's "new bandwagon" - keep in mind that after nearly 2 decades, C-pound only has around 5% or 6% on the TIOBE index, unlike the Java, C, and C++ it was _SUPPOSED_ to supplant...
/me checks - make that ~4.4%
https://tiobe.com/tiobe-index/
(If you plan on jumping on ANY bandwagons, monitor THAT page and see what's trending)
RUST not even in the top 20. Neither is Kotlin, I might add...
That TIOBE index is...um....bullshit?
C# is at 4.4% while VB.Net is at 4.2%? Even Microsoft are happy to admit that C# is an order of magnitude more popular than VB (https://devblogs.microsoft.com/dotnet/the-net-language-strategy/)
Of course, if you look at their approach, you realise that asking Google how many hits it returns is *maybe* not the most accurate methodology (https://tiobe.com/tiobe-index/programming-languages-definition/)
If they base their index on number of Google hits, then it seems to me to be more of an index of languages that users most need help with. I suppose it's as good a method as any other completely arbitrary method, short of asking actual developers what they use and how often.
Shrugs. In danger here of getting drawn into vituperative discussions about coding languages, which usually devolve into calling others stupid or screaming faggots, which is something I've been called after saying something mean about C++. I still don't understand why. This was Usenet so I don't know what else I expected. Perhaps coding in C++ is something only real men do, and anyone else must not be real men. God knows how women fare in these circles. But I digress...again...
With all due respect, is an index of the popularity of programming languages (based on a dubious and arguable methodology) really the best way to decide which language to choose for a given project, which is what tiobe propose you do when starting a new project? If I'd done that at any of my previous jobs they'd have slapped me very quickly, and if I did it now I'd slap myself.
Using a hammer on screws doesn't make the hammer a bad tool. Just the wrong tool for driving screws.
But a hammer *does* work well for *setting* screws; getting them to stay in place in the wood so you can *then* use the correct screwdriver (this is usually when you don't have pre-drilled holes). Mind you, the correct screwdriver is key too; that Phillips is going to be crap for driving that slotted screw.
Right, it's still a poor workman who chose to use a hammer to nail in screws.
Being a good workman isn't limited to being skilled at using tools, it also requires being skilled at decision making with regards to which tools should be being used for what purpose.
If a workman can't dovetail timber with a screwdriver, are you really going to blame the screwdriver?
Different programming languages have different ways of working, some tools can spot problems because of this, others require you to run it through either some sort of code or runtime analysis.
So its less of "a bad workman blaming his tools" and more of "a bad workman has bad tools to work with and makes the best job of it given the circumstances".
We've spent 40 years learning that there are classes of mistakes that people make over and over again not because they're shit coders but because they're people.
Rust makes you do a bit of extra work up-front so the compiler can prove mathematically that you haven't made certain classes of mistakes.
Of course there is still opportunity for shit-ness but it'll be the same kind of logic bugs or bad design you can get in any language. At least there won't be dangling pointers, buffer-overruns and memory leaks too.
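A tiny, hedged example of the buffer-overrun half of that claim (the function and names are invented for illustration): every slice index in Rust is bounds-checked, so an out-of-range access is a deterministic panic with a message, rather than silent memory corruption. The dangling-pointer and use-after-free cases are caught earlier still, at compile time, by the borrow checker.
fn read_byte(buf: &[u8], i: usize) -> u8 {
    buf[i] // bounds-checked: panics if i is out of range
}

fn main() {
    let buf = [1u8, 2, 3, 4];
    println!("{}", read_byte(&buf, 7)); // panics: index out of bounds
}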
"The majority of vulnerabilities fixed and with a CVE assigned are caused by developers inadvertently inserting memory corruption bugs into their C and C++ code. "
If they can find a way to insert bugs in their C/C++ code, I'm sure they can do the same in any language.
No-one is claiming that Rust will eliminate bugs.
The claim is that Rust will *reduce* the number of *exploitable* bugs. Because a whole class of usually-exploitable bugs change from being "very easy to write unless you're incredibly careful" to "impossible to write".
There are other classes of exploitable bugs (e.g. missing a permission check) that can be done in any language. However, if you have code review, the reviewers don't have to worry about memory bugs so can spend more time looking for other bugs.
I'm trying to work out who would downvote such an obviously correct comment. C programmers offended that it's easy to write memory overflow bugs? Rust programmers offended you feel it's still possible to create exploits in Rust? Perhaps Exploit writers trying to throw people off the scent?
Rust stops entire classes of problem that C/C++ doesn't care about.
For example, C/C++ doesn't care if you forget to protect some shared data; Rust does, and won't even compile until you do. C/C++ doesn't care if you dereference a NULL pointer; Rust doesn't even have raw pointers in safe code, and won't let you use a reference either unless lifetime guarantees are satisfied. C/C++ doesn't care if you write off the end of a buffer; Rust will panic and bring your program down with a stack trace.
Most CVEs are for these things - NPEs, buffer overflows, corruption, data races. All gone just by using a more stringently checked language.
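To make the shared-data point concrete, a minimal sketch (counter and thread count invented for illustration): drop the Mutex, or the Arc, and this simply won't compile, because the compiler refuses to let several threads mutate the value unsynchronised.
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let count = Arc::new(Mutex::new(0));       // shared, protected counter
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let count = Arc::clone(&count);
            thread::spawn(move || {
                *count.lock().unwrap() += 1;   // must take the lock to touch it
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("{}", *count.lock().unwrap());    // prints 4
}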
I think C++ is tarred with C's brush. Don't use bare pointers, use references or smart pointers. Don't write to a raw block of memory, encapsulate it in an object which won't let you overrun, like string does. Today, if you want that safety, just don't use that feature. Perhaps a future C++2X compiler should have some switches which stop you using the older, easily abused C features; then maybe C++ would be considered as safe as Rust?
C always assumes the programmer knows what they're doing; if it didn't, it wouldn't be useful for writing operating system kernels or device drivers, which is what it was initially created for. So we don't really want to remove these features, but we want to warn more. Perhaps an ISO C standard lint as part of the compile process which is as picky about code as the MISRA guidelines and requires a compiler switch to ignore.
All of the problems come about because people (and for people read businesses) are skimping on the tools or the training. Rust fixes that problem by bringing tools and training into the compile stage and making certain features impossible to use and others impossible to use unless you tell it you really want to use them.
Something similar should be done for C and C++. Retain the flexibility but make mistakes harder to make.
You don't need to rewrite everything from scratch in a new language: use a compiler that sanity checks things and won't allow e.g. strcpy(a,b), inserts run time sanity checks where possible, inserts code to zero memory on the stack before allocation and so on. Libraries like libc etc. could add some sanity checking of arguments; system call stubs could too - catching not only security issues but crash issues.
I'm not saying you can make C as safe as Rust, but you can damn sure make it WAY safer than it is out of the box, with no perceptible performance penalty. 30 years ago you wouldn't want to do the stuff I'm suggesting because every cycle mattered. A lot of run time checks wouldn't slow down anything at all, since the code can be placed so it uses otherwise unused execution slots. Even if it was a few percent slower, if it is safer who cares? Rust isn't as fast as C, either.
Less and less code is performance sensitive every year as CPUs get faster and faster, and the stuff that is (i.e. games and HPC) is not something where you care about security holes too much. Compile those with --safety-off.
But if I've allocated a and b and already verified that they are the same size, what's wrong with using strcpy?
Not performing a check, before invoking strcpy, that the destination string is long enough for the source string is a developer coding error, not a problem with strcpy itself.
But if I've allocated a and b and already verified that they are the same size, what's wrong with using strcpy?
The elephant in the middle of the table isn't when you first create this beautiful piece of software; it is years down the track as it is maintained.
Some future developer changes the allocation of either a or b because they need to store a longer string. Your use of strcpy may be several calls below where the change has been made and difficult to spot.
The single biggest cost with software is in the maintenance.
Because inserting a sizeof() check is too terrible to contemplate, since it might be redundant? It isn't like that will slow things down at all, since the generated assembly can easily fit in unused execution slots.
If you know the size of a & b at compile time you can use a pragma that will cause the compilation to abort if it isn't satisfied, then it won't add a few bytes to the code footprint (something I'm sure we should all be gravely worried about, in an age of 250MB smartphone apps)
Or, and I know this is crazy, simply use strncpy() which will work just fine if your belief that the destination is big enough is still true. Because no one will ever try to maintain your code, and you won't ever come back to it a couple years later and not remember that a and b need to be of identical size. Because you're perfect and have memory like an elephant, and safe programming techniques are for all the stupid people who aren't you.
It doesn't guarantee strings are null-terminated, unless you yourself make the buffer one byte larger, set that byte to 0, and are sure it's never going to be used.
Use strlcpy or strcpy_s (C11) instead.
I think you have a critical misunderstanding of how programming works. They're not "inserting bugs in their code", the bugs are caused by oversights the programmer didn't think about. If you use a language where those oversights are not possible, because say everything is typesafe or array length checking is always done or whatever, the programmer can't make those mistakes.
Not how programmers work... I don't think it is possible to make a language where programmer oversights are not possible. For example, if a programmer thinks that only states a, b, and c are possible in a section of code, but there is a state d that he didn't think about, then there is a bug that the language can't catch. Back in the DOS days (and to a lesser extent VAX 11/780 VMS) I would sometimes create a small special purpose application or driver and I always thought about possible states of inputs and the desired states of outputs. I usually put in a check for an unexpected input state, which I would use to trigger an output of a bad input state. I gave up on any programming for Microsoft products after Windows 3.1 because it was impossible to keep track of the constantly changing APIs.
That's fine. Use Rust.
And make sure you *never*, not even once, use an "unsafe" function in it.
Otherwise, you're just recreating C code poorly.
Now, how much of your code can be done? I imagine all of Office should be fine. But Windows, without unsafe Rust functions? Good luck!
The second you are into "dereferencing a raw pointer", memory safety of the whole shebang is at risk. Unfortunately, that's an inherently common requirement in operating systems, drivers, hardware interfaces of any kind, etc. and used greatly for performance tweaks too.
It's not that you couldn't do the same in any C variant either, whether by coding style, explicit compilation checks, or whatever. It works out the same.
As soon as you have to poke memory that you don't know the origin of, and trust what's there, and hope you got the address / size correct, and then interpret the data in that location in some fashion, you're in trouble. And, unfortunately, that's an inherent part of every OS.
Rust has a safe mode and an unsafe mode. If you find yourself enclosing entire chunks of code in unsafe then you're not using it properly. And even if you have to use unsafe (e.g. to call C), the risk of doing so is isolated to that little snippet.
By way of example, I have a project with over 100,000 lines of code and it contains exactly 4 unsafe keywords which are simply to allow some OpenSSL structs to be moved between threads.
Compare and contrast to C/C++ where everything is unsafe.
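Not the actual code from that project, but a hedged sketch of the usual shape of that kind of unsafe: a wrapper (the RawHandle type here is invented) around a raw pointer handed back by a C library, which the compiler refuses to move across threads until you take responsibility in one clearly marked line.
use std::ffi::c_void;

// hypothetical wrapper around a raw handle returned by a C library
struct RawHandle(*mut c_void);

// raw pointers are not Send, so RawHandle can't cross a thread boundary
// until we assert, in this one auditable line, that doing so is sound
unsafe impl Send for RawHandle {}
That single unsafe is greppable and reviewable in a way that "any pointer anywhere in the C++ codebase" is not.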
Semantic checkers exist, and I am certain that Microsoft will be using them; the problem is that there are a million ways to create one of these memory corruption errors, and new ways of exploiting them are being found all the time.
Long story short, it's pretty much impossible at this time to do an automatic check that is certain.
Though, if the problem is undecidable then Rust can't decide it either. You solve that by only allowing the subset that is decidable.
Some compilers do have pretty good checking for the things that lead to common errors, that's the text that scrolls by whenever you build a large project...
You can have all the memory safety guarantees in the world.
The second that you are able to poke around in / peek at a memory location under your control ("dereferencing a pointer"), all those safety guarantees go out the window. Because now I can - accidentally or not - overwrite the size of a variable, or write data past its upper bound, or make it leak into other nearby memory areas, or access an area that I shouldn't and - if anything is watching at all - trigger a memory access violation (e.g. a null pointer dereference).
And in OS terms, that's like saying that your bank is secure so long as nobody ever wants to get inside. You can't interface with hardware (which will present itself at arbitrary addresses that you need to dereference from, say, the PCI discovery structures), you can't write drivers, and you hit massive performance problems because you end up having to pass information around *everywhere* rather than just refer people to it.
Rust has an "unsafe" mode / command / keyword for exactly this. The second you use it, all bets are off (it's "official" and they know you have to use it, which is why it exists, but they literally say that you have to flag it as unsafe because then YOU have to check your code is right, not the compiler, and if it's wrong, that's not Rust's problem, and they can't stop you interfering with the other "safe" Rust code at that point!).
If it was easy to write an OS kernel, filesystem, hardware device driver, etc. without dereferencing pointers and trusting/interpreting the data therein, then we would have moved on from C before the UNIX era finished, let alone now.
For applications, sure. If they use sensible formats and do everything right they may never need to use an "unsafe" function. But the bits that actually make your computer work are dereferencing third-party pointers that are just handed to them all the time. Every time you see a C-style (cast). That wouldn't work. Every time you receive nothing more than a memory location from hardware and need to use it by pretending/assuming it's something else (e.g. DMA accesses, PCI hardware discovery, framebuffer locations, etc.). That wouldn't work.
Guess where most of all the problems come, for someone writing an OS, especially if it includes third-party hardware support by other-people's drivers?
Did you know, for instance, that 3DFX drivers for Windows 95, etc. literally allowed DMA of the entire memory range of the machine? So by installing the driver for your graphics card, someone writing a game that runs as even a lowly unprivileged user could have queried the graphics driver in such a way that it allows complete unrestricted, unmonitored access to every byte of the computer's memory. Nobody noticed until years later (mostly because looking at driver code is hard, purely because of the safety you need to reimplement everywhere that would normally be in the compiler but with holes poked for what you need to do).
And the second you start using "unsafe" functions, you are actually able to break all the guarantees of "safe" functions throughout the rest of the program.
If memory safety was easy, Java would be secure.
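For what it's worth, here is what that pointer-poking looks like on the Rust side - a minimal driver-style fragment with an invented register address, where the dereference has to sit inside unsafe because the compiler cannot prove the pointer is valid:
// hypothetical memory-mapped status register (the address is made up)
const STATUS_REG: *const u32 = 0x4000_0000 as *const u32;

fn read_status() -> u32 {
    // volatile read of hardware memory: correctness here is on the
    // programmer, not the compiler - which is exactly the point above
    unsafe { std::ptr::read_volatile(STATUS_REG) }
}
The hole is still there, in other words, but it is fenced off rather than being the default state of every line in the program.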
Rust has an "unsafe" mode / command / keyword for exactly this.
But I think that's the point, isn't it? You can pick that up easily with static code checks and do reviews if necessary. Programming is supposed to be done by consenting adults.
Not that I think it's an either/or approach. Some stuff will always be written in C for precisely the reasons you list, but, if a lot of the other stuff can be written in something "safer" with no penalty, why not take advantage of it?
I agree with you Lee, that using 'unsafe' is unsafe but the presence of the keyword makes it easier to mitigate the risks.
So, for example, the devops toolchain can be configured to look for 'unsafe' and flag that code up for further review or extra testing. Or the dev team can be structured so that only a core set of developers work on code that needs 'unsafe'.
These aren't perfect by all means, but they are a lot easier to implement with Rust than the equivalent for C/C++.
As Lee indirectly points out - even Rust can't prevent, say, some device with DMA from overwriting your trusted ram area. Bad design is bad. There's no "one weird trick" that's going to solve most exploits.
If we could solve all possible issues with a fancy lint program - then there's no need for programmers other than the one that writes the random number generator to feed it, and some marketing guy who watches it and yells "ship it" when he thinks it'll be good for his bonus.
The exploiters will just use a different door....there are plenty to go around.
"All I have to do is run faster than you, not faster than the bear" is something I once heard.
Unless you fix the language standard to remove these problems at source (badum-tsh) you'll end up with non-standards compliant compilers (looking at you Microsoft). Then you create more problems because devs used to working with a non-compliant compiler may do a lot of unsafe things if they move to a compliant compiler, thinking safety issues will be spotted when they aren't.
You might want to talk to companies who do the hiring. One of the reasons for H1-B visas is because the quality (not just the quantity) of CS graduates in the US is so poor. This is partly because the VC industry doesn't really care about quality, it focuses instead on time-to-market, network effects for scale and an exit strategy.
Things may change subsequently, which is why you see things like Google's coding guidelines, Go and Rust, when companies need to keep systems running, but until then its MVP (Minimal Viable Product) and quality can go hang.
One could wish you weren't so close to the truth, Charlie.
An attitude shift and where the money goes is required, and those are HARD.
Which is why all sorts of gimmicks are tried instead. Which is hard, but like the saying goes, "always time to do it over, never time to just do it right".
If there was a good way, other than earned reputation (too slow) to tell the good developers from the so-so, and reward them accordingly - this wouldn't be such an issue, people would strive to be in that good group, and things would take care of themselves.
I'm unaware that there is such a good way to tell. I only started getting the rewards due my own skill after a few gimmes and building a reputation. If there was a shortcut, I didn't find it.
Certainly no MBA or PHB is going to look past the next quarter's numbers in the current setup.
VC's are actually *more* patient than that crowd about that one - some will wait a year or more.
They have other flaws - money that needs returns and few obvious places and ways to achieve that in this economy.
I feel that the problem has more to do with unsafe legacy operating systems, particularly a huge monolithic one made in Redmond that has seen new features added to it relentlessly, with getting them to market taking priority over QA affecting its reliability and vulnerability to exploits.
So if Microsoft switches fully to Rust, or Kotlin, or Go or whatever is the flavour 'awesome language' of the month, then what does that say about Linux and all those other open source OSes out there? What about LibreOffice and such massive desktop applications? Should Unreal Engine be rewritten from C++ as well?
Colour me sceptical, but this whole story feels like knee-jerking to me. Anyone who has actually looked at and used Rust notices that it's basically just recoloured C++, with an even more obtuse syntax, almost fully inferred type system (you know, weak typing like in scripting languages) and instead of OOP the not very intuitive Traits system (which is being added to C++ by some enterprising C++ devs as we speak).
There was the joke going around that Mozilla invented Rust because they couldn't admit to their C++-foo being too weak and their codebase being unmaintainable due to every poor development practice in the book having been poured into it since the 1990s when it was Netscape. Programming isn't magical after all. It's still engineering and no matter what materials and tools you pick, you still have to put in the hard work.
Rust isn't recoloured C++. And it isn't weakly typed, actually it is strongly typed. Most of the time it infers the type from the function you are calling so you don't have to repeatedly declare it. Neither is the syntax obtuse, it's actually a lot simpler than C++ for a variety of reasons.
As for why Mozilla invented Rust, that is simple. They wanted to introduce parallelism into their browser engine without compromising security. That is a hideously complex task at the best of times without the implementation language adding its own problems. So they saw Rust as a good thing as indeed it has turned out to be.
Rust is totally recoloured C++. And by the fact that it uses inferred typing by default (strong typing requires a lot more typing than in C++ and rarely appears in example code) it uses weak typing.
If you want strong typing, use C++, or even better: Ada. That's a language which doesn't allow inferred typing. At all. Not even a typedef with the original type it was typedef'ed from. That gets you a nice compile error.
That the Rust creators don't even know Ada exists says a lot about the language.
This post simply displays your ignorance of Rust (and languages in general) I'm afraid. It has stronger typing than C++, but uses type inference (similar to "auto" in modern C++, but actually more like Haskell) to save the developer having to name all the types, which makes refactoring easier.
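For the avoidance of doubt, a two-line illustration (variable names invented): the type is inferred rather than written out, but it is still one fixed type that the compiler enforces.
fn main() {
    let n = 5;           // type inferred (i32), never declared by hand
    let s = "five";
    let total = n + s;   // compile error: cannot add `&str` to an integer
}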
There was the joke going around that Mozilla invented Rust because they couldn't admit to their C++-foo being too weak
This says a lot about programmers' bias. Like drivers, we all tend to think that we're good programmers and that it's the others that write crap code. But this is simply wrong. For all its many benefits, it's not a secret that writing good, safe C++ code all the time isn't easy; this is partly due to the extremely ambitious design of the language, and is one of the reasons that Objective C was written. Not that I'm advocating Objective C, just noting that criticism of C++ because of its scope is long-standing.
If you start from the premise that your code isn't always going to be flawless then you should welcome any approach that helps improve it.
Speaking as someone who has actually done work on the Mozilla codebase (around version 3.6 or thereabouts...), I can say that the problem Mozilla was having had absolutely nothing to do with C++ and everything to do with the lack of a proper build system (a 60,000-line script in the root), no clue about segmentation (the build system tossed all header files in the entire source tree into a global namespace), zero documentation (aside from a few inline comments in the source now and then), and in essence a burning dumpster fire which Mozilla pretended was a functioning codebase.
But sure, their problem was C++.
It is suitable as a systems language for microkernels, embedded systems and others[1]. You can get similar performance to C[2]. I wasn't involved with putting it in the hardware but an optimizing compiler would have been used.
[1] https://www.adacore.com/about-ada
[2] https://www.electronicdesign.com/iot/comparing-ada-and-c
(needs javascript)
I have seen Microsoft code and I find it very odd that they are simply looking at other languages rather than use C++ in a safe manner.
std::shared_ptr and std::weak_ptr go a long way towards solving their problems and are in no way less efficient than the similar system used in Rust.
For example, the C++ bindings to SQL Server... I don't want to see one function returning a raw pointer. I am sick of wrapping your shite in an attempt to make our own code safe.
Stop language hopping and do some sodding work Microsoft ;)
"Modern" C++ is a complete shit show. I know, I'm a C++ programmer.
These extensions to the language are merely hacks to cover over the cracks of what is a terrible compromise of a language.
Templates (and the incomprehensible error messages they generate), return value "optimisation", I could go on but it would make me angry.
std::shared_ptr and std::weak_ptr go a long way towards solving their problems and are in no way less efficient than the similar system used in Rust.
A long way perhaps, but not far enough. You can still screw things up with smart pointers, and have bad run time consequences. The point of Rust is that memory mis-use problems are identified at compile time, and reported as errors. You can't get away with taking liberties.
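As an illustration, a minimal, hypothetical C++ sketch (not code from any real binding): smart pointers will happily let you hand out a raw pointer that outlives the object, and the compiler won't say a word.

#include <iostream>
#include <memory>

int main() {
    int* raw = nullptr;
    {
        auto sp = std::make_shared<int>(42);
        raw = sp.get();          // escape a raw pointer from the smart pointer
    }                            // sp goes out of scope here; the int is freed
    std::cout << *raw << "\n";   // use-after-free: compiles cleanly, undefined behaviour at run time
    return 0;
}

The equivalent borrow in Rust is rejected at compile time, which is the difference being described.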
For example, the C++ bindings to SQL Server... I don't want to see one function returning a raw pointer. I am sick of wrapping your shite in an attempt to make our own code safe.
Whilst I can see the modern-day pain of having to deal with C++ that doesn't deal in smart pointers, I rather suspect that the C++ bindings for SQL Server predate the idea of smart pointers in that language. They'd have to start again on the bindings, and break a lot of existing code in the process.
If MS really do get a taste for Rust in their systems coding (OSes), perhaps they'll start spreading the enthusiasm for it elsewhere. For example, why bother doing a modern C++ binding for SQL server when you could simply do a Rust binding?
Well I used the word "modern" but what I really meant was C++03 (plus C++0x smart pointers).
> For example, why bother doing a modern C++ binding for SQL server when you could simply do a Rust binding?
Mostly because a developer would then still need to write their own language bindings to make the Rust bindings work with C++. If Rust does overtake C++ in market share this is not a problem, but that is very unlikely to happen in our lifetimes.
I love the MS security people's inconsistencies and will truly miss them in the unlikely event MS starts using safer tools. Here you go:
(1) MS: stop using dodgy tools and unsafe practices! Pros don't do C++ anymore!
(2) MS: unthinkingly, reflexively install every (marginally tested) security update binary blob we vomit forth!
(3) MS: ignore the fact we used C++ to generate the untested dirty hac^H^H^H patch!
Rinse, repeat
20 years ago, when I was in dev, one of my favourites was unblocking a colleague who had spent a week wondering why his computation code gave different results, with the same dataset, depending on which architecture it was running on (we had lots of them: little endian, big endian, different OSes).
Every time, it was a fatal mistake of using and abusing fancy things like (++X) = function(X++);
Back in ANSI C 89, the language itself did not say *in which order* the compiler should evaluate the expressions before and after the = !
So, a lot of potential mistakes. That, plus the never-ending pointer fatalities.
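A minimal sketch of the kind of trap being described (hypothetical code, not the colleague's): two unsequenced modifications of the same variable, which different compilers and architectures are free to evaluate in different orders.

#include <iostream>

int main() {
    int x = 1;
    // Undefined behaviour: x is modified twice with no sequencing between the
    // two side effects, so the printed value can differ between compilers,
    // optimisation levels and architectures.
    int y = ++x + x++;
    std::cout << y << "\n";

    // The safe version: one modification per statement, order made explicit.
    int a = 1;
    int b = a + 1;
    a = a + 2;
    std::cout << a + b << "\n";
    return 0;
}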
In the end, I switched to Ada, which is a lot more secure.
Ah yes, there was also that dude who spent 2 days not understanding the situation, putting shitloads of printf() everywhere, which never appeared at run time.
Turned out, he was too lazy to read make(1)'s manual, and had taken a clumsily modified Makefile from the X Windows daemon for his three-module program.
Whatever he was modifying in the source was never recompiled! Ah, good memories.
That has nothing to do with Ada, and everything to do with having an idiot colleague who thinks fancy constructs are better. Not only are they a great source of bugs (especially if you use them where behavior is undefined) but they make code harder for others to understand and don't result in the compiler producing better code. It is ALWAYS better to take the simple path.
I'd go so far as to advocate against ever using autoincrement or autodecrement anywhere except maybe in for (;;). There may be other places where most people can follow what is going on, but the compiler produces the same code whether you use a=b[x++] or a=b[x];x=x+1.
but the compiler produces the same code whether you use a=b[x++] or a=b[x];x=x+1.
Possibly, but that would depend on the compiler itself actually.
Auto-increment/decrement was introduced because some architectures had dedicated increment and decrement instructions in their instruction sets, so using a++ vs a=a+1 could produce different assembly.
e.g. (pseudo assembly)
a++
LOAD A R1
INC R1
STORE R1 A
but a=a+1
LOAD A R1
LOAD 1 R2
ADD R1 R2
STORE R1 A
It may not look like much, but if you are iterating through a million increments of 1, that extra instruction and the two additional register accesses it requires can add a lot of overhead.
Of course, smart optimising compilers should recognise a=a+1 and output the assembly equivalent to a++.
> Back in ANSI C 89, the language itself did not say *in which order* the compiler should evaluate the expressions before and after the = !
But it was crystal clear in the K&R white book; I suspect many programmers didn't bother with the appendix that formally defined the full language syntax and semantics.
Also C# and Rust are not really comparable; C# has a garbage collector, Rust doesn't.
Because Rust's syntax has such a thorough grip on memory ownership and mutability, there's never any need to explicitly clean anything up. The compiler can work out for sure when memory has gone out of scope.
If you care about "iteration speed", it pegs you as a very careless developer who relies on the compiler to catch typos (too bad if, instead of typoing a variable name and getting an error, you typo it as ANOTHER variable name and leave a hard-to-catch error behind), or who slams out minor variations and moves on the minute something "works" (in the 15 seconds of testing you'll allow before it's time to move on to the next problem).
> a very careless developer
Or perhaps someone who uses languages with fast compilation speeds, which helps with things like wasm when you're not even sure if a given approach will work at all.
But, hats off to you for blindly assuming things about people you don't know.
javascript - stupidest choice EVAR for writing anything but simple web thingies in. Even THEN, it's so HORRIBLY ABUSED in web pages already.
Inefficient, interpretive lingo, garbage-collection memory management, piggy bass-ackwards "object" (read: stupidity) oriented as in "ooh look we have OBJECTS! Let's USE them!" without thought as to what that implies or results in... and so on.
If you have memory problems, the SAME KINDS of memory problems popping up ALL OF THE TIME, that means two specific things in a large organization with many developers:
a) LACK of PROPER STANDARDS
b) LACK of PROPER MANAGEMENT.
To fix the REAL problem, you need PROPER STANDARDS. We begin with how to handle memory allocation and object life.
1. reference counting - when someone hands you an object, increase its ref count immediately, then lower it when you're done
2. ALWAYS NULL OUT POINTERS AFTER YOU DE-REF THEM [this makes any use-after-free condition that MIGHT be added 5 years from now show up almost immediately in testing]
3. NEVER free memory in one function that was allocated in a different one, outside of the context of object reference counts.
4. NEVER touch the internals of one object (or function) with anything OTHER than that object
5. ALWAYS PERFORM SIZE CHECKING ON WRITE OPERATIONS TO BUFFERS (and don't get the size wrong)
other things like 'guard pages' around memory blocks [which would throw page faults if you exceed boundaries] can also help in debug code, but production code at LEAST needs to do what I just said, and probably a whole lot more.
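A minimal C++ sketch of rules 2 and 5 above (hypothetical code, not from any particular shop standard): size-check every write into a buffer, and null the pointer once your reference is gone so later misuse fails loudly.

#include <cstddef>
#include <cstdio>
#include <cstring>

// Rule 5: never write into a buffer without checking the destination size first.
bool safe_copy(char* dst, std::size_t dst_size, const char* src) {
    std::size_t needed = std::strlen(src) + 1;   // include the terminating NUL
    if (needed > dst_size) {
        return false;                            // refuse rather than overrun
    }
    std::memcpy(dst, src, needed);
    return true;
}

int main() {
    char buf[8];
    std::printf("fits: %d\n", safe_copy(buf, sizeof buf, "hello"));
    std::printf("fits: %d\n", safe_copy(buf, sizeof buf, "far too long for this buffer"));

    // Rule 2: once your reference is released, null the pointer so that any
    // use-after-free added later shows up immediately instead of five years on.
    char* p = new char[16];
    delete[] p;
    p = nullptr;                                 // a later *p now faults predictably
    return 0;
}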
If you enforce PROPER STANDARDS (like those) the "bite you in the ass later" memory problems should pretty much go away. You know, like maybe LINUX???
Replacing EFFICIENT with BULLCRAP, though... THAT is NO solution!
Despite there being true multi-process, multi-user operating systems around in the 70s, it was MS settling on a single-user, single-process OS that laid the foundations for all that has come since - "bugs", bugs, and bugs included. Most bugs in Windows can be traced back to its having been built on top of an operating system which believed it (and thus its user) had sole ownership of the machine.
By the time the hardware was capable of easily running multi-process, multi-user applications, it was too late.
Almost everything wrong with Windows starts there.
You just don't see the same sort of issues with Linux. Which - before the penguinistas get too excited - is far from perfect.
In fairness to Microsoft, I believe it is true that you can't write a pre-emptively multi-tasking OS for the 8086 because not all of its instructions are restartable. It would also be pretty flaky because there is no memory protection. I think the 186 fixed the former problem and the 286 (designed after MS pretty much pointed a gun at Intel's head) fixed the latter, but by then there was so much software that *relied* on the flaws of the 8086 that it wasn't possible to actually produce a real OS until a second gun was pointed at Intel's head to produce the 386.
Ever since the 386, MS have had a true multi-process, multi-user operating system to run on it (OS/2, then NT), but the plebs refused to run it. So MS pointed a third gun, this time at the plebs' heads, by killing off the DOS-based versions of Windows and forcing everyone onto the NT kernel.
Casting MS as the lone heroes, valiantly fighting for securable and scalable operating systems against the forces of darkness, isn't a terribly popular pastime but it isn't *that* hard to do if you cherry-pick your historical facts.
"the first version of OS/2 was for the 286"
I had a chance to work with that. It actually multi-tasked very well, with _everything_ running in protected mode on a 286 machine (a PS/2, of course). You could format a diskette while compiling things in another window, as one example. I actually did it. I was impressed. Windows 3.0 was released less than a year after that experience, and I recall that you needed 386 'enchanted' mode to do the same thing with Windows. But still, it too had that feature, which was a step in the right direction. And now we are here.
Actually, you can write a pre-emptively multi-tasking OS for just about ANY processor, but in some cases (68k was one of them) you had to jump through some odd hoops to make your program relocatable to any block of memory [which is really what you need if you don't have virtual memory management].
The old PDP-11s had the ability to support multiple users, in some cases without memory management hardware [which would virtualise your memory space]. One particular package, 'MU Basic' (MU stood for Multi User), managed that well enough: maybe 4 simultaneous users running BASIC on a system with only 32k words (64k bytes) of RAM. You didn't get much per user, but it worked, and it was multi-user, pre-emptive, etc.
Not as good as a 286 or later Intel CPU with built-in virtualised memory (selectors vs segments, for example, which make memory relocatable, and 386+ with page tables), but it COULD be done. And 68ks had to use the "pseudo segments" that Apple's OS relied on for quite a while, as I understand it, to make their code relocatable. Similarly, the Palm devices (which used a 68k) had something of the kind.
Anyway... just sayin', you CAN write a pre-emptively multi-tasking OS for the 8086. I just wouldn't WANT to.
It all depends what you want to do
Lots of people mix and match code e.g. C (or maybe even assembler) for "close to the metal stuff" and high level language for other stuff.
Depending on your language, you may have to use some "unsafe"-style declarations when making use of the "low level" DLLs (or whatever objects you are using).
But with decent coding practices, potential nasties tend to be confined to certain areas... just because you can do all sorts of pointer / memory tricks in C does not mean you should be doing that all the time - you try to limit it (which helps readability a lot, too).
Back in the old days when I was working on iSeries machines, the OS and compilers conspired to allocate and clear memory for thread activations. You had to positively code for sharing memory between processes. Even in those days this was called "legacy" by other developers in my world, and now the legacy is no longer a legacy, although there is now a new language/platform to play with that implements the same techniques.
I always considered C and its variants quite a low level language positively designed to be relatively uninhibited in its behaviours and rather an odd choice for normal business applications. Maybe a good choice for people interfacing with hardware, but peculiar for someone that only needs a GUI and database to work nicely.
Cue flame war from people who insist all modern code should be in Visual PROLOG or... hold on, whatever happened to all those 4GL development platforms in the early 2000s that claimed to make coding itself irrelevant...
I remember reading a paper that included a statement in its summary along the lines of "I'm sure it is possible to write safety-critical software in C, in the same way it is possible to shell peas while wearing boxing gloves".
IIRC it came from a safety group within the NCB (National Coal Board), so not that recent.
Just came back to this comment and, although it's old, I'll add this as a response to the @AC comment... even if only for posterity.
At the time I had a friend who was writing safety critical software, and had to write everything directly in assembler as compiled code, even from the same source files, would not reliably create the same object. Non-deterministic compiler optimisations ruled it out entirely...
Granted this is a band-aid... but it seems to me they already know that pointers are unsafe, and have known that for years... So why don't they just build "unsafe" into the C/C++ standard and require everybody to use references instead of pointers unless they ask for a compiler exemption by placing the "unsafe" keyword in the code to mark a pointer block? Then the C/C++ compilers from Microsoft and GCC would just refuse to mark the "safe" flag in the exe, so that every time you run the code it pops up a warning that says, "this code was created using old C++ pointer techniques and may be unsafe to run on your PC, blah blah..."
"require everybody to use"
That sort of thing is for a SHOP STANDARD, and *NOT* for a bunch of *EGGHEADS* to "decide for us" (because they're SO much SMARTER) and then CRAM IT DOWN OUR THROATS like that.
This is the wrong kind of thinking.
How about _THIS_ instead: SHOP STANDARDS that are developed by PROPER MANAGERS who do REGULAR CODE REVIEWS and DELEGATE RESPONSIBILITY to SENIOR PEOPLE to make sure this happens CONSISTENTLY throughout the applications.
works for me. This is 'Captain Obvious' territory.
Rather than being technically driven, this initiative is probably an attempt to find more financially and resource-efficient tooling than the alternative of investing in cleaning up its existing code base and providing appropriate training to the 'cheap' developers Microsoft contracts to do the work.
If there were no compromises to be made for security, developers would already write (or mostly let the computer write) ultra-secure code.
Believing a language change is a panacea stinks, and it doesn't even take into account the training of Microsoft's 30k+ developers.
This isn't to say Rust hasn't a bigger role to play at Microsoft - maybe it will - but we would already know if security were as easy as switching languages.
Our friends at npm have been a top malware injector, and they are Rust lovers.
But I wonder how many of those people have left these sort of bugs in their code? Even if they're good enough and diligent enough not to now, would they like to guarantee none of their older code is still in use somewhere which might have bugs? Have they checked every open-source code-file they use to ensure it meets their own standards?
That's the point. Errors are inevitable. When you look at even the most skilled and most conservative engineers, mistakes happen - billion dollar satellites explode on launch or fail due to sometimes quite simple errors.
And your point is, exactly...what?
If it is that Rust will remove the "inevitable" errors, then what are you on about? If it is that the "inevitable" errors will occur even with Rust (or whatever panacea-language-du-jour one comes up with next week), OK, but again, what's your point?
I agree that bugs are inevitable. The corollary to that axiom is that no amount of tool-fiddling will remove all bugs. Some bugs may be eliminated simply by tool-fiddling, but then others will mystically appear in their place. Seems to me the best way to minimize the bug load on a particular software endeavor is to apply a high level of consistent engineering discipline to such an endeavor, regardless of the tools used.
You'd almost think software testing didn't exist to pick up the bits the coder missed.
Then again the modern paradigm doesn't really consider proper testing to be a thing. And QA, what's that?
Safety languages are like safety scissors; a blunt tool handed to those incapable of dealing with the risky version, at the expense of much reduced ability and performance.
And it's not like the risky areas of ' unsafe' languages aren't well understood, it's always the same handful of basic errors.
Alternatively, the less time the developers have to spend looking for the bugs that Rust fixes, the more time they'll have to spend fixing the bugs that Rust doesn't fix. Fun sidenote, jmp(and derivatives)/goto are perfectly acceptable to use in place of if/while/for and function calls, correct? Or do you let the compiler build those patterns because it's safer and lets you focus your attention more easily?
"Programmers" that would use gotos, jumps and derivatives when there are other control structures available† fall foursquare into the "not disciplined enough, not smart enough, or who simply can't be arsed to design and write code properly" category.
†With the possible exception of PL360, I don't think there are a whole lot of structured assemblers. For those who must, or want, to work in assembler, jumps and branches are a necessary evil. I would expect that even those folks would have been bitten enough times so as not to write spaghetti with them.
>Alternatively, the less time the developers have to spend looking for the bugs that Rust fixes, the more time they'll have to spend fixing the bugs that Rust doesn't fix.
Actually, it probably is more correct to say that the Rust compiler forces developers to allocate time and effort to resolving some classes of programming errors there and then, before they become bugs and potential vulnerabilities.
Since training your million monkeys to properly use cutlery far exceeds their intellectual skills.
Writing extremely close to error-free software is *not* rocket science. I found it quite easy and beneficial once I saw that it's possible (thanks to Donald E. Knuth), and actually not all that difficult. It takes a certain amount of devotion, pride in what you're doing, acceptance of blame and willingness to improve your own habits to match your own skills. I've been doing this successfully for 24 years, working in a software company that produces commercial business software (but admittedly, there are very few folks around me who do likewise; the number of clue- and learning-resistant code monkeys in the software business is quite high).
Teaching old dogs new tricks can be just as hard with old code monkeys, so the real problem is with **management**, which is responsible for the complete lack of **USEFUL** education & training of young software engineers. Pretty much all of what is offered is not going to make folks more skilled or more competent programmers. Unless you train yourself, it just doesn't happen in most software companies.
Two of the effects that are making it hard for organizations to get out of the hole which they've dug themselves in:
https://en.wikipedia.org/wiki/Peter_principle
https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
and managers either being simply unable to recognise and recruit folks who are more competent than themselves, or being highly afraid of hiring staff more competent than themselves.
Look at the extremely poor state of the Google Android Stagefright media framework. Security Princess Parisa Tabriz seems to have turned two blind eyes towards significant areas of extremely poor software development within Google. Since 2015 Google has kept playing whack-a-mole with serious Stagefright bugs that are essentially all caused by the very same incompetence / lack of skills of the irresponsible developers who created them, and of their irresponsible managers, who should have called for a complete code review after the first completely braindead bugs were admitted in 2015. If you look at the July 2019 Android patches, the folks at Google (ir)responsible for Android have been playing whack-a-mole for 4 years straight.
#define x =
#define double(a,b) int
#define char k['a']
#define union static struct
extern int floor;
double (x1, y1) b,
char x {sizeof(
double(%s,%D)(*)())
,};
struct tag{int x0,*xO;}
*main(i, dup, signal) {
{
for(signal=0;*k *x * __FILE__ *i;) do {
(printf(&*"'\",x); /*\n\\", (*((double(tag,u)(*)())&floor))(i)));
goto _0;
_O: while (!(char <<x - dup)) { /*/*\*/
union tag u x{4};
}
}
while(b x 3, i); {
char x b,i;
_0:if(b&&k+
sin(signal) / * ((main) (b)-> xO));/*}
;
}
*/}}}
You can write code which accesses memory via pointers just like old C does. This is called backward compatibility, and it happens to be the source of most C++ memory corruption bugs. You can, but you don't have to. Actually, all the books tell you not to. Use strings and containers and you're good. It isn't that hard to figure out: container[past_the_end] doesn't do a bounds check, while container.at(past_the_end) does. You just have to want to do it. ;-)
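A minimal sketch of that difference (hypothetical values, not from the original post): the unchecked subscript is undefined behaviour, while at() throws a catchable exception instead of corrupting memory.

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};

    // v[10] performs no bounds check: undefined behaviour that may silently
    // read garbage, crash, or appear to "work".
    // int bad = v[10];

    // v.at(10) checks the index and throws instead of touching memory it shouldn't.
    try {
        int also_bad = v.at(10);
        std::cout << also_bad << "\n";
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << "\n";
    }
    return 0;
}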
Pointers are essential for some types of software, like writing the code that manages strings and containers. The point isn't whether a particular technique is good or bad but rather whether it should be used in a particular situation. It was well known back in the early 80s that you didn't write end user applications in systems programming languages -- obviously you could but it would be messy and risky. Unfortunately when usable PCs first appeared there wasn't a lot of language support for them so apart from the built in BASIC the only usable compilers were for languages like C and Pascal**. This pretty much set the tone for everything that came after -- C++ was primarily used as a kludge to help programmers write graphical code but it was still "C and a bit".
C is a systems programming language; it's essentially an assembler on steroids, so it's as safe or as risky as the programmers who use it. What problems you get with it come from its libraries. It's likely that these people, like many programmers, have never used a language outside an environment that includes the startup thread ("crt0" in old systems) or libraries, so they fall into the trap of assuming that libraries are an inherent part of the language.
So, what language are you going to write this new language in? Relatively few languages are able to bootstrap themselves up using only tools written in the language.
One of the biggest problems I have seen is copying data on the stack without checking the size. Because the stack grows from higher to lower addresses but data copying goes from lower to higher, any overrun corrupts data on the stack belonging to the functions that called you.
A technically simple solution is to have stack space assigned in the other direction (from lower to higher addresses), so that any overrun does not impact prior functions on the stack. This would prevent MANY crashes and actually avoid the security holes caused by stack overruns.
For most applications it would only mean a recompile, assuming you could work out the hardware requirements.
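Short of changing the hardware's stack direction, a minimal, hypothetical C++ sketch of the usual mitigations for the same problem: bound the copy explicitly, or keep variable-length data off the fixed stack buffer altogether.

#include <cstdio>
#include <string>

// Bounded copy into a stack buffer: the overrun never happens, so it cannot
// matter which way the stack grows.
void bounded(const char* input) {
    char buf[16];
    std::snprintf(buf, sizeof buf, "%s", input);   // truncates instead of overrunning
    std::printf("bounded: %s\n", buf);
}

// Heap-backed alternative: no fixed-size stack buffer to overrun at all.
void heap_backed(const char* input) {
    std::string s(input);
    std::printf("heap-backed: %s\n", s.c_str());
}

int main() {
    const char* attacker_controlled = "a string rather longer than sixteen characters";
    bounded(attacker_controlled);
    heap_backed(attacker_controlled);
    return 0;
}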
While many of the problems in designing languages were known in the 1960s, C ignored that wisdom. C++ came along and exacerbated the problems.
http://ianjoyner.name/C++.html
It is about time companies like Microsoft did something to educate their developers better.
They aren't terrible languages. Some systems, like embedded, can't be programmed in much else.
The problem is that they are one-size-fits-all, which makes them not very good at certain things.
Rust, or any other managed language, will never conquer the world the way C/C++ did. Languages are all purpose-built nowadays, which makes them very good at certain things and useless for others.
In college I remember hearing rumours of Java implemented in hardware. Well, it's 2019 and they can't even convince Java developers to move to Java 2. Lots of things have come along saying they'll do it better and safer. Many schools don't even teach C/C++ anymore, yet here we still are.
I'm skeptical about the promises of any new language that tries to market itself as the C/C++ killer.
The problem with trusting any tool to implement your safety is you have to trust that tool implicitly.
Java? Flash? PDF? Intel's ME(management engine)? Spectre/Meltdown?
It's easy to say Rust is safer. I'm sure it looks that way. But what's more dangerous, idiots writing in C/C++? Or the whole world jumping on Rust because of the promise of greatness?
Investigate, yes. But only time will tell if Rust can live up to its promise.
45+ years later, and C is STILL here.
Cobol, Pascal, Fortran, etc.? Time was not kind to them.
apparently not a lot of people, since it's WAY down the list (#33) on the latest TIOBE index.
https://tiobe.com/tiobe-index/
There's your proof, right there.
Noted, the "next big wave of Android development" Kotlin is even LOWER, way down there at #43. Well it's in the "top 100" making it SLIGHTLY relevant. And that's what I think of RUST, too: SLIGHTLY relevant.
Worth pointing out, COBOL and FORTRAN are at 27 and 29, respectively.
Rust MAY be safer, but it will not have the massive collection of libraries that other languages and organisations have built up, which are vital to their work. So people will have to use the FFI, and they will find their leakage problems are not solved.
I'd bet a far faster and more sustainable way to sort out these problems is to use the many code coverage and analysis tools that are available, and actually spend some time running them over the existing code for errors that are glaring to such tools these days.
Trying to make new languages to solve old problems will invariably lead to more annoyance - I've got about 40 different Python implementations on my system - DLL hell++. I have no doubt Rust will hit some similarly annoying, equally cul-de-sac moment that they didn't foresee and that has already been solved in many other languages.
Buffer overflows weren't much of a problem on IBM mainframes. Why not?
In UNIX, MS-DOS, and most modern operating systems, a text file is a long stream of characters, with a delimiter - whether it's LF, CR, or the sequence <CR><LF> - marking off one record from the next.
On IBM mainframes, I/O was handled differently. Records weren't like C strings, they were like Pascal strings: a text file would return a record as the printable characters in that record, and nothing else, with the length coming from an out-of-band field, like a variable-length field in a database file.
Hence the maximum length of a record might be 255 or 32,767 or 65,535 characters, and that limit was absolute; a delimiter failing to show up in time could not cause a buffer overflow.
Do that, and whatever language you use, you just have to provide a buffer of the standard maximum size, and the problem is totally solved.
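A minimal sketch of that record model (a hypothetical format with a two-byte little-endian length prefix, and a made-up file name), showing why a buffer of the standard maximum size is enough:

#include <cstdint>
#include <fstream>
#include <iostream>
#include <vector>

// Read one length-prefixed record: the length comes from an out-of-band field,
// so there is no delimiter scan and no way to run past the stated maximum.
bool read_record(std::istream& in, std::vector<char>& buf) {
    std::uint8_t len_bytes[2];
    if (!in.read(reinterpret_cast<char*>(len_bytes), 2)) return false;
    std::uint16_t len = static_cast<std::uint16_t>(len_bytes[0] | (len_bytes[1] << 8));
    buf.resize(len);                  // can never exceed 65,535 bytes
    if (len == 0) return true;
    return static_cast<bool>(in.read(buf.data(), len));
}

int main() {
    std::ifstream in("records.dat", std::ios::binary);   // hypothetical file
    std::vector<char> record;
    while (read_record(in, record)) {
        std::cout << "record of " << record.size() << " bytes\n";
    }
    return 0;
}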
If drivers have to devote their minds to the actual motions of their hands and feet as they drive (think QWOP), there wouldn't be enough mental capacity left to perform the higher-level judgment calls needed to stay safe on the road, especially in emergent circumstances. There's nothing wrong with having tools to help deal with common problems so that you turn your attention to higher-level problems.