Doh
I had to install this today on a new server installation.
The latest version of the application we use requires it to be installed by default.
It never gets any easier, does it!!
A critical bug in the optimizer in the just-released .NET 4.6 runtime could break and crash production applications, we're warned. "The methods you call can get different parameter values than you passed in," says Nick Craver – software developer and system administrator for Stack Exchange, home of the popular programming …
"What if it gives dosing information to a patient? What if it tells a plane what altitude to climb to?"
Not to downplay the severity of this bug but no one in the medical or aerospace industries will be using a version of .Net released last week in a production environment. And, in any industry this should show up in testing because you should be testing release builds, not just debug builds.
"There's a reason Sun put that "no nukes" disclaimer in Java."
However, Java is being used in a lot of semi-critical server-side systems now. I don't know if that's better or worse than it being done in a compiled language such as C++. What I do know is that the rush to use lovely untyped JavaScript on the server side is only going to end in tears.
There's a reason Sun put that "no nukes" disclaimer in Java.
That was legal arse covering (cowering?).
Maybe Sun/Oracle wants to say "our JVM and the sometimes mind-withering runtime libraries are not to be used for high-assurance tasks". That may be so. The Java language itself, however, is a separate matter. While Java is not exactly the best language for high-assurance programs, restricted Java is bound to be *better* (in the sense of more testable/easier to write and check) than restricted C/C++ code (like MISRA C), because the code can be checked more extensively.
But yeah, I would use Erlang and its VM at least for the high-level parts of the nuke control ...
"No way .Net can be debugged to the point it can be trusted at the controls of a plane."
Your choices:
A) the gcc toolchain with an Ada front end and a target which may or may not see much use elsewhere
B) the .NET compiler, used by millions, tested by loads of users, and a target CPU which no one in avionics uses
(I'm aware of Greenhills. I'm even aware of XD Ada. I'm ignoring them for now).
If I were you I wouldn't start from here. I certainly wouldn't want to fly anywhere from here :)
Irongut wrote:
Not to downplay the severity of this bug but no one in the medical or aerospace industries will be using a version of .Net released last week in a production environment. And, in any industry this should show up in testing because you should be testing release builds, not just debug builds.
No one in aerospace is using .NET for flight control systems, not now, not ever.
This is the Ballmer/Nadella Microsoft ... so messed up that it can't be fixed anymore.
Windows 10 is a huge mess. Only a fool would install Windows 10 just because "it is free" ... and it won't remain free anyway, because Microsoft's new vision is "Windows as a service", which means that instead of having free Service Packs they will be asking customers to pay for upgrades.
"Attaching a debugger, says Craver, changes the behavior and usually hides the issue."
The only one I've ever experienced directly in my professional career was in a mainframe screenscraper: when logging was on, the submillisecond delay caused by writing the log entry was enough for the MF to respond, which hid the fact that the procedure didn't wait for the response if it wasn't ready; with logging off, the procedure just fell straight through and returned an empty response.
Going back to sometime last century, while I was still very much 'grasshopper', the non-zeroed-out memory that got allocated to my noddy program caused my crap string-handling routine to die in all sorts of entertaining ways, but in the nice sanitised debugger environment it worked perfectly.
Utterly flummoxed until I had this fabulous epiphany and bothered to re-read my code. Crazy idea, but then I spotted my deliberate mistake; it's important to test oneself...
Happens a lot in C/C++: running a program in a debugger causes slightly different memory layouts, so if your program normally writes off the end of some memory and hits something critical, it may not when being debugged, or an uninitialised variable now just happens to be pointing to a fairly sane value.
Not just you or the other thread above - many a bug involving uninitialised variables, or inadvertently depending on stack layout or thread synchronisation, is only revealed in optimised builds. So a common recommendation is to debug on debug builds and test mostly on release builds (excepting those tests that can only be performed on debug builds, e.g. because they need extra instrumentation). The basic precept is that you should test what you expect the users to actually run; they won't much care that it's a compiler bug. So the "sensationalism" comment above was perfectly justified too.
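To make the above concrete, here is a minimal C sketch (my own illustration, not code from the article or from the .NET bug itself) of the classic uninitialised-variable heisenbug: whether it "works" depends entirely on what the previous call left on the stack and on how the build lays out that frame.

    #include <stdio.h>

    /* Leaves predictable garbage on the stack so the bug below is visible. */
    static void scribble(void)
    {
        volatile int junk[16];
        for (int i = 0; i < 16; i++)
            junk[i] = -1;
    }

    static int total(const int *values, int count)
    {
        int sum;                         /* BUG: should be int sum = 0; */
        for (int i = 0; i < count; i++)
            sum += values[i];
        return sum;
    }

    int main(void)
    {
        int data[] = { 1, 2, 3 };
        scribble();
        /* May print 6, may print garbage: reading an uninitialised variable
           is undefined behaviour, and debug vs. optimised builds (or an
           attached debugger) change what happens to be in that stack slot
           or register. */
        printf("%d\n", total(data, 3));
        return 0;
    }

A debug build, or stepping through in a debugger, will often leave that slot holding something harmless; turn on optimisation and the same code can return nonsense or get folded into something else entirely.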
If you want the last word in speed you use C/C++ and/or asm regardless of what platform you're on.
Often the language, libraries, database, ... in use are decreed from on high, not down in the weeds where the leads and grunts work. In my history, I never used the same language twice in a row (and quite often not twice over my life!), except in a classroom context. For more than a few such projects, even where I was deciding the components, it was a come-as-you-are party. Buying into the appropriate weapons wasn't always possible. Not that big a deal, but annoying nonetheless. For the record, I much prefer C with assembler optimizations. For something like, say, "Big Data", it may be more "efficient" to use what the rest of the team expects to use rather than drag out my toolboxes of statistical code with defined contracts & enforcement, performance, and so forth.
"That's not how we are going to do it!" Oft heard from managers.
Maybe they like the tooling that goes with it, or perhaps they like a nice meaty stack trace instead of the code going off into the weeds when some enterprising skiddie finds a hole in their input handling code.
Talking of code wandering off into the weeds: once upon a time, a class I was in was given a C++ assignment they could complete on Sun (Solaris) workstations or some DOS boxes with Borland C++... Most went for the latter because their code didn't cause a SEGFAULT on the DOS boxes; it was very funny to watch.
If you want the last word in speed you use C/C++ and/or asm regardless of what platform you're on.
I don't know what this even means.
"I have a fast hammer for people who are serially misjudging their skills and resources, so all the world must behave like nail?"
And anyone using assembler on modern CPUs needs to have their head examined. Seriously, go seek help.
> And anyone using assembler on modern CPUs ...
Depends, REALLY does. Just look at gmplib.org
I am aware that optimization is not the most popular task, but why the heck 4 (so far) downvotes on
> If you want the last word in speed you use C/C++ and/or asm regardless of what platform you're on.
Way too often I see crap code run massively parallel, a massive waste.
Yes, I do HPC.
The co-founder of Stack Overflow, Joel Spolsky, is a former Microsoft employee and is therefore more familiar with Microsoft technologies, so he chose to write SO in the language he knew best. While .NET may not run as fast as C++ applications, it does have many other advantages, such as cheaper development costs. For a new startup, it is far better to get up and running quickly with a C# solution and deal with performance later, when it becomes an issue, than to develop an optimised C++ solution and risk going bust before you have a product to show.
Anyway, who said they were obsessed with optimisation? Every programmer should be taking steps to optimise their code, regardless of what technology they are using.
This flaw was in the JIT compiler, correct?
That's disturbing. You could write your bug-free application, build it with stable and well-tested compilers/libraries, and run it through a complete test and validation suite before release. Only to have it fail spectacularly in the field.
Sweet.
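For anyone who does run into this on 4.6: as I recall from Craver's write-up, the stop-gap until a fixed runtime ships is to disable the new RyuJIT compiler and fall back to the legacy JIT, either via the COMPLUS_useLegacyJit=1 environment variable or per application with something along these lines in app.config (treat the exact element as something to verify against the official guidance):

    <configuration>
      <runtime>
        <!-- Fall back to the pre-4.6 JIT until the RyuJIT fix ships -->
        <useLegacyJit enabled="1" />
      </runtime>
    </configuration>

That only papers over the problem for your own processes, of course; it does nothing for third-party code already running on the new JIT.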