
WTF?
"That particular cock-up was caused by transatlantic boffins who mixed up their measurements in inches and centimetres."
Need a bigger facepalm. How long has the world had metric standard units for now?
With a $2.5bn price tag, a 350-million mile journey and 2 million lines of C and some C++ code, the only bugs NASA wants its Curiosity rover to find are those possibly beneath the Martian surface. And it may not be a particularly glamorous job, but software analysis outfit Coverity was the company tasked with "ensuring that …
They don't know why Beagle 2 failed. They don't know if it landed safely and couldn't deploy its antenna. They don't know if the atmosphere was thinner than expected and it broke up on impact. They don't know.
This is the first I've heard of any confusion between inches and centimetres causing the failure. If it was, that's something they would know: all they would have to do is compare the measurements, since every measurement they used is written down.
Actually they've worked out what must have happened to Beagle 2.
It was built using the very best traditions of British engineering practice.
But the unexpectedly high heat during atmospheric entry meant that it exceeded the working temperature of the adhesive on the sticky-back plastic.
Yeah, and can the US please follow the rest of the world and put ALL of its traffic lights where you actually need to stop, rather than on the other side of the junction. The number of times in Irving, TX that I've stopped at the lights, only to suddenly realise I was sitting in the middle of the crossing road...
Traffic signals? Aren't those things only suggestions?
Sounds like an inside joke my wife and I came up with while driving around Margarita Island, Venezuela on our honeymoon. They had the usual traffic signs, speed limits, traffic lights and such, but folks there were rather -- shall we say -- casual about paying attention to them. My wife and I began referring to them as "The Suggestions Of The Road", as opposed to the Rules Of The Road.
The locals in many of the smaller towns on Margarita solved the problem of people ignoring traffic lights and stop signs and speeding through intersections by rolling sections of tree trunks into the centers of intersections, forcing people to slow down in order to go around them. Worked like a charm.
@AC 14:28 (Why do people use AC? It's not like El Reg has a 'Real Name' policy!)
It (probably) has to happen sometime, and I will agree that the cost will be huge if done overnight, so don't do it overnight. Have a transition period. And that period will take many years.
Are there better things to spend money on? Not really. Oh wait, you're going to say "hospitals" and "teachers" aren't you? The money doesn't go to them though, it goes to the PFI providers (just ask the hospitals going bankrupt). And we probably need fewer hospitals anyway - concentrate all the knowledge down into a few centres of excellence to improve patient care; or would you prefer that children needlessly die because their surgeons do so few operations that they are verging on the incompetent?
Maybe you'd rather spend it on the military? Oh whoops! Won't go to the front-line troops. It'll be trousered in PFI. Again. Or wasted on jets that are massively over budget and decades behind in development. Or compensation for cancelled contracts that were failing to deliver, or...
Police? Maybe, but in a few years time that'll be a huge PFI hole as well.
Trains? HAH! Do not start me on that corruption which is supported by our spineless regulator.
I could go on.
So in summary... spending the billions on finalising this nation's switch to metric is one investment that might actually do something, and not just enrich the money-laundering tax-evaders in the City.
"And whilst they upgrade, can we please have our signs and limits change to km? Puh-leeze!"
No. The metric system is shit for that sort of thing. Indeed, it's not actually much use for anything other than programming computers because computers are dumb as all get out. Even then, many parts of the Imperial system are actually better suited even to computers.
Basing your measurement system on the genetic accident of how many digits you have on your hands is no way to go about things.
"Basing your measurement system on the genetic accident of how many digits you have on your hands is no way to go about things."
Whereas basing your measurement system on the genetic accident of how wide your thumb is makes *perfect* sense?
Any metric system is superior to Imperial for the simple fact that its scale is a consistent linear progression. Of course, the current decimal metric is itself inferior to a duodecimal measuring system: twelve divides evenly by 2, 3, 4 and 6, where ten manages only 2 and 5.
"How long has the world had metric standard units for now?"
Yes, but since when is the United States part of the world?
Oh right, sometime around the year 3000...
Futurama:
Leela: "Look, I know there are no car chases, but this is important. One of these two men will become president of the world."
Fry: "What do we care? We live in the United States."
Leela: "The United States is part of the world."
Fry: "Wow, I have been gone a long time."
No, Perl is more of an anarchist utopia -- the general complaint is that there's too much freedom, too many ways to do things, and not nearly enough whips and chains to save developers from themselves. I suspect the OP was thinking of something like Erlang, or any other language where claims about "correct by construction", "provability" and "mathematical formalism" are major selling points.
Aww, you guys with your 'perl' and 'brainfuck'. Such charming naivety brings a tear to me eye. No, programming bondage and discipline is something else; claiming Perl is b'n'd is right up there with thinking that 50 Shades is a realistic portrayal of a sadomasochistic relationship.
No, I was thinking more of Coq formal proofs and Haskell and Ada. The sorts of nasty, serious hardcore that will never be lighthearted fun, where there are no safe words to make the type system stop and when the amount of verbosity and ceremony actually leaves scars.
Brainfuck indeed. You have much to learn, young one. Tell me, have you ever tried... category theory?
"..how many lines of code are in the Coverity software, and what do they use to search for errors in that."
That's the problem with proving software: how do you know the tests or the proof are correct? In theory you could end up in an infinite regression of testing - prove the software, prove the proof, prove the proof of the proof, etc. I guess at some point you just have to draw a line and say "it's as good as it's humanly possible to get".
boltar says: "That's the problem with proving software: how do you know the tests or the proof are correct? In theory you could end up in an infinite regression of testing - prove the software, prove the proof, prove the proof of the proof, etc. I guess at some point you just have to draw a line and say "it's as good as it's humanly possible to get"."
It is actually not as bad as that. Proof systems usually have a very small set of primitive rules and combining forms that are verified by hand, and all proofs are then built up from this set of primitive rules and combining forms. This means that errors in the code that generates the proofs cannot produce faulty proofs (at least not without triggering run-time errors). Basically, the worst a programming error in the proof generator can cause is failure to produce any proof; faulty proofs cannot be made. Assuming, of course, that the small kernel of primitive rules is correct, but great effort has been made to ensure this.
When proving behaviour about programs, a much larger problem is whether the formal specification of the programming language actually corresponds to the behaviour of running programs: The compiler may not conform to the specified standard. Hence, you often verify the generated machine code instead of the source code. That way, you don't have to trust the compiler and you don't need a formal language specification for the high-level language. What you need instead is a formal specification of the machine language, but that is often easier to make. A problem is, though, that the machine language does not have information about the types of values (are they integers or pointers, and pointers to what?). So you sometimes make the compiler generate typed assembly language. The types can be checked by the proof system and help verification of the correctness of the code relative to a specification. Obviously, few "standard" compilers generate typed assembly language that can be verified this way.
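To make the small-kernel idea concrete, here's a toy version in C. It's not any real proof assistant's API, just the shape of the thing; in real code the Thm struct would be opaque (defined only in the kernel's own translation unit) so client code couldn't forge a theorem.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy LCF-style kernel: a Thm represents the theorem |- lhs = rhs.
     * Theorems can only be minted by the rule functions below. */
    typedef struct { char *lhs, *rhs; } Thm;

    static Thm *mk(const char *l, const char *r) {
        Thm *t = malloc(sizeof *t);
        t->lhs = strdup(l);
        t->rhs = strdup(r);
        return t;
    }

    /* Primitive rule 1 (axiom): reflexivity, |- a = a */
    Thm *refl(const char *a) { return mk(a, a); }

    /* Primitive rule 2: from |- a = b and |- b = c derive |- a = c.
     * A mismatched application is refused at run time: the kernel
     * produces no theorem rather than a faulty one. */
    Thm *trans(const Thm *ab, const Thm *bc) {
        if (strcmp(ab->rhs, bc->lhs) != 0)
            return NULL;
        return mk(ab->lhs, bc->rhs);
    }

    int main(void) {
        Thm *ok  = trans(refl("x"), refl("x"));   /* fine                 */
        Thm *bad = trans(refl("x"), refl("y"));   /* buggy generator step */
        printf("ok=%s, bad=%s\n", ok  ? "proved" : "rejected",
                                  bad ? "proved" : "rejected");
        return 0;
    }

However buggy the code that strings refl() and trans() calls together, the worst it can do is get a NULL back - exactly the "run-time error instead of a faulty proof" behaviour described above.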
No, those were the corrected lines. The original lines were:
Rover.PrimaryTarget = "Appleseed, John";
Rover.AI = new AI("Germinate");
They had been intended as a stub for testing, but some bright spark figured that they were just what a (nearly) barren planet needed and left them in without authorization.
While there is a lot of "religion" in choice of programming language, I find C a particularly bad choice for writing zero-defect software: There is not enough information in the types to catch even simple mistakes (such as writing x=y instead of x==y) at compile time, memory deallocation is unchecked and unsafe, lots of behaviour is specified as "implementation dependent" or "undefined" in the standard, and so on.
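For illustration, everything below is legal C that a conforming compiler must accept; -Wall flags the first line, and the rest sail straight through to run time:

    #include <stdlib.h>
    #include <string.h>

    void example(int x, int y) {
        if (x = y)               /* assignment, not comparison: at best  */
            x = 0;               /* a warning, never a hard error        */

        char *buf = malloc(64);
        strcpy(buf, "hello");    /* no check that malloc succeeded       */
        free(buf);
        buf[0] = '!';            /* use after free: compiles cleanly     */
        free(buf);               /* double free: likewise accepted       */
    }

Catching exactly this sort of thing is the job being outsourced to tools like Coverity.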
As a result, you have to throw a lot of complex analysis at the program just to catch errors that in most other languages would have given a compile-time error message, or which could not even occur in those languages. And to make the analysis tractable, programmers are forced to use only the simplest parts of the language, as the more complex parts are too difficult to analyse. Of course, this allows Stanford researchers to write a few scientific papers and Coverity to earn a few bucks. But that seems like a very costly solution.
I don't suggest using one of the newer mainstream languages, because while they have better type systems, they are not suited for small computers running real-time software. But there are plenty of languages designed for ease of verification and control of resources. Some of these even have compilers that are verified to generate correct code, which I don't think any (full) C compiler is.
Agreed but I am about 100% sure they would have used a safe subset (knocking out impl. dep. behaviour & more) + a whole lot of other procedures & doubtless other static checkers. Their figure of 1 bug per 1000 lines suggests that strongly. Also they probably define a bug more stringently than normal software dev process would.
Odd claims too, like checking for null pointer dereferences - erm, you can do that statically, I'm pretty sure, with suitable restrictions, much like const propagation I guess.
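You can - the classic pattern the checkers flag is a test that arrives after the use, something like this (lookup() is an invented stand-in for any function that may return NULL):

    #include <string.h>

    extern char *lookup(const char *key);  /* hypothetical: may return NULL */

    size_t name_len(const char *key) {
        char *p = lookup(key);
        size_t n = strlen(p);      /* checker: p may be NULL here...      */
        if (p != NULL)             /* ...because this test comes too late */
            return n;
        return 0;
    }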
I'd very much like current software dev processes - which are horrible - to grow up and become a boring engineering job. Yes, boring. Less 'excitement' for me, fewer 'thrills' for the user. Less crunch time, more having-a-life. It would also winnow the actually able guys & gals from the egoists, the bullshitters and the plain bad.
(this article does read suspiciously like an advert, no?)
May I draw your attention to "Structured programming" by Linger, Mills & Witt.
It describes the underlying tools used by the teams that built the software for the Shuttle (and whose work *defined* what the term CMM5 means).
AFAIK most of their key innovations were in *procedure* rather than actual software tools, although their change management system could give cradle-to-grave histories on every line of code in the source (not sure if this is SOP for *all* modern CMSs these days).
*sticking* to the process when deadlines loom is another matter.
I'd also recommend Harlan Mills "Software Productivity" for a very neat way to establish how many bugs are *likely* to be left in a program. Implementing it however may be quite tricky (but I bet it would make a hell of a product).
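If I remember the book right, the neat way is error seeding - capture-recapture borrowed from ecology: salt the program with known bugs before testing, then see what fraction of the salted ones testing actually finds. Back-of-an-envelope version (numbers invented):

    #include <stdio.h>

    /* Mills-style estimate: seed S known bugs; if testing then finds
     * s of the seeded and n indigenous bugs, and seeded bugs are
     * (assumed) as findable as real ones, n/N ~= s/S, so N ~= n*S/s. */
    int main(void) {
        double S = 20;   /* bugs deliberately seeded     */
        double s = 10;   /* seeded bugs found by testing */
        double n = 25;   /* real bugs found by testing   */
        double N = n * S / s;
        printf("estimated real bugs: %.0f, still lurking: %.0f\n", N, N - n);
        return 0;
    }

The tricky part in practice, as hinted, is seeding bugs that are genuinely as findable as the home-grown ones.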
Well, as they are running on the Wind River VxWorks RTOS, it could be that <insert your favourite language here> is simply not supported. I have not programmed in VxWorks, but looking at the design specs for Curiosity, I would imagine that getting as close to the hardware interface as possible is a requirement. Would <insert your favourite language here> support real-time control of the devices?
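(For anyone wondering what "close to the hardware" buys you: in C a device register is just a pointer to a magic address. The addresses and register layout below are invented for illustration.)

    #include <stdint.h>

    /* Hypothetical memory-mapped motor-controller registers */
    #define MOTOR_CTRL   (*(volatile uint32_t *)0x40001000u)
    #define MOTOR_STATUS (*(volatile uint32_t *)0x40001004u)
    #define MOTOR_BUSY   (1u << 0)

    void motor_start(uint32_t speed) {
        MOTOR_CTRL = speed;                /* volatile: the write really happens */
        while (MOTOR_STATUS & MOTOR_BUSY)  /* volatile: re-read on every pass    */
            ;                              /* spin until the device is ready     */
    }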
Also, JPL programs in C as a language of choice. I have worked with them (briefly) and although I was not working in C, it quickly became apparent that all their examples and code base was in C. Well, actually, some of the examples were in (shudder) FORTRAN.
Mine's the one with the card deck and manual hole punch in the pocket.
Here's the guideline for ESA's Galileo System (a monstrous "industrial policy" effort gone badly wrong, but that's another problem):
The Galileo Software Standard (GSWS) defines five different software development assurance levels (SW-DAL), which determine the situations in which software that has passed a given DAL can be used:
Level A: Software whose anomalous behaviour would cause or contribute to a failure resulting in a catastrophic event.
Level B: Software whose anomalous behaviour would cause or contribute to a failure resulting in a critical event.
Level C: Software whose anomalous behaviour would cause or contribute to a failure resulting in a major event.
Level D: Software whose anomalous behaviour would cause or contribute to a failure resulting in a minor event.
Level E: Software whose anomalous behaviour would cause or contribute to a failure resulting in a negligible event.
Programming languages allowed according to [GSWS-SWENG-1180]:
Ada, Assembler, C: any DAL
C++: Allowed only for DAL D and E (except if you can get a waiver)
Java: Allowed only for DAL E (except if you can get a waiver)
So yeah, C is nice. Though the GSWS also says: no dynamic memory allocation, and other such casualness. Also, independent validation and verification by a second team if above DAL D, etc. It's a telephone book of requirements...
I'd agree with certain elements of such software not being written in C, but I'm mystified by some of your comments...
"There is not enough information in the types"
I'm not sure what you're trying to say here - I certainly can't parse it. Short of explicitly casting pointers to things to stupid values, C is perfectly well aware of what types things are.
"to catch even simple mistakes (such as writing x=y instead of x==y) at compile time"
If y is assignable to x, then how do you know that 'x = y' is not the intention? If y is not assignable/convertible to x, then an error will (or can) be produced by any half-decent, current-standards-compliant C (or, indeed, C++) compiler. This assignment-versus-equality problem is common to many languages, but it is impossible to classify as an error (because it isn't one, if the types are assignable/convertible) - which is part of the reason that every C compiler I've used within the last 10 years at least - many long before that - warns about it.
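For instance (gcc and clang both behave this way with -Wall):

    void demo(int x, int y) {
        if (x = y)        /* warning: suggest parentheses around       */
            x++;          /* assignment used as truth value            */
        if ((x = y))      /* doubled parentheses: the accepted idiom   */
            x++;          /* for "yes, I really meant assignment"      */
    }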
"lots of behaviour is specified as "implementation dependent" or "undefined" in the standard, and so on."
They mostly tell you what not to do. This is useful. It is perfectly possible to write well-defined C programs - alas, in the wrong hands (and even the right ones) it's all too easy to write ones that aren't. C is not alone in that, just jolly good at it...
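Quite - and the trouble is how ordinary the undefined bits look. A few classics:

    #include <limits.h>

    int examples(int i) {
        i = i++ + 1;        /* undefined: i modified twice, unsequenced  */
        int big = INT_MAX;
        big = big + 1;      /* undefined: signed integer overflow        */
        char c = 200;       /* implementation-defined: plain char may be */
        return i + big + c; /* signed or unsigned                        */
    }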
choke.. arf, arf, arf, arf, arf......
C is 'easier to statically analyze' because, by definition, the compiler is unwilling to prevent whole classes of static faults, thus leaving [whole classes of static faults] available for a separate static analysis program to detect.
You *might* like to check the MISRA coding rules.
They were developed for *automotive* applications: engine management units, brake and gear-change systems.
I.e. *lots* of meatsacks testing it, and if it fails, someone *will* end up going "squish".
Dynamic memory allocation is *explicitly* ruled out as "unsafe" (at any speed).
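The usual MISRA-friendly dodge is to size everything at compile time and allocate the lot statically; a sketch (names invented):

    #include <stddef.h>

    #define MAX_MSGS 32

    typedef struct { int id; char payload[64]; } Msg;

    /* All storage fixed at compile time: no malloc, no fragmentation,
     * no out-of-memory surprise halfway down the motorway. */
    static Msg msg_pool[MAX_MSGS];
    static size_t msg_count = 0;

    Msg *msg_alloc(void) {
        if (msg_count >= MAX_MSGS)
            return NULL;          /* bounded, predictable failure mode */
        return &msg_pool[msg_count++];
    }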
Well, I'm interested. Anyone else use Coverity? All the static analysis tools I've used fail to find anything terribly complex. A tool that could spot data consistency errors across threads would be nice.
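The sort of thing I mean - perfectly clean to every static analyser I've tried (pthreads, names invented):

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long balance = 0;

    void deposit(long amount) {   /* takes the lock: fine               */
        pthread_mutex_lock(&lock);
        balance += amount;
        pthread_mutex_unlock(&lock);
    }

    long peek(void) {             /* forgets the lock: a data race, and */
        return balance;           /* nothing at compile time says so    */
    }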
Also, the language debate appears to be winding down so... C++ is the best language ever because recursive template FFT implementations are the fastest.
Cheap eastern labour at CERN.
"The cost [...] has been evaluated, taking into account realistic labor prices in different countries. The total cost is X (with a western equivalent value of Y)" [where Y>X]
source: LHCb Calorimeters: Technical Design Report
ISBN: 9290831693 http://cdsweb.cern.ch/record/494264
Yes, but it is probably not what you want. So set the compiler to warn about such things, and further set it to treat all warnings as errors. If you really, really want that construct as written, then put in a pragma or some other marker so that, for that line, the compiler does not treat it as a warning. Your program will compile, and there will be a rather obvious sentinel to the next poor sod who has to maintain your wonderfulness that you did something a little unconventional (because we all know that real C programmers can't be arsed to write comments explaining that they're doing something unconventional, now can they?)
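Concretely, with gcc that looks something like this (clang spells the pragmas the same way):

    /* Build with: gcc -Wall -Werror ... so any warning stops the build. */
    int consume(int (*next)(void)) {
        int c, total = 0;
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wparentheses"
        while (c = next())        /* deliberate assignment-as-condition */
            total += c;
    #pragma GCC diagnostic pop
        return total;
    }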