He's blunt. He's aggressive. He's offensive.
He's right.
And thank heavens he still cares enough about his baby to kick ass when needed.
Linux Lord Linus Torvalds has unloaded as only he can in a post to the Linux Kernel Mailing List. At issue is some new networking code that popped up in what was hoped to be the final version of Linux 4.3. Torvalds' first words on the code were: "Christ people. This is just sh*t." Torvalds is grumpy because some new code has …
He's not corporate, either; he has no need to be polite.
In professional atmospheres, I've witnessed a whole lot worse than "what is this shit?". Privately, I've heard even high-end corporate types use such language in meetings - they are only more polite for fear of some HR comeback if they're not. In everyday personal life, I certainly give such verdicts on crap that I'm forced to use.
I'm more interested in why anyone thinks this is worthy of an article. "Elite software developer gets pissed off at crap software code foisted upon him".
What gets me is that a few messages later is a re-pull request without that code - so it's entirely unnecessary, unjustified and only crept in (if the new pull request is to be believed) because nobody commented on it when it was posted to the usual mailing lists. This is a heads-up to his maintainers, the people posting patches, and those reviewing patches. "We don't do this".
And guess what? I bet nobody's going to try pushing that shit into the kernel again.
"I'm more interested in why anyone thinks this is worthy of an article. "Elite software developer gets pissed off at crap software code foisted upon him"."
I suspect the 'newsworthy' part of this is actually how restrained Linus was in his rant this time. After all, nothing caught fire and women and children didn't run screaming.
It's small wonder that GNU/Linux is going the way of the Blackberry! It's run by people who act like little children! Nay, worse than children.
Incivility is rarely appropriate and never constructive. Much better ways exist to get one's point across, without resorting to such tawdriness! And anyone defending such antics deserves equal blame.
Besides, in a few years, GNU/Linux will go the way of the Blackberry. And small wonder, that.
"Incivility is rarely appropriate and never constructive. "
Oh shut up! Yes, I do agree, but sometimes you have to vent, clear the air, make people aware of your emotions. THEN most of us will apologise and get the thing fixed.
Sorry, but despite us having this noble air of civility, we're all just monkeys with technology. The killer instinct is just below the surface in every single one of us, and it's only the brainwashing we get in our families, schools and social circles that helps us keep it under control. Once in a while it bursts out, we break something, shout and scream, and then we get back to normal.
"Sorry, but despite us having this noble air of civility, we're all just monkeys with technology. The killer instinct is just below the surface in every single one of us, and it's only the brainwashing we get in our families, schools and social circles that helps us keep it under control. Once in a while it bursts out, we break something, shout and scream, and then we get back to normal."
Although I don't like what you wrote, I'm forced to agree with it.
You "politically correct at all cost" types make me wanna puke. How is this incivil? he's insulting the code no the author. The code IS crap and he's pointed that out with the force it requires to nip it in the bud and stop some moron copying that approach again later.
>> It's small wonder that GNU/Linux is going the way of the Blackberry!
Wow, how uninformed can you be? You need to go get a clue.
Guess what OS over 99% of the world's top 500 supercomputers run? Guess what OS is the parent of what most of the world's smartphones run? Guess what OS Google and most of the other servers on the internet run? Guess what OS your own home networking gear is probably running?
It's open source development, licensed under the GNU GPL. Pray tell, how can it go the way of Blackberry? It is not even a company...
Sure, other people have formed companies which have created products based on Linux - Red Hat, VMware and Canonical, to name a few.
There's a rather good argument that he's wrong. Essentially, using the compiler built-in is more efficient and in a way more readable, as everybody understands what this code is about (checking for an overflow).
Overflows are one of the hard parts of C(++/#) and even Java.
Here is the full argument by someone who professionally finds integer overflows:
http://blog.fefe.de/?ts=a8c95274 (In German of course, this is about computing after all)
"There's a rather good argument that he's wrong."
I don't think it's a zero-sum game in this case. The code he's complaining about is not particularly readable; I'm with him on that. And using language extensions is suspect, in my book, but then I've never been fond of Gnu's "almost but not entirely unlike C" language.
On the other hand, his proposed alternative, while very clear, duplicates the addition and so risks skew - in a future edit, someone might change one line but miss the other.
The broader point, that checking for over/underflow in C is a pain in the ass and easy to get wrong, and at best makes the code more verbose and harder to read, is well-taken. This is one of the real problems with C: making it safer is possible, but expensive both for the author of the code and anyone reading it later (and some studies put reading and understanding code at as much as 40% of the overall cost of typical software development).
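To make that concrete, here's a sketch of the two styles side by side. The names are mine, not the kernel's; the built-in is __builtin_usub_overflow, available in GCC 5+ and recent Clang:

#include <stdio.h>

/* Plain-C pre-check: for unsigned types the subtraction wraps exactly
   when the subtrahend exceeds the minuend. Note the operands appear in
   both the test and the subtraction - the duplication/skew risk. */
static int sub_checked_plain(unsigned a, unsigned b, unsigned *out)
{
    if (b > a)
        return 1;               /* would underflow */
    *out = a - b;
    return 0;
}

int main(void)
{
    unsigned mtu = 100, overhead = 150, res;

    if (sub_checked_plain(mtu, overhead, &res))
        puts("plain check: underflow");

#if defined(__GNUC__)
    /* The built-in subtracts and reports the wrap in one step,
       so nothing has to be written twice. */
    if (__builtin_usub_overflow(mtu, overhead, &res))
        puts("built-in agrees: underflow");
#endif
    return 0;
}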
beaten into Microsoft software people "Make money at all cost!"
Well, Windows 10 says the beatings didn't work.
I've no doubt that if MS had been taken private by somebody like Silver Lake (a private equity outfit focused on IT, for those not familiar), then W10 would have been very different, would have been a winner, and we'd (happily) have been reaching for our wallets, because no way on this planet would it have been free.
Maybe I'm alone on that, but on balance I would have paid for W10 if it had worked. So a choice of pure tiles'n'apps or W7 menus according to my preference, ideally some new whizzy tech to justify the price (that is, not crapware like Cortana), no spyware, better security (including a complete ban on security ground-zeroes like Flash), and all for a cost of around thirty quid.
But Microsoft don't see this. W10 will be rammed down our throats (and when pressure doesn't work, they'll just end-of-life W8 to force us), but speaking for myself, I can smell a couple of things. One is the smell of decay (I think somewhere Redmond way), and the other thing I can smell is mint. Linux Mint. I'm not there yet, but Microsoft are pushing and pushing, and eventually my patience will give way. I just needed to replace a domestic laptop, and I've ordered a Chromebook. I know that's Google spyware, but the point is that it's the first non-Microsoft computer in this household in nigh on thirty years.
Mint is not the highest quality Linux distribution. So don't judge all of Linux by Mint if you ever end up running it. Although Mint has a lot of features that do make it new user friendly. Mint is akin to your first bicycle with training wheels on it. Once you get your balance you might want to try out a distribution with a more mature orientation. By then you'll know which to choose.
Now as far as spyware goes, everyone is spying on you as soon as you access the web. So I don't know how much Google spying on you some more really matters. Have a cookie.
I think this tired argument about Windows being "undeniably insecure" is wearing thin.
In reality Windows is now pretty secure, sure it was insecure 10 years ago, but now as each vulnerability has been patched the malware writers are having to attack things like Java and Flash as it's increasingly difficult to attack Windows itself.
The old claims about FOSS being inherently secure because it is open, and that means more eyes were able to look over the code, have been debunked lately. I am thinking of the likes of OpenSSL/Heartbleed and TrueCrypt and the number of viruses that are appearing on Android.
The truth is (shock horror) that while few end users were using Linux, it was not worth the malware writers' attention and so it seemed secure. Now that it's finally going mainstream in its Android form, and there are substantial numbers of end users with bank accounts to hack, suddenly we are seeing the truth: Linux, like Windows, is written by mere mortals, and there are just as many bugs as Windows had.
GOTO is fine in very select circumstances, e.g.:
int init_structure(struct my_structure** s)
{
    int ret = 0;
    *s = malloc(sizeof(struct my_structure));
    if (!(*s)) {
        ret = errno;
        goto exit_init;
    }
    *s->substructure = malloc(sizeof(struct another_structure));
    if ((*s)->substructure) {
        ret = errno;
        goto exit_init;
    }
    free(*s);
    *s = NULL;
exit_init:
    return ret
}
Yes, that GOTO could be factored out quite easily in this situation, but suppose you had not one, but three or four sub-structures to allocate? Then an IF-based solution becomes messy. GOTO actually works out clearer to understand. Easy to read code is easy to debug code. There's no spaghetti here!
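For instance, with three sub-structures the cascading-cleanup shape stays flat (a sketch with made-up struct names, not kernel code; note that POSIX mallocs set errno on failure, which plain ISO C doesn't promise):

#include <errno.h>
#include <stdlib.h>

struct a { int x; };
struct b { int y; };
struct c { int z; };
struct holder { struct a *a; struct b *b; struct c *c; };

int init_holder(struct holder *h)
{
    h->a = malloc(sizeof(*h->a));
    if (!h->a)
        goto fail_a;
    h->b = malloc(sizeof(*h->b));
    if (!h->b)
        goto fail_b;
    h->c = malloc(sizeof(*h->c));
    if (!h->c)
        goto fail_c;
    return 0;                   /* success: nothing to unwind */

fail_c:                         /* each label unwinds one more level */
    free(h->b);
fail_b:
    free(h->a);
fail_a:
    return errno ? errno : ENOMEM;
}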
What's not okay is jumping between labels in other functions or using it to replace if, while and for statements. The C language has things like setjmp and longjmp for doing exactly this, and it's one feature that to my mind is just asking for trouble, so I never use that feature. That's what leads to a big tangled mess that no one can understand.
setjmp and longjmp are essentially the C version of exception handling and save doing if-then checks on returns of nested functions. But they're really only for use when things are really in a mess and unrecoverable - as exceptions should be used rather than as a general function return method which is what seems to happen in C++ rather too often due to poorly written libraries and ignorant coders.
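In its minimal form the mechanism looks like this (illustrative only):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf recovery;

static void deep_inner(int fail)
{
    if (fail)
        longjmp(recovery, 42);  /* unwinds straight back to the setjmp */
    puts("deep_inner: ok");
}

int main(void)
{
    if (setjmp(recovery)) {     /* 0 on the first pass, nonzero after longjmp */
        puts("recovered from an error raised further down the call chain");
        return 1;
    }
    deep_inner(1);
    return 0;
}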
This post has been deleted by its author
It's not a leak. If the sub-structure allocation succeeds, the goto just after it is there to skip round the free. The free is only called if the first malloc succeeds and the second fails, and in that case, you want to free *s but not the sub-structure.
Having said that, the confusion caused by a supposedly simple example of how to write good code with gotos is a shining example of why most people steer clear of them.
Vic is correct, OP is leaking.
No I'm not - I misread the sense of the allocation test - it changes between the two mallocs. The goto is on successful allocation for the second one. I should look more carefully next time.
Although I don't make hard-and-fast rules about the use of goto, I think this is a case where it really shouldn't be used...
Vic.
This post has been deleted by its author
In C, there is never a strict need to use goto. Anything that uses goto can be rewritten without it - this can be proven mathematically. C without goto is still Turing-complete.
But it can sometimes be cleaner - a single goto can take the place of a chunk of code repeating twice and taking brackets five levels deep. Some consider the goto to be a tidier option.
This is wrong.
Even very good C or C++ developers make errors that can be security holes. All you can say is that they are less likely to make them, which isn't really good enough. These languages often silently hide these flaws, ripe for exploitation later. Lots of tools help identify the flaws, but it's a lot of effort.
It's the 21st century and we still have languages that demand perfection from flawed humans. No wonder this site is stuffed with exploit news.
This post has been deleted by its author
Not true. C requires constant vigilance for buffer over/underruns, arithmetic overflows, pointer aliasing, etc., all of which can occur in C code without any compile-time or run-time warning. Other languages have features that reduce the impact or interrupt control flow when these things happen. Unfortunately most of them aren't very suitable for writing OS kernels.
"And I don't agree with using GOTO. Ever. In the last 23 years of C* and C++ programming, I've never felt the need to use one."
Never use how long *you* have been doing something as a way to show how bad ass you are at it. Especially when you then go on to say you haven't done it in the last 15 years...
There are groups of people that have been doing something they are sure is the correct and totally right way of doing things for thousands of years that are undeniably wrong.
Have you ever used break in a loop? Well you just used a goto to the outside of the loop. THE HORROR!
"Have you ever used break in a loop? Well you just used a goto to the outside of the loop. THE HORROR!"
I've heard that argument many times before, but it doesn't hold up.
Sure, a break statement in C is semantically equivalent to a goto whose destination label immediately follows the loop. That doesn't mean that the two are equivalent in all ways: the break statement is self-documenting. You know as soon as you see it what it's doing and where it'll jump to; with goto you know none of that without examining the code and finding the label. The code with the break statement is more readable and more maintainable than the equivalent using goto.
So, even when it's doing something unexceptionable like jumping out of a loop the goto should be considered harmful when compared with break. In other cases it can be far more harmful.
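For what it's worth, here are the two spellings side by side (a toy sketch, not anyone's real code):

#include <stdio.h>
#include <stddef.h>

/* break spelling: the jump target is implicit - just past the loop */
static size_t find_break(const int *items, size_t n, int needle)
{
    size_t i;
    for (i = 0; i < n; i++)
        if (items[i] == needle)
            break;
    return i;                   /* n if not found */
}

/* goto spelling of the same control flow: the reader must locate
   the label before being sure where execution lands */
static size_t find_goto(const int *items, size_t n, int needle)
{
    size_t i;
    for (i = 0; i < n; i++)
        if (items[i] == needle)
            goto found;
found:
    return i;
}

int main(void)
{
    int xs[] = { 3, 1, 4, 1, 5 };
    printf("%zu %zu\n", find_break(xs, 5, 4), find_goto(xs, 5, 4)); /* 2 2 */
    return 0;
}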
>That doesn't mean that the two are equivalent in all ways: the break statement is self-documenting.
if (failedatargument)
    goto packupandgohome;
It's almost like you can give the label a sensible name so that it describes what the code at the label does.. almost as if it's called a label for exactly that reason.
"You know as soon as you see it what it's doing and where it'll jump to; with goto you know none of that without examining the code and finding the label."
That all depends on how much code there is between the break and the end of the loop, and whether the blocks within that section have been indented consistently and correctly - it might just be faster to find the label than have to count the opening and closing braces that follow the break.
(that's not to say I'm condoning using gotos and untidy bracing!)
break can be hard to spot where your code winds up too, it depends on how big the loop is or how deeply nested it is.
There are other features of C that can make code hard to read. Sometimes there are good reasons why something is written a particular way. One bit of code I'm not particularly proud of in terms of its looks is this:
https://bitbucket.org/vrtsystems/pyregblock/src/1bb36223780687b07c94820ada8052f25171f0ab/regblock.c?at=master&fileviewer=file-view-default
No, not pretty, abuses the C pre-processor something fierce, but it works (on x86, AMD64 and ARM at least), and gets the job done. Crucially, the macro is written once, then instantiated several times to give me code that operates on 8-bit, 16-bit, 32-bit and 64-bit integers.
This was done to avoid writing the same thing multiple times. C++ was another option, but I wasn't sure how to go about gluing Python's C API and C++, whereas macros at least got the job done.
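As a rough illustration of the technique (not the actual regblock.c code - a toy macro I've made up here), the body is written once and stamped out per width:

#include <stdint.h>
#include <stdio.h>

/* Define the function once as a template, instantiate per width. */
#define DEFINE_CLAMP_ADD(width)                                         \
    static uint##width##_t clamp_add_u##width(uint##width##_t a,        \
                                              uint##width##_t b)        \
    {                                                                   \
        uint##width##_t s = (uint##width##_t)(a + b);                   \
        return s < a ? (uint##width##_t)-1 : s;  /* saturate on wrap */ \
    }

DEFINE_CLAMP_ADD(8)
DEFINE_CLAMP_ADD(16)
DEFINE_CLAMP_ADD(32)
DEFINE_CLAMP_ADD(64)

int main(void)
{
    printf("%u\n", (unsigned)clamp_add_u8(200, 100));   /* 255: saturated */
    printf("%u\n", (unsigned)clamp_add_u16(200, 100));  /* 300 */
    return 0;
}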
"And I don't agree with using GOTO. Ever. In the last 23 years of C* and C++ programming, I've never felt the need to use one."
Then you're a fool. It's there for a reason, and one day you'll find it. Never say never.
"*I haven't actually used C in over 15 years, and seriously hope I don't ever have to again. It's a terrible, insecure language that should have been donated to a museum a very long time ago."
Languages aren't insecure, people are. Again, use the right thing for the job and ditch the prejudice. There's no such thing as a language that's good for everything, and one that tries to force its users to write secure code by removing all the best features is going to burn sooner.
we live in an age where 24 inch wide screen monitors have been the standard for any office for at least the last 10 years. So we really should move on from limiting lines to 80 characters
Long lines are hard to read -- regardless of our ability to display them intact -- and that has an effect on code clarity. Somewhere around 80 characters is the upper limit for optimal readability, so while I dislike hard limits I would hope not to see many lines longer than that in a decent codebase.
Actually, the human-factors people tell us that line lengths in the range of 60 to 72 are best; readability is (slightly) degraded at 80. (By this I mean line length exclusive of indentation: a line using columns 24-84 is basically as readable as one using columns 0-60.)
And I REALLY HATE web-presentation that forces lines longer than 120: their stuff is TOTALLY unreadable. (Some of IBM's documentation fails this way; they should know better ;-( )
*2) we live in an age where 24 inch wide screen monitors have been the standard for any office for at least the last 10 years.*
I wish. At most of the sites I've worked at in those same 10 years, large monitors are only for managers and users - developers have to make do with the cheapest 14 or 15 inch monitors available :(
"*I haven't actually used C in over 15 years, and seriously hope I don't ever have to again. It's a terrible, insecure language that should have been donated to a museum a very long time ago."
C is the minimal no-BS model of a load store machine, which is why all the big kids play with it. All we are really doing is loading registers then doing ops then storing results, regardless of the language used.
You could say all the other languages are just wrappers around C.
If you cannot avoid shooting yourself in the foot, perhaps you should not be allowed to play with guns.
By using *s all over the place instead of just assigning the final result to it, you make your code unreadable, error prone, and inefficient.
What's worse is returning a separate error indicator, with a function name that makes it absolutely unclear how that error indicator is to be interpreted, instead of the obvious way of returning the pointer. What's good for malloc is good for me.
int init_structure(struct my_structure **s)
{
    enum { eAlign = 7 };
    size_t aligned_size, total_space;
    char *base;

    aligned_size = (eAlign + sizeof(struct my_structure)) & ~eAlign;
    total_space = aligned_size + sizeof(struct another_structure);
    errno = 0;
    base = calloc(total_space, sizeof(*base));
    if (!base)
        return errno;
    *s = (void *)base;
    (*s)->substructure = (void *)(base + aligned_size);
    return 0;
}
One of the worst parts of python lint, 80 character lines. Hateful. Otherwise fab language.
As for C without goto: the indented-code method is the way I always did it too.
Readable, obvious, more testable - easy for humans to track and most importantly:
One function exit, just one - allows easy test/debug instrumentation macros etc.
80 character lines is not bullshit, it is still relevant today. More text than that on one line is not as legible, and encourages people to make convoluted statements that would be clearer if set on multiple lines.
With today's massive monitors, this doesn't mean that you have lots of wasted space, it means you actually have space for contextual information around your editor, eg man pages, other source files, other resources..
"One of the worst parts of python lint, 80 character lines. Hateful. Otherwise fab language."
Waah (it's actually 100, but whatever). Put max-line-length in your pylintrc and dry your eyes.
Actually there is another reason for 80 character lines:
If you use nice American letter paper (8.5 inches wide) and print out at 10 characters per inch (common for nice line printers, like an IBM 1403), you use 8 inches of space for the output lines.
So, if you DO document things and print them out (a good idea in my book), 80 character lines and MONO spaced type are nice to have. Sure there are smaller fonts, but as you advance in age (like me), you tend to prefer the nice 10 character/inch type on nice letter paper.
In the older days, you used the same output, but used blue (green) bar paper which was 14 inches wide and 11 inches long, and it had lots of room for the scribblings you made while doing the developing. (been there, done that!).
So, 80 character input lines are a nice thing to have.
❝ [...] and it had lots of room for the scribblings
❝ you made while doing the developing.
Oh god! I read that and had a flash of memories hit me. I usually used the 8x11" white dot matrix or daisy printer paper rather than that green lined stuff, but when I ran into a difficult to figure out problem, or I was going to be away from the computer for a while, I would print out my code, and spend hours looking at hard copy, adding notes, rewriting in the margins, crossing out lines of code or variables... Sometimes just having a hard copy that you could flip through the code by pulling pages up accordion style to compare different sections made everything clearer. Sometimes seeing it like that made me realize I could combine several chunks of code into one elegant short piece of code. I really should get back into the habit of printing code like that — so many great solutions and ideas were spawned from it.
I had forgotten all about that until I read your comment. The wave of suddenly remembering all that at once was a weird feeling (in a good way).
I'm pretty sure the line length limit in the FORTRAN f77 I used as an undergrad in the 90s was lower than 80 characters. You lost the first 6 or so characters on every line (okay, they weren't really lost, but they had specific meanings so for most lines of code they weren't used), and I'm pretty sure everything after character 72 was ignored, no?
"That's as much as you could fit on ye olde punch card."
IIRC the last columns on a card were for the card number - with the "continuation" indicator in column 6. The numbers were usually spaced in 100s so that added cards for corrections could be slotted in using 10s for the first change - then finally 1s, before part of the pack had to be duplicated and renumbered.
It was not unusual to drop a block of cards on the floor - then using the card numbers to get them back into order. There were three common errors:
1) Taking too big a handful of cards out of the tray - and the centre "blew out".
2) Forgetting the back end of a full tray of cards was balanced over the edge of a work surface. At the critical unloading point the cards left in the tray had a critical centre of gravity - and the tray did a backward flip scattering its contents over the floor.
3) Forgetting to put the "lid" on the high speed card reader's vertical output hopper. The read cards were sprayed into the air.
This post has been deleted by its author
"80 character lines is not bullshit, it is still relevant today. More text than that on one line is not as legible"
That's not quite true. Line lengths of about 120 chars are fine which is why paperbacks use them. But 80 chars is generally enough for most lines.
The 80 chars come from the terminals used in the 1970s and the practice of distributing patches via e-mail where additional line breaks could cause problems. It's stayed around because readable diffs are so important.
80 chars isn't a hard limit in Python and I don't know any people who use something like lint to enforce it. PEP 8 is the main set of rules, wisely reminding us that "a foolish consistency is the hobgoblin of little minds".
"That's not quite true. Line lengths of about 120 chars are fine which is why paperbacks use them. But 80 chars is generally enough for most lines."
Hmm, only nasty cheap paperbacks. In print typography, it is recommended to have between 50 and 60 characters per line, including spaces. See Emil Ruder's "Typography".
Today's massive monitors mean that you're dragging your eyes across acres of screen real estate, miles away from the rest of the concepts related to the bit your eyes happen to be pointing at, and you forget the rest of the context because it's so far away. Why do you think the established convention in newsprint for centuries has been columns that you can read by scanning downwards, with the whole column width under your eyeball?
First of all * has lower precedence than -> so it will mercifully not even compile without parentheses around the second *s.
There are three routes to exit_init. Instead of reserving all the goto statements for failure cases and only dropping through for success, confusion is assured by executing the second goto when both allocations succeed, but disguising it as another failure case by returning whatever errno happens to contain. In the drop-through case it has failed to allocate, but returns zero suggesting success rather than an actual error number.
There is no test coverage for any of the three routes. Nevertheless, it does appear either to allocate correctly, or fail without leaking. Until one day someone will notice an unexpected return value, decide to go ahead and fix it without writing and running any tests, add a supposedly missing exclamation mark to the second test, and thereby manage to break two out of the three previously working exit cases, including the all important success case.
[Anonymous to save me having to turn down a flood of job offers to work with such fine C code, or in case my analysis is not quite correct.]
This post has been deleted by its author
Style is a matter of taste. You may find the assignments "extremely sloppy", but to many others they are a familiar idiom that does actually serve a purpose.
I am quite happy to provide you with the braces you crave, so long as you always give me a space after if/while/for, and never before a function call.
I have seen (and removed) so many NULL checks before free (or delete) that it no longer surprises me. I like to think that those developers who were not sure used the brain cells to remember something more useful instead.
We know that the local variable ret is just a relic of a previous revision. By all means refactor it away if it bothers you enough to risk changing working code and taking the time to retest it.
(this was written in reply to Anonymous Coward, but the Reg Reply seems to have misplaced it)
Generally agree, but please add "return is not a function".
That said, I suspect that you never ran across a system where free(NULL) did bizarre and hateful things. Programmers who have used such things get gun-shy. Much of the code I have written ran on a fairly wide variety of (allegedly) POSIX systems, but I learned pretty early not to assume that every implementation was flawless. I wrote a simple "acceptance test" for the functions declared in string.h and ran it against the most common systems at the time (Vax VMS and BSD, DOS, SunOS). Not one passed. Don't even get me started on SunOS sprintf() and toupper().
There's a Chesterton's Fence argument sleeping here, but I will not fully awaken it.
As for braces, I agree in principle, but we are discussing Linux kernel code, where braces on single-statement if clauses are forbidden. Yeah, even though using them in the "goto fail" code recently under discussion would have allowed the compiler to catch it.
And I've written FORTRAN on 90-column cards, for a compiler that IIRC Don Knuth worked on as a student. He also co-wrote a paper regarding "Notes on avoiding goto" that remains about the most sane discussion of the topic. (It also got him a witty comment from Eiichi Goto)
I wrote C code for VAX, DOS and SunOS too, but without becoming gun shy. Then I learnt to use free(NULL), isupper, toupper, and sprintf exactly as specified in C89. These days we add platform-specific conditional compiling for snprintf and format specifier macros to cope with size_t, long long, and integers with particular numbers of bits.
"I suspect that you never ran across a system where free(NULL) did bizarre and hateful things"
Since 1989, conforming C implementations have been required to handle free(NULL) correctly. Anyone writing C code should know whether the implementation they're using is actually C - that is, something that conforms to some version of ISO 9899 - or some other language which is either an ancient historical artifact or misnamed. And they should write their code accordingly, not with a bunch of cargo-cult constructs learned in other environments.
And yes, I wrote code for pre-standard implementations; I started writing C in '87, and it took several years for conforming implementations to become common. I've tracked down and documented bugs in a few implementations. I know of common ones that still can't get it right (hello, Microsoft!). But I write code for the language and adjust as necessary for the implementation.
" I suspect that you never ran across a system where free(NULL) did bizarre and hateful things. "
Back in 1970 we were supporting a relatively new O/S. Every so often we would arrive in the morning to find a large pile of overnight system dumps waiting for us to analyse. We soon learned to ask the question "Who has just employed a new programmer?".
The existing programmers had all established their own styles - and we had fixed the bugs they exposed. A new programmer found other holes - usually in error checking or other assumptions.
One classic case was in doing double buffered I/O. It was found that the O/S call didn't expect the application to change the parameters in its control block while the operation was in progress. The fix was for the call to copy the parameters into protected O/S store.
Comprehensive parameter checking is often neglected in implementations of functions - of which free(NULL) is a classic example that can cause havoc.
>Comprehensive parameter checking is often neglected in implementations of functions -
>of which Free(NULL) is a classic example that can cause havoc.
Case in point; The SDK for a microcontroller platform by a big name silicon vendor corrupts the heap if you call free() with a null pointer. This SDK was released a few months ago.
This post has been deleted by its author
Not everyone has exactly the same notion of readability, or even cares that much about consistency. It doesn't always automatically slash the odds of screwing up. It might not actually be that important to trump or overrule any individual's taste to ensure that the whole team writes code that we can all read and understand and modify easily enough. Most of us are comfortable enough working with code that follows any of the more mainstream coding styles.
I wonder if applying a const-correctness guideline to current would have been enough to stop the backdoor from compiling.
The do { ... } while (0) idiom works wonders against multi-line macro mischief and mayhem. I'm not actually against braces, but I really can't remember the last time I saw a defect caused by omitting them. It's hard to go wrong if you write straightforward code using an editor that tracks indentation correctly.
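For anyone who hasn't seen why, a quick sketch of what the idiom protects against (made-up macro):

#include <stdio.h>

static int log_count;

/* Without the do/while(0) wrapper this macro would expand to two
   statements, and the else below would refuse to compile (or worse,
   silently bind to the wrong if). Wrapped, it is one statement. */
#define LOG_AND_COUNT(msg)  \
    do {                    \
        puts(msg);          \
        log_count++;        \
    } while (0)

int main(void)
{
    int verbose = 1;
    if (verbose)
        LOG_AND_COUNT("hello");     /* safe even without braces */
    else
        puts("quiet");
    printf("logged %d line(s)\n", log_count);
    return 0;
}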
"Wasting time and space with redundant local variables. (Just return errno at the end.)"
From here:
"The setting of errno after a successful call to a function is unspecified unless the description of that function specifies that errno shall not be modified." However, free() does not mention its interaction with errno...
i.e. a POSIX compliant free()
could potentially corrupt errno
. (I've known implementations that will reset it to zero.)
This post has been deleted by its author
This has no less than three compilation errors: a missing semicolon, and missing parentheses around both *s before ->.
Once those are corrected, the logic seems clear enough, and capable of passing tests. So we can move on to look at style, performance, and portability.
Using a local variable (register) to hold an output result is a common tip to make code smaller and faster and a little clearer. The register needs copying to the output parameter *s just before each return.
Most style guides mention that there is no need to check for NULL before calling free. The check is here either because a developer did not know that for sure, or perhaps just so that others who might read the code do not need to know that for sure. We can probably rule out an attempt to tune the failure case by saving a function call at the expense of an extra test in the success case.
It is hard to contrive a situation where it would be useful in practice, but some C dialects do permit empty structs, for which a malloc implementation can return either a pointer to nothing, or NULL. That will cause the success case to fail.
if(*s) free(*s);
The test is unnecessary. C (ISO 9899) has always required free() to do nothing if the argument is a null pointer. Yes, you have the overhead of a (leaf) function call, but that's a negligible price to make the source more straightforward and readable, and remove a possible source of error (like, say, someone not noticing the if-body isn't a compound statement, and trying to do something else inside it without correcting that).
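In other words the whole thing collapses to something like this (a sketch, reusing the thread's struct name):

#include <stdlib.h>

struct my_structure { int dummy; };

/* free(NULL) is a guaranteed no-op in ISO C, so no guard is needed;
   nulling the pointer afterwards cheaply prevents a double free. */
static void destroy_structure(struct my_structure **s)
{
    free(*s);
    *s = NULL;
}

int main(void)
{
    struct my_structure *p = NULL;
    destroy_structure(&p);      /* safe even though p is already NULL */
    return 0;
}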
If you know machine/processor code, there is always, and has always been, a goto. Then you can play around with the "return" (again a goto) to produce a subroutine or whatever you want to call it. The goto will never disappear. What has mixed you up - nothing "wrong" with that - is the sentence "Goto considered harmful", or you are talking about some macro language like Algol, a nice language I used 40 years ago; perhaps there was a goto there too. Then again, why the hell did I goto into this.
If the contributors to the kernel start behaving like those foolish little children calling themselves students in the USA, who seem to believe that any opinion they find offensive somehow infringes their human rights and requires "safe spaces" so they don't have to confront it, then the project is doomed. Linus might be an ass sometimes but he's the boss and if you don't like the way he operates or can't handle being treated like a functioning sentient adult rather than a baby who needs to be coo-coo'd to, then go play elsewhere.
"Linus might be an ass sometimes but he's the boss and if you don't like the way he operates or can't handle being treated like a functioning sentient adult rather than a baby who needs to be coo-coo'd to, then go play elsewhere."
There are plenty of bosses who would love to have an obsequious toady like you in their employ. Your idea of an ideal leader would be Stalin, right? Perhaps you'd feel more at home in North Korea, eh?
It is actually Torvalds who is acting like a child; that an (ostensible) adult would have these kinds of tantrums is astonishing. And he can get away with acting like this because he is constantly pampered: see for example the very first post in this thread, where another lickspittle-in-waiting (one "Rol") wrote that this kind of self-indulgent tantrum signifies that "he still cares enough about his baby to kick ass when needed." One has to be quite emotionally / psychologically crippled to confuse these kinds of verbal attacks with "caring".
He has poor leadership skills, and shows an inability to control himself. And people who can not control themselves should not be in control of other people.
I recall reading something that lamented the fact that it was Torvalds and not Alan Cox who ran Linux. But then again, Cox left Linux for a long time because of Torvalds. How conducive to Linux's progress was that, do you think?
@ Turtle
I used to work for a major international chemical company as part of a root cause team. When investigating why we were getting out of specification failures, it was discovered some idiot had not checked calculations as they were supposed to through sheer laziness. That little boob cost the company £10M straight off the bottom line but luckily didn't explode the plant. It's a grown up world out there where lazy actions can cost lots of money and potentially lives so a verbal arse kicking to slackers is quite justified.
"There are plenty of bosses who would love to have an obsequious toady like you in their employ. "
AFAIK Linus doesn't employ anyone and the kernel isn't a company project (OK, you can debate what Dead Rat and Canonical get up to, but that's another argument...). You take part of your own free will, under no duress, knowing exactly what things will be like if you fuck up in a major way. Don't like it? Don't do it.
"He has poor leadership skills, and shows an inability to control himself. And people who can not control themselves should not be in control of other people."
Yet oddly he's kept this project on track since 1991. Let's see you manage that. No doubt your approach would be something like: "Oh, this code didn't quite work, shall we take it offline for a 360 where we can have a heads up and drill down into your performance metrics and facilitate a new mindshare knowledge transfer from the other team stakeholders?" Blah fucking blah.
"But then again, Cox left Linux for a long time because of Torvalds."
Did he? This is a direct quote from his Google+ page:
---------------------------------------------
I'm leaving the Linux world and Intel for a bit for family reasons. I'm aware that "family reasons" is usually management speak for "I think the boss is an asshole" but I'd like to assure everyone that while I frequently think Linus is an asshole (and therefore very good as kernel dictator) I am departing quite genuinely for family reasons and not because I've fallen out with Linus or Intel or anyone else. Far from it I've had great fun working there.
---------------------------------------------
I see absolutely no reason for him to dissemble this time given he's been quite open in the past about disputes. Perhaps you should check your facts before clicking on reply.
Actually, God helped Linux by giving it Linus as creator.
What all the ".. he is acting like a child types .." do not get is that Linus is the man blowing the whistle in the WW1 trench, knowing that the actions of each individual not only affect the result of the operation, but also the fate of all others. Developing kernel code is as much a moment of truth, as leaving the trench, requiring decisive leaders not allowing weakness or mistakes.
Leaving one such line, would soon turn the whole kernel code base into a pile of junk, since with each new release more of such mistakes would sneak in without the oversight of men like Linux Torvalds.
Let's not overgeneralize. *This time* he didn't name names. On the other hand, he still made some pretty strong insinuations about "whoever" wrote the code, and "whoever" isn't hard to discover. That's well beyond just criticizing the code.
On the other other hand, I've been on too many projects that *didn't* lay down the law this firmly. Developers are a sneaky lot, and they tend to have their own agendas. They'll keep sneaking in code that they know is crap, if it lets them mark more of their personal tasks complete. If nobody is watching, or nobody responds strongly enough to put the fear of God into them, the result is a codebase that slowly rots into irrelevance. I do think Linus and (even more so) certain other Linux kernel developers behave in some pretty toxic ways sometimes, but as we try to improve that situation we still need to remember that bringing the hammer down once in a while is strictly necessary to maintain any kind of quality. It's all in how it's done, not whether it should be done at all.
This makes me ask: why does the person who committed the code still have commit rights?
Maybe it's debatable how possible compiler optimisations should be reflected in source code. I've always thought it was the compiler's job to figure this out, using minimal sugar in the source. But this kind of debate is obviously misplaced in the kernel and probably indicative of other issues: maybe it's time to part ways.
"This makes me ask: why does the person who committed the code still have commit rights?"
No-one has "commit rights" anymore, this is the purpose of git. Linux sees pull requests, and he can see pull requests from anyone.
Regardless, whoever this was should still have their metaphorical "commit bit". Just don't present code like that again - plus, Linus use of invective about the crap quality has ensured that this is widely discussed, and the reasons why Linus doesn't like that code disseminated, and he doesn't have to tell any other junior neophytes from the cult of GCC not to do similar in future.
"Developers are a sneaky lot, and they tend to have their own agendas. They'll keep sneaking in code that they know is crap, if it lets them mark more of their personal tasks complete."
This is happening more and more often in the Linux kernel. The Microsoft Mentality ("Ship it now, fix it later") is still widespread and many coders have migrated over to Linux (and BSD) without shaking it off.
The vast majority of this kind of crap shows up in hardware drivers (generally written by coders working to manglement deadlines) and without Linus policing it the kernel would rapidly become unusable.
Some argue that drivers should be external to the kernel, but that would make things even worse. The binary blobs we have to put up with at the moment are bad enough.
Just be glad that the "Paid by production" subcontractor model(*)(**) doesn't get past Linus either.
(*) "Never write a 3 line function when you can write 6 pages of incomprehensible code instead."
(**) There are a lot of coders who will do this even if not incentivised by the modern equivalent of "a penny per word", thanks to way they were taught.
"There are a lot of coders who will do this even if not incentivised by the modern equivalent of 'a penny per word'"
Downvoted for "incentivised"! ;-)
Though I'm not sure the charge applies to the open source development. I think buzzwords and fads are more likely explanations for convoluted and error-prone coding.
It goes without saying that drivers should not be in kernel space. Even if this is only an organisational distinction.
"Downvoted for "incentivised"! ;-)
Though I'm not sure the charge applies to the open source development."
The vast majority of shitty driver code is submitted by people who are _paid_ to write driver code for devices produced by their employers.
Usually the "non-professionals" have to pick it apart and rewrite it into something that actually works.
Sorry, Stuart, you got the error handling wrong - you want it to return 0 if the substructure alloc worked. If it didn't, you want to free 's' and then return 'errno'.
In situations like this, 'return' is your friend. On the Amiga, the OS doesn't track your allocations, so in any error situation you have to free allocated stuff yourself. The structure I tended to use was a big 'if' sequence, testing for success. Somewhere in the middle, everything is set up properly, and I 'return' from there. After the various levels of 'if' comes undo code for stuff successfully set up before that 'if'. Worked pretty good. I did cheat in one case and change the indentation to 2 instead of my default 4, though. The alternative is to just call an interior init function that is structured the same way.
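That pattern, using the struct names from the example further up the thread, looks something like this (a sketch; POSIX mallocs set errno on failure, which plain ISO C doesn't guarantee):

#include <errno.h>
#include <stdlib.h>

struct another_structure { int x; };
struct my_structure { struct another_structure *substructure; };

int init_structure(struct my_structure **s)
{
    int ret = 0;
    *s = malloc(sizeof(**s));
    if (*s) {
        (*s)->substructure = malloc(sizeof(*(*s)->substructure));
        if ((*s)->substructure)
            return 0;           /* fully set up: return from the middle */
        ret = errno;            /* undo the first level */
        free(*s);
        *s = NULL;
    } else {
        ret = errno;
    }
    return ret;
}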
I love Linus' no-nonsense way of thinking.
I presume that the Sarah Sharp sympathisers will be once again up in arms over this latest episode..
Personally I prefer a boss that lets you know where you stand instead of the namby pamby approach where nothing gets resolved and just lingers on and on creating nothing more than frustration for all involved...
Anyone with excellent vocabulary skills can be twice as damaging as someone who just uses "profanity". Actual swear words have no importance; what is important is the reasoning behind them.
Why would the "Sarah Sharp sympathisers" (it ain't just Sarah Sharp, come on) be "up in arms" about this.
For once, Linus isn't being personally abusive. Very few people care about the actual swearing - no-one likes being sworn AT. If someone tells me what I've done is f***king stupid, I get the message, actually. Telling me that I'm f***king stupid makes me instantly think "f**k off, I'm not, I made a mistake" and not exactly keen to bother with their "critique" in future. I think it's a marked improvement, frankly.
"Why would the "Sarah Sharp sympathisers" (it ain't just Sarah Sharp, come on) be "up in arms" about this."
Because they like to take this at face value and do not care about the rest of the details.
Personally I do not believe that a softly, softly approach is a good solution when faced with difficult situations. Linus takes his job seriously and expects those around him to do the same; this is a well known fact "before" you take on the job. If you don't want to be in that situation then you don't take on the job. Up until now I would say that Linus has done far better than anyone else that I can imagine, regardless of his attitude or usage of big bad swear words.
A softly softly approach would more than likely lead to disastrous results as that shitty code would have eventually been accepted in order to keep the peace...
The last block of code in the article is missing a ")" before the "goto".
That aside, Mr Torvalds is right in this specific case, and in general too. A lot of the constructs used to work with, and around, the preprocessor are just ugly and could quite happily be dragged out behind the back of the shed and shot.
"In any professional organisation he'd have been fired for bullying"
I take it that by "professional organisation" you mean a company. In any company the coders would have annual - or even more frequent - reviews. There would be more formal disciplinary approaches. People not meeting standards would eventually cease to be paid.
The Linux kernel team isn't a company. Some contributors may be being paid by various companies to take part, but even so, unlike a manager in a company, Linus has no influence on this. The only influence he has is to accept or refuse code according to his standards* and to make clear why code is being refused. With so many contributions coming in he simply can't afford to be deluged with sub-standard code, and with such a large and loosely aggregated population of contributors and would-be contributors he needs to get the message out loud and clear.
So how, given the realities of the situation, would you deal with substandard code?
*I have to say I don't agree with all his decisions, lack of raw devices being one.
"In any professional organisation he'd have been fired for bullying"
It's only bullying if he picks on specific individuals. All the evidence I have seen suggests that Linus just cares about the quality of the code, no matter who it comes from.
I've worked in hardware and software development for more than 30 years and I have never been anywhere where this would be taboo, provided the criticism was justified and that it was directed against the work rather than the person.
I think if your corporate culture is hyper-protective to the point where any kind of conflict is suppressed or avoided, then your precious employees are going to have a shock when they interface with the real world.
I just read "Although, some people can program in an assembly language and understand the intricacy of the spacecraft, most younger people can't or really don't want to," in the accompanying story on the lack of FORTRAN programmers ... these are symptoms of the same problem - many programmers are a lazy bunch who generally just get by - check the box, mark it as accomplished and move onto the next project.
I had one guy once who actually believed that if it compiled without any errors then it had to be good code - I fired his ass; criticism can only do so much.
'Compiled without errors' only means you defeated the compiler. The next round is against the system, who will lay the smackdown on your newb code with his dreaded SIGSEGV attack. Train hard... after you defeat the system, you're up against The BlackHat. If you're lucky, you can survive the BlackHat match and go on to face The Changed Requirements
>a good FORTRAN programmer can write FORTRAN in any language
I once had the misfortune to work with FORTRAN code developed by a bunch of naval architects. Most of the variables were women's names. Or girls' names, as they would have put it. Not a trait that was language-dependent.
> You just wiped out most of the astrophysics community.
Yeah, I have seen that kind of "writing" by a PhD student.
1) Borrow olden code in FORTRAN to do fluid dynamics around a blackhole
2) Not sure what it does, how to call it but it fits right in where I have these call parameter things. Also runs well on the basement vector processor (remember this was a bit earlier in my career)
3) Produces pretty graphs which look good on the projector. "Nobody else will use it anyway"
4) ???
5) SORTED!!
Next grad student will pick up the code at 4)
You should see PhD-candidate C++. It's much worse.
Or Java written by someone who believes "structure is good; the more structure the better, no matter whether it is relevant to the situation at hand or not." I've seen so-called scientific visualization stuff with more *files* than there are lines of code in my C package.
The webbie crowd have decided to get into kernel development.
This is the same problem that every fucking web monkey I've ever had the displeasure of dealing with has. Given "way everybody does it that we all know works and is bulletproof" vs. "NEW11!!!!1111 way of doing it that'll look rilly kewl on my CV", guess which one they go for? Every. Fucking. Time. Without. Fail.
I started my programming career in telecom where your code had to work 24x7x365, no excuses. And so my code is what I refer to as "kindergarten code". It's always as simple as possible, even if it takes a few more lines to accomplish. It should always be easy to understand with no fancy tricks, no showing off allowed, no cramming 20 lines worth of code in to one convoluted undecipherable mess of a line.
Someday, someone else is going to have to look at your code, so don't be an asshole, make sure they have a fighting chance of understanding what you've written and why.
"Someday, someone else is going to have to look at your code, so don't be an asshole, make sure they have a fighting chance of understanding what you've written and why."
And if you don't, the day will come when you're trying to understand some piece of code and shout, "Who the Hell wrote this crap!?", then realize it's your old code.
"I started my programming career in telecom where your code had to work 24x7x365, no excuses. "
One of my cousins was working in telecom in the late 70s/ early 80s on FACOM mainframes.
He spent nearly his entire time having to pull a Linus on coders who thought it was acceptable to write pages of incomprehensible (and fragile) gibberish instead of simple elegant functions. The problem is that he inevitably had to find it _after_ it had been deployed and was causing problems in the billing systems.
Such coders are at the core of many of the PR problems you see at various major utilities and they're also found writing stuff which is critical to safety-of-life functions.
This post has been deleted by its author
The C language definition people have codified rules to make C as fast as FORTRAN and Pascal, but without VAR declarations and strict type checking. And the C compiler people have implemented the standardised shortcuts to make C as fast as FORTRAN and Pascal.
You can't make assumptions like "a+1 > a" or
hlen + sizeof(struct frag_hdr) < hlen + sizeof(struct frag_hdr) + 8
Hence the "magical built-in compiler support", for magical built-ins like overflow_usub.
Now I'm not commenting on the specific code: probably there is other information that means the expression is safe in that line, using those variables, at that point. But that's the way of madness, and how unreliable code gets written.
And personally, I think it would be a bit unfair to blame the coders for writing obscure compiler-specific code, when the language specification and compiler writers have devised a language where it is not safe to write clear, simple code.
There is a small amount of push-back: one suggestion is that it would be nice if C compiler developers could offer the option to turn off "unsafe optimisations", based on some simple studies of where their C compilers break expectations. Obviously, this comes up against the old C/Unix expectation that it's the user's responsibility to be excellent and avoid bugs, not the computer's (and that anybody who needs help to avoid language warts is a loser anyway), so I've not seen it get much support.
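Here's a concrete example of the sort of check such optimisations can silently delete (a sketch; try it at -O0 versus -O2):

#include <limits.h>
#include <stdio.h>

/* Signed overflow is undefined, so the compiler may assume
   x + 1 > x always holds and fold this whole test to 0. */
static int will_wrap_signed(int x)
{
    return x + 1 < x;
}

/* Unsigned wrap-around is defined, so this check is reliable. */
static int will_wrap_unsigned(unsigned x)
{
    return x + 1 < x;           /* true exactly when x == UINT_MAX */
}

int main(void)
{
    printf("%d\n", will_wrap_signed(INT_MAX));    /* 0 or 1: compiler's whim */
    printf("%d\n", will_wrap_unsigned(UINT_MAX)); /* always 1 */
    return 0;
}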
You are correct: it's about proper checking for integer over/underflow.
For unsigned variables you have to detect the wrap-around yourself (there's no access to the hardware carry flag from C), e.g.:
if ((foo - 1) > foo) {
    /* underflow: the decrement has wrapped to the high end of the range */
}
if ((foo + 1) < foo) {
    /* overflow: the increment has wrapped to zero */
}
mtu -= hlen + sizeof(struct frag_hdr);
1) Use of the error-prone "-=" only liked by crazed C-fashionistas
2) sizeof(struct frag_hdr) should prolly be constant, why is it computed here?
But most important:
Feels like "mtu" should be constant in this context. That's what "mtu" means. We are modifying it?
"2) sizeof(struct frag_hdr) should prolly be constant, why is it computed here?"
It's not; sizeof is evaluated at compile time, so that expression is a constant.
Feels like "mtu" should be constant in this context. That's what "mtu" means. We are modifying it?
We aren't, we're determining it. MTU is customarily negotiated between the ends of the connection.
2) The size of a struct can be determined at compile time, no need to store it in a variable. Hardcoding the value isn't a good idea, as it reduces platform independence, maintainability, and readability.
3) I'm not familiar with the code in question, but 'mtu' is probably a local variable, initially set to the MTU size and decremented as a packet is processed. You could use a 'packet_size' variable instead, but then you'd have to look up the MTU size every time you check for overflow, whereas this way you just check if 'mtu' is negative or not.
"3) I'm not familiar with the code in question, but 'mtu' is probably a local variable, initially set to the MTU size and decremented as a packet is processed. You could use a 'packet_size' variable instead, but then you'd have to look up the MTU size every time you check for overflow, whereas this way you just check if 'mtu' is negative or not."
Part of the point of this discussion is that mtu is an unsigned integer, so it doesn't become negative after an underflow. It does assume a stupidly large value after underflow and one could check for that, but such a check would be (marginally) more complicated and would result in (considerably) less readable code.
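Which suggests the shape below: test before subtracting, and hoist the sum into one local so the addition isn't written twice (the skew risk mentioned earlier in the thread). struct frag_hdr here is a stand-in, not the kernel's definition:

#include <stdio.h>

struct frag_hdr { unsigned char bytes[8]; };  /* stand-in only */

int main(void)
{
    unsigned int mtu  = 4;      /* pathologically small, to trip the check */
    unsigned int hlen = 0;
    unsigned int overhead = hlen + sizeof(struct frag_hdr);

    /* An unsigned mtu can never be negative: mtu -= overhead would
       wrap to a huge value instead, so the test must come first. */
    if (mtu < overhead + 8) {
        puts("mtu too small to fragment");
        return 1;
    }
    mtu -= overhead;
    printf("payload per fragment: %u\n", mtu);
    return 0;
}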
> Way to prove you have no idea what you're yapping about. Any chance of you not commenting on articles/comments about C with your usual drivel now?
Says an anonymous coward who is quite likely wetter behind the ears than a baby wipe. GB2 XBox, jerk.
Also, who has talked about "hardcoding"?
wow, we got all the balls in the world in this thread.
you guys all talk a big game about how you would go "yes, linus, may i please have another?".
if he were actually your boss you would either complain or you would just walk off the job.
what are any of you actually complaining about? moving bounds checking and overflow checking to the compiler using special functions? if your difficulty with that is that it is not in the C spirit of things i'll certainly cop to that. the C way of doing things is to never bounds check and leak resources everywhere and let your stack get smashed because "real" programmers certainly never check their math more than once.
this is why you are all being replaced by Indians who do the work for 1/10th the salary and 0/10ths of the ego.
It has to be 100% compatible with every single C compiler on the planet, because they're all going to compile it.
It is not a place where you should - or even can - use funky compiler features.
It matters if it doesn't compile using the esoteric C compiler developed for a specific rare CPU, or causes unexpected side effects due to rare compiler bugs - or difference in interpretation of the C standards.
The Linux kernel is probably the only large library that is used on every CPU currently manufactured - as well as many that aren't.
>It has to be 100% compatible with every single C compiler on the planet,
>because they're all going to compile it.
Linux only really compiles with GCC, maybe clang (not sure what the state of that is) and maybe some stuff like the Intel compiler. It does not compile with every compiler on the planet by a long way.
>It is not a place where you should - or even can - use funky compiler features.
Except that the Linux source tree has support for doing just that, i.e. having macros etc. to fill in features that aren't supported by certain compilers/GCC versions/platforms, as sketched below.
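The pattern looks roughly like this; a simplified sketch of the fallback-macro idea, not the kernel's actual include/linux/compiler.h:

/* Provide __has_builtin for compilers that lack it. */
#ifndef __has_builtin
#define __has_builtin(x) 0
#endif

/* Use the GCC/clang branch-prediction builtin where available,
 * and degrade gracefully to a plain test everywhere else. */
#if defined(__GNUC__) || __has_builtin(__builtin_expect)
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
#else
#define likely(x)   (x)
#define unlikely(x) (x)
#endif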
>It matters if it doesn't compile using the esoteric C compiler developed for a specific rare CPU,
> or causes unexpected side effects due to rare compiler bugs - or difference in interpretation
>of the C standards.
Linux doesn't compile on any esoteric compiler. You're talking out of your bottom.
>The Linux kernel is probably the only large library that is used on every CPU
>currently manufactured - as well as many that aren't.
There are CPUs manufactured today that don't have C compilers.. so how does that even work?
"It has to be 100% compatible with every single C compiler on the planet, because they're all going to compile it."
No, it doesn't. Only GCC and maybe the Intel Compiler can compile the Linux Kernel. I'm not even 100% certain about the Intel Compiler.
Clang can't compile the Linux kernel, MSVC can't and PGI can't either.
Here's a bit of advice: if you have never, ever, tried compiling the Linux kernel from source, don't post self-aggrandizing bullshit pretending that you have. You'll get caught and called on it by those of us who do this on a regular basis.
Also, the Linux Kernel is not a library. It's a relocatable.
"this is why you are all being replaced by Indians who do the work for 1/10th the salary and 0/10ths of the ego."
Seeing as you brought up Indians: the vast majority of the offenders in the "write as much as possible to do as little as possible and see how many latent bugs you can include" camp are Indian subcontractors.
You get what you pay for, and they make up the low upfront costs in consultancy later, when you find the code can't do what's needed.
@ Donkey Molestor X
"this is why you are all being replaced by Indians who do the work for 1/10th the salary and 0/10ths of the ego."
But are probably better than you at starting sentences with capital letters.
Seriously, consider your use of the word "boss". Nobody who submits code is managed by him. Either they do so off their own bat or they're employed by other organisations where he has no managerial role. The number of individuals submitting is far, far greater than any conventional manager would have to deal with. How would you cope with the situation?
"Educate"
How? There are thousands of contributors and many thousands of contributions. Would you have time to give individually written feedback educating them?
" and inform."
Well, they've just been informed, haven't they?
"If their skills or approach does not improve then remove them."
How? Remember he doesn't employ them. He doesn't call them into his office for annual reviews which will be sent up to HR.
This post has been deleted by its author
Yes educate. Put people and processes in place to give feedback on best practice and why certain coding approaches are incorrect or suboptimal. There's no need to be a dick about it.
And if your version control mechanism doesn't allow you to block updates from persistent offenders, then there is a flaw in the process. Fix the process. There's no need to be a dick about it.
"No need to go all postal in the process."
The _only_ times I've seen Linus go off at individuals is when said individuals had done more than enough to deserve it.
He doesn't do it for a 1st (or even a 2nd or 3rd) offence. You have to be a serial submitter of poor code to get the public scorning and it's done because certain individuals are such precious little flowers that public humiliation is the only way the message sinks in.
This isn't a company. He can't fire people. They can't even be banned from submitting. They can only be named and shamed if they're continually wasting Linus' (and others') time.
What's that "+ 8" doing there? Just added a little bit more to be on the safe side?
It's part of the specific check, added in the new code, (probably for security to avoid unexpected over-run errors), only loosely related to the original code.
The old code didn't check that mtu was big enough before subtracting.
The new code checks that (a) there is no overflow when it does the subtraction, and (b) the result is still big enough (8) for whatever comes next.
Both new versions of the code include the "at least 8" (i.e. "> 7") check, but the first version used an obscure new compiler-specific method to avoid overflow in the check. Without seeing the rest of the code it's not possible to say whether the Linus version handles overflow correctly in every case, and that uncertainty is exactly why the compiler writers provided the obscure compiler-specific method in the first place.
An option for the compiler writers would have been to provide a compiler that did work as expected without requiring a new compiler-specific method, but they didn't do so, because "it's not required by the C standard".
I have no idea if the Linus rant was actually directed at the compiler writers for providing such a platform, rather than at the poor coder who was forced to work in that environment: it will be interesting to see if there is any defence / follow up.
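To pin down the two shapes being compared, here's a rough standalone sketch (plain C, not the kernel source; underflow_sub is a made-up stand-in for the compiler-specific helper):

#include <stdio.h>
#include <stdbool.h>

/* Stand-in for the compiler-specific helper: subtract b from a, store
 * the (possibly wrapped) result, and report whether it underflowed. */
static bool underflow_sub(unsigned int a, unsigned int b, unsigned int *res)
{
    *res = a - b;
    return b > a;
}

int main(void)
{
    unsigned int mtu = 1280, hlen = 40, frag_hdr = 8;

    /* Shape (a), the rejected style: subtract, then ask if it went wrong. */
    unsigned int m1 = mtu;
    if (underflow_sub(m1, hlen + frag_hdr, &m1) || m1 <= 7)
        printf("(a) too big\n");
    else
        printf("(a) payload room: %u\n", m1);

    /* Shape (b), the replacement: check there is room, then subtract. */
    unsigned int m2 = mtu;
    if (m2 < hlen + frag_hdr + 8)
        printf("(b) too big\n");
    else
        printf("(b) payload room: %u\n", m2 - (hlen + frag_hdr));

    return 0;
}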
Thanks for having a go, but that's not really any clearer. What exactly is the thing coming next that needs 8 bytes, and how do we know 8 is going to be enough? If it's leaving room for a specific thing, then there should be something defining the size of that thing, surely?
Still looks like a magic number to me, yet this is Linus' fixed version of the code. So what's going on?
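One plausible answer, taken from the IPv6 spec rather than from the thread itself: RFC 2460 requires every fragment except the last to carry its payload in multiples of 8 octets, so the check reserves room for at least one 8-octet payload unit after the headers. A toy walk along the boundary, with a hypothetical named constant (the kernel itself uses the bare literal):

#include <stdio.h>

#define IP6_FRAG_UNIT 8   /* fragment payload comes in 8-octet units */

int main(void)
{
    unsigned int hlen = 40, frag_hdr = 8;   /* example header sizes */

    /* An "MTU" of exactly hlen + frag_hdr (48 here) would leave zero
     * payload bytes, which is a useless fragment; the + 8 demands
     * space for at least one payload unit. */
    for (unsigned int mtu = 52; mtu <= 58; mtu++) {
        int ok = mtu >= hlen + frag_hdr + IP6_FRAG_UNIT;
        printf("mtu=%u -> %s\n", mtu, ok ? "fragmentable" : "fail_toobig");
    }
    return 0;
}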
... to read the thread in LKML (just go to the link to Linus's post and follow from there). The person who submitted the code immediately responded (no need for detective work), the network subsystem maintainer (Dave Miller) followed, the commit was reverted, a patch without the offending compiler wrapper was re-submitted.
It is obvious that despite the (characteristically) colorful language the criticism was understood by everyone involved to be professional and not personal, the reaction was professional as well, and the entire situation was handled intelligently and efficiently.
I suspect Linus knows very well that the somewhat impersonal nature of email provides for additional tolerance of colorful vocabulary, and the strong language is probably both a personal trait and a tool. When he makes a technical point he does it forcefully, and this makes him more effective in the absence of personal interaction.
I have to give credit where credit is due.
Linus Torvalds is focusing on the issue, not the individual. He is not embarrassing people publicly or in front of their peers. Linus is doing his criticism the right way now.
Hopefully this will lead to more companies being willing to have their employees work on and with Linux.
Not really, it's clear who the individual is, because they're in the thread the childish rant came from.
Following the thread, where the other coders speak like adults, it appears the code was put in so it could be looked at and discussed, but nobody bothered reviewing the changes.
"> Get rid of it. And I don't *ever* want to see that shit again.
No problem, I'll revert it all.
I asked Hannes to repost his patches to linux-kernel hoping someone
would review and say it stunk or not, give him some feedback, or
whatever, and nobody reviewed the changes at all..."
While I completely agree that the submitted code should be struck down and never see the light of day again the way it was done was wrong. Having a go at your staff is rarely, if ever, the way to get the best out of them. Sure Linus didn't actually name names but anyone who cared could go and look up who was being torn off a strip so this is as good as a public humiliation. I personally wouldn't contribute to a project with someone like that at the wheel and I can't believe I'm the only one that feels that way.
"Having a go at your staff is rarely, if ever, the way to get the best out of them."
What staff?
"I personally wouldn't contribute to a project with someone like that at the wheel and I can't believe I'm the only one that feels that way."
The number of kernel contributors suggests that there are plenty who feel differently.
"
if (mtu < hlen + sizeof(struct frag_hdr) + 8)
goto fail_toobig;
mtu -= hlen + sizeof(struct frag_hdr);
"
It's still "RUBBISH" :P
What if "hlen + sizeof(struct frag_hdr) + 8" causes an overflow and ends up <= mtu, while the intended value (if given more bits to store) is actually > mtu?
cf. if (A < B + C) ...
Because "sizeof" yields a "size_t", the whole of "hlen + sizeof(struct frag_hdr) + 8" is also "size_t", and the comparison is carried out in "size_t".
On a 32-bit system, any networking code for which that sum overflows is already in serious trouble. And if it does overflow, then (it's unsigned, remember) it wraps around to a small value, and the test fails for that reason.
On a 64-bit system, if there is any danger of overflow, your network has long since gone past anything it can deal with: you would be requiring far larger messages than either the OS or any conceivable hardware (for at least the next decade) can handle.
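A contrived, standalone demonstration of that point, using unsigned int (the same width as size_t on a typical 32-bit system) and an absurd hlen to force the wrap:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int mtu  = 1500;
    unsigned int hlen = UINT_MAX - 10;  /* absurd on purpose: real header
                                           lengths are a few dozen bytes */
    unsigned int need = hlen + 8 + 8;   /* wraps around to a small value */

    if (mtu < need)
        printf("guard tripped: too big\n");
    else
        printf("guard passed with need=%u; the later subtraction "
               "would then underflow\n", need);
    return 0;
}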
Whilst I completely agree with your sentiment, you have to realise that the last UNIX kernel worked on by those luminaries was tiny compared to the current Linux kernel. It did not even have a huge amount of networking code in it.
After UNIX Version/Edition 8, they moved on to Plan 9, which was written in a completely different way (much more like a microkernel), and left UNIX in the hands of the UNIX Systems Division in AT&T.
It was written in a very portable way, but even the kernel itself from USD was intended to be compiled by UNIX's own Portable C Compiler. Back in the day, it caused more problems and fragmentation for vendors doing their own ports using their own compilers than anything else! It was only after the standardisation attempts of SVR4 and OSF/1 in the late '80s and '90s that the vendor kernel code bases started converging again.
No wonder the shareware OS enjoys a rounding-error market-share. Just saying.
Well, to be fair Microsoft only released Windows 10 on desktops a few months back and is yet to release it for phones. It'll be a long time before it reaches the popularity of Android and it still has a long way to go before it can match Linux in the data centre.
(Yes, it can run a web server, but until very recently the footprint was huge. The "nano" version of Windows has not been around anywhere near as long as the equivalent Linux distributions have.)
Freescale pretty much do. They take an old stale version of the kernel (at least 12 months old), pile their patches on, and release it to their community.
The kernel is different enough to be a "fork", and only a small portion of the patches wind up going back upstream into the main-line kernel.
This creates headaches for the system integrators using Freescale chips, as they have to trade-off having the Freescale kernel tree that supports some of the SoC hardware better, versus a newer kernel that has features needed to make non-SoC hardware work better.
Or they wind up having to support frankenkernels like the one I did for Jacques Electronics, which used the Freescale 2.6.28 kernel, but had a back-ported version of ALSA System-on-Chip framework taken from kernel 2.6.33/2.6.34 (then latest).
Some need to man up, and dislodging their heads from the sand without whining would be useful too.
Let's handle it logically.
A writes code
B has more experience than A
B reviews that code and the shit hits the fan
A starts whining about his precious code being shredded to bits.
Seriously? What do we need now, kindergartens for adults?
There is a huge difference between a personal attack and experienced criticism of a sloppy job. Time to grow up? :)
Cheers
Alessio