Maybe...
...next year?
Linux creator Linus Torvalds spoke about the challenge of finding future maintainers for the open-source kernel, at the Open Source Summit and Embedded Linux conference under way this week online. Torvalds does not do keynote talks these days, but he was willing to sit down with VMware's chief open source officer Dirk Hohndel …
If you think about all the instances of the Linux kernel operating right now - from phones and embedded devices via servers and network devices in data centres, through to high performance compute clusters, isn't the desktop operating system aspect (i.e. endpoint client) almost of measure zero? Nice to have but it hardly defines the project.
Is it not amazing that a self organised group of programmers managed this huge project?
Linux creator Linus Torvalds spoke about the challenge of finding future maintainers for the open-source operating system,
Seriously? People aren't queuing up to be told:
"SHUT THE **** UP!
"It's a bug alright -- in the kernel. How long have you been a maintainer? And you *still* haven't learnt the first rule of kernel maintenance?
And I don't _ever_ want to hear that kind of obvious garbage and idiocy from a kernel maintainer again. Seriously.
Fix your ******* 'compliance tool,' because it is obviously broken. And fix your approach to kernel programming."
I'm shocked, shocked, I tell you.
Linus' potty mouth notwithstanding, let's not forget that some of this shit is hard. My last tussle with the kernel was way back in the 2.mumble days, when I had to write a couple of drivers for some hardware in my lab. My C chops were a lot better back then, but even so, hacking on the kernel was not for the faint of heart: one bit of hardware was sorta-kinda supported, which made life a bit easier, but the other wasn't, so I had to work from datasheets and whatnot and write everything from scratch. Still gives me nightmares.
You want documentation? Well, there's the code itself ... other than that, good luck!
I probably wouldn't even know where to start now.
There's also the 'some of this stuff is boring' aspect - whilst you'll probably have no shortage of people queuing up to work on the latest shiny shiny or the $ARCHITECTURE_DU_JOUR, there's no escaping the fact that the 'boring' stuff in the kernel will need occasional care and feeding as well. It's not half as sexy, and people will be less inclined to do anything once the current maintainers have moved on.[*]
[*] - I'm one of those developers who actually enjoys working on the 'boring' stuff. It's a dirty job, but someone's gotta do it ...
It's all just ones and zeroes. If it isn't a one then it's a zero. How hard can that be?
Getting the ones and zeroes is the easy bit ... it's putting them together in the right order that's the trick.
As the late, great Eric Morecambe once said: I'm playing all the right notes, just not necessarily in the right order ...
;-)
As the late, great Eric Morecambe once said: I'm playing all the right notes, just not necessarily in the right order ...
The following simple technique can be used to address that. Decide how big your program needs to be. If you get that right then the technique is absolutely guaranteed to produce the desired bug-free program.
Allocate a block of memory of the necessary size and initialize it to all zeroes. Then:
1. Execute it as a program. If it does what you want, then job done.
2. Otherwise, treat it as one single very wide multibyte binary number and increment it. (first time round 000...000b->000...001b, second time round 000...001b->000...010b etc)
3. Go to step 1
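For the terminally curious, here's roughly what that loop looks like in C. It's a toy sketch only: runs_correctly() is a made-up stand-in for "execute it and see if it does what you want", since actually jumping into an arbitrary byte buffer is best left to people with better insurance.

    #include <stdio.h>
    #include <stdlib.h>

    #define PROG_SIZE 3   /* decide how big your program needs to be (keep it tiny) */

    /* Made-up stand-in for "execute it and see if it does what you want". */
    static int runs_correctly(const unsigned char *prog, size_t len)
    {
        (void)prog; (void)len;
        return 0;   /* spoiler: it never does */
    }

    /* Step 2: treat the buffer as one very wide binary number and add one.
     * Returns 0 once every possible bit pattern has been tried. */
    static int increment(unsigned char *prog, size_t len)
    {
        for (size_t i = len; i-- > 0; )
            if (++prog[i] != 0)
                return 1;   /* no carry needed, keep counting */
        return 0;           /* carried off the end: wrapped back to all zeroes */
    }

    int main(void)
    {
        unsigned char *prog = calloc(PROG_SIZE, 1);   /* all zeroes to start */
        unsigned long attempts = 0;

        if (!prog)
            return 1;

        do {
            attempts++;
            if (runs_correctly(prog, PROG_SIZE)) {    /* step 1 */
                printf("bug-free program found after %lu attempts\n", attempts);
                free(prog);
                return 0;
            }
        } while (increment(prog, PROG_SIZE));         /* steps 2 and 3 */

        printf("no luck after %lu attempts; buy a bigger buffer\n", attempts);
        free(prog);
        return 1;
    }

Even at a measly 3 bytes that's 16.7 million candidate programs, which rather hints at why nobody ships software this way.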
That's a very poor starting position. Functional programmes tend not to have all their bits set to zero. Start with it filled with random data. That way there is a "chance" it will work first time. In the unlikely event it doesn't, you will hit one of the many possible solutions sooner than starting with all zeros.
That's a very poor starting position. Functional programmes tend not to have all their bits set to zero. Start with it filled with random data. That way there is a "chance" it will work first time. In the unlikely event it doesn't, you will hit one of the many possible solutions sooner than starting with all zeros.
I guess that is true. 00 is quite often a NOOP instruction - it is on ARM, MIPS and Z80 anyway - so an all zeroes initial condition could be a valid, but very dull, program on those but probably not on anything else.
Best plan would be some sort of very complex quantum computer that could try all possible bit patterns simultaneously.
Best plan would be some sort of very complex quantum computer that could try all possible bit patterns simultaneously...... Smooth Newt
What about a best plan being some sort of very complex quantum computer that has already tried all possible bit patterns simultaneously, has processed all metadata and now shares the resultant deliberation on the ways forward to a global market fully reliant upon mass multi media presentation of current stealthy operating system utilities with extremely rare, attractive remote virtual facilities servering and servicing novel abilities.
A Simple Start for that Program to Realise the View and See that All of it is True, is to Treat it as True in a New Way of Doing Everything Differently and Better with SMARTR Partners rather than denying and doing vain battle against its obviously apparent existence and virtual presence. And share the news that it can no longer be denied from common knowledge.
The question then being ....... If everything is new, what would you like Programs and ProgramMING to Present you so Everything can move Forward in any Number of Novel Noble Directions Equipped with All of the Right Means and Memes?
Or would you prefer and defer all those sorts of almighty decisions to Programs and ProgramMING which Instruct and Advise Earthly Assets in the Secret Levers of Almighty Command and Immaculate Self Control which deliver Unassailable Lead Events ..... Mega Metadata Beta 0Days?
Nuclear Armageddon Morphs into NEUKlearer HyperRadioProACTive IT Operations which are Terribly Silent and Much More Deadly in Deployment than ever was Discovered Underground Testing?!.
"What about a best plan being some sort of very complex quantum computer that has already tried all possible bit patterns simultaneously, has processed all metadata"
Such a hypothetical machine would, by definition, know everything that there was to know. Gut feeling is that it would probably immediately suicide out of extreme boredom, just to find out what (if anything) was on the other side.
Linus' potty mouth notwithstanding, let's not forget that some of this shit is hard
Some of what my team do is hard too, but if someone spoke to a member of my team in the manner described by the person to whom you responded, I would take a dim enough view to invite them to a non-discretionary meeting to adjust their attitude.
People do their best work when they're supported by those above them in a hierarchy, and where mistakes are learned from rather than punished. Every shit boss I ever had or have ever seen has had the same poor attitude to 'subordinates' as described by the OP. They all had their "reasons" and they all had excuses, and while Linus is undoubtedly smarter than most crap bosses, he's still got a crap boss attitude.
That he has achieved a lot is beyond question. Could he have achieved more? Maybe.
In his book "Just for Fun" he says:
"I don’t proactively delegate as much as I wait for people to come forward and volunteer to take over things... I try to manage by not making decisions and letting things occur naturally. That’s when you get the best results.”
Also on being a manager:
"...while my Linux management style, such as it is, was earning high marks in the press, I was an undeniable failure during my brief stint as a manager at Transmeta. At one point, it was decided that I should manage a team of developers. I flopped. As anyone knows I'm totally disorganized. I had trouble managing the weekly progress meetings, the performance reviews, the action items. After three months it became obvious my management style wasn't doing anything to help Transmeta, despite the praise I was getting from journalists for the way I was running Linux...."
My take is that he likes to be an engineer or gatekeeper more than a boss.
What did anyone with a clue expect?
A Corporation is almost always an oligarchy mixed with a plutocracy, usually with at least a hint of gerontocracy.
The Linux kernel developers are as pure a meritocracy as we have on this dampish rock.
People like Linus are wasted in the traditional management role. I've been saying for decades that in this day and age, where Technology is so important, corporations should have separate Management and Technical tracks for advancement to more senior positions. Management takes care of management, and techs take care of the increasingly more complex technical side. Occasionally, but very, very rarely you can find someone who is both capable and has the capacity (and competency!) to wear both hats.
I don't expect traditional management to grok this in my lifetime, because quite frankly they aren't equipped to understand the concept.
What sport do you and your team play?
Writing software for HF Algorithmic trading for a bank you have heard of, mostly. There's quite a lot of money on the line so every lesson is expensive, but regardless of the cost of a mistake, if you punish it instead of learning from it you're much more likely to have someone repeat it.
If the person that screwed up is around to tell the story then everyone joining the team learns to avoid the error. Our team is all in the one boat - if we crash into the rocks everyone goes under but if we make it to port then we all get shore leave.
Or are they a team as in horses?
No, but they are led by a donkey.
People do their best work when they're supported by those above them in a hierarchy
Indeed. But the problem there is the hierarchical structure. In an effective collaborative environment, one person can have a good old rant about what someone else has done, and as there's good peer relationships, the problem gets resolved and everyone moves on to the next challenge.
It's a (long) while since I even looked at the kernel dev process, but despite an initial appearance from the outside of it being a pyramid with Linus at the top, that wasn't how it worked.
Though quite how systemd got its claws in remains a mystery to me. Unless RedHat have gone to the dark side......
"Though quite how systemd got its claws in remains.a mystery to me."
Let's be perfectly clear here ... we are talking about the kernel. The systemd cancer is not now, and never will be, a part of the kernel. There is absolutely no need to run the systemd cancer on a Linux system, not now and not into the future.
"Unless RedHat have gone to the dark side......"
Well, yes. They have ... just how long was your last nap, anyway?
I seriously doubt the folks who will take over after Linus gets hit by a bus will be in any hurry to include anything to do with the systemd cancer in the kernel. Nothing that it does belongs in there. There are many reasons why init is separate from the kernel, and nothing they can add to the systemd cancer will ever change that.
With that said, I strongly suggest that anybody already using Linux and GNU FOSS solutions also look into the BSDs. All it will cost you is a little time ... and options are always good.
Whilst I agree that most people will turn away for fear of being treated like shit, on the other hand, for the more thick-skinned amongst us there's another problem: kernel development is just obscure.
As a somewhat aged techie myself, I can understand where his anger comes from and why younger techies might see it as just raw aggression.
I'd love to be able to develop at the kernel level, but the resources to learn are scarce and generally impenetrable.
I've written code for over 20 years in various different languages but wouldn't know where to begin with kernel hacking.
For those old enough... remember, Linux kernel hackers are Klingons. Klingon programmers do not comment their code; if it was hard to write, it should be hard to understand. Klingon programmers don't do documentation, same reason. Klingon programmers don't do defensive programming, they do offensive programming, and always win. Klingon programs don't take prisoners. Qapla'!
It isn't enough, sadly.
I take your point. In-depth documentation should exist, as should sample code. When I retired, I left behind the specifications, documentation and example pseudocode which I estimated would keep the development team going for a good 8 months, surely time to recruit a replacement? I believe I was massively over-optimistic.
But the problem is this. Once upon a time I started programming, working on a machine code program on a 16 bit microprocessor that took up under 2K words. I could comprehend the minimal documentation and understand the entire thing. At the time I could also dismantle and completely refurbish a typical British single or twin motorcycle engine right down to the gearbox. Easy.
Now consider a new bug looking at a program doing something extremely complicated in C, with all kinds of hooks and relationships with other things. How do you get your head around it in order to make a start? Working on a tiny bit is all very well, but moving to the next stage is hard because now you have to understand how it fits in and you need to absorb ever more documentation. It's an enormous uphill struggle, and unsurprising that people want to be in at the beginning of something where the initial complexity is manageable.
When I look under the bonnet of my simple, normally aspirated car these days, I'm confronted with complex hardware I never had to worry about before - catalytic converter, EGR, the brake system being a large complicated valve block with connections to the computer, fuel vapour recovery, another computer and hydraulic system connected to the gearbox. It's obvious why it takes so many people to design an engine when once upon a time Ernest Turner or Joe Craig would nip into the design office after a liquid lunch and knock out a crankshaft or cylinder head.
tl;dr the problem is not the documentation. It is (as Alan Turing identified in the early 1950s before his tragic death) the proper structuring of the bureaucracy that manages everything. The question has to be whether Torvalds has created a self sustaining and managing bureaucratic framework to hold the whole thing together.
"The question has to be whether Torvalds has created a self sustaining and managing bureaucratic framework to hold the whole thing together."
Indeed. This is called "succession planning." Many fine organizations have fallen at this hurdle. Linux will need someone with excellent organizational and interpersonal skills to make it through; this is not a problem that can be solved with code.
I did that. The reply at the top was from Reddit in 2006 and said
" I assume the bus would receive a profanity laden email laying out how badly it fucked up"
However, being a little more serious, has LT ever gone away on holiday for a month or two to see how things work in his absence?
Set up your kernel debugger (preferably on a pair of VMs if you can), choose the area you want to change, read the code, start making modifications, repeat until bug free. If you've been coding for 20 years you'd definitely be up to the task.
There's plenty of people to ask about kernel hacking, and if you need to do architecture specific functions, processor documentation is usually pretty decent. It's when you need to prod specific chipsets and add-on cards that specifications are less available.
I'm shortly about to start some kernel mods for an open source OS to add functionality to an older system, and once I get the hang of that, something more complex. I know which bit of code it's running (because the module is named, and relates directly to a source code file), so the first task is a breakpoint. Then step through till it gets to the 'your system is too old' part. At that point improve the error message/check the logic to ensure it's rejecting the system using the correct method.
If I understand that bit I then know what new functionality is required, which means reading the documentation for kernel support functions and processor datasheets, all of which is actively available, plus some coding in C and assembly. I'm not expecting it to be easy, but I at least know what I need to do.
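In Linux-flavoured terms (all the names below are invented for illustration, not from any real driver), the "improve the error message / check the logic" step usually boils down to something like this: say why the system was rejected and return a meaningful errno, rather than bailing out silently somewhere further down.

    #include <linux/kernel.h>
    #include <linux/errno.h>

    /* Illustrative sketch only -- made-up names. The point is the shape of
     * the change: log *why* the old system is rejected, then reject it for
     * the documented reason. */
    #define OLDSYS_MIN_REV 3

    struct oldsys_dev {
        unsigned int hw_rev;
    };

    static int oldsys_check_rev(const struct oldsys_dev *dev)
    {
        if (dev->hw_rev < OLDSYS_MIN_REV) {
            pr_err("oldsys: hardware rev %u unsupported, need >= %u\n",
                   dev->hw_rev, OLDSYS_MIN_REV);
            return -ENODEV;    /* explicit "no such device" */
        }
        return 0;
    }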
>If you've been coding for 20 years you'd definitely be up to the task.
Depends on what they've been coding...
> but I at least know what I need to do.
I would only believe that if you have completed the fundamentals and advanced OS design modules at university and achieved at least a B grade in both modules.
I had - got A's in these modules and a 1st overall - yet designing and implementing a commercial fail-safe distributed OS for real certainly twisted my mind - I have no desire to spend the next 20 years going back into kernel space, so have respect for those who are prepared to take on the challenge.
yet designing and implementing a commercial fail-safe distributed OS for real certainly twisted my mind - I have no desire to spend the next 20 years going back into kernel space, so have respect for those who are prepared to take on the challenge. ...... Anonymous Coward
That is certainly respect well earned, AC, for it is the kernel space which drivers and renders the physical environment a result of mega metadata base manipulation/reorganisation which leads and paints the Greater Bigger Pictures with engaging tales and exciting news for IT Services and Mass Multi Media Moguls to present and realise.
And as complicated as that may seem to be, it is extremely easily done whenever one has enlightened enlightening scripts to follow as one does whenever blessed with comprehensive instruction books akin to Ye Olde OEM Workshop Manuals.
Surely though, such has always been the way of quickly and radically fundamentally changing things for humans on Earth? Anything else would suggest there is no intelligent order to guarantee future events with everything then being dressed up for chaos and displayed in mayhem and delivered by Ignorati rather Illuminati.
Dude, it's just code. I had to write what may have been the first ACPI memory detection routine. While working for AMD on what might have been an AMD-specific problem when the relevant maintainer lived in Portland (Intel's HQ). On that old assembler that used to ship with the kernel. When I barely dared to call myself a proper programmer.
The point is, I have no idea if that code ever hit the trunk. That wasn't the issue. I needed to enable our customers, and that's what I did. If the code was worthy, Linus would have allowed it. If not, well, fine.
Same as yours.
"I've written code for over 20 years in various different languages but wouldn't know where to begin with kernel hacking."
If one of those languages is C, join the LKML and read along for a while (like usenet, I recommend following along for at least a month, maybe two, before joining in). When you see a place where you can contribute (and you will), just do it. "Doing it wrong" is rarely fatal ... besides, if you were perfect and had nothing left to learn, life would be excruciatingly boring.
Relax, have a homebrew, and start reading ...
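For anyone wondering what the on-ramp actually looks like: before touching the LKML at all, the traditional warm-up is a trivial out-of-tree module, just to prove your toolchain and headers work. Something along these lines (the file name and messages are mine, not anything in the tree), built with the usual obj-m += hello.o Makefile against your distro's kernel headers:

    /* hello.c -- the classic first out-of-tree module */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
        pr_info("hello: loaded\n");   /* shows up in dmesg */
        return 0;                     /* non-zero here would make insmod fail */
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Toy module for getting a build environment working");

It does nothing useful, but getting it to build, load and unload cleanly is the point.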
Not idiots, really. More like, "savants". Trust me, if you wrote some verilog for them to look at, they would be giggling about it for weeks.
While I was doing software microprocessor validation at IBM, I started saying, "You don't want to be in the same county as an FPU I designed or that they programmed." The skills are just that different.
No problem.
You be nice to everyone and then take responsibility for when your sign-off appears on a 0-day, root-level compromise of the entire world's back-end systems.
Or when it BUG()s in the middle of driver code because the driver author was too lazy to use the proper path to print debug information (hint: BUG instantly crashes the kernel, with no hope of recovery - no production driver should EVER contain the BUG macro for anything). That was literally one of the patches that got commented on as such. Yes, that one-line debug change would literally have stopped millions of machines at random times if it had hit mainstream deployment, which means data loss on an epic scale, not to mention loss of service.
Sorry, but if you can't handle criticism of such idiocy - however delivered - when you're in charge of even a small part of the world's most widely-deployed and widest-scope OS, then that's the least of your problems.
Especially when - in every case listed - such stupendously ridiculous code was pushed through several levels of review and maintainers and ended up nearly being pulled into the kernel before it was noticed how ridiculous some of it was (e.g. including their untested code in every single kernel config by default because they didn't know how to make a patch, and nobody bothered to check, etc.)
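For the non-kernel types reading along, the difference being complained about looks roughly like this (made-up driver and register names, but real kernel macros): BUG() halts the machine on the spot, whereas the polite alternative logs a warning and returns an error the caller can survive.

    #include <linux/kernel.h>
    #include <linux/bug.h>
    #include <linux/errno.h>
    #include <linux/types.h>
    #include <linux/io.h>

    /* Illustrative only -- the driver is invented; the error path is the point. */
    static int widget_read_reg(void __iomem *base, unsigned int reg, u32 *val)
    {
        if (!base) {
            /* The lazy way: BUG(); -- would panic every machine that hit it. */

            /* The survivable way: complain loudly (once), hand back an error,
             * and let the caller and the rest of the system carry on. */
            WARN_ONCE(1, "widget: register read with no mapped registers\n");
            return -EINVAL;
        }

        *val = readl(base + reg);   /* normal path */
        return 0;
    }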
You really, REALLY don't have a clue, do you?
There are actually LOTS of people out there that would LOVE to be maintainers--the problem is that almost all of them would run the kernel into the ground in no more than two releases. Often this is because of incompetence. Sometimes, it is because of a narrow focus on a particular issue to the detriment of others. Some of it is a straight up **** the end user attitude by various companies. (You know who I'm talking about.) And some of it is full-on prima-donnaism. (Again, it should be pretty obvious who at least one of these currently is.)
The only way to maintain the viability of Linux is to keep this binary excrement out of the kernel. On good days, the task is Herculean. On bad ones, I'm sure it feels Sisyphean. He cannot do it alone, and so he has to trust the maintainers to some extent.
And when he finds out that some of them have been negligent? Sure, he can fire them. But that means either that he has to take on the work himself, or somehow replace them.
Oh, yeah--tell me again how much money he has to attract new talent?
So, he uses public shaming. Yeah, I know. Every study ever done says that's the wrong way to fix things. Except--the real world is a whole lot messier than some academic study.
And--he has users to protect.
As a guy who has been contributing to the kernel on and off for over a quarter century[0], I think I'm allowed to comment. Essentially, the vast majority of the folks bitching about Linus getting sweary HAVE NOT been subject to the swearing. In other words, they are upset on behalf of other folks[1].
We only see (saw?) the swearing after a developer ignored input from Linus, and (usually) several other people, over some boneheaded personal mission from gawd/ess that has absolutely no place in kernel code. Usually there is behind-the-scenes email before the gripe goes public. It is always the developer in question who takes it public on the LKML. And most of the time, after getting yelled at, said developer has actually (finally) admitted their error, (finally) fixed it, and the rest of us mutter "about time, bone head!" and life continues. The public swearing is (was?) always a last resort. I do not recall anyone bitching about private, out of band swearing.
And no, it has never made me feel intimidated about contributing to the kernel. Why not? Simple. Because when it is pointed out that I make a mistake, I fix it without bitching about it, that's why. It's really not a big deal. Unless the developer in question chooses to make it a big deal, that is. In other words, said prima donna / drama queen doesn't play well with others, and insists on doing it his/her way. Frankly I'm surprised that Linus has been as tolerant as he has been all these years ... If it was my name attached to the project, I'd have really lit into a few of the idiots ...
And you commentards who haven't actually spent any time contributing to the kernel, your opinions on the subject are completely worthless. Come back after you've walked a few miles in our moccasins and I might find your commentardary on the subject worth listening to.
[0] My, doesn't time fly when you're having fun!
[1] Which is a cardinal sin as far as I'm concerned ... As an adult, I don't need or want anybody else being upset on my behalf, thank you very much.
Essentially, the vast majority of the folks bitching about Linus getting sweary HAVE NOT been subject to the swearing. In other words, they are upset on behalf of other folks
So your considered view on all those middle class folks at the BLM racist rallies is what exactly? They're not black so they shouldn't comment? Really?
In other words, said prima donna / drama queen doesn't play well with others, and insists on doing it his/her way.
Are you talking about the developer or Linus here? It's really hard to tell.
And you commentards who haven't actually spent any time contributing to the kernel, your opinions on the subject are completely worthless.
I look forward to near deafening silence on the subjects of taxation, economics, finance etc from the vast majority of commentards then, whose closest experience of any of them is user level rather than developer level (working in the City).
Sorry, you just don't get to gatekeep what people talk about or whose opinions have a value. Pull your head out of your arse and stop trying to copy Linus - you don't have the talent and I for one wouldn't put up with it even if you did.
I'm honestly not sure you've thought your post through very much before writing it. Try to expand your line of reasoning into other aspects of your life and see if it still works for you. It won't.
Nah. I was just sharing a laugh. I find the concept of thumbs, as implemented here on ElReg, to be bloody useless ... except sometimes they can add a little amusement when they tip you off that somebody who takes commentardary entirely too seriously has been furiously hitting refresh, just waiting for your next comment to downvote.
Want another laugh? Seems some brilliant person has written a macro that'll downvote your old posts serially. I just watched my downvote total go up by about 50 in the space of about a minute. There is absolutely no way that someone can do that manually with the ElReg interface.
Even funnier, they are downvoting my old, original posts from over ten years ago ... posts that were made before ElReg had even implemented the concept of thumbs. I'm laughing ... how on Earth does this person expect to benefit from this utterly pointless exercise? Shirley it's not going to out itself as the serial downvoter, expecting great applause from the peanut gallery ...
So I've pissed off somebody, about something, somewhere, and yet they can't be arsed to tell me why ... but they will expend time and energy to write code to downvote old posts of mine? That's just plain sad, that is. Poor thing.
Edit: Chalk up another 50ish downvotes in my old posts in about a minute. One wonders how hard the poor dear was pounding on its keybr0ad as it launched its cute little macro.
So I've pissed off somebody, about something, somewhere, and yet they can't be arsed to tell me why ... but they will expend time and energy to write code to downvote old posts of mine? That's just plain sad, that is. Poor thing.
There seem to be quite a lot of downvoters who downvote because something that was said hurt their feelings and possibly impacted their cosy pre-established world view, not because the post itself doesn't have merit.
We don’t care if Black Lives Matter thugs shout at each other or not.
What we care about is how they behave towards the rest of us (i.e. whether it is OK to vandalise statues and generally break the law).
The same with Linus. It is not for me to comment on Linus's swearing at kernel developers, but I can comment on whether the new version of the kernel fits whatever needs I have or not.
Bullshit. If anyone with slacker discipline than Linus took over from him, the kernel would be thoroughly fscked within a few releases.
As a gatekeeper he should be strict and kick out people who don't adhere to the strict coding rules that he demands from contributors.
If he wasn't so strict, and giant security holes were the result, you'd probably be one of the first to point the shitty finger of blame at him for not doing a good job.
Don't like how Linus manages the people contributing to his project? Branch off and maintain your own code base.
Good luck with that.
People still line up to join the Marines and other elite organizations, even though they know that a lot of abuse will come their way.
The point is that it's meaningful, constructive abuse. It's designed to help them get better at what they do.
It's said that the Red Army used to have a saying for reluctant recruits: "If you don't know how, we'll teach you. If you don't want to, we'll make you".
That's not for everyone, but for the few who really want to excel.
Here's something which some employ and enjoy and would attribute to their exceptional success to generous excess, Archtech ......
Proper Preparation and Positive Planning Prevents Piss Poor Performance Permitting Prime Prize Plum Penetrations and Perfect Private Protocolled Pursuit of Public Parametered Projects and Pirate ProgramMING Productions for Pumping and Pimping in Presentations to Populations Puzzled by Progress and Prisonered with Pathetic Past Postings rather than Pioneering with Pilots Practised in Plush Promising Programs.
..... and not at all dissimilar to that which the Red Army realise is positively advantageous and mutually beneficial.
You must be positively predisposed to predominantly postindustrial psychoanalyses ... or perhaps you are a practitioner of postmillennial perturbations predetermined by probabilistic precautionary philosophic perfectionism? .... jake
Probably, jake, and pretty phishy it is too.
Ha! They might at that...
Torvalds seemed quite open to the idea of using Rust, which I think is most encouraging. I've been hoping that thinking about such a move (but on a bigger scale) would start. Using Rust ought to deliver development time savings, which would make better use of a maintainer's donated time. And it ought to eliminate a bunch of bug classes, so that should be another long-term time saver.
>Using Rust ought to deliver development time savings, which would make better use of a maintainer's donated time.
From what I see, Rust will force some decisions to be made earlier in the development process and so reduce time spent in test and debug due to Rust largely "eliminating a bunch of bug classes". Otherwise, I agree it should mean maintainer's time is better spent focusing on the code logic.
They might start with Rust but today's generation of new programmers will end up trying to rewrite the kernel in Python ... doesn't anybody think that rewriting the kernel in any language will result in a new world of bugs?
Saying that the next generation will want to use a "not C" language, one NOT intended by its very nature to be the language of OS kernels [think early days of UNIX and the 'B' lingo, which became 'C'], is rather CONDESCENDING in my view. It assumes that future kernel programmers won't be able to comprehend the need for a lingo that's ALREADY very close to the assembly code, and can even be hand-tweaked to generate efficient assembly, if you understand enough about how the compiler works and the way it generates code.
It would also be assuming that assembly isn't being used, assuming that garbage collection and excessive validation are allowable, and that complex operations should be "programmed inside the language" where they're almost GUARANTEED to generate less-than-optimal solutions for MOST problems that require things like threads and process control in general.
We have SEEN the results of "this kind of thinking": WINDOWS
How long does it take to open up a "File Open" dialog box these days? Then go back to Win '95 or even Win 3.1, and that less "functional" file open dialog box that DID! NOT! USE! OBJECT! ORIENTED! HELL! on EVERY! STINKING! FILE! ENTRY! popped up SO fast you're like "what?"
And that's my point: when EVERYTHING is being done using "we have fast CPUs now" as an excuse, with bureaucratic top-down "everything is an object as a member of a collection" kinds of thinking, you end up with GROSS inefficiencies that nobody knows how to fix any more... because we're NOT using a lingo like 'C' any more... one that reminds us of how the CPU actually WORKS, because it's low-level enough to be close to the machine code!!!
[and it has the decency to support "native integer" types, particularly UNSIGNED integers]
Seriously, though, give "the next generation" some credit. We, the experienced kernel programmers of the world, should MENTOR them, and turn them into proficient C coders instead!
[then they'll look back and say "why the HELL did I ever *FEEL* (the 'F' word) that we could program a kernel in RUST ???]
I think part of the enthusiasm for Rust is that it's basically C with an angry preprocessor that throws chairs at people when they do stupid things with pointers. You definitely *could* write a kernel in Rust (something you couldn't do in, say, Go or f**ing javascript).
Sometimes your "stupid things with pointers" is the hack that makes magic happen. I don't need, nor do I want, my compiler telling me what I can and can't do in this type of coding. Kernels aren't the same thing as common or garden applications.
>Sometimes your "stupid things with pointers" is the hack that makes magic happen.
Along with those ASM inserts to use the esoteric task switch and memory protection data blocks and instructions that compiler writers tend to ignore.
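For anyone wondering what kind of pointer "magic" is meant here, the canonical example is the kernel's container_of() trick: given a pointer to a member buried inside a struct, subtract the member's offset to get the enclosing struct back. A stand-alone userspace rendition (the kernel's real macro adds type-checking, but the arithmetic is the same):

    #include <stdio.h>
    #include <stddef.h>

    /* Userspace rendition of the kernel's container_of(): given a pointer to
     * a member, recover a pointer to the structure that contains it. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct list_node { struct list_node *next; };

    struct widget {
        int id;
        struct list_node node;   /* embedded in the middle of the struct */
    };

    int main(void)
    {
        struct widget w = { .id = 42 };
        struct list_node *n = &w.node;   /* all a generic list walker ever sees */

        struct widget *back = container_of(n, struct widget, node);
        printf("recovered widget id = %d\n", back->id);   /* prints 42 */
        return 0;
    }

It's exactly the sort of cast-and-subtract that a stricter language would refuse to let you do casually, and it's what makes the kernel's intrusive linked lists work.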
I suspect another reason why Linux will continue with C is that it will be a big job to move to another language; although this task might attract developers who aren't interested in being maintainers.
" it will be a big job to move to another language; although this task might attract developers who aren't interested in being maintainers."
Before creating and jumping onto a new bandwagon, a bit of recent history: Kotlin uptake isn't happening at the rates that Google might want, for Android development, and it is ALSO important to keep your existing staff from quitting over having to spend month(s) re-learning and _ESSENTIALLY_ becoming like N00Bs again for a while...
Yes. I'm CONFIDENT it is true: Experienced coders like the results of being experienced, knowing what the code should look like, avoiding what would otherwise be the drudgery of going over insignificant details and nuances of "new shiny" lingo, and getting FREQUENTLY OUTRAGED at perceived UNNECESSARY errors and warnings. Those tedious 'learn new instead of using experience' things while STILL getting the fixes and new features done on time keep programming from being *fun*... and if it's not *fun*, and you are volunteering, you tend to *QUIT*.
Learning is fun if you're NOT under the gun. Taking a month (or more) off from anything resembling productivity and getting things accomplished... NO THANK YOU!
So if Java coders (for Android) are holding back on a switch to Kotlin, even for NEW projects, maybe not such a good idea to switch from C to Rust.
(being old enough, I have seen a lot of "new shiny" lingo hype, most of which sputtered and flamed out quickly, and the TIOBE index is FULL of them)
We have SEEN the results of "this kind of thinking": WINDOWS
Last time I checked, the Windows DDK was C all the way. It would be strange if the Windows Kernel used a completely different language. (The DDK changed names a few times, but I believe it is still C)
How long does it take to open up a "File Open" dialog box these days?
How is that relevant to kernel development?
That said, I pressed CTRL+O here now. The common file open dialog box popped up in less than a second. I perceive it as slow, but you know what: Had it been developed in a modern language, there'd be async calls to enumerate drives, put the right icons on the special folders, etc. It'd pop up instantly, but possibly not be fully populated right away.
You do know that multi threaded programming has been fully possible in C right from Windows NT 3.1, not to mention OS/2 in the 80s? (Although the file dialog is written in C++, and calls a lot of interfaces).
Microsoft are not stupid, and each new release of Windows has been optimised beyond the previous one (Yes, sometimes this is accompanied by a lack of flexibility or functionality, or increase in requirements in other areas).
If the file dialog doesn't populate bit by bit, there's a reason for it.
You do know that multi threaded programming has been fully possible in C right from Windows NT 3.1,
...and do we want a file open dialog that spins up several threads each blocking on I/O? No, we do not want that.
NT 3.1 included async I/O from the get-go too. But I suspect many of us stayed away from those functions because quite frankly they were a lot of hassle. Plus, the number of devs that do multithreading in a safe and effective way are in a minority. (also a management issue -- manglement rarely let developers do a job properly)
Things look a lot different now, because with modern dev languages we can finally do async operations in a more natural way while the coder stays productive. You know, if you really want to get down to the metal, there's always machine code. But you won't get much done that way and the result would be much worse.
There are many ways of getting Explorer stuck. I eventually remembered Mark Russinovich's exploration of one such phenomenon: https://docs.microsoft.com/en-us/archive/blogs/markrussinovich/the-case-of-the-intermittent-and-annoying-explorer-hangs
I haven't tried Rust yet. I hear good things though and it would be interesting to see it used for kernel type work -- if only to be able to compare apples and apples.
Gosh, wonder why that could be?
I mean if you want an extreme example, there's the ncurses library maintainership hassle : https://invisible-island.net/ncurses/ncurses-license.html
Thankless job at the best of times, even worse when you can't easily fire someone.
From what I understand the new Macs will boot only signed code by default, but you will be able to enter the EFI config and change that setting.
The catch for using an ARM Mac as a Linux ARM dev platform is more likely to be drivers, especially for graphics. So you might be better off booting Linux in the built-in hypervisor which will have drivers for everything and let you run full Linux. Even though it isn't bare metal it'll be basically bare metal performance.
Apple owns and designs the freaking CPU. Just hope old-school app installs will work, let alone installing Linux. People tend to think it's like the PowerPC days. Back then Apple was busy adding great things like AltiVec to the CPU. Now they are only interested in locking down their iThings.
As far as the programmers - aging, moving out of coding, retiring, dying - check, check, check, and check.
As far as the language - difficult to learn, often not taught in schools, negativity perceived - check, check, and check.
As far as the code - poor documentation, poor sample code, poor mentoring - check, check, and check.
As far as external drivers - potential panic from a date apocalypse - check.
Torvalds hit the nail on the head and 2030 might be optimistic.
There is an interesting article regarding COBOL demonstrating that COBOL programmers that retire are actually being replaced and that the average age has stayed the same. It appears that experienced programmers are learning COBOL as a career change choice and replacing those retiring.
By the way gnucobol is available for Linux.
C is still taught, and I know of people bored with Java programming who have looked to C programming and what can be done with it as a career change opportunity as well.
I have some 'C' code, written in 1985, on Unix BSD 4.1c, which has been compiled, over the ages, on Sun3.x, Sun 4.x, Solaris 5, 6, 7, 8, 9 and 10, Cygwin, various flavours of HPUX, IRIX, Ultrix, SCO and AIX, not to mention every version of Linux from the year 2001 to the present.
Not a single line of code needed to be changed, and there was only one #ifdef, to cater for the fact that Linux has no tell() system call.
I don't know about COBOL. but I challenge anyone to name another language that's as portable as 'C', and an operating system that's as consistent as standard Unix. The idea of someone writing Unix code in 'Rust' (whatever that may be) simply appalls.
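For the curious, that lone #ifdef would plausibly look something like this (my reconstruction, not the original author's code): the old tell() call just reports the current file offset, and on Linux an lseek() of zero bytes from the current position does the same job.

    #include <sys/types.h>
    #include <unistd.h>

    /* Reconstruction for illustration only. Some old Unixes provided tell(fd)
     * to report the current file offset; Linux never has, but seeking zero
     * bytes from the current position returns exactly the same thing. */
    #ifdef __linux__
    #define tell(fd) lseek((fd), 0, SEEK_CUR)
    #endif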
>I have some 'C' code, written in 1985
Please:
int main(){return 0;}
Definitely worked in the early '80s. Today, on Windows 10:
jbowler@Jule:~> echo "int main(){return 0;}" >/tmp/crp.c
jbowler@Jule:~> gcc -o crp /tmp/crp.c
jbowler@Jule:~> crp
If 'crp' is not a typo you can use command-not-found to lookup the package that contains it, like this:
cnf crp
jbowler@Jule:~> ./crp
jbowler@Jule:~> echo $?
0
Ok, one typo. But I quoted the whole thing verbatim, no edits, nothing, no macros either.
awk
John Bowler (lifelong maintainer of other peoples crp code.)
I don't know about COBOL. but I challenge anyone to name another language that's as portable as 'C'
That sounds like a direct comparison.
COBOL is no threat to 'C', and vice-versa. They both have totally different ideal uses.
You wouldn't dream of trying to write drivers in COBOL, and anyone trying to write the sort of business logic you can easily do in COBOL in 'C' needs their pointers extracted.
COBOL IS DEAD! Long Live COBOL!
There are more functional lines of COBOL (and Fortran) working in big business today than the average kid who never used a dial telephone could possibly imagine. I do not know of a single COBOL (or Fortran) programmer who is currently out of work. I can't say the same for Java(script), VisBas, C++, C#, and what-have-you.
When my students ask me what other language(s) to learn, I've been suggesting COBOL (and Fortran (and C)) for about thirty years now. Not a month goes by that I don't get a "THANK YOU!" email from a former student, now making a real salary coding in one of them after dicking around with fad languages for a few years.
With more than a couple billion lines of code in current use (by some estimates), COBOL's not going anywhere soon. Same for Fortran and C. They might not be sexy, but they do real work, in the real world ... and that's where the big bucks are.
Remember, kiddies, the Web and associated languages are ephemeral. COBOL (and Fortran (and good old C)) is here to stay. Learn one (or all three), and you'll be employed for life ... or, as in my case, until you decide you want to retire.
Everything you say is true but the problem COBOL has is that the sort of applications it's used in are, to be blunt, as dull as hell. Not many people want to spend their entire career manipulating data tables when there are far more interesting things to do these days compared to the 60s and 70s.
Over the years I've often had students ask me "what other programming language should I learn?" Invariably, I reply "COBOL!" ... I have had a lot of email thanking me for that advice. Good COBOL coders are worth their weight in core memory ... Always have been, always will be[1].
Somewhere, Grace is smiling an evil smile in the way that only she could :-)
Pardon me while I polish the case containing my nanosecond.
[1] Yes, I know, that's been depreciated over the years, but what hasn't?
Quote:
As far as the programmers - aging, moving out of coding, retiring, dying - check, check, check, and check.
As far as the language - difficult to learn, often not taught in schools, negativity perceived - check, check, and check.
As far as the code - poor documentation, poor sample code, poor mentoring - check, check, and check.
As far as external drivers - potential panic from a date apocalypse - check.
Torvalds hit the nail on the head and 2030 might be optimistic.
To be honest, you could be describing any one of the high-skill, low-status jobs out there in today's world.
You know.. the stuff that actually makes the world work instead of a flashy website or cool animation.
I've had my fill of concurrent and system programming and as far as I'm concerned.. you can stuff it.
So I stick with the robots who wait around endlessly because you've forgotten to put a Sync() in the program... still, that's better than the idiot who starts the robot and doesn't check to sync() with the machine and punches a robot hand through a closed door...
Maybe we could automate Linus .... a swear bot and a punching robot should do the trick
That's just another problem of the monolithic monster kernel.
Typically, people start small. They have to get used to the process. With Linux there is just a landslide of things you have to know before you can get started. Consequently, almost nobody ever manages to jump that hurdle... :-(
Micro-kernels are a thing and have been for a while, and were well developed at the time Linux was a gleam in Linus's eye. I'm sure that many reasons for Linux not being a micro-kernel can be trotted out, but I can't help thinking that its monolithic nature is contributing in large part to the problems being described.
The Linux kernel is "monolithic", but that does not mean it is not modular. It is. If you modify it, you have the know the interfaces related to your modification, but you do not have to understand all of the kernel in detail.
For example, adding a new device driver, or maintaining one can be done without understanding all the details of file systems or scheduling.
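By way of illustration (a bare-bones sketch, not code from the tree): a trivial character device only has to fill in a file_operations table and register itself with the misc subsystem; nothing in it needs to care how the scheduler or the VFS internals work.

    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/miscdevice.h>

    /* Bare-bones sketch: a read-only /dev/demo that always returns EOF.
     * The only interfaces it touches are file_operations and the misc
     * device registration; the rest of the kernel is somebody else's problem. */
    static ssize_t demo_read(struct file *f, char __user *buf, size_t len, loff_t *off)
    {
        return 0;    /* EOF straight away */
    }

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,
    };

    static struct miscdevice demo_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "demo",
        .fops  = &demo_fops,
    };

    module_misc_device(demo_dev);
    MODULE_LICENSE("GPL");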
But you need the whole calabash of sources around and then the interface changes once more and you're *ucked.
I went through that while building VMware patches. You always needed the matching version, otherwise you'd get errors. Eventually I just gave up on VMware. In other words, you have to maintain it, check if it still works, and build it anew with every new kernel release. That sucks and is the main reason for having so many IoT devices with ancient kernels (and tons of security issues) around.
That is a different issue. The Linux kernel crew does not promise, in fact it aggressively denies, that internal kernel interfaces are stable. Only the user space interface is preserved.
A microkernel could have the same problem, depending on the attitudes of the implementers. They might not care about having stable interfaces between the component processes.
Personally, I think the Linux kernel should aim for some kind of stability of intra-kernel interfaces at source level, maybe breaking it only every third year or so. On the other hand, you already can use the long-term support kernel series, which already behave like that. You get breakage only when upgrading to a newer series, which you can postpone for years (https://www.kernel.org/category/releases.html).
But you have to code it to the same (high) quality as the rest of the kernel code (knowing all the right ways to do things). E.g. AMD's and other companies' open source drivers have been rejected at times. If Linux had stable ABIs, people could write drivers separate from the kernel code. Yes, the quality would be lower, but the entry barrier would be much smaller (people could even use any language). The quality part of this would improve over time as it would still be open source (even when not in the language you like).
Message passing micro kernels have been around since the late 60s, so it's not like no one has ever heard of them. If they were a good idea almost every OS kernel would be implemented that way. But they're not, and their main problem is (ironically) overall system complexity and speed - or lack thereof. Message passing for basic OS functions might seem like a great whiteboard idea, but in reality it's a dog-slow, overcomplex solution to a problem that doesn't exist.
Not necessarily dog slow. It depends upon how the message passing / switching between kernel actors/modules (or whatever your OS of choice calls them) is handled. Not too dissimilar to the normal issues around context switching from user space to kernel space for things such as system calls. They do have some inbuilt advantages as well. Device drivers tend to run completely in user space and so don't have the inconvenience (mostly) of having to context switch in and out of the kernel for each buffer load of data. They also tend to fit better on top of distributed computers or multi-threaded CPUs.
Declaration of interest; I spent some time working with Chorus Systèmes getting their micro-kernel UNIX OS working in a performant manner on an ICL massively parallel (well for the time) computer which I doubt any of you would have heard of because only three were ever sold. Again, for the time, it wasn't too shabby on the speed front being much faster for large database operations than the other equivalent UNIX big iron boxes of the time.
Oh, c'mon, really?
This was done to death in the so-called Tanenbaum–Torvalds debate on Usenet back in the early 90s. All kinds of fanbois, sycophants, hangers-on, trolls, and various other interested parties weighed in on the subject, at very great (and boring) length.
The final resolution? Both monolithic and micro kernels exist. Both have pros and cons. Use the one that suits your needs at the moment, and vive la différence!
They are complaining about decreasing/lack of man-power. On the other side, they are binding man-power in maintaining various longterm kernels in order to somehow counter the atrocities of having a monolithic kernel.
Doesn't that smell funny to you?
To be fair, there are only 6 kernels in long term support (3.16.X was finally EOLed in June, and 5.6.X was initially released at the end of March of this year). Maintenance on them isn't really all that great a burden, at least not from what I've seen.
I can't imagine why he has trouble attracting maintainers. Perhaps it's his language and attitude? He wouldn't get away with this at a large professional / valley company.
I considered getting involved a few years back but decided not to because the environment is so toxic. Why would I want to volunteer lots of my time to be potentially treated like crap, or watch others get belittled or treated like crap? A number of years ago he promised to change, but he only mellowed for a while then he got worse again. His problem is he never spent a good amount of time in a large professional company to round him out and have him mature.
Life is too short. Someone else can do it if it turns them on.
Don't believe the bullshit spouted by the curtain-twitchers and hand-wringers. The environment isn't toxic at all. If it was, I wouldn't have been contributing for over a quarter century.
Unless your skin is so thin it tears when you get out of bed in the morning, that is.