Linus calls Linux 'bloated and huge'

Linux creator Linus Torvalds says the open source kernel has become "bloated and huge," with no midriff-slimming diet plan in sight. During a roundtable discussion at LinuxCon in Portland, Oregon this afternoon, moderator and Novell distinguished engineer James Bottomley asked Torvalds whether Linux kernel features were being …

COMMENTS

This topic is closed for new posts.
  1. Lou Gosselin

    No surprises here.

    Maybe Linus will finally swallow his pride and give up his stubborn, decade-long argument for feature-bloated monolithic kernels.

    Today, with so many drivers and so many different Linux kernel flavors, it makes less and less sense to stick with a monolithic kernel, where code glitches become impossible to isolate and recover from. I'm sick of having to recompile drivers over and over against each new kernel version.

    Instead, an extensible microkernel approach should eventually be adopted such that the base kernel is as simple as possible. Also, drivers should be compiled once and usable on any kernel having the same major version number.

    Of course Linus' real reason for pushing the monolithic kernel with no binary compatibility is that it fits in with his philosophy of making it as difficult as possible to release binary-only drivers. Linus has himself to blame that the source tree has gotten out of control.

  2. Anonymous Coward
    Gates Horns

    I Wonder...

    I wonder when the first Microsoft commissioned white papers that quote Linus will be issued? I wonder if they are stupid enough to use the performance gains of Windows 7 over Vista as proof of the superiority of Windows development and ignore the gigantic performance degradation of Vista over XP?

  3. Victor 2

    comments...

    "Linux creator Linus Torvalds says the open source OS has become "bloated and huge," with no midriff-slimming diet plan in sight."

    Linux is not an OS, it's a kernel.

    "Asked what the community is doing to solve this, he balked. "Uh, I'd love to say we have a plan," Torvalds replied to applause and chuckles from the audience."

    Yes, that has always been Linux' problem... no plan, no direction, no engineering, no thinking... only people adding more and more stuff, then replacing some of that with new stuff changing the whole ABI from one release to the next... I wouldn't applaud that, it's kind of sad actually.

    "He maintains, however, that stability is not a problem. "I think we've been pretty stable," he said."

    I beg to differ.

    The plan... it's simple: Kill the Batman.

  4. Anonymous Coward
    Anonymous Coward

    what problem?

    If the kernel is 2% slower per year, but the hardware is 2-10% faster per year... then there is no net problem, is there?
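
    For what it's worth, compounding those rates (taking the 2% figures above as given - they're an assumption, not a measurement) suggests the low end is roughly a wash:

        # 5 years of 2%/year kernel slowdown vs 2%/year hardware speedup
        echo "scale=3; 0.98^5" | bc -l            # kernel factor, about .90
        echo "scale=3; 1.02^5" | bc -l            # hardware factor, about 1.10
        echo "scale=3; 0.98^5 * 1.02^5" | bc -l   # net effect, close to 1 - a wash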

  5. Steen Hive
    FAIL

    @Lou Gosselin

    "Maybe Linus will finally overlook his pride and give up his stubborn decade long argument for feature-bloated monolithic kernels."

    Running drivers in user-space would help code-bloat and performance how, exactly? The drivers aren't going to get any smaller - probably the reverse given the relaxed requirements - and running drivers in user-space is probably the best way to kill kernel performance known to man. Case in point - xnu has to be hybrid to avoid being unusable, and it runs like a mangy dog nonetheless.

  6. Phil Koenig Bronze badge
    FAIL

    Code bloat vs Moore's law etc.

    AC wrote: "If the kernel is 2% slower per year, but the hardware is 2-10% faster per year... then there is no net problem, is there?"

    Why yes, yes there is.

    Personally I think it's a damn shame that with today's fire-breathing CPUs, there are many tasks that I could do far quicker on my 20-year-old Commodore Amiga than on some over-bloated modern monster with an OS that takes 1-2GB of RAM just to boot.

    In my personal version of utopia, designing products like that should result in jailtime for the coders.

  7. Seán

    Where is the journalism?

    What does RMS have to say? Linus may think he's Jesus, but RMS is YHWH.

  8. Adam Williamson 1
    FAIL

    lou:

    you mean, we have Linus to thank for the fact that all our hardware isn't run by black box, binary-only drivers?

    Thanks, Linus!

  9. Anonymous Coward
    FAIL

    Slowlaris

    So, the Slowlaris Kernel gets faster all the time, while the Linux kernel gets slower.

    We quickly need to find a new name for Linux, like Snailux.

  10. Ramon Cahenzli

    Windows comparison unfair?

    I wonder if it's fair to compare kernel sizes when the Windows kernel doesn't support nearly as much hardware and as many exotic devices as the Linux one?

  11. Wortel
    Grenade

    So,

    Linus answers honestly. Don't see a problem there. Some of you negative commenters forget this man enabled us to have a very flexible OS in every corner of our lives, with all freedoms, in as short a timeframe as 15 years. Have you forgotten how long ago it was that Microsoft started putting out an OS? What are its limits? Yeah, I'll stop there.

  12. Mectron
    Coat

    Wow....

    As a kernel gains more features, it becomes bloated (and full of bugs).

    Maybe now the Linuxoids will think twice before bashing Windows?

  13. This post has been deleted by its author

  14. Anton Ivanov
    FAIL

    Re: No surprises here.

    First of all, Linus is right. I have had at least 2 machines which were perfectly usable as media-center clients tip over into unusability. They are now too slow (going from 2.6.18 to 2.6.26).

    The last 10 releases roughly cover the period since NFSv4 fully went in. The ghastly thing has brought a few regressions that had not 2%, but 92% performance drops. Even with most of the problems fixed, there is a boatload of places screaming for optimisation as of the last time I looked at it (2.6.28). Iteration across all elements is used instead of hashing, and so on. Add to that slowdowns from moving portions of USB, parallel, etc to userland (hello, microkernel fans) and the picture is more or less complete - it definitely needs a feature freeze for at least a year in many areas until the code is sped up and optimised properly. Microkernel has nothing to do with it.

    Using iteration to walk an ever-expanding permissions cache will be slow in a microkernel. Same as in a monolithic one.

  15. Doug Glass
    Go

    Bloat's The Thing

    "....streamlined, small, hyper-efficient kernel..." Looks like Windows envy to me spawned by a perceived need to "catch up" to Windows and win the hearts of the unwashed masses.

    Before the age of indoor plumbing, the only water leaks you had were in the roof. With the advent of in-wall water piping came leaks, corrosion, and clogs which mandated a greater need for maintenance. Whenever you add features, you add problems; that's just the nature of the beast.

    If Linux actually expects to compete with Microsoft Windows, then bloat is the way to go. Well, unless you could convince the common user to accept less, which isn't likely.

    Linus T. is still living in dreamland.

  16. ratfox
    Thumb Up

    Good

    I like it when people do not spew marketing spin at the world.

  17. Sam Liddicott
    Linux

    it's only bloated if you build and load it all

    The kernel source is bloated, but it is only a template for a kernel.

    It's not a requirement to build and load it all.

    Many small and unbloated kernels are built from the bloated source.

  18. A J Stiles
    Boffin

    What everyone is missing

    The Linux kernel comes in Source Code form. If you're really desperate to squeeze every last trace of performance out of it, you can trim it right down to just the bits you need. And as recently as five years ago, that's exactly what people were doing. With 2.2 and 2.4, it was entirely normal to compile your own kernel: you compiled the filesystem and chipset drivers hard into the kernel, and built modules only for the hardware you actually had (or thought you might acquire).
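
    For anyone who never did it, the ritual went roughly like this (a sketch from memory; exact targets vary by version - 2.6 dropped the separate "make dep" step):

        cd /usr/src/linux
        make menuconfig        # compile in your filesystem and chipset, modularise or drop the rest
        make dep               # 2.2/2.4 only
        make bzImage modules
        make modules_install install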

    Now that Linux is mainstream, and now that processors are ridiculously fast (probably due to the demands made by other operating systems), it's simply got past the point where anybody can be bothered to strip it down anymore, and reached the point where you can spend more time deciding what not to include in the kernel than you actually save by leaving it out. It probably isn't helping that hardware manufacturers all insist on making products that are largely incompatible with one another, thus requiring separate kernel modules, either.

    But one thing is certain: If there's a gap in the market for a stripped-down Linux, it *will* be filled, one way or another.

    This is one thing Apple actually got right. By controlling the hardware on which OSX runs, they at least know what they need to put in the kernel and what they can leave out.

  19. northern monkey
    Megaphone

    @Anonymous Coward 2:20

    Aaaah - I hate that argument! Just because computers are getting faster does not mean we should write our code less well, less efficiently, 'because we have room to'. Write the same streamlined, efficient code you wrote for old, "slow", memory-challenged machines and it'll run like s**t off a stick. Write bloated, inefficient code because there's no need to bother putting the effort into good programming and it will run OK, but give it to a colleague with an older, slower machine and you're stuck.

    They should attach a contract to every copy of every programming language tutorial book, every computer science course, every training course, that requires the owner/attendee to promise to endeavour to write efficient code. If they can't promise that, or don't see why they should, then they should be shown the door.

  20. seanj
    Stop

    Re: Funny.

    Not a fanboy of either MS or Linux, use them both at work and have no real preference (maybe I'm just not techie enough!), but:

    "Linux getting slower and bloated while Windows (7) getting faster and leaner."

    Seriously? That's your argument?

    Faster and leaner than what? Vista? An OS that Microsoft wants condemned to the recycle bin of history? That's like gaining six stone last week, but boasting "I've lost 2lb this week, aren't I amazing!" It just doesn't fly...

  21. Crazy Operations Guy
    Unhappy

    Why closed source software works.

    Projects this large, especially OSes, require someone there to kill certain ideas before they become a problem. It's this sort of feel-good attitude infesting the open-source community that is killing it. What we have are developers trying to contribute by adding functions or extra features to things, but who aren't good enough to make them lean and responsive. But no one wants to tell them that their code sucks and that they should do more practice and studying before it can be included. They don't want to do this because it will make them look like the bad guy, look like they are against the community, and get flamed to hell and back.

    The community reminds me of where current society is going: this whole "we are all winners and we should accept our differences" fuzzy-warm-feeling political correctness. This sort of bland, non-offensive, culture-neutral crap. Yes, people should be treated fairly, based on their merits, but everyone should be told when they make mistakes. We only get better if we know we made a mistake. I mean, we are all adults; we should be able to handle such minor things. If someone tells me 'your work sucks', I am not going to take it personally, I am going to work and try to prove him wrong.

    Really, it's the community that is destroying itself. I was once a zealot myself, but then I saw the dark side of the community. I saw the arrogance of the senior community members, constantly believing that they were always right and 'correcting' other people's work without giving them any information on how they could improve their code. I saw the constant in-fighting and power struggles within projects, each person believing that they should be in charge. I've seen completely new programmers (usually a fresh CS graduate, or sometimes a high-school student) who dove right into projects and messed up the code, trying to apply every rule they learned in school (usually rules that only apply to writing in BASIC or Java) and destroying some of the most elegant code I've ever seen, especially by not documenting their code (or sometimes over-documenting it; take a look at the config file for Lynx if you want an example of what I am talking about). I've seen hard-working, highly skilled developers sidelined because they just don't have the courage to say what they think. But the worst thing I saw was the near-unlimited army of users constantly white-washing everything, trying to paint everything with puppies and kittens, completely ignoring the elephants in the room. These are the ones that pushed me to leave open source, and programming in general, behind, and to become the cynical, miserable bastard I turned into when I turned 21.

    I congratulate Linus on coming out and addressing what has been ignored for the last several years; I hope more people start to speak out, and maybe open source may once again be respectable in my eyes. He is just a few years too late. I wholly agree that the kernel has become too bloated; there is so much that doesn't need to be there. There is far too much support for far too many devices built in - sure, it will support some obscure system bus that was made by some manufacturer for only 2 years, but who the fuck cares? When was it said that the needs of the few have to outweigh the needs of everyone else? Why has society done this too? Why must I censor myself because it may offend someone? When did we become slaves to lawsuits and fines, afraid to say even the smallest thing to prevent alienating a small group of people? When did we move from 'rule of the majority, protection for the minority' to 'rule by special interest, fuck the majority, screw the other 98% of society, they don't know what it means to be oppressed'? When?

  22. Anonymous Coward
    Anonymous Coward

    All Aboard the Minix Train

    The Linux train will be docking in the station soon, passengers wishing to continue their journey should proceed to platform 3 where the Minix bullet train is awaiting.

  23. tiggertaebo
    Grenade

    Always a trade off

    Microsoft learned the hard way with Vista that OS efficiency is becoming important to the "average user" again - Vista felt like treacle on hardware that XP felt like lightning on, without giving the user that all-important sense that it was really doing more for them. Linux needs to be careful not to cross this line, if indeed it hasn't already - these days, when selecting an OS for my older boxes, I'm generally finding XP gives me a more optimal experience for the resources.

    I know there are some nice skinny distros out there that will run quite nicely on the older hardware but often this involves either compromising the user experience or the ease of access to the software I want - sure these are generally things you can work around but when XP is going to do the job and for a fraction of the effort why bother?

  24. Ken Hagan Gold badge
    Troll

    Re: Bloat's the thing

    ' "....streamlined, small, hyper-efficient kernel..." Looks like Windows envy to me'

    Hahahahahahahahahahahahahaha!

    And as for the earlier "Linux getting slower and bloated while Windows (7) getting faster and leaner.", have you actually *used* Windows recently, Mosh? As a software developer, I need to regularly flip between XP, Vista and Win7 on the same hardware and whether you are at the low or high end, XP is *way* faster every time.

    Linus is just being honest. *All* operating systems are getting bloated, even his.

  25. Anonymous Coward
    Anonymous Coward

    It's not all doom and gloom.

    When I upgraded my Lenovo laptop from Debian 3.1 to Debian 4.0 it got noticeably faster at booting.

    On the other hand, the keyboard and touchpad occasionally stop working now and I don't know how to bring them back to life without a reboot. It might be a hardware problem, and if it's software then I would guess it's more likely a problem in the X.Org driver than the kernel. It doesn't happen often enough for me to be sufficiently motivated to investigate further, and you can blame the speed of rebooting for that. :-)

    I see Debian 5.0 is out now. Do I risk it?

  26. windywoo
    Thumb Up

    If this had been MS

    It would have been marketed as feature-rich and compatible. If this were Apple, we wouldn't have heard anything, and the fans would make excuses that it's feature-rich and compatible.

    Linux may lack a bit of direction, but I'd rather have people who create the OS for the love of doing it than because it makes them big bucks.

  27. Anonymous Coward
    Anonymous Coward

    Re: Funny

    Yeah, if the trend continues, somewhere around 2099 we'll all be switching to Windows.

    Quickfix: stick a pretty front end on the kernel compiler that automagically does all the complex stuff depending on your choices for the simple stuff; add some fancy hardware analysis bits; slap the whole lot in a bootable CD image; bingo - custom lean kernels.

  28. Anonymous Coward
    Anonymous Coward

    interesting..

    I haven't noticed it myself, but if Linus says so, I'm inclined to take his word for it. However, his "slow and bloated" kernel is still noticeably slimmer and quicker than any other kernel that I run - and hell, my netbook flies under Linux, crawls under Windows (though of course, some of this is down to Windows userspace bloat, also).

  29. Dr. Mouse

    @Sam Liddicott

    Yes and No.

    Compiling your own kernel with only the features you require will always be quicker than running the 'catch-all' kernels supplied by the distros.

    However, the problem comes when core parts are modified to support new features, and that code is not fully optimised. You need that chunk of the kernel in your own, hand-rolled kernel, but it is slower than the code in the previous release due to modifications, so your new kernel is more bloated and slower.

  30. Kebabbert

    Linux should aim for quality

    instead of quantity. The Linux code base is 10 million lines of code. ONE SINGLE KERNEL. The entire Windows NT was 10 million LOC. I think Linus should reconsider, and have a plan instead of redesigning everything all the time.

    When he states that they fix bugs faster than they add code, so what? The code they bug-fixed will soon be swapped out for new code that contains new bugs. It doesn't matter if they fix bugs, because that code will soon be swapped out. And again and again. This is the reason Linux has no stable ABI, and this is one of the prime reasons Linux is unstable.

    Even Linux kernel developer Andrew Morton complains about the declining quality of the Linux kernel. His words:

    http://lwn.net/Articles/285088/

    Q: Is it your opinion that the quality of the kernel is in decline? Most developers seem to be pretty sanguine about the overall quality problem. Assuming there's a difference of opinion here, where do you think it comes from? How can we resolve it?

    A: I used to think it was in decline, and I think that I might think that it still is. I see so many regressions which we never fix.

  31. Owen Williams
    Linux

    Wot he said (Sam Liddicott)

    Otherwise the Acer Aspire One wouldn't have you logged in in 12 seconds. And the kernel is stable. And the kernel is faster than Windows and MacOSX. Windows 7 and Snow Leopard are only now trying to speed things up. And how? By dropping features. With Linux the user can choose the features he wants to run with.

  32. Ally G

    Whatever Happened to....

    Recompiling the kernel for set machines for speed?

  33. Jason Bloomberg Silver badge
    Linux

    The bigger picture

    Any distro (Linux, Windows or other) consists of a number of core essentials: kernel, drivers, protocol stacks, file system handlers and applications. The problem seems to be in how that is divvied up; what's installable at runtime, what's compiled into the kernel. And it stands to reason that the more that is pre-compiled into the kernel, or essential to load at runtime, the more bloated an OS as a whole becomes.

    As Sam Liddicott says, it's entirely possible to build lean Linux kernels, and distros for that matter, and XP Embedded does the same for Windows. But that approach creates OSes for specific machines or a subset, not a fit-for-purpose OS for anything and everything a user may have or want in the future, which is what a desktop environment is.

    Disk footprint is entirely different to run time memory footprint and execution speed so the approach to bloat should be towards lean and mean kernels, with the user able to add or remove drivers, protocol stacks, file systems and apps at runtime. Drivers in user-space may not be acceptable as Steen Hive notes but that doesn't mean they have to be pre-compiled into the kernel. Lou Gosselin is right to put the blame on "feature-bloated monolithic kernels", and it's time to fix that; Linus has dug his own hole and needs to get out of it.

    Bloat is anything a particular user doesn't want or need but cannot be got rid of, so come on Linux guys and gals - and Linus in particular - show us how it should be done, don't just accept the "unacceptable but unavoidable" shrug of the shoulders and resignation to it.

  34. Tom 7

    At least with linux you can cut out the bloat

    and compile your own kernel if you so wish.

    Something Mosh Jahan will wish he could do when W7 has had a couple of service packs and is back to being Vista.

  35. TeeCee Gold badge
    Grenade

    Re: what problem? (AC 02:20)

    I see. Who are you astroturfing for, Microsoft or Intel?

  36. Doug Glass
    FAIL

    @Wortel

    Yeah, and it's a shame the marketing is so dismal too.

  37. Anonymous Coward
    Anonymous Coward

    How many psychiatrists...

    ...does it take to change a light bulb?

    Only one, but the light bulb must "want" to be changed.

    At least Linus admits the problem and states it openly; that's the first step towards a solution.

    Microsoft, along with many other commercial organisations, is unable to admit to problems like this, not because they are "evil", but because it becomes a financial issue with stock prices falling and investors and internal politics getting in the way of engineering solutions.

    Fortunately the FOSS world is able to say and do things that commercial organisations can't, and that at least puts the commercial organisations under some pressure to improve products.

    If Windows 7 is better than Vista, it will be in some part due to competition from Linux (remember the first Netbooks?); would Windows 7 have even been delivered without the threat from Linux, or would Microsoft still be trying to push Vista onto us?

  38. Rod MacLean
    Joke

    RE: Funny!

    Mosh Jahan wrote: "Linux getting slower and bloated while Windows (7) getting faster and leaner."

    Yeah, that's like saying "I saw Kate Moss eating a few chips - but Meatloaf is on a diet"

  39. gerryg
    Linux

    Would that be the same performance-lite kernel...

    ...that runs on 19 of the 20 fastest supercomputers and the vast majority of the next 480?

    Oh, it's customisable, you say? You don't need to use everything it comes with?

    Who'd have thought it?

  40. Edwin
    Linux

    PEBKAC

    For years, we have known that compiling your own kernel is cool and results in a faster kernel.

    For years, we have complained that ordinary users won't use Linux.

    For years, ordinary users have feared Linux because it's not easy to install or use.

    More recent distros are much more user friendly precisely because the kernel is so bloated.

    So what do we want - a universal kernel that will run on pretty much anything, or some form of hideous hardware scanning autocompilers that recompile the kernel as soon as you plug in a new USB device?

    It's a little like Apple's iPhone business model: it sucks, but 90% of the planet likes it, so we happy few will have to live with it.

  41. James Penketh
    Linux

    @Crazy Operations Guy

    "But no one wants to tell them that their code sucks and that they should do more practice and studying before it can be included."

    Try and submit some sucky code to the linux kernel devs, and someone will definitely tell you that your code sucks.

    "They don't want to do this because it will make them look like the bad guy, look like he is against the community and be flamed to hell and back."

    When you get a major kernel dev, or even Linus himself, saying this, people tend to side with them. ;)

    Not that I've submitted code to the kernel, I'm not good at C. But I have made a fool of myself on LKML and the replies were a little scathing. Certainly taught me to double-check things before clicking "Post".

  42. fishman

    In the past

    In the past, someone would come up with some metric showing how the linux kernel had slowed down from previous releases. The next efforts were then spent on fixing whatever had slowed the kernel down.

    So all they need to do is develop a set of metrics that demonstrate and quantify the problem, determine where the problems lie in the code, and then fix it. Easy :).
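
    The measuring half isn't even hard (a sketch; the tarball name is just an example of a fixed, repeatable workload - any identical job across boots will do):

        # boot each kernel in turn and time the same job under it
        uname -r                               # record which kernel ran
        sync                                   # flush dirty pages so runs are comparable
        time tar xjf linux-2.6.30.tar.bz2      # compare the elapsed times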

  43. Anonymous Coward
    Flame

    @Mectron

    Yeah, maybe so, but at least we can have an honest and open discussion about the problems in Linux, with a possibility of some action. Windows, what have you got? A lot of whingeing and a hell of a lot of praying and hoping Billy and the boys will fix your problem! If not, never mind, there'll be a new version of the OS along in, oh, let's say 7 years' time?!!?

  44. The First Dave

    @Ken Hagan

    Correct me if I am wrong, but part of the advertising for Snow Leopard is that it is smaller and faster than the previous version, but I don't suppose that Linus really wants to get into a comparison with OSX?

  45. Sooty

    @Northern Monkey & Others

    "Just because computers are getting faster does not mean we should write our code less well, less efficiently 'because we have room too'. Write the same streamlined efficient code you wrote for old, "slow", memory challenged machines "

    This sort of thing shows you up as 'not a software developer', or if you are, please God don't let you be one that I have to work with.

    Old 'streamlined' software was streamlined because it had to be: it sacrificed stability and maintainability in order to maintain execution speed and a small memory footprint. People don't write code that streamlined anymore 'on purpose', as it leads to problems. It's often easier to re-write large chunks of it than make the smallest updates. People were writing it like that because they absolutely had to, not because they wanted to!

    Would you rather your software crashed constantly, or whenever it did just gave you an unhandled exception, like in the good old days? Or would you rather the coder used a few, extremely inefficient, checks on responses to make sure it either continued working or gave meaningful errors?

    As a developer, I know it's much slower to read things dynamically from config files than to hard-code them, but I still read them dynamically, as it means I can update them in seconds, rather than searching for masses of hard-coded values and recompiling the whole lot.

    Remember the Y2K bug: 2-digit dates were used to streamline memory usage and processing power, as they were expensive back then! If that happened again with modern code (a significant change to the date format), I would hope it would just be a small update to the date type/class and a recompile.

    Perhaps kernel code is different, but any halfway competent developer of most other software will be continually sacrificing execution speed and memory usage for maintainability and stability, not just because they feel like it. Functions? No chance - all that mucking about swapping values on and off the stack is inefficient! Try/catch blocks are horrifically inefficient, but they're the core of most error trapping!

    There are reasons that companies/banks hire armies of assembler developers to make the smallest changes to their batch processing, as small inefficiencies can make hours of difference at those volumes, but it takes a long time to make negligibly small changes, and the result is mostly indecipherable to another person without taking a lot of time to investigate. Not to mention the smallest error causes the whole thing to fall over.

    Yes, some software is inefficient without any need, and that should be eliminated, but don't just assume that because newer software has more of an overhead, and runs slower than older stuff, that there is necessarily anything wrong with it!

  46. Anonymous Coward
    Anonymous Coward

    So ...

    Who here actually thinks that Linux is slower and less efficient than Windows? Even with a full-bloat kernel, any Linux distro will match or exceed Windows' performance every time, even on inferior hardware.

    Try file serving, database access, high-throughput processes, processor-intensive loops and such like - Linux tends to seriously outperform Windows every time.

    Now, since most real-world Linux custom installs have a custom-compiled kernel anyhow, this is a bit of a non-story.

  47. AndrewG

    The first thing I always do

    Is recompile the kernel to be modular and only use the hardware I've got installed... mind you, the source is now HUUUUGE, but the source is supposed to cover everything it's installed on, and most (not all) of the main distros run a big monolithic kernel to make sure you don't have any problems at install.

  48. A J Stiles
    Thumb Up

    @ Edwin

    "So what do we want - a universal kernel that will run on pretty much anything, or some form of hideous hardware scanning autocompilers that recompile the kernel as soon as you plug in a new USB device?"

    Actually, that's got legs.

    Put a bloated catch-all kernel on the install disc, but also provide an advanced "super racing tune-up" installation option that will compile a brand new kernel with support for the auto-detected hardware and any more that the user selected (either from a menu, or just by having the user plug in their USB devices one at a time and auto-detecting them). After all, we know which modules we loaded in the first place, and which ones go with the new devices ..... well, they're obviously the ones we need to compile. Display a warning that this will take a long time and this is the last chance to bail out. Use a bootloader that supports multiple kernels, so you can start up in "super fast" mode (with your custom kernel) or "failsafe but slower" mode (with the stock one).

    Now, if the user later acquires a new piece of hardware for which they didn't compile a driver module but the Source is in the Tree, the required module can always be built at a later date. Even if it is some device that needs its driver to be compiled "hard" into the kernel, or requires a new Kernel Source Tree to be downloaded, it will only be necessary to boot failsafe and rebuild the custom kernel.

    This whole process can of course be almost fully automated, perhaps with a progress bar or even an amusing slideshow, for the sake of people who presumably have difficulty remembering how to spell "make".
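
    Something very close to this exists in the kernel tree as the localmodconfig target (a sketch; plug in the USB devices first so their modules appear in lsmod):

        lsmod > /tmp/my-modules                     # snapshot of what the running system loaded
        make LSMOD=/tmp/my-modules localmodconfig   # build a config enabling only those modules
        make && make modules_install && make install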

  49. HarryCambs

    He should have thought about it to begin with

    Having every single driver in the universe shoehorned inside the kernel definitely accounts for the vast majority of the bloat.

    The Linux community is still extremely hostile to drivers that are not bolted inside the kernel.

    So the problem was created at conception.

  50. Jim T

    @Crazy Operations Guys

    Seriously, fact check. Linus and his lieutenants(sp?) have absolutely no problem tearing your patch to shreds, rejecting it because it's useless or just plain doesn't fit with the kernel.

    They have no problem with being seen as the bad guys.

  51. Doug Glass
    Go

    @Crazy Operations Guy

    Damn! That's good. But I suspect you'll be getting a lot of flak, since you've posted real-world experience as opposed to dream-world fantasy.

  52. viet 1
    Linux

    small and lean isn't always efficient

    I've used Linux for over ten years, starting with the latest 1.2 - 1.3 kernels, and slowly climbing the ladder until now (Fedora 10, 2.6.27). The 1.2 kernels were terribly inefficient, but we didn't expect much of computers in those times, so it got away with it. But 2.0.x was really marvellous, albeit limited in functionality. Then we got the 2.2.x series, which was rather experimental in spite of its alleged stable status, and performance sucked (big locks, lots of in-memory data copying, etc.).

    It was soon replaced by the 2.4 line. 2.4 is still rock stable today, and while it lacks many drivers for new hardware, it's pretty small and can run most of your exotic stuff (Alpha, SPARC SMP). But in spite of being lean and stable, 2.4 is still full of the 2.2 conception quirks that are performance bottlenecks. Hence the need for 2.6.x, which originally aimed at streamlining the foundations of the kernel. Many O(n) algorithms were swapped out in favour of O(1) counterparts, and where it runs, 2.6, while generally much bigger than 2.4, runs way better and faster. That it has recently begun to slow down a bit doesn't do justice to the extraordinary improvements it made over older kernels.

    So maybe it's time to feature-freeze 2.6, and stabilize it while launching an experimental 2.7 (on the way to a stable 2.8) for new stuff. But in my books, 2.6 is still the most efficient kernel to date, and one of the most innovative, competing only with 2.0 in that area (2.0 brought a tremendous number of novelties to Linux, support for ELF32 to begin with, and much, much, much more, and is still maintained for some embedded applications).

    Now, I'm not a kernel hacker, but I've had the chance recently to play with NetBSD, I've used Windows XP and Vista (not 7 yet), and a bit of MacOS X. While any of those can be marginally better than Linux in a particular area, *after* spending an awful lot of time trimming it (just about like you could improve Linux in the first place by tailoring a Slackware to suit your needs), the conclusion is Linux is the 4WD of OSes. It runs everywhere, and pretty much goes over everything you can think of throwing at it. NetBSD? Oh, you need to compile that kernel to get what you want (and I tell you, it's not for the faint of heart; I've been compiling my Linux kernel for ages without breaking a sweat, but NetBSD gives you chills down the spine). MacOS X? Does everything, but random (x)thing will set you back another (y*10^3) $ - and forget about whatever computer you have, it's Intel-Apple only now. Windows? ... I'm torturing myself to find a place where Windows shines, and can't find one. It's a nightmare for everyone, from sysadmins to users, albeit a familiar nightmare, so they don't feel the need to wake up.

  53. Anonymous Coward
    Anonymous Coward

    Illogical (yes, Linus!)

    If something is unavoidable, then you are foolish not to accept it. There's no alternative.

  54. northern monkey

    @Sooty

    I am a software developer actually, in the HPC community. Cycles are expensive, so we use them well. I'm sure some DB developers see where I'm coming from - DB lookups should be done wisely, in the right places, and if necessary cached; all too often you see people looking up the same bloody (non-volatile) thing in every iteration of a loop.

  55. Nexox Enigma

    Seems fast to me...

    I'm still running (custom stripped-down kernels) on hardware that XP won't even install on, with X, and it runs pretty decently. Mind you, I can't use any of the popular web browsers, thanks to 128MB of RAM, but I can go from power-on to El Reg (in Kazehakase) in about 30 seconds. I don't remember any sort of Windows doing that on a PII.

    Not that the kernel couldn't use some optimizing, but it's a huge chunk of code that people actually depend on, and they've got to keep adding new hardware support.

    Linux did accidentally remove a null pointer check a couple of versions back, which led to ReiserFS breaking tragically over and over on my fileserver. A full fsck takes about 12 hours on my larger arrays, and I was less than happy to be running them 2 or 3 times a week. The kernel isn't at all perfect, but it's the best that I can find (because OpenBSD and I just don't get along well).

  56. A J Stiles
    Stop

    @ HarryCambs

    The point is, you're supposed to put fences where as little as possible has to go through them -- not just where they happen to look pretty.

    Drivers belong in kernel space (1) so that "ideal" hardware can be modelled (all things that do the same job should expose the same interface), and (2) so that hardware operations can be sanity-checked. Sure, you could implement a filesystem driver mainly in user space with just a straightforward bridge to the hardware in kernel space -- but how can you be certain then that it isn't going to attempt something silly? And then it becomes harder to enforce consistency of drivers between filesystems, which makes it harder to replace one with another.

  57. Paul 4
    Pint

    @Doug Glass

    No, he'll get flak because he's not really in touch with reality and is spouting random lines from HYS about being PC, and how him telling people "you suck" is good, but people telling him he sucks is bad.

  58. Henry Wertz 1 Gold badge

    2.7 series?

    "I wonder if they are stupid enough to use the performance gains of Windows 7 over Vista as proof of the superiority of Windows development and ignore the gigantic performance degradation of Vista over XP?"

    They already have! There's all kinds of hype about how much faster 7 is, when in actuality it's being compared to Vista. I don't know how much Microsoft drummed up and how much is from users, well, using it.

    Anyway, I do hope the developers speed the kernel back up if it's really slowed down that much. And I think they will. As for (size) bloat, I'm unconcerned -- they've added support for more and more hardware, and that is going to take more code. As several people said, I can build a custom kernel if I want to, slowness is a concern but size is not.

    "Yes, that has always been Linux' problem... no plan, no direction, no engineering, no thinking... only people adding more and more stuff, then replacing some of that with new stuff changing the whole ABI from one release to the next... I wouldn't applaud that, it's kind of sad actually."

    It's not great. But there are UNIXes that are more "plan first, do later" - FreeBSD and OpenBSD come to mind. The good consequences are what you'd expect: stability and consistency within a release series, good code quality, and so on. The bad: driver support and features are not thrown in as quickly.

    What I think Linus really should do is finally start a 2.7-series kernel for any serious changes people want to make; instead of just making more and more 2.6 releases, work on stabilizing and speeding up 2.6 (it's very stable for me, but it'd be easier to fix regressions if it wasn't a constantly moving target...) and put new features into a 2.7 development kernel on the way to a stable 2.8. With the older stable-series kernels (1.0, 1.2, 2.0, 2.2, 2.4) you could more or less get a driver for one version and build it on another (within the same series); the incompatible changes were in the 1.1, 1.3, 2.1, 2.3 development series... unlike now, where each 2.6.xx may or may not have major driver-affecting differences.

  59. Anonymous Coward
    FAIL

    @Sooty

    "This sort of thing shows you up as 'not a software developer' or if you are, please god don't let you be one that i have to work with."

    What a load of old codswallop.

    Lean, well written and designed code is something all good software guys aspire to.

    Unfortunately, a lot of people calling themselves "programmers" learned a bit of HTML at school and think they can program.

    Programming is an art and a very precise skill and too many people think they can do it without the proper practice, training and experience.

    The evidence m'lud:

    - Coders don't check function return codes;

    - They allocate memory expecting it never to fail so they don't take the trouble to detect NULL from malloc();

    - They don't check their buffers and overrun them;

    - They assume that the stack is unlimited and stick massive great structures on there

    - The use "handy" STL templates without checking if they really are efficient or appropriate for the application.

    Holy cow I could go on and on.

    I have seen complex systems that were written in C to run on DOS boxes that run for years without a reboot (seriously).

    When you have to write code that has to run ALL the time, like for military systems, or at the local phone exchange, then you can truly call yourself "programmer".

  60. Colin Wilson
    Linux

    Can anyone with a fresh install of windows...

    ...tell me what the memory footprint is before you load any apps...

    Linux boots for me into about 230MB of memory tops, whereas my current install of Win XP runs at about 450MB after a fresh boot with about 42 processes (it's fairly lean by most standards).
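
    For a like-for-like comparison (a quick sketch; on Linux read the "-/+ buffers/cache" line, since RAM used for disk cache isn't really "used"):

        free -m                          # totals in MB
        ps aux --sort=-rss | head -n 10  # the ten fattest processes by resident memory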

  61. ZenCoder
    Happy

    Lean Efficient Code?

    Real-world programming requires trade-offs and compromises, and things are a hundred times more complex than what you see in a beginning programming book.

    They will show you a simple program, lean and efficient. But in the real world ....

    Well, that program only covers 95% of cases ... add the code so that it works in all situations and the program is 10 times larger.

    Now defensively verify all your input and check all your function calls for errors.

    OK, now you have to add system-wide error handling to detect and recover from errors.

    Don't forget security. You'll have to replace all those fast and simple C-style string operations with data structures which are not vulnerable to buffer overruns, filter all incoming data ....

    Oh, and guess what ... that book taught you to program as if your code was the only code running on the system, but you are coding in a multi-threaded, multi-processor environment. So rewrite everything so that no two bits of code can ever step on each other's toes.

    Even at the level of writing a simple routine ... you can't just write "good lean code" ... it's all about design trade-offs and compromises.

    When you go up to high-level design it's the same thing. You can make the system more modular, which will make it easier to understand and easier to test ... but then every time a program wants to read from a file on the hard drive, the request has to pass through 7 layers of carefully tested and verified services, then back up.

    You can cut out all that overhead, but then you have one big layer that is so interconnected and complex that you can't really test anything in isolation.

    Or you can design everything in the most efficient way possible, but that design is inflexible: every time you add a new feature you have to redesign from the ground up. Or you can build a flexible system whose design is less efficient but can easily be extended to accommodate change.

    Then there are business concerns. Let's say there is a way to make your OS twice as efficient, only it's going to take 10 times as many programmers and an extra 5 years, and will break compatibility with 3rd-party software.

    It's all about balancing conflicting demands, design trade-offs and market conditions.

  62. phil 27
    Thumb Up

    Glib response

    Small and compiled with only what you need? Gentoo (he says, recompiling the kernel on his Gentoo Xbox...)

  63. Steen Hive
    Thumb Down

    @Jason Bloomberg

    "Lou Gosselin is right to put the blame on "feature-bloated monolithic kernels"

    Actually, he isn't right at all. World + dog knows that Linux hasn't been a true monolithic kernel since before I learned to mix metaphors. It is only monolithic in the sense that drivers run in kernel space - in combination with udev, etc., drivers are never loaded into the kernel unless required by hardware. So unless you can find a way to do away with device drivers for an ever-expanding plethora of hardware, protocols and legacy systems, you're essentially up shit creek.

    The so-called "micro"-kernel concept is a documented failure (xnu, L3, mach, excepting QNX maybe) in terms of both code-bloat, performance and maintainability when compared to kernels that contain other critical subsystems than IPC, Memory management and scheduling in the core code. Sure it's a great and elegant idea conceptually, but an abject failure at doing anything else except being a conversation topic among the University chattering classes.

    Now the argument of exactly what should go in a kernel can be argued over till we're blue in the face, and sure Linux could maybe do with a prune here and there, but devices are supported by code, not Scotch Mist.

  64. Anonymous Coward
    Anonymous Coward

    @ sooty

    Banks don't hire armies of assembler programmers, they hire armies of COBOL & RPG programmers, small numbers of assembler programmers. The investment in legacy support is huge but you won't find many large financial applications written in assembler (IBM 360 or otherwise). The folks who are hired also have to be good at JCL (not difficult), TSO (yes it is still used), CICS, and other ancient big-iron environments. ISAM, VSAM, DB2, and Oracle databases also co-exist in these environments and they all have to be able to use each other's data if required, and quite often you will find UNIX and TANDEM in the mix as well.

  65. SilverWave
    Linux

    Full Speed Ahead :) Think of it as evolution in action.

    It's great that he's not complacent, and hey, maybe if he highlights it someone will provide a plan (worth a shot).

    If someone did come up with a solution then we can have our cake and eat it too!

    Sneaky Linus ;)

    The Kernel is evolution in action - good stuff that is of value survives, the rest dies, rots and finally disappears.

    From what I have seen lately re the file system, if you can _prove_ your case then ppl listen. It's the scientific method applied to software development... which is why FOSS will win in the end... it is a modern scientific, capitalist development model versus the proprietary, authoritarian, monopolistic, command-economy model (and we know how that one will end).

  66. Chris 112

    Perhaps a Microkernel would be the answer?

    Can anyone say Minix... :)

  67. Homard
    Pint

    Resource Control/Accessibility

    I think user-space kernel modules are a bad idea! The kernel *HAS* to be able to fully control access to hardware resources. With user-space modules, just how can the kernel reliably be expected to do this without checking, and allowing/blocking, everything the user module is doing? I can see this being even more messy and resource-intensive than having a kernel with all the hardware support compiled in. And aren't user-space modules just another attack vector to be exploited? Or a serious risk of system crashes due to deadlocking of a vital resource?

    So if you want to slim down that kernel, then recompile it for your system - at least you have the flexibility to do that, albeit with a bit of research required in preparation. And if you're concerned about performance, you most likely have the background to learn what you need to do this, and to enjoy it at the same time.

    I think Linus is right, though, to observe that the kernel is getting larger and losing some performance, but as has been stated, there will be more error traps in the code, more features, etc. This should give a better end-user experience with a more feature-rich and stable system. So whilst there is maybe room for improvement (everything suffers from this!), I don't think the problem is severe. If there is a more efficient way to do something, equally stable in comparison with the current way, then let's include it. If it's bleeding edge and not quite ready, keep developing it by all means, but it should not be in the stable kernel.

    Now I've used Vista, and I didn't see any of the performance problems that have been mentioned, though the machine it was on was reasonable spec, and I didn't run XP on the same box as a comparison. There was masses of eye-candy, as is to be expected, and I even liked the northern lights screensaver. I just didn't like Vista, and some features really started to grate, particularly the endless dialogue boxes warning that I might harm the machine by running something. But it did work, just not in a fashion that I like. XP is better, but still not my favourite system.

    So I'll be sticking with my friendly bloated warthog of a Linux kernel, the Operating System and all the other fine software that I can run on it, whilst thanking all who contribute towards it for what they offer to any of us who want to use it.

  68. Michael Shaw
    Dead Vulture

    Re: faster CPU's / Write more efficient code

    It used to be (20 years ago) that you paid developers to write fast, efficient code, and developers spent time making their code efficient and fast enough.

    These days, it's usually cheaper to buy faster / more hardware...

  69. Peter Kay

    BSD kernels? Pfft - easy.

    Pretty much all documentation on kernel options is in options(4). A few aren't, but they're generally the more rarely used and experimental options.

    Get source. Optionally update from CVS. Copy and edit the config file. Run config. Run make depend. Run make. Copy the kernel into place. Boot. If it fails, boot the old kernel in single-user mode. Fsck filesystems. Copy back the old kernel. Reboot.
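
    Spelled out for NetBSD it's roughly this (a sketch; MYKERNEL and the i386 path are placeholders for your own config name and arch):

        cd /usr/src/sys/arch/i386/conf
        cp GENERIC MYKERNEL
        vi MYKERNEL                 # comment out drivers and options you don't need
        config MYKERNEL
        cd ../compile/MYKERNEL
        make depend && make
        cp /netbsd /netbsd.old      # keep a bootable fallback
        cp netbsd /netbsd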

    Or run build.sh and cross compile from just about any platform to any other platform. It's a heck of a lot easier than on other systems and properly documented to boot.

    Alternatively, run OpenBSD. Users are strongly encouraged never to recompile the kernel unless they have a good reason (such as deliberately commented-out Perc RAID array drivers, testing >4GB memory support, or hacking the i2c probe code to be less aggressive (a problem on a very limited set of motherboards; unfortunately mine is one of them)). OpenBSD's user community can be extremely harsh though, particularly if you haven't RTFM first. Being insulted by Theo is a rite of passage.

    Or FreeBSD. That's strongly module based, a bit different and probably a bit more accessible.

  70. Anonymous Coward
    Linux

    Kitchen sink

    Good feedback on this article. I moved to Linux as it allowed me to do what I wanted, as against MS. With the more recent kernels I have been having a few grumbles from my wife about boot-up times on her machine... but that's probably more to do with the cruft I have snuck onto her system - heartbeat and cluster services, to name a few. Am now trying to come to grips with dkms for a kernel driver that I need. Personally I shudder at the thought of drivers not being compiled for the specific kernel. Keep the hardware lean and mean, please!

    I have just had to download and install a driver for a Logitech keyboard and mouse for a client! 68MB! Non-Windows-certified, so you have to have another bl**dy keyboard and mouse to get it working. She unplugged it to move some cords, and the thing goes through the whole sequence again, requiring a separate keyboard. Oh, and you can't just plug in one that you know is identified, as the prompt about a driver halts any autodetection of hardware; you have to pull the plug and put in the other keyboard and mouse. Don't get me started on HP printer drivers (~230MB!) plus another (69MB) required to install .NET. I have some clients who are still on dialup and cannot get broadband; thank god it wasn't one of those!

    The Linux experience is in a whole different league. Most things just work, and I know if I want to get things going faster I can compile the thing myself... (I never do though). A nice friendly system for auto-compilation and optimization would be nice.

    Tux, cause he's big, monolithic, alive and happy!

  71. Wortel
    Gates Horns

    @Doug Glass 08:00 GMT

    Yeah, and you seem to have missed the point. 'Linux', the kernel lest you forget, has developed much faster than any Microsoft product. And marketing? Don't get me started. Marketing was invented by assholes with an exceptional ability to enhance the truth. Keep watching those happy-happy-joy-joy Microsoft Vista television ads, mate.

  72. Kebabbert

    Fun thing

    If I say "Linux is bloated and huge" people will say "no, you are a Troll". Then later when Linus T says the same thing, then what? Can not people think themselves? Must Linus T explain everything? If I say something true, it is not true until Linus T had confirmed. That is ridiculous. No critical thinking, no logical thinking. Dogmatic view.

  73. viet 1
    Flame

    @Peter Kay

    Not everybody's using an x86; my test gear was a SPARCstation 20-712 that came empty, OS-wise. I've already got a 10-512 that's happily running the latest supported sparc32/Debian (etch), so I was willing to try something else, because of sparc32 being EOLed. FreeBSD: no sparc32 support. Out. OpenBSD: no SMP support on sparc32 (at least this is clear from the installation web page). Out. NetBSD: no particular caveat, brags about being compatible with about every arch out there. OK, let's burn that iso. Hmm. There are two conflicting statements in the INSTALL notes at the iso root... Can it go SMP or what? Check the mailing list archive: in -CURRENT, sparc32 SMP is broken. But hey, -4 still does SMP! Let's burn -4. Install, boot, post-config, OK, seems to work, let's D/L some stuff. pkg_add -v windowmaker (yes, I know, serial console etc..., but that's beside the point). "Kernel attempting to lock itself against a lock", break to OBP. WTF??? A couple of random crashes later, let's try to slim down the beast. Remove every obsolete bit in default pertaining to the 4, 4c, 4d archs, config, depend, make... wait... wait... link fails! Would you believe it, there's an undocumented ref in 'default.smp' to some 4d stuff in 'default'. Neither config nor depend gave me a warning about it! Goto config, etc.

    Verdict: it somehow works, for some values of work. I have fewer random deadlocks (but I still get some on occasion). My Linux 10 has served me well for years, acting as dhcp provider, dns relay, and pop3-from-my-isp-to-local-imap in my home network. The NetBSD 20 wouldn't cope with that reliably at the moment. It makes for a fine X terminal, which is a pity. Superior BSD stability, my foot.

    Flames, obviously.

  74. Anonymous Coward
    FAIL

    It really is funny...

    ... just how religious some people are with regards to software. I've read a lot of comments on here that just fly in the face of the facts.

    For instance, the fact that you can compile your own kernel if the source is too bloated - this is a non-argument and doesn't even begin to address the situation. The fact is, whether you Linux hardcore'ers like it or not, the majority of Linux users now would not have a clue how to go about this. Linux is still not a contender with MS in the desktop market, but it isn't the geek-only option it used to be. You need to get used to that and move on; develop a proper Windows-killer and not just copy it, like all the smartphones copy the iPhone.

    This argument: "Linux getting slower and bloated while Windows (7) getting faster and leaner." Made me LOL. Saying that Windows is getting faster because of the improvement of Win7 compared to Vista? WTF? Have you forgotten how fast W2K was on new hardware and how XP was reviled as bloatware when it was released? Then suddenly when Vista was released the same thing happened again... the hardware wasn't up to the pile of poo that was installed on it. Now Win7's out it's the saviour of MS, the best offering since (the once hated) XP. But only because it's running on quad cores with gigs of RAM.

    It's a shame that people can't see that there is a major hardware/software imbalance here. As hardware improves, the software should evolve with it and run faster, but it's just not the case in today's bloatware infested world.

    It never fails to amaze me how fanboyism can kill rational thought in otherwise intelligent human beings.

  75. steogede

    Re: what problem?

    >> If the kernel is 2% slower per year, but the hardware is 2-10% faster per year... then there is no net problem, is there?

    My computer hasn't got any faster in the 3.5 years since I bought it.

  76. Charles Manning

    Efficient code

    Clearly most posters above don't know anything about Linux or efficient code for that matter.

    Linux might be bloated and huge relative to what it was, but that does not mean it is bloated and huge when compared to Windows.

    Linux is modular which means that only the modules you actually need get loaded. Thus, the wifi drivers and tablet driver for some odd-ball machine are not actually loaded unless they are needed.

    Code efficiency is very important for the majority of Linux devices (which are not PCs, etc.). Most Linux machines are phones and the like, and efficient code means better battery life and cheaper phones. As a phone software designer, try asking the hardware guys to build 1GB of memory into the phone. Expect a lot of laughter.

    The limitations constraining performance go through generations: CPU speed, memory availability, memory bandwidth, etc. The design choices that make sense at one time don't necessarily make sense at another time and you're always playing off memory usage against speed etc.

  77. Russ Brown
    Go

    Modules?

    Stop me if I'm on the wrong track here...

    There is a difference between the Linux codebase and what actually gets loaded when you run it. The difference is influenced by two things:

    1. Things not compiled at all

    2. Things compiled as modules and not loaded at all

    Now, from what I understand, you only need to compile in the things that are required for Linux to start up and start mounting your disks so that it can start loading modules. So this basically means filesystems and chipset drivers.

    Fine. So what's wrong with compiling *everything* else as modules? That way, you have access to *everything* the Linux kernel supports, while at the same time only needing to load the modules required to support the things you need. Other than the disk space used (which I doubt is significant these days), I don't see any downsides.

    That just leaves the requirement of compiling in things that are required to boot. These can't be modules because the modules are potentially stored in a different partition to the boot partition, which may be on a different controller and use a different filesystem. Fair enough, that is a problem as the number of filesystems and chipsets is only going to increase over time.

    So, how about a new type of module - a "boot module" - which gets stored on the boot partition along with the kernel itself. I doubt it would result in any massive increase in storage requirements, since the kernel would be roughly correspondingly smaller as a result.

    So that way, the kernel binary itself becomes as "micro" as it can, it loads only the boot modules required in order to boot, and then only loads other modules as and when they are required.

    Anyone see a problem with that?
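
    For what it's worth, the "boot module" part more or less exists already: an initramfs is an archive of exactly those boot-critical modules, stored on the boot partition next to the kernel and loaded before the real root is mounted. Roughly (a sketch using Debian-style tooling; other distros ship mkinitrd equivalents, and the version string is just an example):

        # with everything possible set to =m in .config:
        make && make modules_install && make install
        update-initramfs -c -k 2.6.30    # pack the boot-critical modules for that kernel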

  78. James Katt
    Happy

    Mac OS X Snow Leopard is FAST FAST FAST

    Linux has become bloated and huge.

    Mac OS X has become much slimmer and faster. Mac OS X generally is faster with every iteration, not slower. With Mac OS X Snow Leopard, it has also become much much slimmer.

    Perhaps, Linux developers can use Apple as an example of where to go.

  79. Anonymous Coward
    Anonymous Coward

    Linux *is* bloated..

    There, I've said it. Linux used to be efficient. Now, thanks partly to people bunging on extra features with little or no real thought as to the usefulness of those features, and partly to the insistence on building drivers into the kernel, it's not.

    The problem Linux has is there is no overall person in charge. Both Microsoft and Apple have very definite plans for their various OSes, and both have made massive inroads to making those OSes more efficient. They have also worked to ensure that where they do add new features, they are useful and efficient.

    Now, some people above have suggested that the user can recompile the Linux kernel. True, they can. Why should they, though, when Linux fanbois talk about how efficient Linux is? Also, for 98% of the population (who wouldn't know one end of a C++ compiler from the other), recompiling isn't an option.

  80. Martin 75

    Recompile it for speed.... not

    There is a simple reason why people are not recompiling it for speed, and why it's bloating out of all proportion.

    Ubuntu.

    Linux used to be for the pros. You spent months fighting with a command line to get X compiled, only to blow your monitor up when you mistyped the H & V sync. You had to manually recompile your kernel to add / remove things to make it work. You had to know your shit.

    Then Ubuntu came along to make "Linux for everyone". Unfortunately, Linux for everyone removed all of the technical challenges from the install. It is lowering the IQ of the collective Linux userbase. Face it: 90% of the "OMG LINUX ROX!!!11!!" commentards are Ubuntu users who run it for 3 months, feel superior, then quietly limp back to Windows.

    Real Linux users know it bloats, and have been recompiling their kernels for years. It's a non-issue. However, it's a massive fail for "Desktop Ez-Mode Linux", as you end up falling into the traps that dog your "most hated OS in the World".

    Welcome to your bed. You made it. You sleep in it.

    @Russ: You are wrong, I'm afraid. BSD uses a modular kernel. Linux is monolithic. It loads the whole damn lot.

This topic is closed for new posts.
