back to article Drowning in code: The ever-growing problem of ever-growing codebases

The computer industry faces a number of serious problems, some imposed by physics, some by legacy technology, and some by inertia. There may be solutions to some of these, but they're going to hurt. In a previous role, the Reg FOSS desk gave three talks at the FOSDEM open source conference in Brussels. In 2023, I proposed one …

  1. Mike 137 Silver badge

    Thank you Liam

    An excellent article and right to the point. With my infosec hat on, I wish more folks would echo this message:

    "a teetering stack of dozens of layers of flakey unreliable code, which in turn needs thousands and millions of people constantly patching the holes in it, and needs customers to pay to get those fixes fast, and keep paying for them for years to come."

    1. robinsonb5

      Re: Thank you Liam

      The phrase I've used for it in the past is "30 foot high Jenga tower".

      I have a small amount of hope that people tinkering with RISC-V cores on FPGAs will have more appreciation for lean, lightweight software than those of us using "proper" computers, since they typically have less than a gigabyte of RAM to play with, and often as little as 32 meg - not to mention relatively slow SD card storage.

      For my own toy CPU project* (not RISC-V - I was crazy enough to create a whole new ISA from scratch) I took one look at GCC and thought "no way in hell am I tangling with that" - so I wrote a backend for the vbcc C compiler instead, and an assembler and linker to go with it. The whole toolchain builds in a few seconds, and as a bonus it's lightweight enough to compile and run on an Amiga!

      [https://github.com/robinsonb5/EightThirtyTwo]

      1. Alan J. Wylie

        Re: Thank you Liam

        "30 foot high Jenga tower"

        ObXKCD

        1. Anonymous Coward
          Anonymous Coward

          Re: Thank you Liam

          A Tottering Tower of Turds

      2. Mike 137 Silver badge

        Re: Thank you Liam

        "people tinkering with RISC-V cores on FPGAs"

        Not just FPGAs -- there's a huge assortment of extremely versatile microcontrollers with 32-128 KB of memory (e.g. the PIC 18F range), for which there are some dev tools implementing very efficient optimising C compilers (e.g. Wiz-C). The bloat starts once you want your code to run over an OS as opposed to on bare metal, as modern OSes all seem to require massive bloated libraries.

        1. l8gravely

          Re: Thank you Liam

          The problem is that every library was written to help abstract away annoying-to-write or repetitive code. But every application or user needs just 50% of the library. It's just a completely different 50% for each and every project.

          So if you throw away abstraction, sure, you can run really quite leanly and on smaller systems. But once you (like Linux) have to run across a metric crap ton of different architectures and systems and busses, then you need abstractions, and then you get bloat.

          How many of you grow your own veg and raise your own meat? I bet you all pop down to the local shop and pick up your food. Or go out and have someone make it for you in a restaurant. You've just abstracted out the light and lean process of doing it yourself! Sheesh!

          So yes I'm being annoying, and no I don't have an answer for software bloat beyond pointing out that different people have different needs, likes and wants. And trying to satisfy as many of those at once is what makes software (and even hardware!) such a problematic edifice.

          1. _andrew
            FAIL

            Re: Thank you Liam

            The thing about libraries, in the original construction, was that you only had to link in the 50% (or whatever) that you actually used: the rest stayed on the shelf, not in your program. The ability to do that went away with the introduction of shared libraries. Shared libraries were originally considered to be a space-saving mechanism, so that all of the programs that used, say, the standard C library, did not have to have a copy of it. Even more so when you get to GUI libraries, it was thought. But it was always a trade-off, because as you mentioned, every application only used their favourite corner of the library anyway: you needed to be running quite a few applications to make up the difference.

            But now it's even worse, because lots of applications can't tolerate having their (shared) dependencies upgraded, and so systems have developed baroque mechanisms like environments (in python land) and containers and VMs so that they can ship with their own copies of the old and busted libraries that they've been tested against. So now every application (or many of them) hangs on to a whole private set of shared libraries, shared with no other applications, rather than just statically linking to the pieces that they actually wanted. The software industry is so useless at looking at the big picture...
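
            For anyone who hasn't watched a linker at work, here's a minimal sketch of the difference (the file and library names are invented for illustration):

            ```c
            /* mathbits.c -- pretend this is one member of a large utility library */
            int double_it(int x) { return 2 * x; }    /* the corner of the library we actually use */

            /* main.c -- the application */
            #include <stdio.h>
            int double_it(int x);                     /* would normally come from the library header */

            int main(void)
            {
                printf("%d\n", double_it(21));
                return 0;
            }

            /*
             * Static: the linker copies in only the archive members that resolve
             * symbols the program references; the rest of the archive stays on the shelf.
             *   cc -c mathbits.c
             *   ar rcs libmathbits.a mathbits.o
             *   cc main.c -L. -lmathbits -o demo_static
             *
             * Shared: the whole library is mapped at run time, whether the program
             * touches 5% of it or 95%.
             *   cc -shared -fPIC mathbits.c -o libmathbits.so
             *   cc main.c -L. -lmathbits -o demo_shared
             */
            ```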

            1. Nick Ryan

              Re: Thank you Liam

              Microsoft Windows inflicted utterly incompetent shared library management (i.e. none), which resulted in the total horrors of the .net never-shared libraries, which exhibit themselves as many, many copies of exactly the same damn library stored multiple times and never, ever resolvable or purgeable. Then there are the abject horrors of the Windows\WinSxS directory, which may, or may not, contain many hundreds of thousands of files, a proportion of which are file system links to each other such that it's next to impossible for even the file system to work out the real storage space used by the damn directory.

              For example, the PC I currently have at hand has 87,310 files spread throughout 300,298 directories, comprising 10.6GB of data taking 6.99GB of disk space. This is not a scenario that is remotely sane or manageable, but it is the abject level of belligerent incompetence that Microsoft inflicted on their own Operating System rather than implementing shared library management in any sensible manner when they first had the opportunity or need to... i.e. around Windows 3.x time.

    2. OhForF' Silver badge
      Devil

      Re: Thank you Liam

      Most modern systems design regarding performance seems to be done with the premise that Moore's law will halve the time the endless loop needs to finish every year.

    3. maffski

      Re: Thank you Liam

      "a teetering stack of dozens of layers of flakey unreliable code, which in turn needs thousands and millions of people constantly patching the holes in it, and needs customers to pay to get those fixes fast, and keep paying for them for years to come."

      Guess what. Things are made of things. And very complex things are made of many complex things.

      Doesn't matter if it's code, chemistry or joinery. It's all loads of imperfect stuff cobbled together.

  2. Anonymous Coward
    Anonymous Coward

    Remember 'The Last One'?

    The PR droids for it boasted that you would never need to write another line of code again.

    Where is it today?

    Several million SW devs around the world aren't helping by ... writing new code.

    Just like the authors of the world, writing more and more prose that no single person can read in a million lifetimes.

    On a personal note, I wrote about 2,500 lines of Delphi Pascal last year. For my Train Controller project. The coding is done, now I'm finishing the scenery of the layout.

    I also wrote around 400,000 words of Crime fiction (3 novels and 5 short stories)

    1. katrinab Silver badge
      Flame

      Re: Remember 'The Last One'?

      In the Torygraph, written by a former El-Reg hack

      https://www.telegraph.co.uk/business/2024/02/12/dont-let-kids-waste-their-time-learning-to-code/

      I strongly disagree with his take.

      1. Anonymous Coward
        Boffin

        Re: Remember 'The Last One'?

        > I strongly disagree with his take.

        Me too. Even if you're not going to work in the industry. Getting a program to run causes one to engage in highly abstract thinking that would serve one well in other areas.

      2. Arthur the cat Silver badge

        Re: Remember 'The Last One'?

        I strongly disagree with his take.

        Ditto. I learnt to program 52 years ago. For most of those 52 years there's been someone, somewhere predicting the end of programming "because it will all be done by machines, you'll just tell them what you want done". I suspect this will still be happening long after I've been recycled.

        1. katrinab Silver badge

          Re: Remember 'The Last One'?

          Telling the computer what you want done •is• programming, whether you do it in Assembly, Cobol, C, Visual Basic, Javascript, Excel formulae, or whatever new shiny appears in the future.

          1. sabroni Silver badge
            Boffin

            Re: Telling the computer what you want done •is• programming,

            I respectfully disagree.

            Programming is explaining to other humans what you want the computer to do. If your code isn't easily parsable by a human then it doesn't matter if it works correctly, I can't work with it. If your code is easily parsable by a human and it doesn't work then I can try and fix it.

            Yes, your code tells the computer what to do, but if you're not programming in assembler then you're using a language that is primarily designed for humans to understand.
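
            A contrived little example of what I mean - both of these do exactly the same thing, but only one of them explains itself to the next human who has to touch it:

            ```c
            /* the version the next human has to decode first */
            int f(int *a, int n){int s=0,i;for(i=0;i<n;i++)s+=a[i];return n?s/n:0;}

            /* the version that states its intent */
            int mean_of(const int values[], int count)
            {
                int sum = 0;
                for (int i = 0; i < count; i++)
                    sum += values[i];
                return count ? sum / count : 0;   /* guard against an empty array */
            }
            ```

            The compiler is equally happy with either; the difference only matters to people.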

            1. anonanonanonanonanon

              Re: Telling the computer what you want done •is• programming,

              "If you don't start with assembly, don't start programming" is how I read that.

            2. anonymous boring coward Silver badge

              Re: Telling the computer what you want done •is• programming,

              "Programming is explaining to other humans what you want the computer to do."

              I call that the spec phase.

      3. Doctor Syntax Silver badge

        Re: Remember 'The Last One'?

        His main argument seems to be that we need fewer but better.

        Learning to think would be better than learning to code.

        1. richardcox13

          Re: Remember 'The Last One'?

          > Learning to think would be better than learning to code.

          Correct. You learn to think by doing things: analysing problems, solving those problems.... which, when it comes to computers, is programming. There are other areas (physics, engineering, ...) that do this as well.

          Or, for a completely different academic area: one does not study history solely to become a historian.

      4. elsergiovolador Silver badge

        Re: Remember 'The Last One'?

        Top Tories are invested in big consultancies, especially from India.

        They don't want British kids to compete for developer jobs, as they prefer to import immigrants that they can control and exploit much easier - making higher profits and enjoying more dividends and capital gains.

        They already have amended the legislation like IR35 to ensure there is no domestic competition growing that could challenge what they have built.

        1. Peter2 Silver badge

          Re: Remember 'The Last One'?

          Might I suggest that you reflect upon the saying "Never ascribe to malice what may be attributed to incompetence"?

          Expecting several hundred MPs to be separately investing in Indian consultancies and engaged in a conspiracy to destroy the jobs market is rather less probable than the alternative explanation that they have no idea what they are doing. I strongly doubt that any of the MPs understand Chapter 8 of the Income Tax (Earnings and Pensions) Act 2003 (aka IR35) or the amendments since then, because it's hardly easy bedtime reading.

          The only less credible suggestion would be that their replacements will in fact be any better.

          1. hittitezombie

            Re: Remember 'The Last One'?

            The MPs are played by the big companies via lobbying. It's not a conspiracy, it's the way big companies work and maintain their dominance.

      5. Someone Else Silver badge

        Re: Remember 'The Last One'?

        I strongly disagree with his take.

        I can see why, but I'm not ready to hop on that bandwagon.

        I think Andrew is making a distinction between "coders" (which I take to be someone who can cobble up a program more or less by rote; think copy-pasta practitioners), as opposed to "programmers" (those with more highly tuned abstract problem-solving skills, who also know how to manipulate a programming language or two; Andrew refers to "elite programmers" in his tome). I actually think that he is correct in this dichotomy. Those with low problem-solving skills who can slap together a few lines of spaghetti-looking javascript will (and should) be in lesser demand, while those who can design (and implement) more complex systems will still be sought after.

        So, if making your kids "learn to code" means plopping them wholesale into the former group, Andrew's premise (as I take it) is that you're doing no one any favors. If that is indeed his take, I agree.

        And if it is not, OK, I need to improve my reading comprehension skills...

        1. hittitezombie

          Re: Remember 'The Last One'?

          I don't have to write a bare-bone OS to make a successful 'problem solving contribution'. Most of the problems don't require this, and a bunch of copy-pasta can fix the problem and be good enough.

          Most of the code is written for a once-only requirement, with minimum re-usability, and that's quite OK. I keep writing 100-200 line Python programs that fix my problems, and which then get forgotten quickly.

    2. IvyKing Bronze badge

      Re: Remember 'The Last One'?

      I remember seeing it demonstrated at a computer show/fair. The main takeaway was that it took a long time to generate and compile the code.

      Then again, I remember attending the First West Coast Computer Faire, which was a mind blowing experience.

  3. katrinab Silver badge
    Meh

    "Benchmark vendors now include tasks like video compression in their tests, even though most computer users never do that"

    If you do video calls on Zoom/Teams/Facetime/Google Meet etc, you do video compression.

    If you upload videos to TikTok/Instagram/etc, you do video compression.

    Obviously that isn't everyone. Maybe it isn't "most" people, but it is certainly a lot more than just professionals in the film/TV industry.

    1. Liam Proven (Written by Reg staff) Silver badge

      [Author here]

      > If you do video calls on Zoom/Teams/Facetime/Google Meet etc, you do video compression.

      A very valid point. However, I wonder what does it, and how? Is this a bare-metal parallel implementation, or is it some tiny bit of code buried in the webcam that outputs some compressed data stream that gets passed over an uplink to some remote server that does something a bit harder to it?

      My point being that this probably has little to do with turning a HD movie file into an MP4 or whatever, which is what benchmarks measure.

      > If you upload videos to TikTok/Instagram/etc, you do video compression.

      Never in my life, in fact, TBH.

      But I think similar applies here, too.

      1. _andrew

        If your zoom/teams/whatever client is doing background replacement or blurring, then it isn't the web-cam doing the video compression (well, it might be, but then it had to be decoded into frames, processed and then re-encoded to be sent). What's worse, most of the video systems are now using "deep learning" video filters to do the edge detection to determine where the subject and background meet, and that's probably leaning on some of that "AI" hardware that you mentioned, as will be the (spectacularly effective, these days) echo and background noise suppression on the audio signals (which have their own compressions and decompressions going on too, of course).

        I just had to retire the older of my 5K iMac systems, despite the screen still being lovely and it being perfectly capable for most of the office applications, because it absolutely could not keep up with the grinding required to do zoom calls.

        It might not have made it into the likes of the geekbench-style benchmarking suites, but time-in-video-conference is a key performance measure of battery consumption reports for modern laptop systems.

  4. Mike 137 Silver badge

    “Everybody and their dog is coding”

    There lies the problem -- neither anybody nor their dog has been taught to program for far too long, just to 'code'. Coding is the most trivial part of the art and science of programming (it's actually a mix of both). It's just one stage of a process that starts with understanding a problem to be solved, proceeds through definition of mechanisms for solving it ('algorithms'), translation of those algorithms into specification of procedures (yes, at the lowest level even objects contain procedures), further translation into the syntax and semantics of a specific language, rendering of an executable, and, finally, testing and acceptance. Concentrating exclusively on that one, essentially mechanical, stage inevitably produces poor results, as it misses out the critical factor of whether the right problem is being solved in the most appropriate way. In any case, coding is the one bit that AI might eventually fully automate, but the other stages of the development cycle require human understanding to accomplish properly. They'll always be needed, so they're the ones we should have been teaching all this time.

    1. Anonymous Coward
      Anonymous Coward

      Re: “Everybody and their dog is coding”

      I never write code until I know exactly, in my head, what I am going to write. This has led to funny situations. I would browse the internet on relevant topics, or look at cats, and regulate the focus I have on the problem I am solving. Junior developers would complain to the PM that "he is just browsing websites and not coding. I've almost never seen this dude have his IDE open."

      The PM would take me for "the talk": "People say you are slacking and it makes the team uncomfortable." So I would ask whether I deliver on time and whether there has ever been a complaint about my work.

      The junior developers then got reminded to focus on their own work and not waste time snooping around.

      Since then I would just go to the park, or put my headphones on and take a bus ride somewhere random, and come back once I knew exactly what to code (if there were no scheduled meetings in the meantime).

      WFH has been brilliant for this, when you need deep focus and to be in control of distractions.

      1. Erik Beall

        Re: “Everybody and their dog is coding”

        I tell Junior devs my most important coding is done on paper. I've found walking them thru my thinking is sometimes helpful but more important is to make them walk me through their thinking, and if they're a known copy-paster, to repeatedly point out how much better it is when they're done (thus far, it's always much better code).

      2. PhilCoder

        Re: “Everybody and their dog is coding”

        I absolutely agree. Typing code into your computer should be a minor and final stage in software development. Yes it makes managers nervous, which can be a career problem...

    2. LybsterRoy Silver badge

      Re: “Everybody and their dog is coding”

      I've upvoted you but I think the word "program" in your first sentence should be replaced by "think"

    3. ITMA Silver badge

      Re: “Everybody and their dog is coding”

      Words of wisdom there.

      For a long time I've always cringed at expressions like "learn to code", "we need more coders". NO!

      What we need are proper programmers and software developers.

      Coding, to be a little pedantic, is that one single stage of translating the well thought out and designed SOLUTION into the language(s) to feed into the computer via a compiler (or interpreter). That is it. That is ALL it is. One step in a whole chain of steps.

  5. Neil Barnes Silver badge
    Holmes

    Spot on Liam

    Too many layers of cruft... but it's ok, because apparently AI will relieve us of all the problems.

    1. Timop

      Re: Spot on Liam

      The problem will disappear by itself as soon as AI can be harnessed to grow the codebase exponentially without human supervision.

      1. HuBo Silver badge
        Windows

        Re: Spot on Liam

        Easy enough though, just fold the code over back onto itself, 10 times, and whammo, a delicious software mille-feuille! (well, 2¹⁰ really)

    2. Someone Else Silver badge

      Re: Spot on Liam

      From the article:

      ("AI" is another big lie, but one I won't go into here.)

      Can't wait for when you go into this whole-hog. Will be staying tuned (as they say on this side of the Pond...).

  6. Roo
    Windows

    I spend quite a bit of time throwing stuff away that isn't needed - but has been "accumulated" in the codebases I work on. Throwing out chunks of code and dependencies is the easy bit, working out which tests need to be binned/changed is the hard bit. The choice of development tools & frameworks plays an important part too - for example a Java or Node project will pull in a zillion dependencies, most of which you don't *actually* need, and those extra dependencies will introduce unnecessary attack surfaces and wasted time in a) understanding them and b) maintaining/updating them.

    Typically I find a lot of low-hanging fruit that can go for the chop without too much effort - but it does require a fair amount of courage and a good working relationship with your users (ie: so you can work out what their requirements actually are right now and work with them to validate the cleaned up code). Just to be clear here; I am a glass half-full person when it comes to nuking junk code - it makes me happy and it's relatively low effort when compared to adding another bag of snakes to the nest of dependency vipers. ;)

    1. Joe W Silver badge

      users?

      It requires a good relationship with some colleagues, who are more than 10 years my junior and are more of a COF than I am. They also seem to be wed to every single line of code they ever wrote, and treat the stuff some ... lovemaking lovechild of a cowboy mad (spotted that typo, refuse to fix it) up before I joined the company as gospel.

      Sheesh, kids these days. But then the grad I just hired seems to be good material, (finally!) telling the boss (me) when he messes up code / gui / general stuff. Promising. The new one also seems to cope well with criticism and is quite open to new ideas.

      1. fg_swe Silver badge

        Re: users?

        Just make sure not to turn it into Maoist self-accusation.

        Then of course each and every good engineer will be open to positive criticism.

        Great men know that great work is always a compromise between different goals. One goal is to deliver in finite time and by finite means.

        If you want to do something truly new+useful, keep the rest as simple as possible. That does not mean you are lazy. Rather, you are focused.

  7. Martin Gregorie

    Spot on.

    The OP has pretty much got it in one. It would appear that pretty much anyone who can read and write legible text in their native language that explains clearly and correctly how to do everyday tasks and/or simple calculations can be taught to write computer programs that compile without errors. Many of these folks can also learn how to write code that not only clean compiles but also does what they intended it to do provided that the task is fairly straight forward, such as account-keeping for a household or small business.

    The same people would also get satisfactory results if they used 3GL systems (or possibly spreadsheets) to automate similar household or small business accounting tasks.

    However, anything much beyond this, such as designing and implementing apparently similar systems for handling bigger financial structures (banks, stock exchanges or government departments) or for different tasks such as engineering design, science projects or running any business large enough to require a formally structured management board, will probably be doomed to failure simply because these tasks require mathematical, logical and/or managerial skills which many people simply do not possess or are unwilling to learn.

    Almost all our problems arise when people who do not possess these skills or are unwilling to learn them are put, or manage to talk themselves into, positions which require these skills.

    FWIW I've just read "The Rocket and the Reich", to find out more about the A-4 and von Braun. It's quite good about those, but I hadn't realized that it also has a lot to say about just how badly the 3rd Reich was managed and, by implication, what an abysmal manager Hitler was. It also has quite a bit to say about just how much time his immediate subordinates and their hangers-on wasted on playing dominance games among themselves. I've never before seen such a clear description of the general chaos this created: just as well for the rest of the world, I suppose!

    1. Gene Cash Silver badge

      Re: Spot on.

      Freeman Dyson once discussed how the RAF tried hard to locate the admin HQ for German aircraft production, found it, bombed it into oblivion... and were horrified when German aircraft production rates went *up*

      Edit: don't forget the ME-262, the world's first real fighter jet, perfect for bomber interception and air superiority... but Hitler insisted it be used as a bomber, even though it had almost no capacity for bomb load.

    2. fg_swe Silver badge

      Re: Spot on.

      What do you expect of a low level soldier ?

      Skills of a Franco ?

    3. hittitezombie

      Re: Spot on.

      One estimate for the cost of the A4 programme compared it to the Manhattan project, and all they got out of it was an untargetable mess which wasn't even good at delivering (conventional) bombs. For the cost of a rocket, they could have built a fighter plane.

      Instead of just attempting to bomb Belgium (1,610 rockets aimed at Antwerp alone), they could have made a difference defending the country and slowing down the bombers, but hey, von Braun got to fire some rockets!

    4. Altered Ego

      Re: Spot on.

      Curiously Hitler was an abysmal manager *on purpose*. It is a well known ploy of people at the top of the power tree to assign overlapping responsibilities to their underlings so that they spend a great deal of time on internal turf wars to gain power over their peers - which distracts them from forming alliances to depose the obergruppenfuhrer.

      Truly great leaders don't feel compelled to do this.

  8. karlkarl Silver badge

    One of my personal hobbies is gutting older software projects, cleaning them up and simplifying them, getting rid of the bloat.

    I have a number of carefully maintained ports of, for example:

    - Blender (2.49)

    - Half-Life

    - GtkRadiant

    - Abiword / Gnumeric

    - GTK+2 (sits on SDL2)

    - CDE

    - DOSBox (direct framebuffer)

    Often it is ripping out random dependencies for trivial stuff, replacing GUI libraries and swapping terrible build systems.

    Open-source tends to grow and grow. Sometimes it is good to take a step back, have a think about the role of the core software and ultimately to make the codebase a joy to maintain.

    I get the temptation though. When a project is public, you are always stretched between keeping the project "correct" and focused vs trying to appease and make everyone happy.

    1. Liam Proven (Written by Reg staff) Silver badge

      [Author here]

      > One of my personal hobbies is gutting older software projects, cleaning them up and simplifying them, getting rid of the bloat.

      Do you publish or release these anywhere?

      Do you offer the patches back upstream?

      Do you remove or restrict platform support?

      1. karlkarl Silver badge

        Some of them I publish and release, but so many people are only interested in "newer" stuff that it often isn't worth the time.

        I rarely offer patches back upstream, mostly because they wouldn't accept them: gutting build systems and ripping out features is not the direction these projects go in. Hence the article on "bloat", I suppose ;)

        Some of my work to replace the Gtk in GtkRadiant with FLTK got upstreamed in FLTK itself though (Fl_Flex): https://github.com/fltk/fltk/blob/master/FL/Fl_Flex.H

        I only really care about Windows 7 and OpenBSD platforms. Linux tends to build due to the simplification of the software but I only get the urge to give it a shot once in a blue moon. Part of my work on Half-Life was to *remove* Android support because I find it a mess and was damaging the codebase.

  9. Bitsminer Silver badge

    10ms

    because computers aren't getting much quicker, as it gets bigger, software is getting slower.

    The sum of hardware and software on a PC is sufficiently fast (or slow) to support typing at speed. Human typing that is.

    All they sold us was 10 millisecond response time, no matter how many gigahertz or gigabytes or megaSLOC were in the box.

    It's been the same since 1981.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: 10ms

      [Author here]

      > All they sold us was 10 millisecond response time

      "I think you'll find it's a little bit more complicated than that."

      (™ Dr Ben Goldacre)

      In real life, response times have been getting steadily worse for decades.

      https://danluu.com/input-lag/

      1. Greybearded old scrote
        FAIL

        Re: 10ms

        I have a friend who frequently complains that the computer I gave her is slow. I point out that it's fast enough to play HD videos. Therefore if her web mail (OK, let's name the guilty party, Yahoo) can't keep up with her typing it's them who are to blame.

        But being right doesn't sort the problem, and the newer lappie that I'm about to give her will only be somewhat faster at running the manky web mail.

        1. fg_swe Silver badge

          Re: 10ms

          Did you ever bother to install Thunderbird for her ?

          1. Greybearded old scrote

            Re: 10ms

            I have, but atm only using it to keep a backup. There's a lot of more urgent stuff to teach her.

      2. Bitsminer Silver badge

        Re: 10ms

        s/10ms/100ms/ as I wasn't really thinking that hard about an actual number.

        Dan Luu's data doesn't totally refute me but it is interesting. The machines above 100ms or so are generally pretty rare. Most are less than 100ms with a range of ages which supports my (revised) point.

        It would be interesting to weight his table by numbers of units sold. Do slow-response machines have poor sales? And that Powerspec g machine....

        1. hittitezombie

          Re: 10ms

          Once you're past 200ms, you start experiencing 'slowness in response'. That's why you want the response time as close to 100ms as possible.

  10. martinusher Silver badge

    You're paid to code, dammit, so get writing!

    Most programmers have very little control over what they're doing -- they work on a corporate code base with a specified corporate toolset in a corporate environment and they're paid to write code. Under the circumstances it's not at all surprising that code keeps on growing, and the ever-expanding (and invariably irrelevant) feature creep is just the justification for their work. Since the alternative is unemployment I don't see any likelihood of change -- you either go productively with the program or you join the dole queue.

    I've always maintained that a programmer should be looking for ways to not write code, but it's definitely swimming against the tide. I learned this early on in my career when I had to manage a small group of ten programmers or so. Of that ten, three or four of us were super coders, leaving the others to do all sorts of peripheral jobs like digesting documentation, investigating problems (real, imagined or potential) and generally making it possible for the coders to actually code. My management didn't see it that way, of course. Anyone not actively churning out 'x' lines of code per day was essentially redundant, so when the inevitable economies happened the ax fell, resulting in the collapse of the entire operation. (I'd moved on by then, thankfully.) What I learned from this is that programming is not the be-all and end-all, and a life spent in endless edit / compile / debug loops is just wasting time and making noise (so from this I also got a lifelong dislike of IDEs). Anyway, you can't fight it and dammit, I'm retired now so someone else can fight those battles.

    1. matjaggard

      Re: You're paid to code, dammit, so get writing!

      I've had to fight the system occasionally to remove unused code. My worst was with a bank where every code change required a change request from an internal or external customer. Thankfully I found a helpful business analyst to sign off on "as a developer I want less pointless code so I can focus on the important stuff" type stories.

  11. david 12 Silver badge

    "Late in Wirth's career, he became a passionate advocate of small software."

    That's true, but a bit misleading. Wirth's early-career Algol proposal was for small software, against the competing proposal for big software.

    And, like his Oberon example, his idea for Algol/Pascal/Modula was that the "small software" should include everything that was necessary (and nothing that was not). That's why Oberon is a compiler including an OS, or an OS including a compiler.

    In Pascal, like Oberon, his language explicitly included "everything that was necessary and nothing that was not", as he explained to anyone who listened. It wasn't a new idea that came late in his career.

    FWIW, c and unix represented the same idea, implemented as a kludge rather than as a design. unix was the necessary part of Multics: c was "a small language of 32 keywords". Wirth's Pascal was a complete language, including I/O. Ritchie's c was part of the "unix programming environment", which implemented I/O in the unix library (still part of the definition of unix, although the definition is now deferred to the c standard). They agreed about smallness: they disagreed about implementation.
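
    To make the I/O point concrete (a hello-world sketch, nothing more):

    ```c
    /* In C the language itself knows nothing about I/O: printf comes from
     * the library, pulled in via a header. */
    #include <stdio.h>

    int main(void)
    {
        printf("hello from the library\n");   /* libc, not the language */
        return 0;
    }

    /* In Wirth's Pascal the equivalent facility is part of the language
     * definition itself:
     *
     *   program Hello;
     *   begin
     *     writeln('hello from the language')
     *   end.
     */
    ```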

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: "Late in Wirth's career, he became a passionate advocate of small software."

      [Author here]

      > That's true, but a bit misleading.

      Fair point, and very well expressed.

    2. MarkMLl

      Re: "Late in Wirth's career, he became a passionate advocate of small software."

      "everything that was necessary and nothing that was not" is fine, /provided/ that the underlying platform (i.e. language, OS or whatever) is extensible.

      For a language to not have a decent macro-expanding frontend is a major omission (and for the language's custodians to point to some favoured facility that can do many, but not all, of the same tricks is inexcusable aggression).

      For an OS not to be extensible by device drivers is a major omission (and the same applies to loadable network protocol plugins etc.).

      The devil take any developer that assumes that he understands all possible use cases to which his product may be applied, and eschews conventional wisdom and expectations.

      1. fg_swe Silver badge

        m4 macro processor

        Wirth was right not to add a macro processor.

        Code generation by macro processor or otherwise should be left to external tools.

        That would also be the unix spirit.

        1. MarkMLl

          Re: m4 macro processor

          However, the real strength of a macro stage is when it can determine the type of its parameters: /this/ is an lvalue to which a 64-bit real can be assigned, /that/ is an expression that evaluates to a real, and so on.

          Lack of a decent frontend which can handle that sort of thing is one of Pascal's many flaws. However when combined with a decent RAD IDE (i.e. Delphi or Lazarus) it still has quite a lot going for it: but I'm no longer sure that I'd recommend it to somebody seeking a language for in-depth study.
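
          (C itself eventually grew a limited taste of this in C11's _Generic, which picks an implementation from the argument's type at compile time - it can tell a double from a float, though not an lvalue from an expression, so it's only a partial answer. A small sketch:)

          ```c
          #include <stdio.h>
          #include <stdlib.h>   /* abs */
          #include <math.h>     /* fabs, fabsf */

          /* The "macro" dispatches on the type of its parameter at compile time. */
          #define my_abs(x) _Generic((x), \
                  double:  fabs,          \
                  float:   fabsf,         \
                  default: abs)(x)

          int main(void)
          {
              printf("%d %f\n", my_abs(-3), my_abs(-3.5));   /* selects abs, then fabs */
              return 0;
          }
          ```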

        2. _andrew

          Re: m4 macro processor

          Have to call "nonsense" on that one. A language without a macro level is a language fixed in time that can't learn new idioms or constructions. It's one of the things that the Lisp family totally got right: a macro mechanism that exists as a means to extend both the language syntax and the compiler. Rust has one too. I'd be surprised if any more languages were developed that did not have a way to write code that operated "at compile time".

    3. martinusher Silver badge

      Re: "Late in Wirth's career, he became a passionate advocate of small software."

      The power of 'C' is that it's not a complete language with I/O and the like built in. You just can't predict the use cases for every piece of code and it's a mistake to assume that every program is going to need console or even graphical I/O to a human user. Likewise you can't predict the range of devices you'll connect to the processor(s). A filesystem, for example, is either not needed, minimal or a full blown system. The application may need a network, and if it does it might not be a network that many programmers would recognize (for example, EtherCAT). The possibilities are endless. So you make a core language and augment it with libraries. Simple.

      It's the same with operating environments. They encompass everything from a simple background loop with interrupts through a minimal scheduler to a full blown multiuser operating environment.

      FWIW -- Pascal was designed as a teaching language. Its main claim to fame was that it only needed a single-pass compiler, so it was ideal for a situation where a bunch of students were all trying to work on the same (underpowered by modern standards) system. When I first came across it the compiler was running on the departmental mainframe; the front end processors that handled the terminal I/O were more than adequate to manage the workload. (....and then came PCs, TurboPascal and so on.......but we're talking 1970s here, not 1980s, and ten years is a long time in Computerland)

      1. fg_swe Silver badge

        Re: "Late in Wirth's career, he became a passionate advocate of small software."

        Partially right. A language should not contain I/O facilities.

        Wrong in that Pascal was just educational. Major systems like HP MPE and Apple Lisa were done in Pascal variants.

        MPE grew into mainframe size with 16 CPUs and 10,000 character terminals/users per machine.

        1. Ian Johnston Silver badge

          Re: "Late in Wirth's career, he became a passionate advocate of small software."

          Wrong in that Pascal was just educational.

          PP said that Pascal was designed as a teaching language, and I think that's true. It found many other uses, of course, but that's not why it was created.

    4. Roo
      Windows

      Re: "Late in Wirth's career, he became a passionate advocate of small software."

      "unix was the necessary part of multix:" not quite *violently* disagreeing with you on this, but necessary really is in the eye of the beholder in this case. The MULTICS team had very different objectives to the AT&T UNIX crew, they weren't adding unnecessary stuff from their point of view. MULTICS looks pretty lean to me (see multicians.org), but not lean enough to run on an early PDP-11... I do like your characterization of C+UNIX as being "implemented as a kludge rather than as a design". :)

  12. user555

    Optane is basically ROM

    Because Optane (Flash/ReRAM/whatever) didn't offer unlimited writes it couldn't be classed as RAM. That means it's a form of ROM. Execute-in-place (XIP), like games cartridges.

    Similar for file systems. They aren't XIP but rather load into temporary working RAM.

    MRAM is probably the only non-volatile memory tech that can be classed as a RAM. But it can't compete with Flash on density. At least not yet.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Optane is basically ROM

      [Author here]

      I have to dispute this.

      > Because Optane

      ... OK...

      > (Flash/ReRAM/whatever)

      Hang on, no. They're all different things. It's absurd to lump them together like this.

      > wasn't unlimited writes it couldn't be classed as RAM.

      Sort of fair, but overall, no: it's memory and it's random access. It's RAM.

      > That means it's a form of ROM.

      Oh, no _way_. That is _ludicrous_. Absolutely and categorically not.

      1. user555

        Re: Optane is basically ROM

        Everything is random access to varying degrees. Fixating on that one word is missing the point of what RAM really does.

        What I was getting at is files are a placeholder for ROM too. It's a means to hold a non-volatile copy for indefinite time. Number of writes (endurance) is a bonus. And from there Optane fits the same definition.

        MRAM is the only candidate that can bridge both worlds because it is also unlimited writes.

        1. Roo
          Windows

          Re: Optane is basically ROM

          I can see how you would arrive at your position if you've only ever known EEPROM tech ... The old school *ROM technology was *very* different and should be occupying its own category though:

          ROM : Read Only Memory. Back in the day this might be implemented as a hardwired circuit of diodes, definitively not modifiable (by code) after it left the assembly line.

          PROM : Programmable Read Only Memory... Can be programmed (once) - can be implemented as a circuit of fuses.

          EPROM : Erasable Programmable Read Only Memory... Typically erased via UV light (those chips with windows covered by stickers) - really don't know how the programmable bit was done for these...

          I can see why you blur the lines with EEPROM, but it is fundamentally different from RAM in terms of how it is addressed and how it is overwritten (aka Selectively Erased then Written).

          Just to add to the confusion : Core memory is non-volatile RAM - which also supports unlimited writes... Folks could power cycle a machine and skip the booting as the OS was already in memory... ;)

          1. user555

            Re: Optane is basically ROM

            Yes, Core memory is a RAM. So that's now four RAMs in total - in chronological order: Core, SRAM, DRAM and MRAM.

            PS: SRAM is a space-optimised array of flip-flops that naturally formed out of integrated circuits.

            1. Roo
              Windows

              Re: Optane is basically ROM

              SRAM didn't look like it "naturally formed out of ICs" in my day - it looked like it was generated by a script hooked into the CAD system, validated by the DRC engine, masks were then made from the design, and those masks used to fabricate some dies on 200mm wafers... About as natural as Trump's tan.

              1. user555

                Re: Optane is basically ROM

                Lol. I mean since a flip-flop is integral to compute designs, as soon as the IC came along it's natural to pack them into arrays for blocks of RAM.

                My point there is SRAM isn't a special construction, it's built the same way as the logic circuits around it.

                1. Roo
                  Windows

                  Re: Optane is basically ROM

                  I get your point - but it glosses over a lot of really important implementation detail.

                  Back in the day (a long long time ago) when I had a direct insight into how memory & microprocessors were designed & fabbed - the memory chips were fabbed on *different* fab processes from stuff like logic. That was the case in the 70s, 80s, 90s and I haven't seen anything to say that it's any different today. Sure there are some instances where the domains overlap - for example IBM offer "eDRAM" - often used to provide a big block of memory on a processor die (note: stock DRAM is fabbed using specialized processes).

                  Semiconductor fabrication is a fascinating topic, the world would be a better place with more folks who understand it... it's very definitely not Lego when you get down to the metal. :)

                  1. user555

                    Re: Optane is basically ROM

                    Um, I made the point about SRAM because it's the sole exception by being made of the same transistors as the logic structures around it. That's why CPU caches are built with SRAM. All other RAM types, as you imply, are specially crafted cell structures.

                    1. user555

                      Re: Optane is basically ROM

                      On the ROM side, "Mask ROM" would be the equivalent exception. Both SRAM and Mask ROM are integral to modern CPU fabrication.

  13. Grunchy Silver badge

    Kolibri

    The more I check it out, the more I like the Kolibri project. Well, the entire operating system fits on a single 1.44MB diskette. Not really a lot of bloat there.

    Undoubtedly it suffers security oversights, but they're not likely to be exploited, for the simple reason that nobody uses it!

    (I like machine language operating systems, you know Commodore machines worked from machine language operating systems.)

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Kolibri

      [Author here]

      > Well, the entire operating system fits on a single 1.44MB diskette.

      It's fascinating but there are good reasons HLLs were invented.

      > Commodore machines worked from machine language operating systems

      Basically all late-1970s early-1980s 8-bits did.

      But CP/M was implemented in PL/M, and it was the one 8-bit OS that had substantial industry impact, and saw 3 major releases. I think that tells us an important lesson that many vendors missed.

  14. Denarius Silver badge
    Flame

    UI standardised ?

    I really have to take issue with that. Usable interface on Office up to Office 2003. Now all M$ produce is software with a thousand ways to meddle with fonts and not much else, requiring 20/40 vision. As for Android: spit, spit. No Android device is as intuitive as an old Nokia was. My phones in the last 10 years have become a source of intense irritation as standard activities like deleting logs of calls and SMS get harder to find. If there are UI standards, the old saw applies: many standards to choose from, most of them wrong. I can see why Apple devices are popular.

    As for the main intent of the article, spot on. I have an early Pentium with Win3.1 that boots faster than lightweight Linices on multicore machines, and early Office where documents and spreadsheets get produced very quickly. Can't rely on such old kit, but it is not frustrating while it works. Even the grandkids' Win98 box with DOS 16-bit games runs well with a fraction of the resources that the i7 main PC uses to do similar basic office activities. In a past life I noted that applications developed in house in some Federal government departments hung around for decades because they "just worked". Their modern shiny-shiny attempted replacements built by contractors never "just worked" and were incomplete after millions were spent. I wonder what would happen if some big organisation took on six COBOL programmers of the old school who did the full life cycle design and wrote code to do the same as, say, SAP payroll. I suspect the COBOL code would be a fraction of the size, resources and cost to run. Other languages might do as well, so long as they enforced strong data typing.

    As for Oberon, maybe adapt it to a Pi or similar and see where that leads as young learners develop and move into the workforce. It would be a 40-year project.

    1. MarkMLl

      Re: UI standardised ?

      Agreed. A major issue is that, now that programs don't - as a rule - have menus, the UI has lost the ability to "pop up" a hint telling the user what he's about to do.

    2. fg_swe Silver badge

      Re: UI standardised ?

      Try OpenOffice with RPI. Lightning !

    3. Ian Johnston Silver badge

      Re: UI standardised ?

      I have an early Pentium with Win3.1 that boots faster than lightweight Linices on multicore machines

      I have an old IBM Thinkpad X32 which used to fly with Lubuntu. Now that Ubuntu has gone 64-bit only I'm running AntiX on it ... and it's a complete dog.

  15. Bebu
    Windows

    Pentium IV

    I remember having 30 P4 systems foisted on us (latest and greatest) with their proprietary RAM; they ran hot, and even at the time the older (& cheaper) Pentium III (Coppermine?) systems outperformed the P4s.

    If I recall Hennessy and Patterson correctly, the P4's lasting claim to fame is probably that it had the deepest pipeline (20 stages?) of any x86 CPU (or any CPU?).

    Wirth's plea was also mirrored in the demands for smaller or less complex processor architectures (risc) which also had advocates from Californian institutions.

    The reasoning behind risc was that compilers couldn't, without becoming insanely complicated and huge, optimally use the clever cisc instructions, and that compilers could translate higher level code into the much simpler risc instructions and effectively apply far more types of optimisation, with the actual compilers becoming simpler and smaller.

    Still, the Ceres workstations Oberon ran on were very ciscy (NS32k), but then Sun sparc boxes only shipped that year ('87).

    While the gnu compiler suite is huge for all sorts of reasons (good and bad) there are decent tiny C compilers out there.

    There must be a software critical mass where every change (fix) must create an order of magnitude more defects (meltdown.)

    1. Roo
      Windows

      Re: Pentium IV

      "The reasoning behind risc was that compilers couldn't, without becoming insanely complicated and huge, optimally use the clever cisc instructions and that compilers could translate higher level code in the much simpler risc instructions and effectively apply far more types optimisations with the actual compilers becoming simpler and smaller."

      It's often presented that way - but if you follow John Cocke's work that explanation seems backwards and missing the most important bit...

      The key driver behind RISC was to fit the processor architecture to what could be efficiently implemented in hardware. Moving the complexity into software to fill in the gaps was a necessary side-effect - I feel fairly safe in asserting that compilers really did not get any simpler. :)

      With 20/20 hindsight you can see that as microchip technology advanced the hardware (RISC & CISC) got a lot more complex and the compilers also got a lot more complex too. The NVAX/NVAX+ -> Alpha 21064 -> 21164 -> 21264 -> 21364 illustrate the progression of hardware quite nicely over a short space of time (DEC introduced the GEM compiler with the Alpha line - see Digital Technical Journal Vol. 4).

      Intel's Itanium was an interesting throwback to the "simple hardware"/"complex compiler" idea - they made exactly the wrong trade-offs resulting in relatively dumb + inefficient hardware tightly coupled to its compilers - at a time when competitors were fielding smart + efficient hardware that could run any binary produced by any old compiler quickly & efficiently via the magic of Out-Of-Order execution.

    2. Nick Ryan

      Re: Pentium IV

      The irony of the current CISC x64 chips is that most of them internally decode the CISC instructions into simpler, RISC-like micro-ops.

  16. ldo

    A Few Ironies

    You point to a PDF version of Wirth’s article on “lean” software. Could a PDF renderer run on any OS he was responsible for, and render his writings in the high-quality, typeset fonts we are all accustomed to seeing online? Somehow I don’t think so.

    As for those 1.3 billion lines of source code in Debian ... let’s say Microsoft Windows is a similar size. How many people are there in the Debian project maintaining that code? Only about one-hundredth the programmer resources available to Microsoft. Which platform is more powerful, capable, versatile, functional? Debian, easily, conversely, by a factor of 100.

    With Debian, we don’t need to go down to the line-by-line level to know all that code is there for a purpose, because it’s all divided neatly up into packages. Leave out a package, and you don’t get the functionality offered by that package. It’s that simple.

    As for Windows? There are things in there that even Microsoft has no clear idea about. That’s why a simple base Windows install needs half a dozen reboots.

    1. abend0c4 Silver badge

      Re: A Few Ironies

      Could a PDF renderer run on any OS he was responsible for

      Given that Oberon has a GUI and a glyph-based text-rendering model with arbitrary fonts, it seems quite plausible. What is the basis of your scepticism, other than ignorance?

      1. MarkMLl

        Re: A Few Ironies

        Yes, but there /is/ a fundamental problem here: if you revert to a simpler OS then you're going to have to throw stuff away. And nobody can agree what's to go.

      2. ldo

        Re: A Few Ironies

        Seems like somebody didn’t notice the time lag, of about a decade, between the appearance of those initial on-screen GUIs, and good-quality final-document-view formats (not just PDF, but also proprietary precursors like Farallon’s DiskPaper and Apple’s DocViewer and QuickDraw GX PDD) that could be used onscreen and also printed on high-quality printers. It was because those initial GUIs were built on quite different principles from high-quality (and higher-overhead) rendering engines like PostScript. It was only when the hardware was fast enough to run the latter at interactive speeds that the two usages of text rendering could be unified.

        1. _andrew

          Re: A Few Ironies

          Memory size was probably the bigger barrier than complexity. After all, the Laser Writer and PostScript were more-or-less contemporaneous with the early Macintosh, and they both just had 68000 processors in them. Once you've rendered your postscript fonts to a bitmap cache, it's all just blitting, just like your single-size bitmap fonts. (Complexity creeps in if you want sub-pixel alignment and anti-aliasing, but neither of those were a thing in those early 1-bit pixmap graphics systems.)

          Sun eventually had Display Postscript, but you're right that that was quite a bit later, and used quite a bit more resources, than the likes of Oberon. Postscript itself is not so big or terrible though.

    2. Ian Johnston Silver badge

      Re: A Few Ironies

      Could a PDF renderer run on any OS he was responsible for, and render his writings in the high-quality, typeset fonts we are all accustomed to seeing online?

      I created - and read - my first thesis using LaTeX on an Atari Mega ST2 with no hard disk. A LaTeX install is now 6GB.

    3. MikeHReg

      Re: A Few Ironies

      David Parnas published a paper called "Why Software Jewels Are Rare" commenting on Wirth's paper. He had this to say: 'I sent an e-mail message to Wirth asking if he had an English version my students could read. He replied, "I can either send you the original in English on paper (edited in readable form), or I can e-mail it as an ASCII text. Let me know what you prefer." In this decade, most of us use computer networks that let us exchange papers in better ways. I can send a LaTex, nroff, or PostScript version of a paper almost anywhere and the recipient will be able to print it. Who would want to use a system that would not allow us to send or receive papers that were prepared using "standard" tools such as these? Providing these capabilities requires either reimplementing the processors for those notations or providing the standard interfaces needed by existing processors. Some may view this as "fat," but others will recognize it as "muscle."'

      There is an unavoidable tension between having lean software and providing necessary capabilities, and it requires fine judgement to know exactly what to include.

      These papers were written in 1995 and 1996, before the Internet had really taken off and before open source had become the force it is now. Both of these have now become crucial factors in the software bloat we see today.

      1. tsprad

        Re: A Few Ironies

        "send or receive papers that were prepared using "standard" tools "

        "e-mail it as an ASCII text"

        ASCII and email are indeed important, standard tools. I don't recall ever having trouble reading ASCII text, but just yesterday Firefox rendered a PDF completely unreadable for me.

        I've been reading ASCII text since bits were forged from rings of iron and bytes were hand sewn like baseballs by women in Puerto Rico.

  17. Roland6 Silver badge

    The wheel turns full circle…

    We’ve been here before: the issues of large codebases and systems and their maintenance aren’t new. They were a big subject in the ’70s and ’80s, and are what drove formal methods etc. and the creation of ITIL. So there is a lot of knowledge out there, although accessing it may involve visiting a library and reading paper-based articles and books.

    Perhaps with the size of codebases now and the increasing spend on their maintenance, the industry will start to get more professional and adopt good engineering practices.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: The wheel turns full circle…

      [Author here]

      > We’ve been here before

      Indeed, and the near-universal belief in the false boogeyman of backwards compatibility and the loss thereof was one of the primary subjects of one of my previous FOSDEM talks:

      https://archive.fosdem.org/2020/schedule/event/generation_gaps/

  18. Filippo Silver badge

    >This is the existential crisis facing the software industry today, and it has no good answers. But there may be some out there, which is what we will look at next.

    I'm looking forward to it. I agree wholeheartedly with the issues described, but so far I can't see any way out.

    I find myself guilty of many of the sins described, simply because if I always followed the very best practices in every single line of code, I would quickly be undercut by competitors who don't, who get to a deliverable much faster as a result, and who still produce code that works, even if it won't be maintainable 20 years from now. I follow decent practices most of the time, but every time you skip a unit test, hardwire something, or call directly into something that ought to be a couple of layers removed - even if only once in a while - the cruft piles up, and it never goes away.

  19. Anonymous Coward
    Anonymous Coward

    Well......

    Quote: "...It is too big to usefully change, or optimise. All we can do is nibble around the edges..."

    No mention of "The Agile Manifesto"..................why not?

  20. Anonymous Coward
    Anonymous Coward

    Multiple Cores Is Not The Same As Multiple Machines....

    ......because the "multiple cores" share the same memory.

    Discuss!

    1. ldo

      Re: Multiple Cores Is Not The Same As Multiple Machines....

      All I have to say is: “NUMA NUMA NUMA ...”

  21. Greybearded old scrote
    Thumb Up

    Yes, and

    Joe Armstrong (of Erlang fame) gave an excellent talk on a similar theme.

  22. MarkMLl

    Simplify and add lightness...

    I'm all for eliminating cruft and keeping systems - hardware, OS and apps - as small as possible.

    The problem is, however, that if you simplify a computer (hardware and OS) too far, then it will no longer be able to support the mix of functionality that almost everybody insists is essential.

    You can obviously get round that to some extent by having more physically separated computers. But as you do that, the number of communications links goes up roughly with the square of the number of boxes (see the worked figure below), and with it the number of opportunities for race conditions and inconsistencies.

    As should be obvious, in retrospect, to absolutely anybody who has had anything to do with a system such as the UK Post Office's "Horizon": which remains in service with a very worried population of captive users.
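
    To put a number on that combinatorial point (a back-of-the-envelope figure of mine, not from the comment above): with n machines, the count of possible point-to-point links is

      \[
        \text{links}(n) = \binom{n}{2} = \frac{n(n-1)}{2},
        \qquad \text{links}(4) = 6, \quad \text{links}(10) = 45,
      \]

    so even a modest cluster of ten boxes already has 45 channels that can race or drift out of sync.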

  23. Stuart Castle Silver badge

    I agree about the bloated software. Part of me would love to see what the demo coders of the late 80s/early 90s would make with a state of the art gaming rig. That said, I think the main reason they got such amazing results with the 8- and 16-bit computers was that they knew the hardware inside out, and didn't have to fight the OS for control of resources. That meant they could force the hardware to do things it wasn't designed to. They may still know the hardware inside and out, but would an effect designed (say) for an MSI RTX 2080 work on a 2070? 3090? 4090? Would it work on one of those cards designed and manufactured by (say) Asus? What about a Radeon? Probably not, if the effect relies on access to a specific design fault in the MSI 2080. That is assuming the OS would allow them to hit the hardware directly.

    I would like to see software better optimised, though it's not likely to happen.

    I think there are a few reasons it is bloated, though. One is code re-use. It's often more efficient, and therefore cheaper, to re-use code, perhaps in external libraries and runtimes. The problem is that those aren't coded for a specific purpose, so they will likely include a lot of redundant code. Unless you have source code access, it will be almost impossible for you to remove the code you don't need. Even if you do, you may not bother, because you'd need to spend extra time doing it, and, depending on how much work it is, you might have been better off just writing your own code.

    In every case, the developer is likely to go for the cheapest option, which, in most cases, is to leave the bloat in and hope the application is still quick enough to be usable.

    I think there is another reason that current improvements don't appear to add as much speed: in relative terms, they don't. If you go from 250MHz to 500MHz, your speed doubles. If you go from 4GHz to 5GHz, the jump is four times bigger in absolute terms, but it is nowhere near a doubling. Not that clock speed is a reliable indicator of processing speed anyway: my Ryzen 5 CPU runs at about the same raw clock speed as the last P4 I had, but is an order of magnitude faster in terms of the actual work it can do in a given time.
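
    Spelling out the arithmetic (my numbers, purely to illustrate the ratio-versus-absolute-jump point):

      \[
        \frac{500\ \text{MHz}}{250\ \text{MHz}} = 2.0\times,
        \qquad
        \frac{5\ \text{GHz}}{4\ \text{GHz}} = 1.25\times .
      \]

    The second jump is 1GHz against 250MHz - four times as large in absolute terms - yet it buys only a quarter more clock, which is why each new generation feels less dramatic than the last.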

  24. Pete Sdev Bronze badge
    Pint

    Back to the 80s

    Thanks to el Reg's recent retro week, I've ended up getting an Agon Light, which arrived today.

    Maybe if people learned to program on a constrained device first, as a generation of us did, we'd have leaner software.

    There's also a lot of NIH syndrome to be dealt with, which results in duplication.

    Beer to the memory of Niklaus Wirth

  25. HuBo Silver badge
    Pint

    microVM compilation in unik

    Cool FOSDEM talk! The slides (on that web link) illustrate it nicely too (for the visually paired). Anything larger than Justine Tunney's 512-byte SectorLISP is mostly bloat IMHO! (eh-eh-eh!)

    A couple (three) random thoughts: a) hope you had yourself some delicious moules+frites; 2) Koomey’s Law is probably key to Zettaflopping; iii) Bert Hubert (IEEE link) awesome name!

  26. SNaidamast
    Happy

    Why I write very simplistic software...

    This is a great piece regarding all the BS that we as professionals are constantly indoctrinated with on a daily basis.

    Given that I am not an internals specialist, I cannot write operating systems, device drivers or the like. I am a software engineer who designs user-based applications.

    And many years ago, when developing applications on the UNIVAC and IBM mainframes, we were always taught to write our software for the least experienced person on our team. That way everyone could understand what we were all doing and could work more easily with each other's work.

    Since 2010 especially, as the author here elaborates, simplicity has more or less been thrown out the window in favour of complex design patterns, convoluted software solutions using languages such as JavaScript in ways it was never designed for, and a constant churning of the software development process that, starting in and around 2001, threw out common sense for stupidity and egregious idiocy.

    Has no one figured out yet that everything we implement is done around some form of "Waterfall" paradigm? Yet XP and Agile have convinced literally tens of thousands of developers that somehow one can create an application and then, when it's done, design it. Basically, that is what Agile in all its incarnations promotes, while XP or Pair Programming attempted to turn an individual pursuit into a multi-player game.

    Today, anyone who reads the code of one of my applications can quickly get a feel for how it is designed and then, once the basic patterns are understood, just as easily follow all the code. I still write in my preferred language of Visual Basic .NET, though I can just as easily write C#. However, in a current assignment I am finding writing C# code in Visual Studio 2022 a complete chore. Unlike with its VB.NET counterpart, and thanks to the constant enhancements to C# itself and to Visual Studio, I can't enter a line of code without the editor interfering with what I am attempting to write. I don't need all the pop-up suggestions and hints, as I already know how I am going to finish the line.

    There is nothing fancy about my own application development: I eschew all the philosophical crap that has popped up in the last two decades and simply write applications that work and that are relatively easy to maintain.

    In my corporate career, no supervisor ever worried about one of my applications crashing at night or during the day.

    But today, as the article clearly demonstrates, things have gotten out of control both in the internals arena as well as general application development. And it is not going to get better; especially since our young people and university graduates are getting dumber by the day...

    1. Rich 2 Silver badge

      Re: Why I write very simplistic software...

      You get my vote. I’m being forced to use agile at work after having largely avoided it up to now.

      Agile is the king’s new clothes. If it were civil engineering rather than software engineering, it would go something like this:-

      - let’s build a bridge!

      - ok, here’s some rubble

      - What? No - we don’t have enough points to develop any cement. Stick it together with glue

      - it’s not quite bridge-shaped yet but let’s put it over this motorway. We’ll call it an “alpha” and see what feedback we get

      - It fell down and crushed a load of cars, eh? Hey - that’s cool. It’s a learning process isn’t it?

      - let’s replace some of the rubble with cheese. Why cheese? Because we can’t make any bricks in the time allowed by the sprint. We’ll revisit it in another sprint and throw away everything we’re doing now and replace it with something else. Maybe bricks. We don’t know yet

      - oh and stick some flags on it!! Because that’s a quick win

      - ok, try that - alpha 2

      - ah. That killed how many? Well that’s ok because it’s not production quality yet

      - let’s replace all those flags with different coloured ones

      - etc etc etc

      I have yet to be convinced that “agile” isn’t absolute and utter bollocks.

      As for XP, and “paired programming”, which wanker came up with those ridiculous notions?

      1. SNaidamast

        Re: Why I write very simplistic software...

        XP and "paired programming" were introduced around 2001.

        It was developed by members of the Chrysler C3 Payroll Project, which was going to replace the existing payroll system in that company.

        These new concepts were introduced in the 4th year out of a 5 year development process.

        For whatever reason, these concepts took off like wildfire.

        Then in the 5th year of the payroll project, the entire thing imploded as a result of the stupidity of such ideas.

        However, by then, the die had been cast and the rest is history....

  27. Rich 2 Silver badge

    Insane bloat

    Some of the software today is bloated beyond any comprehension. Two that spring to mind (and I’m really not wanting to focus on MS here - everyone is equally blameworthy) are Word and Excel. I don’t know the actual numbers now but their code bases are insanely huge. Why the hell you need hundreds of megabytes (or whatever it is) to do a bit of word processing is beyond me and is utterly bonkers. Compare this to the Word (or WordPerfect, or ….) of years ago that would fit on one or two floppy disks. And for 99% of use-cases the functionality is basically the same (except that the newer version is demonstrably much worse).

  28. Ian Johnston Silver badge

    Computers aren't much faster now than they were a decade ago,

    True dat, and old stuff is still more than adequate. I am writing this on a Lenovo ThinkCentre from ... sudo dmidecode | more ... 2012 and it rarely comes anywhere close to using 100% of its two 2.8GHz cores.

  29. Electronics'R'Us
    Headmaster

    In the embedded world...

    Most of my software is written for small microprocessors or microcontrollers. Typical maximum flash is around 1MB and perhaps 256k RAM.

    Even here, bloat has taken hold in the form of HALs (hardware abstraction layers), although it is not extreme; they are, however, opaque.

    One project (using an ARM Cortex-M4) required me to do rather interesting things, such as responding to external events while the processor was asleep: doing a batch of (1024) ADC conversions and DMA-ing the results to a buffer. Only at the end of the conversions / DMA was the core woken up.

    The library DMA initialisation function was 6 layers deep; once I figured out what was being done, it was replaced with 3 lines of code using the same structure the library required. The library function for the ADC / DMA also did not solve the problem that the DMA would otherwise run at maximum speed, rather than once after each conversion. I made that work by forcing an arbitration after each DMA transfer, as part of the initialisation code!
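
    For anyone who hasn't gone down this road, the shape of the "skip the HAL" approach is roughly as follows. It is a sketch only: every register name and bit mask (DMA_SRC, DMA_DST, DMA_COUNT, DMA_CTRL and friends) is invented for illustration, not taken from any real part or from the code described above.

      #include <stdint.h>

      #define REG32(addr)          (*(volatile uint32_t *)(addr))
      #define DMA_SRC              REG32(0x40010000u)   /* hypothetical: source address      */
      #define DMA_DST              REG32(0x40010004u)   /* hypothetical: destination address */
      #define DMA_COUNT            REG32(0x40010008u)   /* hypothetical: transfer count      */
      #define DMA_CTRL             REG32(0x4001000Cu)   /* hypothetical: control register    */
      #define DMA_CTRL_PER_REQUEST (1u << 1)            /* pace transfers on peripheral requests */
      #define DMA_CTRL_ENABLE      (1u << 0)

      /* Point the DMA at the ADC data register and a result buffer, then enable it,
         paced by the ADC's conversion-complete request rather than free-running. */
      void dma_adc_init(volatile uint32_t *adc_data_reg, uint32_t *buffer, uint32_t count)
      {
          DMA_SRC   = (uint32_t)(uintptr_t)adc_data_reg;
          DMA_DST   = (uint32_t)(uintptr_t)buffer;
          DMA_COUNT = count;                            /* e.g. 1024 conversions */
          DMA_CTRL  = DMA_CTRL_PER_REQUEST | DMA_CTRL_ENABLE;
      }

    Three register writes and an enable, paced by the peripheral request, versus six layers of indirection: that is the comparison being made.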

    My point is that most HALs are poorly written and inefficient. For ADC / DAC conversions in particular, timing jitter will render any DSP functionality fubared, so 'trusting' the HAL to get it right is simply not acceptable.

    Several years ago (decades, in fact) I was tasked with adding a test for an optional flash device that could be fitted in a socket (they were expensive at the time). The system's ROM was 32K (not a typo), of which perhaps a few hundred bytes were free. I managed to get a comprehensive test (using the existing flash programming functionality with the walking-1s, walking-0s technique) into 33 bytes.
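
    In C rather than the hand-crafted 33 bytes, the walking-1s / walking-0s idea looks something like the sketch below, with flash_write and flash_read standing in for whatever device-access routines the real code reused:

      #include <stdint.h>

      /* Assumed device accessors - placeholders for the real programming routines. */
      extern void    flash_write(volatile uint8_t *addr, uint8_t value);
      extern uint8_t flash_read(volatile uint8_t *addr);

      /* Walk a single 1 bit (and its complement, a single 0) across an 8-bit
         location; a stuck or shorted data line shows up as a mismatch. */
      int flash_bus_test(volatile uint8_t *addr)
      {
          for (uint8_t bit = 1; bit != 0; bit = (uint8_t)(bit << 1)) {
              flash_write(addr, bit);                    /* walking 1 */
              if (flash_read(addr) != bit)
                  return -1;
              flash_write(addr, (uint8_t)~bit);          /* walking 0 */
              if (flash_read(addr) != (uint8_t)~bit)
                  return -1;
          }
          return 0;                                      /* all 8 patterns passed */
      }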

    I always look at 'frameworks' with a very sceptical eye.

    1. Lipdorn

      Re: In the embedded world...

      "My point is that most HALs are poorly written and inefficient."

      On the ST side I find the reason is that they want, or need, to support as many of the various use cases for the peripherals as possible. They do often allow one to get an example project started faster. Then, if performance is an issue, one simply implements the functionality one needs, using the HAL as an example. It usually isn't difficult with most peripherals.

      Though I'd agree that I'm not a fan of the C coding techniques typically used by hardware coders.

  30. Erik Beall

    Some abstractions are worse than others at worsening bloat

    Looking at you, Docker... Unfortunately it's like security was twenty years ago: developers don't care and won't be forced even to begin confronting the issue for another twenty years, and it's not their fault. It isn't taught in software development courses either. It is related to complexity, which is taught to some extent, although I've yet to meet more than one developer with whom that computer-sciencey aspect stuck. I'm including some of it in the training for the people I'm bringing on, and they've all had conventional comp sci. I realize there are people here who do know there are practical aspects to complexity classes; I just rarely get the opportunity to work with people with that awareness.

    Managers don't understand it, which is why they grasp at things like containers and soon AI to magically reduce the growth in development cost in each and every project.

  31. John Navas

    This piece makes some good points, but also some invalid assumptions.

    For example, it fails to appreciate the tension (tradeoff) between efficiency and robustness. Efficient code is brittle. It breaks more easily because it is vulnerable to single points of failure. Fail-safe (e.g., self-checking) code trades efficiency for robustness.
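
    As one small, concrete illustration of that trade (my example, not the commenter's): a value stored with a redundant self-check costs extra memory and extra work on every access, in exchange for catching silent corruption instead of acting on it.

      #include <stdint.h>
      #include <stdbool.h>

      /* Keep each critical value alongside its bitwise complement and verify
         the pair on every read: twice the storage, extra cycles per access. */
      typedef struct {
          uint32_t value;
          uint32_t check;      /* always ~value */
      } checked_u32;

      static void checked_store(checked_u32 *c, uint32_t v)
      {
          c->value = v;
          c->check = ~v;
      }

      static bool checked_load(const checked_u32 *c, uint32_t *out)
      {
          if (c->check != (uint32_t)~c->value)
              return false;    /* corruption detected: fail safe, not fast */
          *out = c->value;
          return true;
      }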

    Cost is also an issue: (a) Good. (b) Cheap. (c) Efficient. Pick 2. If you pick good and efficient, it will not be cheap.

    p.s. If Amdahl's law still held, then there would be no benefit to 64 core processors (e.g., AMD EPYC), but we know that they can indeed be much faster than comparable (say) 32 core processors. https://www.cpubenchmark.net/compare/AMD-EPYC-7702-vs-AMD-EPYC-7502/3719vs3880s

    1. Roo

      Amdahl's Law absolutely still holds and always will... I think what you might be looking for is Gustafson's law - which is for the case where a workload can scale up in size and take advantage of parallelism as a result.
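
      For reference (my summary of the two laws, not part of the exchange above): with p the parallelisable fraction of the work and N cores,

        \[
          S_{\text{Amdahl}}(N) = \frac{1}{(1 - p) + p/N},
          \qquad
          S_{\text{Gustafson}}(N) = (1 - p) + pN .
        \]

      Amdahl caps the speedup of a fixed-size job at 1/(1-p), so a 64-core part can still beat a 32-core one whenever p is close to 1; Gustafson describes the case where the job itself grows to fill the extra cores.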

  32. david 136

    "the most exciting kind of non-volatile memory – a bold attempt to bypass a whole pile of legacy bottlenecks and move non-volatile storage right onto the CPU memory bus – flopped. It was killed by legacy software designs."

    No, that wasn't what killed Optane.

    What killed it was the shrinking gap between the performance it offered and that of SSDs in improving form factors like M.2 NVMe.

    There was software being written to exploit Optane, and it would have worked great -- but the number of applications that desperately needed it was small compared to the ones that were perfectly happy with an mmapped NVMe SSD. Intel saw that trend and decided they couldn't afford to keep that ship operating. Now, there are plenty of things that could use mmapped files and don't, but that's another story.

    It's sad, because it was interesting, but it wasn't really the legacy designs. It was the lack of clear advantage with other things getting faster.

  33. Christian Berger

    Well it's a sign of the crapularity

    We do essentially the same things now as we did in the past, but with a lot more code. More code means more bugs, resulting in the everyday experience that computers don't work very well.

    There is some hope that this comes in waves, like the terrible software we had in the 1990s being washed away by obsolescence and good ideas like the UNIX philosophy becoming popular again. It may be that we are simply in the middle of such a wave now.

    However, if we aren't, and the current trend continues, we might be heading towards a "Crapularity", a singularity of crap, where the systems around us keep failing and nobody can mend the mending apparatus, as in E.M. Forster's "The Machine Stops".

  34. Daniel386486

    And then you add multiple layers - an end user's perspective

    Definitely in over my head here, but I remember using 386 computers with dial-up bulletin boards, and I was captivated by this discussion.

    The bloat is obvious to a tinkerer like myself who has used Linux for the past 25 years. I think the definition of "end user" is far too myopic for most commercial OS and enterprise software planners and engineers.

    As an engineer (metal and concrete, not 1s and 0s) working for a global firm, the simple truth is that it takes 5-10 seconds to open a simple HD-resolution PNG file on my work PC. I’ve even defaulted to using MS Paint as my image viewer because the latest preview or photo tool takes far longer. It usually takes a full minute for Word to open simple files (heaven forbid opening a 50MB report file sitting on the network). The layers of bloat, and the slowdown for security checks, make a material impact on the speed of work.

    There may be good reasons for all this. I guess my point is that there seems to be little to no feedback loop from the actual use case, in which standard environmental factors like security, IT snooping, and AI observation culminate in an experience significantly degraded from even 10 years ago. Web-based tools are unreliable for all but the simplest tasks. The gap between the experience as sold and the reality is large and growing larger each year.

    Not trying to make this a rant, but just consider this the next time your city floats a multimillion dollar/pound bond to pay for the next water infrastructure upgrade. What is the cost of lost efficiency at scale? And who pays for it?
