Memory is running out, and so are excuses for software bloat

Register readers of a certain age will recall the events of the 1970s, when a shortage of fuel due to various international disagreements resulted in queues, conflicts, and rising costs. One result was a drive toward greater efficiencies. Perhaps it's time to apply those lessons to the current memory shortage. As memory …

  1. StewartWhite Silver badge
    Unhappy

    Lovely idea - no chance of it ever happening

    The penchant for just lifting huge chunks of code from GitHub or (worse) getting grossly bloated and inefficient code prepared for developers by "AI" (copy/paste from Stack Overflow previously being the mode du jour) means that, whilst a noble idea, this has a negligible chance of success.

    Ultimately the world in general has demonstrated by its choices that it prefers obese and shoddy code that's thrown together as quickly as possible (preferably quicker) but that only works when the wind is prevailing in a South-Westerly direction over well-engineered systems and that ain't gonna change anytime soon (or likely at all).

    By way of a hopefully vaguely interesting anecdote, I did some consulting work at a place where I'd been a permie 10 years previously. I'd barely got through the door before I was harangued about some Turbo Pascal code that I'd written more than a decade before having stopped working. I ran said code and it finished so quickly that I presumed they must be right, only to find that it was actually a network configuration issue, and that the reason it had finished almost instantly was just that the performance of their hardware was massively superior to what had been the case when I originally wrote it.

    1. Rahbut

      Re: Lovely idea - no chance of it ever happening

      Like you, I think the article's sentiment around memory should just be extended to software efficiency in general.

      You only have to look at Notepad.exe as a good example of something that's now considerably larger and slower, and delivers little benefit for the bloating (perhaps we didn't need it rewritten to require a .NET dependency just to get tabs, etc.)

      I grew up trying to get a program to fit in the boot sector of an Amiga floppy disk - an era when people tried to wring every last ounce of performance out of a system.

      Some of that still goes on in certain circles - like where there's a cost saving to be made (many years ago Google did a decent bunch of work around optimising webpage efficiency, reducing bloat, best practice for images etc - I guess the result is that the "cost" was shunted to the client, but serving the content at scale became significantly cheaper). More often than not the general idea is to do something good enough more quickly so that revenue can be recognised. Compromises are made so that things can be delivered at scale more quickly - e.g. microservices can be seen as a way to manage people.

      Ignoring what we might think about AI - a lot of the code it produces is inefficient and usually benefits from significant refactoring (which you can use AI to do, but I digress) - AI slop does fall into the "good enough" boat in order to get that precious revenue.

      All of that is to say that software inefficiency is not a technical problem, but a management issue.

      1. DarkwavePunk Silver badge

        Re: Lovely idea - no chance of it ever happening

        I had a fully functional professional suite of CAD, 3D modelling, rendering, and animation that ran on an 8MHz Atari with 1MB of RAM. Sure, 2-4 times that would have been needed for broadcast, but it's still bonkers.

        1. Jou (Mxyzptlk) Silver badge

          Re: Lovely idea - no chance of it ever happening

          Sounds nice, but: 4K is the norm and 2K the lowest acceptable, 12-bit RGBW is desired, raytracing (including software raytracing) has developed a lot over the last 30 years, and the modelling tools are on a different level now because the computers are bigger and faster. What counted as "super realistic rendering" at that time would, recalculated at today's higher resolutions, be picked out as artificial at a glance.

          But there is a difference between bloat like, for example, the Logitech software, and actually useful data to spend that RAM on. And as for the Logitech software: it is horribly unstable on top of its 500 MB of RAM and nearly 1 GB of disk space. I switched to X-Mouse to configure buttons that switch the mouse speed: fast on the desktop, slow when doing pixel editing, medium for some in-between stuff.

          1. Mostly Irrelevant

            Re: Lovely idea - no chance of it ever happening

            If the Logitech mouse is G-series, there is a separate config app you can use to update your mouse config and save it to the mouse's built-in memory. Then you don't need the software running at all.

            1. Jou (Mxyzptlk) Silver badge

              Re: Lovely idea - no chance of it ever happening

              I am running a Logitech Lift and an MX Anywhere; most of the time I use the Lift, but sometimes the MX is better since it is more precise. Neither of them has internal storage. The G-series is too heavy and too big for my (scarred) right hand.

        2. IvyKing

          Re: Lovely idea - no chance of it ever happening

          The "rendir.com" (rename directory) that shipped with SCP's version of MS-DOS 2.0 took up all of 47 bytes on the disk.

          1. Sandtitz Silver badge
            Headmaster

            Re: Lovely idea - no chance of it ever happening

            "The "rendir.com" (rename directory) that shipped with SCP's version of MS-DOS 2.0 took up all of 47 bytes on the disk."

            Very likely it took 512 bytes of disk space.

            1. Jou (Mxyzptlk) Silver badge

              Re: Lovely idea - no chance of it ever happening

              On a floppy disk, yes, the 47 bytes would take 512 bytes (+ a few in the FAT + a few for the directory entry). But since MS-DOS 2 used FAT12 for hard disks as well, your big 15 MEGAAAAbyte drive had a cluster size of 4 KB, ~3650 usable clusters. Your 20 MB drive had an 8 KB cluster size, about 2400 clusters, and the small puny 6 MB card I had in my Olivetti PC with DOS 2.01, inherited from somewhere, used a 2 KB cluster size, about 2900 usable. With the latter I used Interlnk/Intersvr over a parallel-port cable to transfer files to my newer PC. Don't try it the wrong way around, or your DOS 2.01 will kill the new shiny FAT16 file system on your newer machine.
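
              For the curious, here's a quick back-of-the-envelope sketch in C (my own illustration: just the 47-byte file rounded up to each cluster size, FAT and directory-entry overhead ignored):

              #include <stdio.h>

              /* How much disk a 47-byte file really occupies when allocation is
                 rounded up to one cluster. Cluster sizes as in the FAT12
                 examples above. */
              int main(void)
              {
                  const unsigned file_bytes = 47; /* rendir.com */
                  const unsigned clusters[] = { 512, 2048, 4096, 8192 };

                  for (unsigned i = 0; i < sizeof clusters / sizeof clusters[0]; i++) {
                      unsigned c = clusters[i];
                      unsigned allocated = ((file_bytes + c - 1) / c) * c; /* round up */
                      printf("cluster %4u B -> %4u B on disk, %4u B wasted\n",
                             c, allocated, allocated - file_bytes);
                  }
                  return 0;
              }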

        3. cdegroot

          Re: Lovely idea - no chance of it ever happening

          That rendered to a resolution much lower than what's on your wrist today, of course.

          Still, we can - and should, if we ever want to take the term "software engineering" seriously - start by stating what a reasonable amount of CPU and memory is for a CAD system that renders true color on 4K. I think that current systems are bloated by at least two, if not three, orders of magnitude.

      2. Old Used Programmer

        Re: Lovely idea - no chance of it ever happening

        Wildest piece of compact code I ever wrote was a two-card boot loader for an IBM S360/30 that would IPL, bring an object deck into memory at the target program's preferred location, and run it. It took two cards because, when you hit the IPL button after selecting the card reader as the boot device, it would read one record and start the I/O command chain on it.

        I even managed to keep columns 73 to 80 clear (for sequence numbers) by having the code on the second card use the second IO command word from the first card (no seek needed, so getting the second card into memory only needed one IO command).

      3. GraXXoR

        Re: Lovely idea - no chance of it ever happening

        Software bloat is real. Before I quit Microsoft 365 I used to run Word on a Mac. At the same time, I had an old Windows XP install in VMware with Office 2003 or something.

        Windows XP boots in about five seconds and Word appears instantly upon clicking.

        It’s ridiculous that you can boot an entire OS and a software suite from cold in a virtualized environment quicker than you can launch a modern version of a single application natively these days.

      4. LybsterRoy Silver badge

        Re: Lovely idea - no chance of it ever happening

        I would restrict it to a finance-management issue rather than the broader realm of management in general. I also think that there are technical issues, which I guess could have been caused further back along the toolset chain by finance management.

      5. Adrian The Alchemist

        Re: Lovely idea - no chance of it ever happening

        Without sounding like the Four Yorkshiremen sketch: in the 8-bit days we had kilobytes to cram as much code into as we could, and it was amazing what was produced on a variety of systems.

        I jumped from the 8-bit BBC to the 32-bit Archimedes and a whole whopping 4 meg of memory appeared; I think it prepared me to be a better coder when I went to university doing Chemistry and Computing.

        While having ever more system resources is nice, I do wonder whether, with exceptions like SSD speed, computing has really got that much faster (or is that my rose-tinted brain?)

        1. Dan 55 Silver badge

          Re: Lovely idea - no chance of it ever happening

          As it's also a storage shortage we may go back to spinning rust and find out...

          1. druck Silver badge

            Re: Lovely idea - no chance of it ever happening

            Don't mention Rust: on my ASUS laptop, a couple-of-thousand-line utility to control fan speed and keyboard LEDs under Linux sucks in a GB of Rust crates, and generates another 2.1 bloody GB of intermediate crap when it compiles. Can you imagine doing that on a spinning rust disk?

        2. More Jam

          Re: Lovely idea - no chance of it ever happening

          At one point in my first "real" job, after much study of linker output, I went through the code base replacing every call to printf, sprintf, etc., because calling printf linked in a whole floating-point emulation library that we never used. All statically linked, of course. We were developing on original IBM PCs with 512K of RAM, but the production machines only had 256K. That was, erm, some time ago.

    2. DS999 Silver badge

      It would only happen if the shortage was permanent

      If we got memory from mines and the mines had started to run dry, then we might think a new paradigm is at hand and start conserving memory in the same way you might conserve other natural resources like oil.

      But we know the shortage is temporary, so everyone is just going to ignore it because it will be just a memory (sorry) a few years from now.

    3. HereIAmJH Silver badge

      Re: Lovely idea - no chance of it ever happening

      Ultimately the world in general has demonstrated by its choices that it prefers obese and shoddy code that's thrown together as quickly as possible (preferably quicker) but that only works when the wind is prevailing in a South-Westerly direction over well-engineered systems and that ain't gonna change anytime soon (or likely at all).

      And now realize that same mentality will be used for critical systems, like autonomous cars, military drones, and medical devices.

      Speaking of medical devices, I have one that has a 'daily' reminder. The alarm has never been configured, there appears to be no place in the settings to configure it, and it only triggers randomly days or weeks apart at different times of the day. You can snooze it, but you can't disable it.

      If the day of SkyNet ever comes, it will be because of sloppy QA and decision makers wanting to be first to market. Although I have to say, I'd prefer Maximum Overdrive over Terminator.

      1. Adrian The Alchemist

        Re: Lovely idea - no chance of it ever happening

        Actually, SkyNet happens because, when it achieved self-awareness, the military got scared and tried to pull the plug

        Perhaps trying to say "hey you with the pretty face, welcome to the human race" and being pleased to see a new entity may stop any homicidal tendencies

        Play it Mozart, ABBA and Queen, read it Terry Pratchett and we should be fine

        Anyways, good cheer to everyone; think of your loved ones, the poor people who are in hospital, those who are having to work this week, and everyone else who is not as fortunate as us

    4. Dr Fidget

      Re: Lovely idea - no chance of it ever happening

      I remember being chuffed when I realised that "XOR A" took 1 byte to set the accumulator to zero whereas "LD A,0" took 2 on the Z80, so I could save another few bytes of my 48Mb RAM on my Nascom 2
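
      For anyone who wants the receipts, a little C sketch of the standard Z80 encodings (my own illustration; note XOR A also clobbers the flags, which LD A,0 does not, so the trick isn't always free):

      #include <stdio.h>

      /* XOR A assembles to a single opcode byte (0xAF); LD A,0 is opcode
         plus immediate (0x3E 0x00). Both leave the accumulator at zero. */
      int main(void)
      {
          const unsigned char xor_a[]  = { 0xAF };
          const unsigned char ld_a_0[] = { 0x3E, 0x00 };

          printf("XOR A  : %zu byte\n",  sizeof xor_a);
          printf("LD A,0 : %zu bytes\n", sizeof ld_a_0);
          return 0;
      }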

      1. Crypto Monad

        Re: Lovely idea - no chance of it ever happening

        Presumably you mean 48KiB, not 48Mb ?

      2. Anonymous Coward
        Anonymous Coward

        Re: Lovely idea - no chance of it ever happening

        I remember some type-in listings for the ZX Spectrum I once came across used "NOT PI" instead of 0, because the Speccy stores every numeric literal in a program line with a hidden five-byte binary copy attached, while keywords like NOT and PI are single-byte tokens - so the expression consumed slightly less memory.

    5. ilpr

      Re: Lovely idea - no chance of it ever happening

      It's not entirely impossible; recall that various people have taken on the task of reducing buffering in network stacks due to latency issues (the bufferbloat work).

      All you would need is a way to tell the pointy-haired boss how much the RAM usage costs and how much can be saved. That is the only way to get them to sign off on the work: put a clear price on the waste.

    6. JoeCool Silver badge

      The software equivalent of cheap energy dependency

      Just like the oil companies and their drones that refuse to consider any future beyond one quarter, modern tech management is incapable of changing the culture built around cheap labour, frameworks/JVMs/platforms, and limitless Moore's Law improvements.

    7. LucreLout

      Re: Lovely idea - no chance of it ever happening

      Ultimately the problem is that professional software development is an unregulated profession.

      There can be no guard rails. No quality control. No minimum standards. And absolutely no consequences for spending 30 years having a go.

      If we want better software we have to have better engineers and that absolutely requires that we dispense with the poorly trained cowboys.

  2. Jou (Mxyzptlk) Silver badge

    Microsofts fault...

    Double: first the inefficiency in Windows 11, Defender, and the browser, and then stealing our RAM for their AI crap, which needs even more RAM on our local machines even when we did not ask for it. (Server versions of Windows are still quite clean and lean in that regard, but Server 2025 shows a bit more memory hunger than 2022.)

    1. Helcat Silver badge

      Re: Microsofts fault...

      Honestly, I want to give you a hundred thumbs up. Spot on with the AI crap: most of us don't need it and don't want it, so why is it running on our systems, consuming resources - not just RAM but CPU cycles too? And why make it so hard to disable the darn stuff?

      Although... M$ isn't the only company pushing bloatware and unnecessary functionality such as AI. Google are doing so, too.

      1. Clausewitz4.1
        Devil

        Re: Microsofts fault...

        Although... M$ isn't the only company pushing bloatware and unnecessary functionality such as AI. Google are doing so, too.

        I am not anti-AI, but the use cases are small for the amount of money/hype invested in it. Military, for sure. Programming, limited. Call centers, limited. Will it become hyper-intelligent (AGI or whatever)? Not in our lifetime.

        But, again, do not expect a bubble to burst soon if there are BILLIONS "circular-invested" into it. AI will be poured down your throat for quite some time, in all the ways imaginable.

        As for the burst, SoftBank is showing signs of fatigue. Could be 2026, but I bet 2-4 years. Let's see what they do with their ARM stock in a few days...

    2. Dan 55 Silver badge

      Re: Microsofts fault...

      As much as you might try to make your code lean and efficient to fit it into memory, I can't see Microsoft cutting Windows bloat by half in Windows 12... and it should be cut by a factor of 10.

      1. Jou (Mxyzptlk) Silver badge

        Re: Microsofts fault...

        Oh, that is easy: switch the shell back to what it was in Win10. Kick all those unnecessary per-user cloud services, which by default sit there running, ready to sync something if you move the mouse at the wrong angle. And while they are at it: make scrollbars always visible and wider, and the resize frame visible and at least a pixel wider (and the damn divider in every mmc console)... Some would argue for bringing back the Server 2003 shell, but I am not a fan of that: some things got better, and a view that far back into the past is always a bit rose-tinted. Even for the task manager: if you take the old one from Server 2003 x64, you will miss the "command line" column... And kick the "modern" control panel, which never added much improvement anyway.

        But like you said: MS won't.

    3. Bluffer Cubic

      Re: Microsofts fault...

      Not only Microsoft; other big companies like Google and Apple too. Too much power, thanks to their enormous user bases, gives them a free path to keep bloating their software with crap we do not really need but which costs far too many resources. And on top of that, there is no user-friendly way to disable it. Especially Microsoft, which forces bloated crap "features" upon its users.

    4. FirstTangoInParis Silver badge

      Re: Microsofts fault...

      So, having spent time yesterday looking through Edge settings on a family laptop that was being remotely abused through website notifications, there's a fantastic amount of functionality that should never even be there. It creates a massive attack surface for oiks to screw money out of unsuspecting users. Most secure browser? Absolutely not; it's a hackers' paradise, because most people will click Yes if a website asks to access their local network (FFS), so out of the box all this should be turned off and the code burned on a bonfire. And then we have drivers for loads of stuff that will never be attached (see also Linux-firmware) and $deity knows how much legacy cruft from Windows 95 that nobody dares to touch lest "Unknown Error" appears.

  3. JimmyPage Silver badge
    Linux

    Of course Linux users

    Will simply ask "What's this all about then?"

    My latest Mint install is still running perfectly on a 4GB laptop from last decade.

    1. Rich 2 Silver badge

      Re: Of course Linux users

      I’m afraid the Linux world doesn’t get away with it either. I have noticed a steady increase in bloat in unixy software, and a lot of it seems to stem from the massive dependency list many applications require. Does that text editor REALLY need to bring in two dozen dependencies? Probably to use just one or two features from each. And before someone points out that they are shared libraries and so used by lots of stuff, that’s not the point. The point is laziness - why write a small function to do exactly what you want (and only what you want) when you can bring in megabytes of other crap instead?

      1. retiredFool

        Re: Of course Linux users

        Unfortunately I have to agree. Linux used to fit in MB, not GB. I'm doing my part though. Earlier today I was working on a new feature for my product. As always, I began with the data structures, worrying about cramming things into bit fields and about alignment, always trying to order the fields in structs for no waste. I just can't break myself of doing this stuff even if it only saves a few bytes. Maybe it was starting to code in the 70s & 80s, when memory was dear.
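
        For anyone who hasn't watched padding eat a struct, a minimal C sketch (field names invented for the example; exact padding is ABI-dependent, the figures in the comments are typical for a 64-bit ABI):

        #include <stdio.h>
        #include <stdint.h>

        /* Same three fields, two orderings: the compiler pads each member
           to its natural alignment, so order changes sizeof. */
        struct careless {
            uint8_t  flag;   /* 1 byte + 7 bytes padding before id */
            uint64_t id;     /* 8 bytes */
            uint16_t count;  /* 2 bytes + 6 bytes tail padding = 24 total */
        };

        struct ordered {
            uint64_t id;     /* 8 bytes */
            uint16_t count;  /* 2 bytes */
            uint8_t  flag;   /* 1 byte + 5 bytes tail padding = 16 total */
        };

        int main(void)
        {
            printf("careless: %zu bytes\n", sizeof(struct careless));
            printf("ordered:  %zu bytes\n", sizeof(struct ordered));
            return 0;
        }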

        1. Crypto Monad

          Re: Of course Linux users

          Around 1994, an 80386 machine with 2 MiB of RAM could happily run Windows 3.11 (a.k.a. Windows for Workgroups), and Linux kernel 1.2.13 would run well on a similarly spec'd machine.

          The smallest Linux these days, something like OpenWrt or DD-WRT, is unlikely to work with anything less than 64 MiB of RAM.

      2. StrangerHereMyself Silver badge

        Re: Of course Linux users

        I agree. That Linux Mint needs 2GB to run and 4GB to run smoothly is just filthy.

        It's mostly due to the fact that Linux Mint uses Python scripts for just about everything. Their stuff isn't written in C++ as it should be. Python is a memory hog and dead-slow to boot.

    2. Altrux

      Re: Of course Linux users

      Mint overall is as much of a bloat-fest as anything else these days. Of course, there are genuinely small, fast, and efficient Linux alternatives available. With Windows, no such luck!

    3. lordminty

      Re: Of course Linux users

      Amateur!

      My 2GB laptop (RAM maxed out by me well over a decade ago) runs 32-bit Raspberry Pi OS and has just celebrated its second decade.

      Yep it is 20 years old this year, and runs Linux just fine.

  4. zo0ok

    Wasting RAM wasting Cache

    Performance in CPUs depends on the (CPU) cache. Wasting RAM is also wasting cache, which in turn is wasting performance.
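
    A crude C sketch of the effect, if you want to see it yourself (my own illustration; timings are machine-dependent and this is not a proper benchmark). It touches the same number of ints twice: once packed tight, once spread over sixteen times the footprint, so the second pass keeps missing cache:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1u << 20)

    /* Sum 'count' ints spaced 'stride' apart. Same work either way;
       only the memory footprint differs. */
    static long touch(const int *a, size_t count, size_t stride)
    {
        long sum = 0;
        for (size_t i = 0; i < count * stride; i += stride)
            sum += a[i];
        return sum;
    }

    int main(void)
    {
        int *a = calloc((size_t)N * 16, sizeof *a); /* 64 MB of zeroes */
        if (!a) return 1;

        clock_t t0 = clock();
        long s1 = touch(a, N, 1);   /* compact: 4 MB footprint */
        clock_t t1 = clock();
        long s2 = touch(a, N, 16);  /* sparse: 64 MB footprint */
        clock_t t2 = clock();

        printf("compact %.3fs, sparse %.3fs (sums %ld %ld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, s1, s2);
        free(a);
        return 0;
    }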

    However, I think there is an opposite trend, where fast SSDs everywhere make it less necessary for programs to load data from storage into RAM.

    That should at least flatten out the need for RAM a bit.

    1. Jou (Mxyzptlk) Silver badge

      Re: Wasting RAM wasting Cache

      On the contrary: quite a number of SSDs, especially cheaper ones, use a bit of the system RAM as a cache, to avoid having to overwrite the same blocks several times just 'cause one bit changed (there was a Reg article about issues with that and Windows 11; I cannot find it right now). The better SSDs have that bit of RAM on their own board.

  5. VicMortimer Silver badge

    Yeah, not gonna work.

    The bloat isn't likely to go anywhere.

    All we can really hope for is the idiotic AI bubble to pop soon. It's doing nothing but increasing misery.

  6. Captain Hogwash Silver badge

    This argument has been going on for at least the thirty years I've been in the industry. I have to agree with those earlier posters that those who call the shots seem to care about time to market more than anything else. So nothing will change.

  7. Michael Strorm Silver badge

    Ironic considering that just yesterday The Register had...

    ...an article berating Linux desktop users for not "using Flatpaks, Snaps, and AppImages to install programs instead of worrying about library incompatibilities and the like".

    Well, here's the thing. Linux Mint includes both "System Package" and Flatpak versions of the Gnome Calculator app.

    The system package version is 7 MB. Which would have been ludicrous for a calculator app back in the day, but is small by modern standards.

    The Flatpak version is "1.1 GB to download, 3.6 GB of disk space required". For something we can assume is still basically the same 7 MB calculator app.

    Yes, that's disk space rather than RAM, but it illustrates the principle regardless.

    Supposedly Flatpak gets more efficient at using space as more packages are installed and it reuses duplicate files, but that's still nothing short of horrendous.

    1. Rich 2 Silver badge

      Re: Ironic considering that just yesterday The Register had...

      I would love to see a breakdown of exactly what a calculator application does with that 1.1MB

      Or a web browser that uses 100MB of memory before you actually bring up a web page (and then uses 30MB per page, or whatever it is)

      It’s ludicrous - but you already know that

      1. Michael Strorm Silver badge

        Re: Ironic considering that just yesterday The Register had...

        > "I would love to see a breakdown of exactly what a calculator application does with that 1.1 MB [sic]"

        The point here is that the core application itself is (minor differences excepted, if they're not the exact same version) no different and presumably no larger.

        As far as I'm aware, most of the Flatpak bloat is due to the supporting content included and required to run it in a sandbox independently of what is or isn't already installed on the host OS itself. Everything that's required is included with the Flatpak, including the specific libraries and supporting OS content it was designed to run with.

        Which I'm sure is *lovely* for compatibility, but means you end up with a calculator app that's 1.1 GB just to download and three times that size once you install it.

      2. An_Old_Dog Silver badge

        Another Cause of SW Memory-Use Bloat

        ... is idiot programmers who decide to "speed up" their programs -- or maybe they just thought it would be fun and cool -- by grabbing RAM and using it for their own, app-specific cache.

        By doing this, they reduce the amount of memory available to the OS-managed cache, which in turn, slows everything else down.

    2. Anonymous Coward
      Anonymous Coward

      Re: Ironic considering that just yesterday The Register had...

      The system package version is 7 MB. Which would have been ludicrous for a calculator app back in the day, but is small by modern standards.

      7MB?! Early versions of Mathematica came on two floppies and probably were more capable.

      Windows 10 Calc.exe is 27 kbytes.

      The memory usage rot started when non-assembler languages were invented.

      :)

      1. Adrian The Alchemist

        Re: Ironic considering that just yesterday The Register had...

        Apparently, in the end-of-first-year comp science exam in 1990, I was the only person who attempted the assembly language question (and scored well).

        It was literally a few lines of Pascal to turn into assembly.

        I laughed at the "how does a ball mouse work?" question.

      2. NXM Silver badge

        Re: Ironic considering that just yesterday The Register had...

        Couldn't agree more, but as an assembler programmer I'm a bit biased. I used to do C, but that job wasn't nice. Now I do everything in assembler on PICs, because they're cheap (1). Very little code space and less RAM (2), so you have to achieve more with less. It's only recently that the cheaper ones have had contiguous RAM instead of paged, for goodness' sake.

        (1) If I used a more expensive chip it costs me money because I manufacture the product as well.

        (2) In an early design for someone else I resorted to putting text strings in external eeprom to save code space.

      3. PRR Silver badge

        Re: Ironic considering that just yesterday The Register had...

        > Windows 10 Calc.exe is 27 kbytes.

        Are you sure that's the executable? Or some kind of redirect or flash? On Win 7, I have C:\Windows\System32\calc.exe at 897KB. (as AMD64; x86 exe is 758KB.)

        I'll also note that my TI-30Xa hand calculator, released in 1984 (40 years ago) (1976 in LED display!), and capable of nearly any proper calculation(*), surely does not have anything like a million devices in it. True, a TI-30Xa does not need bit-mapped display or buttons; that's baked-in to the key molds (double-shot!) and LCD.

        (*)Win Calc may have more extended STAT functions, and COPY/PASTE, true.

        1. Anonymous Coward
          Anonymous Coward

          Re: Ironic considering that just yesterday The Register had...

          Are you sure that's the executable?

          Yes, 27648 bytes == 27kB to the byte. Windows 10 64bit.

          Windows calculator looks to have been completely re-written with Windows 10 prolly for touch.

  8. Mike 137 Silver badge

    "long shaken their heads at the profligate ways of modern engineering"

    The real problem is that it's not engineering -- it's clusterfudging. Software development ceased to be engineering when the microcomputer took over from the mainframe and mini. Those were programmed by experts aware that, on time sharing systems, anyone who crashed the machine would be seriously unpopular with all other users. Plus they worked inescapably very near the metal so they understood the technical implications of their code. The "micro revolution" was, however, driven mainly by self-taught kids in back bedrooms who had unlimited enthusiasm but neither the ethics nor the technical mindset of the engineering discipline. (I know, I was there, but was fortunate to have had a scientific training which imposes the same discipline).

    The parsimonious use of memory at that time was not a matter of judgement, or even choice. It was forced on those writing code by the cost of memory (e.g. £1.60 + 15% sales tax per kilobyte from Watford Electronics in August 1982). So it was done, but without any fundamental concept taking hold that would stick when memory became more plentiful and cheaper. Unfortunately, by virtue of the commercial success of the resultant negligent approach, there's never been any incentive to professionalise micro software development. Indeed the opposite has to a great extent occurred -- witness the deprecation of C as a "hazardous" language in favour of newer languages that prevent the making of basic coding errors -- seemingly eliminating the need to pay strict attention to what one is coding.

    The details may be open to argument, but the basic truth exists that software development is not yet an engineering discipline but absolutely must become one. Not only bloat but fragility and vulnerability have reached utterly unacceptable proportions given the extent to which we rely on software to keep our societies running and safe. In all established branches of engineering (even down to gas fitting and plumbing) there are formally ratified mandatory standards that must be met. We need the same for software development in any domain where personal privacy, business security, livelihoods or lives could be affected by inadequate code. And almost inevitably, such standards would drive down bloat, as excess complexity is itself a primary source of the relevant hazards. Bluntly, we have to train would-be software developers to consider carefully (and feel responsible for) the implications to the end user of what they develop -- that's the primary principle of the engineering mindset.

    1. dlc.usa
      Holmes

      Re: "long shaken their heads at the profligate ways of modern engineering"

      Hmmm... Inquiring minds want to know if AI can be led into such software design and implementation...

    2. Boris the Cockroach Silver badge
      Boffin

      Re: "long shaken their heads at the profligate ways of modern engineering"

      This is one of the things I've been banging on about for ages

      That we take stuff that's been designed by real certified (and certifiable) engineers and bash it from a CAD drawing into a physical item, which then undergoes testing before it's certified to go on an aircraft.

      But commercial software seems to be thrown together from whatever is cheapest and handy before being thrown out of the door with the idea that if there's a bug/fault in it, well, we'll just issue a patch. I know there are branches of software engineering where more formal and tested methods exist - the motion control software in the robots, the CAD/CAM we use for turning a CAD file into something the robots understand - but these are very much the exception.

      The memory thing is a bit of a red herring here, since my days of writing assembly that had to fit into 1K proved that it's perfectly possible to write crap code in a very small space. And adding 16K of memory didn't change that at all.

      1. doublelayer Silver badge

        Re: "long shaken their heads at the profligate ways of modern engineering"

        To extend that, one of the other consequences of being forced to fit software into very small amounts of memory was, and still is, that corners get cut. Lots of good practices use a little more RAM to make sure the program won't get an answer wrong, will have something to recover from if it fails, or checks security criteria every time rather than relying on a guess or a cache that can become a gaping security hole. Efficiency is not the only good part of engineering standards, nor does managing efficiency prove or suggest that people have been sticking to other good practices. Unfortunately, a lot of people who like complaining or proclaiming themselves superior frequently conflate them.

        1. Anonymous Coward
          Anonymous Coward

          Re: "long shaken their heads at the profligate ways of modern engineering"

          Fair comment ... BUT ... people with the mindset to do good quality code in 'small spaces' using things such as Assembler usually have the 'right' mindset for 'Good Practices' !!!

          Assembler does not automatically mean 'Hacker or hacking code' !!!

          The effort to understand the problem and to be able to write the appropriate well designed code to do the job is not a quick 'hack', unless you throw something together to do a job as a one off, in 5 minutes.

          I would be called a 'Greybeard' because of my age but I have standards that I work/worked to that are possibly being ignored today by younger developers, either through choice or because they are compelled by their managers to write code quickly.

          I understand the pressures people work under BUT the current lack of respect for developers and their product is because of the decades of 'shortcutting the process' that has produced the dubious quality of code we live with.

          You cannot avoid 'Cause & Effect' !!!

          :)

          1. doublelayer Silver badge

            Re: "long shaken their heads at the profligate ways of modern engineering"

            So you agree with your first sentence then go on to perform exactly the conflation I warned against. That assumption is often wrong in many ways. For example, we happily take credit, or at least many posters here do and I happily back them up, whenever the Y2K problem is mentioned. The prevailing description of it as a non-problem that didn't deserve the attention it got is correctly countered with the fact that it was solved through significant and widespread effort. But that was only necessary because of a quest for efficiency, and all the expensive problems avoided through prudent but still expensive work in the 1990s were first created by people shaving a few bytes off RAM use. The programmers who did that in the first place have the sometimes reasonable excuse that bytes of RAM were very limited when they wrote their version. People doing similar things today don't have that excuse. If you do things like that, you are making a bad program in an effort to demonstrate your meaningless savings.

            Knowing how to use assembly did not and does not correlate with knowing enough about many other aspects of good programming. Good programmers learn both. Bad programmers sometimes learn neither. But sometimes bad programmers learn the assembly part and use it without having gained the rest, and they have been doing this for decades. The reasons for the many changes we have seen should be familiar to any good programmer of the time; it's balancing resources. In their case, it could easily be limited CPU cycles versus limited bytes of RAM, and of course we still handle that tradeoff today. However, another important one is RAM versus programmer time, and programmer time is often the more expensive nowadays. If you write a program that uses up 80 MB of RAM and it takes you two hours, and I write a program that produces identical results, runs in 8 MB, and takes me two weeks, it's very likely that yours will be better for the user. Either they got their program faster, you got more time to test it and fix bugs, or you can add more features than I can. Meanwhile, 72 MB of RAM on a desktop is easily available. If it's an embedded device or there's another reason why the RAM can't be afforded, then that might change.

            1. Anonymous Coward
              Anonymous Coward

              Re: "long shaken their heads at the profligate ways of modern engineering"

              To condense your long answer: you are including in the equation 'resources', which also means the time to do the job.

              Once again I agree ... BUT you are ignoring the time & resources that are necessary to support the 'product', which may have problems not because of the original developer BUT because there are unknown problems with the framework you used to facilitate the faster development time.

              I know that support costs are not included in the cost to produce the 'Software' BUT they are costs to the company as a whole !!!

              I am not against developing code quicker to minimise cost and therefore maximise profits ... BUT there is always a cost to 'shortcuts' !!!

              I wrote code and it was probably slower than some high level language BUT it did not create costs down the line regarding support etc.

              Horses for courses as per usual; you pick your preferred method and live with the consequences.

              Your method is better IF you are able to produce good code that does not have hidden problems from the framework etc.

              Remember that if the framework or the associated libs/DLLs change, they may produce new errors/problems (DLL Hell).

              My method is low-level and is only impacted by changes to the CPU instruction set's low-level functionality (i.e. new CPU versions).

              I am not saying I am 100% right at all times BUT neither am I 100% wrong !!!

              :)

              1. Anonymous Coward
                Anonymous Coward

                Re: "long shaken their heads at the profligate ways of modern engineering"

                I am committing the heinous crime of replying to my own post !!!

                Simply see https://www.windowslatest.com/2025/12/24/microsoft-denies-rewriting-windows-11-using-ai-after-an-employees-one-engineer-one-month-one-million-code-post-on-linkedin-causes-outrage/

                Proves my point re: quality of code we have been/are still accepting !!!

                :)

            2. J.G.Harston Silver badge

              Re: "long shaken their heads at the profligate ways of modern engineering"

              Except, again, that's throw-together instead of engineering. Two bytes will give you a 65536-year date range. Bodging gives you a 100-year range, using 16 bits to store 8 bits of information. Even *ONE* byte gives you a 256-year range.

              1. doublelayer Silver badge

                Re: "long shaken their heads at the profligate ways of modern engineering"

                But, if the data is stored in string form for presentation, so it doesn't have to be converted for each row using equally limited CPU cycles, it does save disk space and RAM during processing. Another version of how it worked did use one byte to store the year number, but fed it through a basic string processing system which would concatenate "19" to it, resulting in a presented value of "19100", which if properly parsed as an integer for calculations on something else would have made 2000 into 17200.
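
                A minimal C sketch of that classic failure mode (my own illustration, not any particular system's code, but struct tm really does store years as an offset from 1900):

                #include <stdio.h>
                #include <time.h>

                int main(void)
                {
                    struct tm t = {0};
                    t.tm_year = 100; /* years since 1900, so this is AD 2000 */

                    /* The bug: gluing "19" in front. Fine through 1999... */
                    printf("naive:   19%d\n", t.tm_year);      /* prints 19100 */

                    /* The fix: add the offset instead. */
                    printf("correct: %d\n", 1900 + t.tm_year); /* prints 2000 */
                    return 0;
                }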

                Those little differences were minimal for individual records but quite a bit more severe when you had a large set of them, a 5 MB hard drive to store them and whatever was attached to the dates on, and a single CPU shared by lots of software. The people who wrote it almost certainly knew this wouldn't work in 2000, but they considered their resource limits and assumed that, by 1995, nobody was going to be running this 20-year-old code or, if they were, resources would be less limited and it could be improved, which it was but at a much higher cost than they anticipated. And in some cases, they should have done it differently from the start anyway. Modern programmers mostly don't have any chance of justifying cutting corners for a bit of RAM efficiency, and those who play memory golf without being very certain it's a good idea can make things worse while still feeling superior.

          2. Mike 137 Silver badge

            Re: "long shaken their heads at the profligate ways of modern engineering"

            "standards that I work/worked to that are possibly being ignored today by younger developers, either through choice or because they are compelled by their managers to write code quickly"

            Or more likely because neither the developers nor their managers are actually aware of said standards. It's interesting that the OWASP Top 10 list of crass mistakes has stayed almost the same for the entire lifetime of OWASP -- pretty much the same list just gets shuffled around a bit. It's obvious from this that nobody is really learning the basics of good practice (which is a helluva lot more than mere "coding"). I suppose there's no incentive to if the dosh keeps rolling in despite the product being crap. Design a plane like that and people die, which gets attention, but it's perfectly OK for mainstream software to disrupt businesses and leak secrets, because that doesn't attract the same level of bad publicity (if any at all).

            Throughout the modern history of engineering (at least the last 300 years or so) it has been public revulsion to accidents that has driven improvements to standards. Sadly, in the software domain we the people seem to accept anything we're thrown regardless of consequences, and that has assisted in the development of quite ridiculous indemnities from liability (the EULA). If standards are to improve there have to be binding obligations to do so, and those will only come about if the public demand them loudly enough.

        2. This post has been deleted by its author

      2. ecofeco Silver badge

        Re: "long shaken their heads at the profligate ways of modern engineering"

        Professionals : measure twice, cut once

        Amateurs: measure once, cut twice... or more

        1. coredump Bronze badge

          Re: "long shaken their heads at the profligate ways of modern engineering"

          ... throw the whole thing out, start over.

          Measure not at all, cut again where they thought they remembered it was, cut off thumbs, ask pratGPT, start again....

      3. Timop

        Re: "long shaken their heads at the profligate ways of modern engineering"

        If you just knew how people meddled with those certified parts etc.

        Just get a subcontractor and make sure the contract moves responsibility forward well enough. And pocket the price difference. Of course the price is smaller and schedule tighter than what would be required for doing things properly.

        If something crashes and burns, blame the subcontractor that was, through the contract, legally bound to do something to prevent it.

        Just in case someone has been wondering about what the fuss about supply chains is really about.

    3. Jamie Jones Silver badge

      Re: "long shaken their heads at the profligate ways of modern engineering"

      I've said the same for many years - it was annoying back in the 90's when MS made such unreliable software, and a whole generation grew up to believe "all computers need to be rebooted every day or so" (tell that to the users of traffic lights or other SCADA systems)

      Nowadays, it's more than just annoying. It could be critical, yet people don't seem to care. If bridges and buildings were as unreliable as software, there'd be hell to pay, but unfortunately, rather than heading toward an engineering mindset, it's going the other way: commodity coding by minimum-wage employees using AI and cut-and-paste, without knowing what they are doing.

      As for the memory, I remember using all sorts of tricks and optimisations to reduce memory. If I could save ONE BYTE, i'd be happy!

      I remember things like the Z80 trick of using "XOR A" instead of "LD A, 0" as it was one byte less! And also reusing, for data, memory that had previously held parts of the program that would no longer be executed (init sections etc). That was horrible, but it saved some space!

      1. Adrian The Alchemist

        Re: "long shaken their heads at the profligate ways of modern engineering"

        Unfortunately all the bridges ARE falling down, some numpty allowed thermite to be stuck on a ton of buildings worldwide, there's sewage in the water, poison in the skies, roads collapsing, and China has irradiated the only city in Mongolia processing the world's lanthanides (not very "rare" earths)

        Apart from some political grumbling nothing remotely gets done for the same reasons that there's so much bloated and broken software

        Mankind needs to take a good hard look at the mess we have made and clean up our act

        1. Anonymous Coward
          Anonymous Coward

          Re: "long shaken their heads at the profligate ways of modern engineering"

          I've heard of one or two in the USA, but that's mainly due to underinvestment in critical maintenance, not engineering issues.

          Thermite is not stuck on buildings worldwide... It is a reactive powder, not a solid building material or dried paint.

          The city you mention is Baotou, China. You are confusing the independent country of Mongolia with Inner Mongolia, which is an autonomous region inside China. And it's not the only place that processes the world's lanthanides. It does about 80%, that's true, but there are other places in China, Australia, and even the USA. And the environmental disaster can be attributed to mismanagement and a lack of care when disposing of waste, but you make it sound like it was a deliberate act.

          You really need to learn the difference between systemic, intentional conspiracy, and complex, real-world problems like aging infrastructure, environmental negligence, and political bureaucracy.

          1. doublelayer Silver badge

            Re: "long shaken their heads at the profligate ways of modern engineering"

            You're correct about many of the specifics, but wrong about your admonition:

            "the environmental disaster can be attributed to mismanagement, and lack of care when disposing of waste, but you make it sound like it was a deliberate act."

            No, this entire comment thread has been about mismanagement and laziness and the consequences thereof, as a comparison with what people see, correctly or incorrectly, as exactly that among software writers. Programmers generally aren't trying to make their code run in lots of RAM for no reason. Those who use too much are doing so because it's easier, they don't know how to do it better, or they don't understand that there are consequences when they waste it. Similarly, laziness with what you do with hazardous waste can have consequences, and people sometimes ignore them because it's easier, without intending to cause a disaster. In all their examples, they were saying that there are things which could have been designed or maintained better but were not, due to laziness - either to make the case that software isn't special and shouldn't be singled out or contrasted as if it's unique there, or to blame programmers as members of a wider group. They did not say, and clearly do not believe, that this is a malicious or pre-planned deficiency.

  9. Anonymous Coward
    Anonymous Coward

    Maybe the answer to soaring RAM prices is to use less of it !!!

    Yes ... 10000000000000+++ times !!!

    Sick of bloated software that is insecure, 'plays badly with others', spies on everything for 'reasons', and is upgraded, in 9 months, to something 'Better' that is totally different in UI & functionality terms because 'now we must include AI or whatever is flavour of the month' !!!

    Back in the day I wrote software in Assembler to fit the small amount of memory or to maximise speed !!!

    Hard work but extremely satisfying as you had to know what you were doing to get it to work ... and it did !!!

    :)

    1. Neil Barnes Silver badge

      Re: Maybe the answer to soaring RAM prices is to use less of it !!!

      /me smugly points out that I am somewhat masochistically designing a FAT32 system to work with CompactFlash on a 2MHz 65c02 with a whole 32kB of RAM to play with!

      (And in response to Watford Electronics' prices, I still remember the shock of buying two 1k-by-4 memory chips from Technomatic in 1978... for a tenner each.)

      1. Sudosu Silver badge
        Joke

        Re: Maybe the answer to soaring RAM prices is to use less of it !!!

        Please, please, please do not consider this in any way, shape or form a political statement or endorsement but:

        Make Assembly Great Again

      2. skswales

        Re: Maybe the answer to soaring RAM prices is to use less of it !!!

        "for a tenner each" I too was that person, maxing out my Superboard.

  10. Blackjack Silver badge
    Devil

    Become a Linux Terminal Wizard then, some of the most memory effective programs run text only from the Linux terminal.

  11. Gary Stewart Silver badge

    Call it Gary's theory

    feature bloat x levels of abstraction = huge programs

  12. Wang Cores Silver badge

    I'm trying to figure out what's going to happen if the RAM panic continues. Do we just surrender personal computing to the lizard men of California, who can't optimize their software AND bottleneck the supply of memory to run their bullshit?

    1. elsergiovolador Silver badge

      You'll be buying 32G off of street dealers.

      1. VoiceOfTruth Silver badge

        The 'good old days' will come back. Just a bit before 2000, I went into the office and found that, unusually, my computer was switched off. Turned it on. Half its memory had left the building.

        1. Adrian The Alchemist

          A friend of mine was doing his PhD in cybernetics, and one weekend thieves hit computer science, cybernetics, and everywhere else, walking off with as many computers as they could carry

          He lost all his robot designs and the software to run them, and the thugs even broke the motherboards getting into the PCs

          He needed to get a special grant for another two years' money

          Lucky Chemistry wasn't hit, given that the security on the chemicals was laughable (a Yale lock on the stores)

          1. Neil Barnes Silver badge

            This has been the subject of a long-ago El Reg article, but one time I shipped some computers from the UK to Tajikistan, via Russia (the only way you _could_ ship anything, at the time). When they arrived, the two pallets they had left on had mutated into any number of smaller packages, and _mysteriously_ the PCs - top end 486s! - had had processors, memory, and drives literally ripped from the motherboards, damaging things in the process. Oddly enough, the high-end audio cards fitted, which cost more than the entire PC, remained in place; probably not recognised or mistaken for something commodity cheap.

            1. werdsmith Silver badge

              There was a time in the 90s when office break-ins were done just to remove the RAM SIMMs from each PC.

              All 8MB of it.

        2. dmesg Bronze badge

          Back in the day my uni had a computer lab with IBM RS/6000 workstations. Cases were padlocked shut and machines were cabled to desks. One morning students came in to a room full of inoperative machines -- a thief had used needle-nose pliers to reach through the cooling slots on the side of the cases and extract the RAM.

    2. Dan 55 Silver badge

      Hey, they've got to do something to make money from the spare capacity in data centres after the AI bubble pops. Why not make it running your apps remotely and storing your data?

    3. doublelayer Silver badge

      If it drags on long enough, more places will manufacture RAM. There are new companies that are in that market, with both Chinese and Indian manufacturers eager to spread out the market a little. Existing manufacturers can also increase production eventually. If the bubble doesn't break on its own, they will eventually make more because they're otherwise leaving money on the table. And, to get that far, the people buying huge amounts of it will have to continue doing so consistently even though it doesn't fail that quickly and they have limited funds. Until then, some computers will be more expensive, so people will use the ones they have for longer. This problem isn't going to grow infinitely any more than the AI companies can.

      1. David Hicklin Silver badge

        > If it drags on long enough, more places will manufacture RAM.

        But it takes time (literally years) to build and bring online a new plant, and they will only do that for the most profitable stuff, so if you want older but still usable tech you are stuffed

        1. doublelayer Silver badge

          It certainly won't happen overnight, but if they build a new plant to make the fastest memory, then the existing plants that are currently making that can go back to making slower and cheaper commodity stuff. There is some new construction on the way already, and if demand remained this strong for quite a long time, manufacturers would do something about it. Also, demand probably won't stay this high for long because this level of demand is expensive for those doing the demanding. There is a gap between "it's going to be fixed next week" and "it won't grow infinitely". We are in that gap. We will have to deal with high prices and inability to get what we want at a moment's notice, but we don't have to plan for a time, next year or ever, when everyone's got a 1 GB dumb terminal because RAM is too expensive to have anything else.

  13. This post has been deleted by its author

  14. vogon00

    Gonna get expensive!

    "it is time engineers reconsidered their applications and toolchains' voracious appetite for memory."

    Most of us do already - at least those of us who remember being 'king of the hill' because one's 286 AT clone had a whole 2MiB of DRAM. At work, things were of a more embedded nature and we were constantly having to re-factor code to fit things into the available space... all 8KiB of it. With the more mature embedded stuff, the code reduction required to make enough space for the fix/new feature could sometimes be harder than the fix itself!

    As an old fart who was used to counting the bytes, I've often wondered how the 'memory bloat' introduced by the OS/Tools/Runtimes etc. could be tolerated. The answer is, of course, 'plug in more RAM!'. ISTR that RAM was about £10/MiB at the time.

    The idea that RAM is a finite resource appears to have dropped out of the syllabus :-) Think yourselves lucky - on occasion one had to count the CPU cycles used by each machine instruction if things were time-sensitive.

    1. HereIAmJH Silver badge

      Re: Gonna get expensive!

      at least those of us who remember being 'king of the hill' because one's 286 AT clone had a whole 2MiB of DRAM.

      640K ought to be enough for anybody.

      1. IvyKing

        Re: Gonna get expensive!

        The monitor/debugger used by many of the early 6800-powered micros was hard-coded to start at 32K. The thinking at the time was that no one would have anything close to 32K of memory. The original MS-BASIC had a version that would run in 4K.

  15. JerseyDaveC

    It's a shame, but bloatware is here to stay because generally speaking they don't teach people to design code any more.

    When I went to uni (1988-91) my Comp Sci degree was highly theoretical - Comp Sci was new and many of the Faculty were mathematicians. So we learned all about data structures, algorithm complexity, that kind of thing. And we had to write frugal code, because in those days we were working in Modula-2 on Mac desktops with 4MB RAM and in C on Sun-3 shared systems with 32MB or 64MB.

    I remember competing in the BCS's annual programming competition back then, too: each team was given a PC with a copy of Quick-C and you had to keep it small and not bust the "small" memory model which if memory serves was something like 640KB. Taught you to think about the algorithm and not just throw a highly recursive, clunky monster at it and hope, because the judges (of which I later became one) would see that coming and would have test cases that would make the code bust the RAM limit. I once set a question (Sudoku solver) which did that, for that precise reason - if you brute-forced it, you'd blow up, so you had to write a vaguely clever algorithm.

    I flinch when someone tells me they're a "coder". There are many, many extremely good software engineers in this world, but they're a dying breed because modern technology saves us from ourselves when we write bloated, inefficient code, and so the need to actually design code properly is vastly reduced compared to 30 years ago. There are too many people who can write programs, but not good ones.

    Incidentally, in the BCS competition each team had one PC. You were given a bunch of questions and you shared them out, designed the solutions with pen and paper, and then took your turn on the PC to bash in the code. I wonder how many people do that today.

    :-)

    1. AndrueC Silver badge
      Meh

      It's a trade-off though. Writing safe and efficient code takes time and requires a highly skilled programmer both of which are in short supply. If you wait for the meagre few highly skilled programmers you've managed to find to produce a marketable high quality product the opportunity will likely have evaporated. Either because your competitors got there first with their barely adequate offering which is nonetheless selling well or perhaps even because the market has moved on and your product is no longer required.

      Hand wringing and crying over second-rate code and second-rate performance is basically ivory tower thinking. The general marketplace doesn't currently demand or even particularly want top quality code - it just wants 'good enough'. That's the business reality and since programmers are supposed to be writing code that the market wants they are doing what is required of them.

      1. Anonymous Coward
        Anonymous Coward

        Nope !!!

        It is not hand-wringing at all !!!

        The assumption is that anyone who says do it 'properly' is in an Ivory tower isolated from reality.

        Guess what, I worked for many small companies where time was valuable, because we only had a few 'coders' and a few support people.

        I know that spending 20 weeks to produce the goods meant that you lost the job by being gazumped by some Hacker who threw something together in 7-14 days.

        We worked hard and delivered the goods BUT never lost sight of the fact that 'crap code' meant that the support people would be on the telephone for weeks trying to fix the problems, which would be landing back in 'our' laps anyway.

        If we overwhelmed the support people we would be impacting ALL our customers and future work from those customers.

        There are 1000 excuses for doing 'just enough' and quick profits is the number one.

        This is what has created the world we live in, doing a good job is lost in time.

        Standards, personal and otherwise are important.

        Quality is important.

        Ignoring these things is the thin-end of the wedge.

        It is what was ignored in the past and ... look all the code is bloated, crap and hard to maintain and support !!!

        What a surprise that is NOT !!!

        A little bit of pride in your work and integrity goes a very long way.

        If your manager does not understand this ... educate him/her !!!

        If your manager thinks there are no consequences to 'Good enough' ...

        I would recommend polishing your CV ...

        You will need it sooner than you think !!!

        :)

        1. J.G.Harston Silver badge

          Re: Nope !!!

          Yebbut, the resources for support is Somebody Else's Problem.

          1. Anonymous Coward
            Anonymous Coward

            Re: Nope !!!

            You forgot the 'joke' icon ... I hope !!!

            :)

      2. Steve Davies 3 Silver badge
        Facepalm

        re: requires a highly skilled programmer both of which are in short supply

        Spot on.

        I used to test candidates by asking how they handled five different types of errors. One said, and I quote, "I write code that does not have errors".

        He completely missed the point. The code might not have errors, but what happens if the inputs do? Or the output system stops responding, etc.?

    2. Francis King

      Memory models

      >> which if memory serves was something like 640KB

      In fact 64KB code and 64KB data. https://en.wikipedia.org/wiki/X86_memory_models

      1. Jou (Mxyzptlk) Silver badge

        Re: Memory models

        No: 64KB segments - whether for code or data doesn't matter, it is the programmer's decision - and as many segments as you can get (including, on some older OSes, ignoring the OS and overwriting its memory; that includes Unix, not only DOS). However: if your code and data fit into 64KB, you can use a .COM instead of an .EXE under DOS. It simplifies assembly coding since you don't have to care about a few things - you are always in the same segment.

    3. abufrejoval

      DOS "smal" and "large" memory models

      >I remember competing in the BCS's annual programming competition back then, too: each team was given a PC with a copy of Quick-C and you had to keep it small and not bust the "small" memory model which if memory serves was something like 640KB. Taught you to think about the algorithm and not just throw a highly recursive, clunky monster at it and hope, because the judges (of which I later became one) would see that coming and would have test cases that would make the code bust the RAM limit. I once set a question (Sudoku solver) which did that, for that precise reason - if you brute-forced it, you'd blow up, so you had to write a vaguely clever algorithm.

      Your bio memory is unfortunately befuddled by early PC memory abstractions...

      The 8086 or "DOS" memory model took the 8008/8080 or "8-bit" memory model, which consisted generally of 8-bit registers and ALUs and an 16-bit effective memory addresses space, which either combined two 8-bit registers with base and offset (e.g. 6502) or included a few 16-bit registers in a generally 8-bit architecture (8008/8080/Z-80 and lots of others) and extended that via 16-bit registers (which could also be used in a 8-bit manner, e.g. "AX" (16-bit) also being usable as "AL" (lower 8-bit) and "AH" (higher 8-bit) and extended it via a "segmentation" approach.

      A segment was basically the 64KB area which a 16-bit offset could address natively, typically 'implied' or translated behind the scenes by an MMU (memory management unit). E.g. PDP-11 machines would have code, data, stack and heap segments that could be mapped to distinct physical memory spaces, e.g. for each process, allowing different processes to run with physical memory isolation while using far more than a single 16-bit or 64KB physical address space.

      The 8086/8088 only went halfway: instead of a full-function MMU with flexible mapping and segment faults for transparent on-the-fly translation and virtualisation, it shifted the 16-bit segment address four bits to the left and added the 16-bit offset on top. That gave it an effective 20-bit (1 MByte) address space with a fixed physical mapping, where different segments might overlap to a large degree in the same physical address space: the idea was that lots of programs wouldn't actually need a full 64KB code, data, stack or heap segment, so the 4-bit shift (rather than a full 16-bit shift, which would have placed segments on 64KB boundaries) avoided wasting RAM when typical segments were smaller.
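
      To make that fixed mapping concrete, here is a minimal sketch in C of the real-mode address arithmetic just described (illustration only; the 20-bit mask models the original 8086's wraparound, and nothing here is specific to any particular toolchain):

        #include <stdio.h>
        #include <stdint.h>

        /* 8086 real mode: physical = (segment << 4) + offset,
           truncated to 20 bits on the original 8086 (no A20 line). */
        static uint32_t phys_addr(uint16_t segment, uint16_t offset)
        {
            return (((uint32_t)segment << 4) + offset) & 0xFFFFF;
        }

        int main(void)
        {
            /* Two different segment:offset pairs aliasing the same byte. */
            printf("0x%05X\n", (unsigned)phys_addr(0x1234, 0x0010)); /* 0x12350 */
            printf("0x%05X\n", (unsigned)phys_addr(0x1235, 0x0000)); /* 0x12350 */
            return 0;
        }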

      The only reason that 1024KB address space became 640KB effectively on PCs was the fact that the upper 384KB were mapped to I/O by IBM's PC designers: they just couldn't imagine that the Apple ][ replacement they were designing might actually ever use the full 20-bit address range, which today has reached 64-bit (while IBM's "proper" single address space architecture, the i-series or AS/400 went from 48 to 128 bit during that time...).

      The overhead of using a real MMU, including exception handling, was pretty near minimal even in those early days - comparable to what the IBM PC-AT then used to implement 24-bit DMA for floppy operations - but that's just one of those many personal computing "what-ifs" that are so interesting to lose yourself in, ex post.

      A "small memory model" program would then be basically an "8-bit" application, perhaps using 16-bit registers and arithmetic, but only 16-bit addresses/offsets for everything, code, data, stack and heap.

      The benefit was tight/native "single action" 16-bit addresses being used throughout, even if very few instructions actually completed in a single clock cycle in those early and pre-RISC days.

      If 64k wasn't enough, programmers would have to use a "large" memory model, which implied using "DWORD" addresses - a full 16-bit segment plus 16-bit offset, 32 bits in total - even if on an 8086 those 32 bits of address only yielded 20 bits of physical address space.

      The overhead was significant, but if your code or your data just would no longer fit into a 16-bit address space, you'd at least be able to make do. Compilers of the day would actually support choosing between "small" and "large" for each domain: e.g. you'd be able to combine a "small code" application with a "large data" model, or vice versa, or combine both.

      I don't think that "large stack" applications were supported, and I'm not sure about segmented heaps either.

      Needless to say, it was a mess, especially once applications and operating systems needed to support both, 16-bit relative addresses and 32-bit DWORD parameters in calling conventions, especially with so few registers to use in case of x86. But in those days it was considered a privilege to be able to somehow compute at all: everything was better than a human computer, or having to resort to pencils and paper, or having to wait for a time-sharing slot.

      Recursion was great for transitioning from extremely hardware-oriented early code to mathematical abstractions, but it meant that a lot of critical data structures wound up on a stack that would only take a maximum of 64KB of RAM; in fact heap and stack were typically forced into a single segment, growing from the bottom and the top towards each other, only to crash terribly once they met, if "non-typical" input data led them on such a collision course...

      The 80286 protected mode implemented the full "PDP-11"-class memory abstraction and eliminated the fixed mapping of segment addresses (via the 4-bit left shift), replacing it with a full MMU and an exception-handling mechanism to implement physical memory overcommit and on-demand swapping of memory segments. The physical memory space was extended to 24 bits, while DWORD pointers still consumed 32 bits and registers mostly remained 16-bit.

      Since VAX-like abstractions with 32-bit registers, offsets and 4k page granularity followed only two years later via the 80386, the "PDP-11"-like memory model on the 80286 never really took off, which turned out to be a great thing: virtual 8086 and DOS were bad enough already.

      1. Jou (Mxyzptlk) Silver badge

        Re: DOS "smal" and "large" memory models

        So, this was the longer explanation of "simple 8086 segment model". Grade: A+.

        For extra bonus: explain in the same detail how the 64 KB barrier is broken for the C16/C64/Plus4/C128 (not the PET). Those CPUs, in the way they were used, could address more than 64 KB as well, which was used to get an actual 64 KB of usable RAM even though several KB were "used" by ROM. The C16/Plus4/C128 implemented bank switching in their BASIC to free more of the RAM for BASIC. The C64 did not, hence the "38911 BASIC BYTES FREE" message. The C16 (with 64K extension) and Plus 4 showed "60671 BASIC BYTES FREE", and the C128 "122365 BASIC BYTES FREE".
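
        As an illustration of the trick being asked about, below is a toy C model of RAM-under-ROM banking, assuming the C64 layout (BASIC ROM overlaying $A000-$BFFF, selected by the LORAM bit of the 6510's port at address $01). The real machine does this in hardware via the PLA and also consults HIRAM/CHAREN, and in 6502 assembly, so treat this as a sketch, not a reference:

          #include <stdio.h>
          #include <stdint.h>

          /* 64 KB of RAM always exists; ROM merely overlays it for reads. */
          static uint8_t ram[65536];
          static uint8_t basic_rom[8192];      /* $A000-$BFFF */
          static uint8_t port01 = 0x07;        /* LORAM|HIRAM|CHAREN set at reset */

          static uint8_t cpu_read(uint16_t addr)
          {
              if (addr >= 0xA000 && addr <= 0xBFFF && (port01 & 0x01))
                  return basic_rom[addr - 0xA000];   /* ROM banked in */
              return ram[addr];                      /* RAM underneath */
          }

          static void cpu_write(uint16_t addr, uint8_t value)
          {
              ram[addr] = value;                     /* writes always reach RAM */
          }

          int main(void)
          {
              basic_rom[0] = 0x94;                   /* pretend ROM content */
              cpu_write(0xA000, 0x42);               /* lands in the hidden RAM */
              printf("LORAM=1: %02X\n", cpu_read(0xA000)); /* 94 - ROM visible */
              port01 &= (uint8_t)~0x01;              /* bank BASIC out */
              printf("LORAM=0: %02X\n", cpu_read(0xA000)); /* 42 - RAM revealed */
              return 0;
          }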

  16. Caspian Prince
    Mushroom

    Not all optimisation in software engineering has been about resource efficiency

    There are other concerns that the commissioners of software have long cared about and that need to be considered, yet it's fashionable amongst us older devs to pretend that back in my day we had to lick t' road clean before fatha would even let us go to school, and we made every byte count, etc. etc.

    The problem is, if you cast your mind back to software development in the 70s and 80s and even 90s... it was fucking awful. The tools were primitive. There wasn't a lot in the way of useful abstraction, which meant stuff took ages to develop, was usually fragile and tied to specific bits of hardware, and was riddled with the sorts of potential security bugs that would have you hung by the foreskin until suitably sorry in modern-day computing (but fortunately back then everything was air-gapped).

    Not only did stuff take ages to develop, it didn't actually do much of what everyone takes for granted these days. Software almost certainly wouldn't have worked with any character set other than US ASCII. It wouldn't have handled RTL script. It very likely didn't have any undo feature, or any cut and paste or global clipboard. Your fonts would have been shitty bitmaps on a low-res monochrome screen instead of beautifully rendered glyphs in 32-bit colour, rendered slightly larger on your 4K monitor for your tired, miserable old eyes. It might not, if you go far enough back, even have had a GUI. It wouldn't have been able to access more than 2GB of RAM until it got a 64-bit OS. It would likely have just crashed when it ran out of physical memory. All of these things have been added and have demonstrably made using software vastly better than it used to be... and they've all cost space. Everything's come at a cost in space, and CPU power.

    And while it was taking ages to develop, very few people had the required patience and autistic attention to detail to actually do it properly, so they commanded a very high price, which they charged for a very long time. This vexes the people who pay for these things to get developed, so they're very keen to make it a) easier, and therefore less of an exclusive and hence expensive club, and b) quicker, so it costs even less to make and gets to market faster. And those two drivers - make it quicker, make it cheaper - are the full force of what's driven software development for the last... three decades or so?

    It's still vastly cheaper to buy RAM than it is to optimise software. Vastly, vastly cheaper. Even at today's slightly higher prices. And... I'm fine with that, because I can concentrate on the first, and most difficult, bit of software development - making it work - for longer before I have to worry about the next bit - making it fast.

    1. isdnip

      Re: Not all optimisation in software engineering has been about resource efficiency

      Well, no.

      It is cheaper to buy RAM than to optimize software that will be used in a narrow application, like in house, or for some obscure market. But when we're talking about schytt that goes out as part of an OS package to hundreds of millions of users, then a little effort done by the developers will have a big payoff worldwide. And we know that the big devs don't do that.

      1. Caspian Prince

        Re: Not all optimisation in software engineering has been about resource efficiency

        That's not quite the case though: because RAM is not reserved by a single application, it's easy to conflate RAM usage with the simple requirement of disk space. The reality is that RAM usage is amortised over multiple applications running simultaneously, and not all applications are loaded at once. So while you are totally correct to say that it is way cheaper to bung 4GB of RAM into a server than to shave 4GB off the memory usage of the service running on it, the same holds for an application that runs on a million installations: 4GB of RAM installed on a million computers is amortised over the varying usage of a thousand applications. Your application isn't the only one to benefit from that 4GB: the other thousand applications benefit too, which makes all of them quicker and easier to build and maintain, and therefore cheaper to make, which means those million users pay correspondingly less for their software. And it *still* works out cheaper for them to just buy a 4GB stick than to pay for the extra engineering time to make a thousand applications a bit more efficient.

    2. AndrueC Silver badge
      Unhappy

      Re: Not all optimisation in software engineering has been about resource efficiency

      ..and good luck finding enough programmers of a high enough quality to ensure that the code meets exacting standards. Computer programmers of any ability have been in short supply since the career was first invented. I also don't think the situation is improving. The impression I got before I retired was that things were getting worse.

      1. Caspian Prince

        Re: Not all optimisation in software engineering has been about resource efficiency

        Only about 1 in 10 people I've ever worked with over the last 35 years or so have really been software engineers... the rest do it for various reasons but not because *it's their thing*, they just kind of accidentally fell into it like accountancy or sysadmin or estate agents, and trudge along in a mediocre fashion, never really caring much for what they do.

        1. J.G.Harston Silver badge

          Re: Not all optimisation in software engineering has been about resource efficiency

          And that's so frustrating. There are people who just innately *ARE* programmers who can't get employed, while at the same time there are people who *didn't* want to be programmers who ended up being programmers as make-do work. HUGE waste of talent.

      2. LybsterRoy Silver badge

        Re: Not all optimisation in software engineering has been about resource efficiency

        Do you think there is any chance of applying a sensible solution ie stop producing so much crap? Nah I thought not.

      3. J.G.Harston Silver badge

        Re: Not all optimisation in software engineering has been about resource efficiency

        Computer programmers of any ability have been in short supply since the career was first invented.

        That's a lie; the reality is there is a YUUUUUUUUUUUUUUUUUUUGE oversupply of programmers. How else can you explain thousands upon thousands of applicants for every vacancy, and employers not even getting off their arses to say **** you to applicants?

    3. LybsterRoy Silver badge

      Re: Not all optimisation in software engineering has been about resource efficiency

      Yes, there were hardware limitations back in the past, but just because you now have an AI powered toothbrush doesn't mean your teeth are any cleaner.

  17. Pascal Monett Silver badge

    "rewards should be given for compactness, both at rest and in operation"

    They will be - in time: when Linux has as large a library of applications as Windows and can run comfortably on an 8GB PC while doing everything you need to do, just like a Windows Bloat system does with 32GB of RAM.

    There are people who are still capable of minimalist programming, but they do not include GitHub libraries in their codebase. They write their own libraries and know exactly what is in them and why.

    But yeah, that takes time. Time to think about the how, time to write and time to debug and make sure it works in all use cases including edge cases.

    Time is money, so managers prefer to bring on the GitHub bloat - even if that means "supply chain risks".

    The cost of RAM is up? Who cares? Time to market is more important (especially for bonus purposes).

  18. isdnip

    Anybody remember the linker?

    I'm not a programmer, but back in the '70s I did write some code for the systems of the day, like RSX-11M and VMS. Memory was expensive. What may be forgotten was that in those days, the compile stage was followed by a link stage, wherein the linking loader incorporated into the executable only those system or library routines that it needed. If the code wasn't used, it wasn't linked in. It seems to me that the problem nowadays is that there are big libraries and they're brought in whole, rather than only the parts in use. Isn't that what a good linker handled? Why was that forgotten? Or am I just making stuff up?

    1. Jou (Mxyzptlk) Silver badge

      Re: Anybody remember the linker?

      Actually malware uses that part the most today. Why write all that easy-to-detect code when the OS has tons of it? (In that regard: should capable frontends like PowerShell and *sh be flagged as malware? command.com and cmd.exe too...)

    2. AndrueC Silver badge
      Boffin

      Re: Anybody remember the linker?

      Such linkers still exist. And in fact compilers are also now much better at code folding and de-duplicating. Code that programmer A writes will be improved by the compiler and linker. Maybe not as well as code written by a highly skilled and detail oriented programmer but a lot of 'Joe Average programmer's' coding laziness will be mitigated.

      The problem is that in order to save time and reduce mental load programmers have to rely on libraries. And most libraries ship with 'everything anyone might want' out of the box. Now we could split a large library into several smaller ones with a public 'shim' for access that loads only the needed bits as and when required - and indeed .NET does this. Unfortunately this has security and performance implications, and a lot of modern code just inherently has a lot of interdependence. The result is that it's just too much hassle and incurs too much expense to do.

      1. isdnip

        Re: Anybody remember the linker?

        I was suggesting that the good linkers didn't bring in libraries, just routines that were being used from the libraries. That's harder but more efficient. Does anyone do that nowadays? It doesn't seem so, but wasn't that commonplace 50 years ago?

        1. Caspian Prince

          Re: Anybody remember the linker?

          That's how they still work - if you want to do it that way. These days the favoured way is to dynamically link a shared object library, the reason being that hundreds of applications will all be using the same code so you might as well have the OS only load it into RAM once. In other words after not-very-much linking it becomes hugely more efficient, not less efficient, to load the entire library instead of just bringing in bits and pieces.

          1. Deckard_C

            Re: Anybody remember the linker?

            Except these days they'll all be having their own copies of the library, which will be outdated and have CVEs getting flagged by your vulnerability scanner. Hello OpenSSL and curl - I'm currently wondering why Tesseract-OCR needs either of those two.

            The OpenSSL library is popping up everywhere as AI is added into everything, and of course they have to secure the connection to the AI datacentre with an outdated copy of an OpenSSL library.

          2. Benegesserict Cumbersomberbatch Silver badge
            Facepalm

            Re: Anybody remember the linker?

            Only to have all that hard work undone by flatpak, snap and the like.

        2. that one in the corner Silver badge

          Re: Anybody remember the linker?

          As noted above, linking to libraries as DLLs rather knackers that ideal.

          If you are prepared to go for static linking (which has many benefits, btw, such as being certain which copy of routine x() you are calling[1]) then the prevalence of big, complicated, libraries also kicks the ideal in the nadgers because very few library authors are willing to break the code up into the thousands of separate dot-c files necessary for the trick to work[2]. Because it is a lot easier to deal with "only" a few hundred source files than many thousands, and a lot easier if you can load one source file into the editor and see all the methods that make up a single class, for example.

          All of which is a dreadful shame: all it comes down to is how the source editor works, as they all still have you think in terms of "which file did I put that routine in" and "keep related things together for locality of reference". But sorting out the editor could have been done decades ago[5]. The version control would have followed easily (e.g. take the same manifest as the Linker and don't bother the User with the fact that so many little files are involved). A simple(!) tool to break down existing full-fat files[6] to get things started. There *would* also have to be some fiddling with the programming languages, but nothing too drastic: when the tool breaks down your full-fat files, it will have to add some annotation to indicate that *this* static declaration is visible *here* and *here* but not *there*; oh, and "there" has its own static (in the C/C++ meaning of the word) with the same name[7]...

          Anyway, bottom line, we *could* have had (still could, really) a toolset that FORCED the ability to do a minimal static link. But who cared? Cared enough to fill in the gaps I've left, write the tools - and then who cared enough to run those tools in their build? Personally, I'd love to *have* such tools, but, um, well, haven't got to grips with creating them (see the note about how it could be done, now, with LLVM - but that wasn't around when...), and everybody else is happy with DLLs...

          BTW there ARE arguments for going in exactly the opposite direction: for example, although you *can* happily build SQLite as "one dot-c per compilation unit", the documentation makes it plain that they prefer you use "all the dot-c files pulled into one, single, humongous compilation unit"! Which will totally defeat the Linker (one reference to any part of SQLite and *everything* gets dragged in). BUT they then claim that doing this lets the compiler's optimiser have full rein over the code and makes the result faster and betterer, so it is worth doing things this way. Which is true: you *can* get better speed-, if not space-, optimised code with this trick. Sometimes. Don't just think it'll work for every bit of code you have...

          [1] and others will say this is a DISadvantage, as you can't update x.dll yourself with a fixed copy of x() when the vendor goes byebyes; this - discussion - can rage on and on...

          [2] because the Link Phase still does what it always did: pull in the separate, but entire, dot-obj[3] files from within the dot-lib[4]

          [3] sorry for the DOS'isms, replace dot-obj with dot-o etc to fit your preferred OS; and dot-c with your preferred programming language's default file extension.

          [4] because a dot-lib file really is nothing more than an archive file of a pile of dot-obj files, where each dot-obj is the result of compiling a single compilation unit (which is, in the simple case, one dot-c file per compilation unit); this is made obvious when you spot that the "make this lot into a library" command may literally be the 'ar' file archiver command. Like zip files, only an older, simpler, format.

          [5] E.g. an agreement on a simple text file - call it a "manifest" - that the tools kept up to date and prevented you from editing (e.g. put a checksum, in text, at the end - you can read these manually, just not edit them and expect to have them work), which listed all the files that the editor had used to store all the separate bits of code: putting it crudely (note that - being knowingly crude here, not dotting all the t's) one file per definition of a linkable entity. This is used as input to the Linker, just naming all the obj's. Similarly, your Make scripts would read the manifest for use in dependency chains...

          [6] even without going the full way, such a tool would be *very* useful today - hoping to see one built using the LLVM tools.

          [7] hint: the trick is in the naming! Let the static be replaced by a non-static and use name-mangling to enforce the visibility rule...

          1. Richard 12 Silver badge
            Boffin

            Re: Anybody remember the linker?

            The linker can and indeed does remove unused functions and static variables, even if the entire static library is a single C file.

            The standard says "as-if", it doesn't actually have to bring in the entire translation unit. One function is enough if it can prove that the rest is never used.

            This feature of toolchains has been known to occasionally cause/expose bugs - usually because the coder did something that relied on implementation-defined or more often undefined behaviour.

            The trouble is that this optimisation extends the build time, and so a lot of teams turn link-time code generation and other optimisations off. Possibly because they're being charged by the second for build resources, either directly or via that curse of the corporation, an "internal market".
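
            For the curious, a sketch of one common way to get that effect explicitly with the GNU toolchain (the flag spellings below are gcc/GNU ld; other linkers have their own equivalents):

              /* lib.c - two routines; only one is ever referenced. */
              int used(int x)   { return x + 1; }
              int unused(int x) { return x * 2; }

              /* main.c */
              extern int used(int);
              int main(void) { return used(41); }

              /*
               * Put each function in its own section, then let the linker
               * garbage-collect the unreferenced ones:
               *
               *   gcc -ffunction-sections -fdata-sections -c lib.c main.c
               *   gcc -Wl,--gc-sections -o app main.o lib.o
               *
               * 'nm app' shows unused() has been dropped from the binary,
               * even though lib.c is a single translation unit.
               */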

            1. that one in the corner Silver badge

              Re: Anybody remember the linker?

              > The linker can and indeed does remove unused functions and static variables

              Depends upon your linker having that feature; for common targets and/or Big Project toolchains, yay, great. But you can't guarantee that, even nowadays, especially for oddball targets that are more likely to be memory constrained. The plan I outlined works for every target - *and* would have done so from the Before Days when we were still calling them "Link Editors" and worried every time over how well they'd enable the Loader to do its job.

              Ok, yes, w.r.t. the core argument of TFA (wanting better memory management on "standard platforms", i.e. Windows and Linux on PCs and servers) you are more than likely to have a pleasingly capable linker (and compiler willing to work nicely to allow the linker to do its very best job).

              > Possibly because they're being charged by the second for build resources, either directly or via that curse of the corporation, an "internal market".

              And when you try pointing out what happens when you compile & link a release copy just the once, then run it thousands, millions of times... ("Ah, 'dev' and 'factory floor' are two different internal cost-centres and I only have to budget for one of them.")

              Oh, for the simpler days of 2007.

              (Of course, *I* always use that time to plan out module tests. Ahem)

  19. Joe Gurman Silver badge

    Durn kids

    Get off my lawn, and take your 64-bit word size and your GUIs and your hi-def monitors and your inefficient libraries with you!

    1. Jou (Mxyzptlk) Silver badge

      Re: Durn kids

      No! I don't want to go back to 160x200 with two colors, or four if I accept some artifacts... Or the luxurious two line forty character display my uncles were proud to have...

  20. Random as if ! Bronze badge

    Memory <> Datacentres

    This is like selling suncream in case it's warm next year. When the datacentres don't materialise - including the ones in space - memory will be cheap as they rush to sell stock.

    https://en.wikipedia.org/wiki/Beer_distribution_game

    The emperor has no clothes!

    1. Clausewitz4.1
      Devil

      Re: Memory <> Datacentres

      "when the Datacentres don't materialise"

      Let's just hope they can be repurposed to something useful.

    2. fromxyzzy

      Re: Memory <> Datacentres

      Server memory doesn't generally work in consumer devices.

      1. doublelayer Silver badge

        Re: Memory <> Datacentres

        True, but it will work in many servers, so anyone wanting to buy server memory can buy it from those guys, and anyone choosing what kind of RAM to make in their factory this month would do well to choose consumer, dropping prices for both groups. There's a bit of complexity, since some of this capacity is specifically intended for GPUs or other areas where not even normal server memory is involved, and that would likely slow prices coming back to normal. But even if everyone who bought memory decided to shred it rather than sell it, prices would still come back to normal as long as they weren't continuously buying more.

  21. Altrux

    Too True

    Go and find a classic 90s 4k demo (coded for 386 or 486!), which might even still run on modern(ish) Windows. Marvel at what could be achieved with 4 freaking kilobytes of compiled code. Then wonder why ever a bare-bones text editor consumes mountains of megabytes today. Modern Linux-based systems are obviously just as over-bloated as Windows, although at least it's /possible/ to build an ultra-minimal cut down implementation of the former.

    1. Jou (Mxyzptlk) Silver badge

      Re: Too True

      The executable was 4k, the demo(s) themselves needed more. And today, on modern hardware in Windows, those current 4k demos do things woah...

      https://demozoo.org/parties/5222/results_file/1804/ (official site shows no result? https://2025.revision-party.net/ )

      https://2025.evoke.eu/party/after-party-roundup/

      1. Mike VandeVelde Bronze badge

        Re: Too True

        I still remember the first digitized music I ever heard, a few seconds of some song by Wham on a floppy disk, on a Commodore computer in the mid 1980s. Not exactly CD quality but perfectly recognizable. My dad would bring home all kinds of goodies from the local user group meetings. And he would set me to typing in hexadecimal machine language code out of the back of magazines to get a game, and I loved it. That was how I learned that not every new game was totally awesome to have, a whole bunch were just not even slightly worth it.

        I still remember once in a while at work in the mid noughties in bored/frustrated moments going through script libraries and manually removing the trailing whitespace that would always slowly build up for some reason. If I could bring the size down from say 7893 bytes to say 7807 bytes I knew nobody anywhere would ever notice, but I got some dopamine out of it contemplating how many times it would be loaded out there and what it might add up to. It was basically meditation, the kind you need in order to have some random breakthrough bubble up to the surface from percolating in your subconscious. The kind of thing terrible managers could never understand. What have you been doing? Even starting to attempt to explain it could only make it worse.

        https://en.wikipedia.org/wiki/Commodore_64_demos

        Desperately needed all over the place here: [reminiscing icon]

        1. doublelayer Silver badge

          Re: Too True

          I suppose it depends what kind of script you were doing that with, because there's some chance it had less effect than you thought. For example, if it was disk space you were worried about, the likely change was zero, because the old version and the new version would fit into the same two 4 KB filesystem clusters. But maybe these were JavaScript libraries and you were worried about bandwidth - except that traffic was usually compressed at least with gzip at the time, and gzip can and would have compressed that kind of thing significantly already. Also, if that was it, you would have gotten more effect by running a minifier against the JS before making the production version, which would have removed far more bytes than that. You could also have saved some time by writing a quick script to remove trailing spaces from any line of a file rather than doing it manually.

          It's a fine way to deal with boredom, but I wouldn't jump to the conclusion that it helped anyone or even everyone in aggregate. Similarly, when people claim they're working efficiently, it's possible they're playing a puzzle game about whether they can get something to work with less, a game which is fun (and when it stops being fun people stop trying), but it only helps people some of the time.

          1. Mike VandeVelde Bronze badge

            Re: Too True

            Another badly needed icon around here: [don't get me started]

            It was a Domino server. As in Lotus Notes. ORDER IN THE COURT!!! So it was mostly LotusScript script libraries. Which was basically a clone of Visual Basic with an army of Lotus objects on top. ORDER IN THE COURT!!! It was some kind of quirk in the IDE, with copying and pasting at least and maybe more than that, that I started noticing. I would check mine, but I wasn't the only one with my fingers in. So any code I hadn't been into in a while, I would select all and the cruft would come to light, and I couldn't let it lie. Well, I could, but while waiting for inspiration to strike me on just what should be done next, it would become a bigger and bigger issue. I would often succumb to my inclinations.

            But then the thing was this was in a template. The template would be used to populate dozens of production instances. Each production instance would have dozens and dozens, maybe hundreds of local replicas. All replicating back and forth.

            Plus these were not individual files in an operating system file store. These were bundled up into .nsf files, so choke on your nerdy 4KB attempt at dousing my flame ;)

            Plus this was Domino, so not entirely LotusScript. There was also a bit of Java. There was also a lot of JavaScript.

            So I was saving on disk space. I was saving replication traffic. In the case of JavaScript I was saving internet bandwidth. I WAS DOING SOMETHING! Lol. Otherwise I would have had to be reading the venerable old El Reg, with all the logged evidence of non-productive activity that would have entailed. (Which I also did.)

            I could have scripted a solution somehow (digging in to .nsf or manhandling the IDE?), but what would be the spiritual salve in that??

          2. Jou (Mxyzptlk) Silver badge

            Re: Too True

            > It's a fine way to deal with boredom, but I wouldn't jump to the conclusion that it helped anyone or even everyone in aggregate.

            I think he mentioned that it would not have been necessary. But those exercises in what is possible are the important learning experiences. In other languages, switching array types can speed things up by more than a factor of ten; a simple SQL query changed from "between a and b" to or from "greater than a and smaller than b" can make a speed difference of a factor of five. But without such seemingly useless playing around, you lack the basis for seeing possibilities. Problem-solving skills, which are the puzzle games you talk about, are what keep you from being an assembly-line worker who cannot handle the slightest deviation. Being known to be able to solve problems is what keeps you in a better job. That includes being able to solve problems in an area you weren't active in before.

            Example from my environment: the one who did PKI/CA in depth left the company for health reasons over a year ago, and might return some day. Until then I was only superficially into that stuff, and now I am the one asked when there is a problem, and I even understand why something does not work as expected. Several other things landed on my table too. Without previously acquired, seemingly useless knowledge - in my case PowerShell a bit more in depth, beyond cmdlets, 'cause I needed it for something else - quite a number of things would have been unsolvable, way more expensive to solve, or would have taken over ten times as much time.

            This is why I promote, here on this specific post of yours, the contrary: these skills, acquired that way especially when young, are very important. Later you use tools, but seeing when a tool went wrong, and being able to tell exactly where and handle it, stems from those ever-changing, never-the-same puzzles.

          3. J.G.Harston Silver badge

            Re: Too True

            I've done that sort of tweaking specifically to get a file just under a sector size. I have ended up with loads of files where the size is xxxFx. Not being able to get something down from xxx01 is a frustrating loss of a whole disk sector.

  22. Michael Hoffmann Silver badge
    Unhappy

    Won't be a problem...

    In the Brave New World of "you own nothing", you won't have a computer that you need to put RAM into anyway.

    You will be forced to rent one. Starting at only $99.99/month. RAM not included. That's an extra $2/month per gigabyte.

    1. Caspian Prince

      Re: Won't be a problem...

      Rather like ... phones are right now?

  23. trevorde Silver badge

    Two out of three ain't bad

    It's always a tradeoff of: time vs quality vs functionality - choose two, except for one company I worked for where they wanted all three. We called it the 'Iron Triangle'.

  24. Roo
    Windows

    Fads come and go, the speed of light is a constant.

    There is a very good reason to reduce memory footprint:

    - Power consumption (and dissipation).

    The hardware pushes fewer bits - and because less *volume* is required to store those bits, they don't have to travel as far.

    -> More speed, less power

    Less data -> less time to process the data.

    -> More speed, less power

    In the old days to get those benefits you waited a couple of years for a new gen of semiconductor fab process - those gains are no longer to be had - and the speed of light is still the same as it ever was, so if you want to make things faster you need to be more efficient.

  25. that one in the corner Silver badge

    I'm looking down the barrel of a running a "Learn a bit about MCUs" course

    I am tempted either to stick with C or C/C++ - 'cos you needn't waste any space or execution cycles - and use the good old Atmel MCUs[1], or to use some larger devices and start them off with Python[2].

    The current pundits tell me to use Python, it is easier for the little darlings. But, as did others here, I started off with 256 bytes[3] in My First Micro and still do things like print out sizeof(MyClass) for everything in "my collected personal favourites" library, weeping if they are forced to grow larger. Seeing the space needed by - and the slow execution speed of - Python just jangles the nerves.

    But if I can start them with Thonny and not worry about "this is what a compiler & linker & uploader do, you'll need to know this to understand the error messages" it will be easier.

    But the bit-packed C structs can map 1:1 onto the hardware and are efficient. You can fit data for more RGB LEDs. (A sketch of that follows after the footnotes.)

    But the Python-capable MCUs have enough RAM to cope with any beginner's needs (AND the slippery slope begins).

    But... Python better. But... C better. But... But...

    (Exit stage left, carrying table whilst banging head on it).

    [1] in other words, do a "Intro to Arduino" but try desperately to ignore all the daftness on recent "Arduino".

    [2] CircuitPython or "plain" MicroPython, either one. Or swap between them, keep everyone on their toes.

    [3] yes, I said bytes, not Kilobytes (or kibibytes), not Megabytes...
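
    On the bit-packed structs mentioned above, a minimal sketch, assuming a hypothetical 5-6-5 RGB packing (a common framebuffer layout, not any particular LED controller's datasheet; bit-field ordering is implementation-defined in C, so real driver code usually packs with shifts instead):

      #include <stdio.h>
      #include <stdint.h>

      /* One RGB pixel packed into 16 bits (5-6-5): a frame for N LEDs
         needs 2*N bytes instead of 3 or 4 bytes per LED. */
      struct rgb565 {
          uint16_t b : 5;
          uint16_t g : 6;
          uint16_t r : 5;
      };

      int main(void)
      {
          struct rgb565 led = { .b = 0, .g = 0, .r = 31 };     /* full red */
          printf("sizeof(struct rgb565) = %zu\n", sizeof led); /* 2 on most targets */
          return 0;
      }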

    1. Roo
      Windows

      Re: I'm looking down the barrel of a running a "Learn a bit about MCUs" course

      Rather than getting hung up on the language for the course, I would be judging the toolchains on the following criteria for the build -> upload cycle:

      1) Reliability & consistency - i.e. if I hit the go button it should *always* build and upload (barring errors in my code); I shouldn't be having to dig through the environment to work out why it didn't work this time.

      2) Speed of iteration - if I'm learning I want to be able to try things out, and being able to build, upload & run my change in a couple of seconds makes this easy. If it takes 30 seconds, that eats into dead precious class time - and gets frustrating.

      I suspect more people will have experience with Python which may also be a deciding factor - but *simple* C with everything in one file should be just as digestible - the problem comes when their coding goes off the rails, C/C++ documentation tends to be less newbie friendly than Python. I was looking for a C++ for beginners tutorial for a colleague recently - I couldn't find anything that really fit the bill - I know this stuff *used* to be readily accessible - but I couldn't find it - the candidates tried to cover every single feature in excruciating detail which just isn't helpful for someone making their first steps beyond hello world. :)

  26. Bebu sa Ware Silver badge
    Trollface

    I can see a Snake Oil opportunity here…

    flogging ram compressor tat/utilities for Windows that either do sod-all or are lightly warmed over zram/zswap etc…… $49.99 Limited offer.

    I recall Windows ≥ Vista would attempt to use any inserted USB flash memory stick to improve performance (ReadyBoost) so I imagine that little nasty is also being dusted off.

    Probably not such a problem for Linux/*BSD users. Win11 machines all seem to ship with 16G RAM (or did), even as 4×4G on refurbished boxes, which is usually more than adequate for these OSs - 8G is usually enough with minimal paging (swapping).

    Interestingly Win10 LTS runs very nicely indeed with 16G even with quite old i3 CPUs and isn't too shabby on 8G.

    1. Sandtitz Silver badge

      Re: I can see a Snake Oil opportunity here…

      "flogging ram compressor tat/utilities for Windows that either do sod-all or are lightly warmed over zram/zswap etc…… $49.99 Limited offer."

      Windows has its own implementation of memory compression already. Not that it would prevent selling snake oil software, of course.

      "I recall Windows ≥ Vista would attempt to use any inserted USB flash memory stick to improve performance (ReadyBoost)"

      No it didn't. The Autoplay dialog had Readyboost optimisation as one option along with the other options.

      "so I imagine that little nasty is also being dusted off."

      Nasty? At the time, when SSDs were generally not available and some hard drives were really slow - 4200 rpm wasn't atypical on laptops - a fast SD card in the internal reader did make a meaningful difference. It was just a disk read cache, like the hybrid HDDs of the last decade.

  27. frankyunderwood123 Bronze badge

    The level of bloated abuse is biblical in scope

    Just one small example of how bad things have become is digital signage, supermarket checkouts and even ATMs.

    An entire desktop operating system lurks behind most of these, windows for the most part.

    The inefficiency is staggering given just how much computer power is required globally to run millions of these types of systems.

    And what of IoT devices? What lurks within many of these? How inefficient are they?

  28. Blogitus Maximus
    Linux

    Use the memory luke!

    'This is the OS of an IT Knight. Not as clumsy or random as a mobile app or Windows install; an elegant OS for a more civilised age.'

    https://www.theregister.com/2025/12/23/unix_v4_tape_successfully_recovered/

    "...the kernel was some 27 kB of code." Kinda extraordinary when you think about how much we use now.

  29. Greybearded old scrote
    FAIL

    Plus ca change

    I remember one journalist predicting that the new Pentium processor would cause a return to writing efficient code, because programs that fitted into the 32kb cpu cache (a new feature on desktops then) would be blazingly fast compared to those that didn't.

    Before that Windows 3.0 was going to save masses of disc space. After all, applications would no longer have to ship with their own set of print drivers.

  30. zapgadget
    Go

    Demoscenes

    There are a bunch of amazing coders in the demoscenes. 1k or 4k executables. These have been going on for decades. Take a look at https://www.pouet.net/party.php?which=2138&when=2025

  31. Anonymous Coward
    Anonymous Coward

    Less Snap, Flatpak and AppImage

    I hope this motivates more support for standard repositories and packages such as APT with deb by package maintainers.

    Snap, Flatpak and AppImage pull in Christmas trees of dependencies of different versions of libraries eating your RAM.

  32. disgruntled yank

    pre-bloat

    I can remember my employers' customers upset because the new release of the software would require the machines to have 4 MB of RAM. So I "grew up" on command-line interfaces, modal editors, etc. I can still edit more or less efficiently with vi. Honestly, though, I find it more comfortable not to.

    The 4 MB requirement goes back not quite 40 years. In those days I would also haul my groceries a mile or two home in a backpack. There was definitely a saving in gasoline over using a car; but one gets used to convenience.

  33. DrewPH

    So many sketchy Yorkshiremen in this thread. When I were a lad we 'ad a byte if we were lucky, and we programmed a complete space invaders game in it... wonderful stuff.

    I'm doing my bit though...

    I recently built a Pi-based NAS. Normally for Pi projects I always buy the model with the highest RAM, just because, you know, they're cheap, so why not. This time, instead of getting a Pi 5 16GB, I realised the project would never use about 14 of those 16. So I bought the 4GB one (couldn't find any 2GB models available). Fight the "must have the most" mentality!

  34. Henry Wertz 1 Gold badge

    -Os?

    I wonder how much of a slowdown you'd get from a system compiled with -Os? Apparently this can save 25-50% on the size of the executable, and I do wonder if memory allocation could have plenty of padding and 'bubbles' in it when programs are making many smaller allocations rather than a few large ones.
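
    For anyone wanting to measure rather than wonder, a quick sketch (gcc flags only; the actual saving varies wildly by codebase, and rebuilding a whole system with -Os is a much bigger job than one file):

      #include <stdio.h>

      /* Toy workload - anything nontrivial will do. Build it twice and
       * compare code sizes:
       *
       *   gcc -O2 -o app_o2 app.c
       *   gcc -Os -o app_os app.c
       *   size app_o2 app_os      # compare the .text columns
       *
       * -Os is roughly -O2 with the size-increasing transforms
       * (aggressive inlining, loop unrolling) disabled, so the speed
       * cost is often modest - but measure, don't assume. */
      int main(void)
      {
          unsigned long total = 0;
          for (unsigned long i = 0; i < 1000000UL; i++)
              total += i * i;
          printf("%lu\n", total);
          return 0;
      }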

  35. lordminty

    Back in the late 1980s

    When I worked as a Sysprog (Systems Programmer) on IBM and compatible mainframes, my site ran two OSes concurrently (MVS/ESA and VM/HPO) each running different online applications supporting thousands of users, along with a hefty batch schedule to produce thousands of customer bills a day.

    All in a massive 128MB - yes, Megabytes - of RAM. Partitioned as a whole 96MB for MVS and 32MB for VM.

    Of course the kit that it ran on took up a room the size of a football pitch and needed 3-phase power.

    I feel old now.

  36. StrangerHereMyself Silver badge

    KolibriOS

    KolibriOS has forever changed my view on software development. An entire operating system including GUI, applications and device drivers that fits on a single 1.44MB floppy?!! How did we get to this mess where a simple editor requires 4GB (!!!) of memory?

    Many contemporary software developers only know JavaScript and other wasteful languages and have no idea how it all works under the hood. When Electron uses gigabytes of memory to display a simple editor control they just shrug.

    Yes, knowledgeable people aren't cheap, but they can save money in the long run.

  37. Anonymous Anti-ANC South African Coward Silver badge

    OS/2

    Time to bring the lean Warp Merlin back?

    Warp v3 was optimized for 4Mb RAM back then, but 6Mb was better.

    And it's mindboggling to note that we now need 16Gb of RAM for weendooz to run Office365 properly...

    C'mon guys, 16Gb HDD space for Warp v3 was oodles and oodles of storage space back then...

    1. Sandtitz Silver badge

      Re: OS/2

      "Time to bring the lean Warp Merlin back?"

      No. Unless you like single-user operating systems and with probably zero considerations for securing the thing down.

      "Warp v3 was optimized for 4Mb RAM back then, but 6Mb was better."

      4MB was the minimum and IBM did some optimisation for it, though I never tested it as such - I had at least 8MB since OS/2 2.1 ("Borg")

      I remember a curious case of Microsoft comparing performance of Win95 vs Warp 3 on 8MB system, and IBM then claiming they installed it with 4MB and later upgraded to 8MB which resulted in subpar performance. Ref: comp.os.os2 - around 30 years ago...

      "And it's mindboggling to note that we now need 16Gb of RAM for weendooz to run Office365 properly..."

      While Linux runs with somewhat smaller memory footprint, the accessory software likely eats about as much memory on every system. If MS Office ran on Linux, the computer would end up requiring about as much memory.

      "C'mon guys, 16Gb HDD space for Warp v3 was oodles and oodles of storage space back then..."

      Yes, 2GB HDD space was quite reasonable back then...

      I don't think 16GB hard drives were available until about year 2000 or so.

      1. Jou (Mxyzptlk) Silver badge

        Re: OS/2

        > installed it with 4MB and later upgraded to 8MB which resulted in subpar performance

        There was a time in the 486 era when, depending on the mainboard, the CPU did not get enough cache tag RAM for the L2 cache to cover anything above 4 MB. You could add an additional SRAM to make more RAM cacheable. I know 'cause I added such a tag RAM in my machine back then when I upgraded from 4 MB to 16 MB.

        The result with not enough tag RAM: any random memory access above 4 MB was very, very slow. I think some RAM drives were able to specifically claim that part of the memory for themselves, since it was still a lot faster than disk. If you can make the OS aware of such memory speed differences you can, of course, win easily on the same machine by using the slow memory exclusively for disk cache.

        More technical information (example, not the only link): https://www.dosdays.co.uk/topics/cache.php

      2. BinkyTheMagicPaperclip Silver badge

        Re: OS/2

        There's certainly not a lot of security in OS/2, and much though I liked it, even if IBM hadn't made so many mistakes it would have died eventually unless its architecture changed to be more like NT or Unix including proper multiuser.

        I would be truly amazed if installing on 4MB and upgrading to 8MB made any substantial difference to Warp 3. The only difference I'm aware of is disk cache allocation, which if it uses 'D' on FAT is dynamic, and for HPFS it's limited to 2MB cache anyway. IBM did a fair bit of optimisation, including coalescing multiple DLLs into one, but like you I used OS/2 since 2.1 and had 8MB RAM because it was very clear that was the usable minimum.

        As to disk space, I think we need to remember that sometimes BIOS limitations required the OS/2 (or NT 3.51) boot partition[1] to be contained within the first 1024 cylinders, or around 504MB. That was *still* perfectly sufficient to run the entire OS with networking. True, you'd want to install large applications on another drive, but it wasn't a huge issue.

        [1] OS/2's boot manager would not let you install OS/2 if it broke BIOS limits. On the other hand NT 3.51 would let you do whatever you wanted - and then just fail to boot.

  38. Juha Meriluoto

    My thoughts exactly!

  39. QuienKendra

    It’s funny how we went from literal kilobytes of 'wetware' ingenuity to having 16GB of RAM as a minimum requirement just to run a chat app. We used to spend hours optimizing loops to save a few bytes, and now it feels like modern devs just throw more hardware at every problem because memory is (usually) cheap.

    I’ve been doing some deep-dives into legacy code lately, and if you’re ever trying to spot the exact differences between a 'lean' old-school script and a modern 'bloated' version, I've found https://comparetext.app/ surprisingly handy. It’s a clean way to diff text without some heavy IDE eating up your precious remaining DRAM.

    Anyway, looking forward to the day we need a petabyte just to boot a calculator.
