New software sells new hardware – but a threat to that symbiosis is coming

A few months back, I wrote that buying software is a big lie. All lies have consequences, of course. The worst kind of consequences are the ones you didn't see coming. So let's look at some of those, and some other lies they conceal. As we said last time, you can't really buy software. Commercial software is mostly – but as …

  1. Pascal Monett Silver badge
    Trollface

    "get just a few new features, but peace of mind"

    Peace of mind with Borkzilla updates ?

    I don't think so.

  2. Rich 2 Silver badge

    Slow software

    There once was a time when really useful software would fit on a 720k floppy disk and run in 32k of memory.

    These days, a similarly useful application is too big to even distribute on physical media, will require gigabytes of disk space and a PC that is, frankly, ludicrously fast and powerful.

    We need to look at why this is so. An ugly trend is the use of off-the-shelf modules and packages that do some particular thing that the application needs. Rather than write a tiny bit of code to open a network socket and pass a bit of data through it, for example, the application writer will find some huge bloated blob of code on GitHub or some such that implements a ‘network connection’ class. And yes, it might work, but 90% of its functionality is probably redundant. And what functionality is used is so general that it imposes glue overhead that just adds gloop to the application. Multiply this “modern” approach to writing application code across all the other bits the application needs and, before you can say “out of memory”, you have a monster-sized program that runs horribly slowly.
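    For scale, a plain POSIX TCP client that opens a socket and pushes a few bytes through it really is only a handful of lines. A minimal sketch, not a recommendation to avoid libraries everywhere: the host address 192.0.2.10 and port 9000 are placeholders, and error handling is kept terse.

    ```cpp
    // Minimal TCP send using the plain POSIX socket API - no framework class needed.
    // "192.0.2.10" and port 9000 are placeholder values for illustration.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);          // create a TCP socket
        if (fd < 0) { perror("socket"); return 1; }

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(9000);
        inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr);  // fill in the peer address

        if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) < 0) {
            perror("connect"); close(fd); return 1;
        }

        const char msg[] = "hello";
        send(fd, msg, sizeof msg - 1, 0);                   // pass a bit of data through it
        close(fd);
        return 0;
    }
    ```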

    Another issue is that it seems that every time one bit of code needs to talk to another, or it needs to save and recover data from disk, the application writer feels the need to use some bloated, horrible, ugly comms mechanism and/or wrap the data up in some hideous format (like XML) that needs a lot of code to create and decode again.

    I’m not suggesting abandoning code re-use - that would be nuts. But a bit of common sense and intelligence would work wonders.

    1. JohnTill123

      Re: Slow software

      The first C compiler I ever bought fit on 3 x 360K 5.25" floppy disks. Years later, the first MS C++ compiler I saw was on a DVD. And "Hello World" with MS C++ was over a megabyte.

      Ridiculous bloat.

    2. that one in the corner Silver badge

      Re: Slow software

      One of my "favourites" was finding that, in an externally written "modernisation" to our embedded system, all of the config options for all of the separate processes were placed into a big XML file (so it could be a "single system config" - which isn't a particularly bad idea). A convenient utility routine allowed a process to read a value, by the simple method of reading the XML into an in-memory DOM[1] and walking the tree to the named value. Then the process would read a second value, by the simple method of reusing the convenient utility, hence reading the XML into an in-memory DOM and walking the tree to this second named value. Then the process would read a third value, by a simple utility call, which read the XML into an in-memory DOM and walked the tree to this third named value.

      And so on, until it had effectively read all of its options at startup: saves using tricky command line options - even though the processes were all started by a master process (don't call it an "init") which had learnt the list of processes to start by reading the XML into a DOM and iterating over a certain node's children...

      Of course, not all of the coders thought of getting all the config values ready at the start of a process, so some of them would, after running for a period of time, pause doing useful work, read the XML into a DOM...

      [1] use of a stream parser - Expat - was deemed "too complicated" for an embedded system, given you could just build the DOM in one simple call to this library that everyone uses in all the StackExchange examples, rather than some weird library no-one had heard of...
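      A minimal sketch of the difference between that pattern and parsing once then caching. To stay self-contained it uses a flat key=value file and invented names rather than XML and a DOM, but the shape of the fix - read the config a single time at startup, answer every lookup from memory - is the point; every call to get_value_slow() below corresponds to re-building the whole DOM in the anecdote above.

      ```cpp
      // Sketch only: a flat key=value file stands in for the XML config described above.
      #include <fstream>
      #include <map>
      #include <string>

      // Anti-pattern: every lookup re-reads and re-parses the whole file
      // (the equivalent of "read the XML into a DOM and walk the tree" per value).
      std::string get_value_slow(const std::string& path, const std::string& key) {
          std::ifstream in(path);
          std::string line;
          while (std::getline(in, line)) {
              auto eq = line.find('=');
              if (eq != std::string::npos && line.substr(0, eq) == key)
                  return line.substr(eq + 1);
          }
          return {};
      }

      // Fix: parse once at startup, then answer every lookup from memory.
      class Config {
      public:
          explicit Config(const std::string& path) {
              std::ifstream in(path);
              std::string line;
              while (std::getline(in, line)) {
                  auto eq = line.find('=');
                  if (eq != std::string::npos)
                      values_[line.substr(0, eq)] = line.substr(eq + 1);
              }
          }
          std::string get(const std::string& key) const {
              auto it = values_.find(key);
              return it == values_.end() ? std::string{} : it->second;
          }
      private:
          std::map<std::string, std::string> values_;
      };
      ```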

      1. bombastic bob Silver badge
        Linux

        Re: Slow software

        I've written hyper-efficient XML processors before. As long as you do not do anything exotic with it, the code is pretty simple, infinitely hierarchical, etc. Similar for 'INI' style. I am pretty sure I posted the source online on github, too. Written in C, with C++-like thinking.

        (I use a similar github handle as I use here if you want to look for it)

        The original reason for writing this is I do NOT like 3rd party bloatware libs for doing simple things. And this code is efficient enough to put into a kernel module, which was being discussed at the time I first came up with the original (and it morphed from there).

    3. localzuk

      Re: Slow software

      People are taught that code reuse is better than reinventing the wheel, even when it comes to basic aspects of an application that could be coded fresh quite easily and efficiently.

      In purely time/cost-benefit terms, it makes total sense. The fact that a piece of software needs powerful computer equipment is, to most developers, irrelevant - they just slap "needs XYZ" on the requirements document and then it's the customer's job to sort.

      So, the answer to your "We need to look at why this is so" can be summed up with "modern capitalism".

    4. LybsterRoy Silver badge

      Re: Slow software

      Wow, I thought I was the only old fart here. You forgot one other important item - relying on "smart" features in the compiler/OS/whatever. It may or may not add bloat, and it may or may not also slow execution.

      Many moons ago, on one occasion doing data transfer, I contracted out the read-tape, convert and write-to-disk job to someone who had worked on the system and knew the formats and layouts. The only problem was that the compiler he produced the code on was a bit smarter than ours and optimised a loop that used the record count. Our poor old compiler (the OS was PICK if you want to know) counted the number of records to be transferred on each iteration of the loop; his realised it was static and got it once only. It took a few seconds to bung a variable in, calculate the number of records and use that variable in the loop control. Merely shaved about 3 days off the transfer time.
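      The same trap, sketched in C++ terms (count_records() here is an invented stand-in for whatever the PICK BASIC loop bound was recomputing): an optimising compiler may hoist the call for you, but only if it can prove the call has no side effects, so hoisting it by hand is the safe bet.

      ```cpp
      #include <cstddef>
      #include <vector>

      static std::vector<int> records(100000, 1);   // dummy data set for the sketch

      // Stand-in for an expensive recount of the records on every pass.
      static std::size_t count_records() { return records.size(); }
      static void transfer_record(std::size_t i) { records[i] = 0; }

      // Naive: count_records() is re-evaluated on every iteration unless the
      // compiler can prove it is side-effect free and hoists it for you.
      void transfer_slow() {
          for (std::size_t i = 0; i < count_records(); ++i)
              transfer_record(i);
      }

      // Hoisted by hand: one call, stored in a variable, used as the loop bound.
      void transfer_fast() {
          const std::size_t total = count_records();
          for (std::size_t i = 0; i < total; ++i)
              transfer_record(i);
      }
      ```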

      I just wonder if this sort of thing still happens now.

    5. bombastic bob Silver badge
      Devil

      Re: Slow software

      We need to look at why this is so.

      I have some ideas

      * object-oriented to the point that objects do WAY too much 'just in case'

      An object that represents the name of a file does NOT have to open and read the file and figure out what it is UNLESS you need to display an icon or a preview (and then it should be done in background on demand).

      Mate and Gnome standard 'open file' dialog box is guilty of this, as are common dialog boxen on Windows

      * Just because you can, does not mean you should

      Unnecessary UI "eye candy", typical for websites that use bloatware javascript libs

      Using a web-browser based UI for something that really needs a native interface.

      allowing applications to eat RAM for cache and do garbage collection whenever they FEEL like it, thereby causing OTHER applications to run slowly or swap too much. That would be YOU, Chrome, Firefox.

      * use of OS memory allocator rather than sub-allocation for tiny little chunks of RAM.

      Kinda self-describes

      * Use of C++ objects that use 'new' with pointer members, rather than the actual stored type, for non-arrays

      Lest we forget, the constructor will get called if you declare a member as a class rather than a class pointer, and you will skip an extra 'malloc' and instead do one for the entire class and all of its members, all at once (a minimal sketch follows after this list).

      * Excess reliance on exception handling and unwinding of things

      What it says on the tin. Kernel code often has a 'goto error_exit' where the 'error_exit' label has a cleanup section for managing all of the unwinding and freeing up resources. Better, and more efficient. 'goto' may have 4 letters, but it is NOT a '4 letter word'.

      * Use of garbage collection where reference counts would make a LOT more sense.

      Objects that clean themselves up when the ref count goes to zero can persist as long as they need to, even across threads [if you design them properly]. Remember OLE 2.0 ? It is based on other things found in the UNIX/Linux world (CORBA was one as I recall) and the ref count is the smartest part of it.

      * BLOATWARE "Rapid Development" systems (think UWP)

      They add layers and are not always very efficient

      * Anything involving '.NOT' or NodeJS

      'Nuff said I think

      * Trying to use Python as if it were C or C++

      Python is a useful wrapper but performs poorly in a loop. I recently used Python to display an animated wallpaper until you touch the screen, then it exits but tells the main application you did it so that it changes its behavior with blinky lights. Maybe 20 lines of Python, the rest managed by GTK and standard utilities

      NOT all inclusive, but a good start I'd say...
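      On the "'new' with pointer members" point above, a minimal sketch with invented types: holding members by value means one allocation (or none at all, for a stack object) and no extra indirection, while holding them through pointers adds a heap allocation per member, an indirection on every access, and a destructor that has to free each one.

      ```cpp
      // Invented types for illustration.
      struct Header { int id = 0; int flags = 0; };
      struct Stats  { double min = 0, max = 0, mean = 0; };

      // Pointer members: one extra heap allocation per member, an extra indirection
      // on every access, and a destructor that has to free each one.
      class Bloaty {
      public:
          Bloaty() : header_(new Header), stats_(new Stats) {}
          ~Bloaty() { delete header_; delete stats_; }
          Bloaty(const Bloaty&) = delete;             // avoid accidental double-free
          Bloaty& operator=(const Bloaty&) = delete;
      private:
          Header* header_;
          Stats*  stats_;
      };

      // Value members: the members are laid out inside the object itself, so a single
      // allocation (or none, if the object is on the stack) covers everything, and the
      // compiler-generated destructor is enough.
      class Lean {
      private:
          Header header_;
          Stats  stats_;
      };
      ```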

  3. Anonymous Coward
    Anonymous Coward

    Server needs drive now OS developments, at the expense of desktop users

    It's clearly visible today that server needs drive OS development - because the money comes from those companies who sell services and/or hoard data. Since most of their user interfaces are web ones, they care little what desktop (or mobile) OS the users are using.

    It's very visible in Linux, where server software is light years ahead of the desktop - but what has happened to Windows since Windows 10 shows it as well. The UI has been simplified (and "clumsified") IMHO in an attempt to lower development costs and assign less skilled developers to it. Even having its own browser is no longer seen as a need in Redmond; it's OK to use Chrome since the money now comes from Azure - and what users use to access it matters little. Only Office is somehow cared for, but even server application management tools became far worse than before. Just like Linux, you have to fight with cryptic command line commands, even for simple tasks or tasks which would be better managed in a UI because of the display needs of the data involved.

    Developing OS UIs is expensive - it requires not only developers, but developers with specific skills in graphic design and all that is required to make a UI not only responsive - but looking good also. And that's where you often see FOSS failing, because that kind of developer is not easy to find, won't work for free - and this kind of software development has no big sponsors. Probably they are working on games/mobile apps where they can make more money.

    1. tiggity Silver badge

      Re: Server needs drive now OS developments, at the expense of desktop users

      @LDS

      "Developing OS UIs is expensive - it requires not only developers, but developers with specific skills in graphic designs and all that is required to make a UI not only responsive - but looking good also."

      You would hope they don't rely on devs for design! In a commercially oriented company you would expect a graphic / product design team to be specifying the look and feel, then devs code to the spec... Though you would expect devs to use their knowledge & skills to point out any problematic areas of the spec they are given, as there is often a clash between a spec / design and the timescale / difficulty of implementing it.

      .. Obviously some smaller (especially open source) projects may do it differently if there's not a group specifying design rules / requirements from "on high".

      1. that one in the corner Silver badge

        Re: Server needs drive now OS developments, at the expense of desktop users

        > graphic / product design team to be specifying the look and feel, then devs code to the spec

        Surely the product design team use a "no code required" UI generator that lets them test all of their UI screens and just spits out a massive lump of code filled with comments /* to-do : fill in button press action */. Any fool of a dev can replace those with the obvious database calls; after all, we've done all the tricky creative stuff for you already!

      2. Anonymous Coward
        Anonymous Coward

        Re: Server needs drive now OS developments, at the expense of desktop users

        You need people with specific UI design skills. If you put in people with graphic design skills only, you get what the web is today - street-ad designs passed off as "UIs". If you put in people with programming skills only, you get Linux UIs.

        Implementing a UI is not design only, and it's not programming only; it's a skill of its own which requires deep knowledge of graphics, programming and ergonomics - a UI needs to be pleasant to look at, performant, able to display all information properly, and easy/comfortable to use.

        The problem with graphic designers is they are usually trained to design static elements. A book, a magazine page or a street ad is static and doesn't need interaction. A GUI is dynamic and highly interactive. The problem with programmers is they may have little knowledge of the rules developed over centuries for displaying content. And they believe all users are programmers. You really need people trained in GUI design and development, and they can't be separate departments; they need to interact from the very beginning.

        1. LybsterRoy Silver badge

          Re: Server needs drive now OS developments, at the expense of desktop users

          Actually I'd like my UI to be static - I'm fed up with them moving bits of it around!

      3. LybsterRoy Silver badge

        Re: Server needs drive now OS developments, at the expense of desktop users

        -- You would hope they don't rely on devs for design! In a commercially oriented company you would expect a graphic / product design team to be specifying the look and feel, then devs code to the spec... --

        I would hope they would use an ergonomics specialist but, looking at the current crop of UIs, I don't think my hope will be fulfilled.

        ps: I also hope they will remove Jackson Pollock from the graphics design team.

  4. doublelayer Silver badge

    64-bit not just for more RAM

    The article's claims about the replacement of 32-bit by 64-bit understate what we got from that switch. It's not just about addressing more than 4 GiB of memory. That was part of it, but the 64-bit instructions also help a lot. With 64-bit registers and operations, some things involving larger chunks of data can be done in orders of magnitude fewer instructions, which means much faster execution. We didn't adopt 64-bit processors to make OS vendors more money; they had 32-bit versions for over a decade after AMD64 parts became available. We didn't do it to make the hardware manufacturers more money; the existing 32-bit parts would keep working until mechanical failure, software support lasted long enough that basically nothing would refuse to run, and although it has been dropped from some distributions, it's quite easy to compile many of them for 32-bit if you need it. We did it because it produced better processors, both for servers and for desktops. There's a reason why other 32-bit processors for non-embedded purposes have been superseded, including in nearly all smartphones.
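    A small, hedged illustration of the register-width point: the same C++ does noticeably different amounts of work per statement depending on whether the target is 32-bit or 64-bit, because 64-bit values fit in a single register.

    ```cpp
    #include <cstdint>
    #include <cstddef>

    // Adding two 64-bit integers: a single ADD instruction on an AMD64 target,
    // but an ADD/ADC pair (low word, then high word with carry) when compiled
    // for 32-bit x86.
    uint64_t add64(uint64_t a, uint64_t b) {
        return a + b;
    }

    // Walking a buffer in 64-bit chunks halves the number of loads, adds and
    // loop iterations compared with processing the same bytes 32 bits at a time.
    uint64_t sum_words(const uint64_t* data, std::size_t n) {
        uint64_t total = 0;
        for (std::size_t i = 0; i < n; ++i)
            total += data[i];
        return total;
    }
    ```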

    1. Richard 12 Silver badge

      Re: 64-bit not just for more RAM

      The key thing is that a 32bit process cannot directly address more than 4GiB of memory.

      A fair chunk of that gets used by the OS, and yet more by all the actual application code, reserved for the stacks of all the threads etc.

      Windows with PAE allowed the operating system to give each 32bit process its own 32bit address space so you could run many more processes on a single server.

      All 64bit editions of Windows still do this; it's called WoW64.

      AWE was a "cheap hack". It lets a 32bit application temporarily map a small section of a much larger block into the process' 32bit address space. Very similar to memory-mapping a portion of a large file, except that it has a lot of surprising limitations and can easily totally exhaust physical memory.

      So hardly anyone used it.

      64bit makes it all simple again.

      I support a legacy 32bit application that occasionally literally runs out of memory. It ought to have been ported to 64bit but there aren't 64bit versions of key libraries, and several customers insist they cannot lose those features. So it struggles on.

      1. Anonymous Coward
        Anonymous Coward

        Re: 64-bit not just for more RAM

        AWE is much like using EMS/XMS under DOS - for those who can remember that. Better than nothing if you really need more RAM, but something you are happy to forget if there are better solutions. PAE, as you point out, doesn't help with applications that really need to allocate a lot of RAM, and today even some desktop applications may use a lot (e.g. video editors).

      2. RichardBarrell

        Re: 64-bit not just for more RAM

        It's worth mentioning too that PAE only got you to 36 bits of physical address space, i.e. 64GB. That was a lot at the time but workstations exceed that now. Servers vastly exceed it.

  5. Robert 22

    "Since August, if you don't have the $15-per-month subscription, all the Pantone colors in your old files just… turn black." - Henry Ford would have had no problem with that!

    1. oiseau
      Facepalm

      ... Henry Ford would have had no problem with that!

      Wait ...

      Henry Ford was also a consummate corporate extortionist?

      O.

  6. Anonymous Coward
    Anonymous Coward

    Maybe we all have to go back to RISC OS

    It looks good on a Raspberry Pi

  7. An_Old_Dog Silver badge

    Dropping Older Architectures

    It's pissed me off that Linux is dropping older architectures. The reason given was "it's too much work" to test on older architectures. Reading that tells me the devs aren't using automated regression testing. I've got perfectly good older kit, which runs well enough, and would still be useful if the devs weren't dropping support for the older stuff. One of the -- IMHO -- points of Linux was to make working older kit continue to be usable, and to help people get off the manufacturer-driven "buy a new thing and throw out the old" cycle.

    1. that one in the corner Silver badge

      Re: Dropping Older Architectures

      First of all, your old kit can still run using the older kernels etc, so there is still plenty of scope for doing useful work - although what one person considers useful may not suit another, of course.

      And there are OSes that take pride in supporting as wide a range of kit as possible (NetBSD for one).

      Otherwise, to keep older hardware from being dropped, you can consider volunteering your time and working kit to be used in the regression tests. Or getting together with other people who have the same old hardware and making a group effort of it. Or just handing over a suitable pile of moolah to convince someone else to take on the challenge.

      After all, my understanding is that the basic problem is the simple one of too many hardware variants and too few testers and developers to give all of those variants the attention they would need to support the latest kernel changes; changes that are required to actually make use of the current hardware.

      Not only does the new hardware support have to be added but at worst the changes have to be benign on older CPUs, at best simulated in extra software written for the old boxes. Time and energy (aka "money") then has to be devoted to keeping that extra module tested and in step with ongoing progress, especially as once the clever h/w feature is in place, more and more of the other "unrelated" areas of code will start to rely on it, and all its corner cases, putting more strain on the simulation module or breaking because the "benign" ignoring of the feature is no longer good enough.

      It takes more than just automated regression testing: they need the working kit and enough of it, the ability to fix regressions (see above paragraph), time and money to keep it going (where does extra kit live, on whose mains?).

    2. doublelayer Silver badge

      Re: Dropping Older Architectures

      Testing is not the only problem. It is a problem, but another factor is that, any time something is added to the kernel, someone has to consider how it will affect the paths that are custom to the old hardware. For things that aren't supported by the hardware, the writer either has to implement a software shim to support it anyway or a bypass to disable the feature. That's extra work which slows development and can break things. Automated testing will not automatically detect it.
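      In user space the shim-or-bypass pattern looks roughly like the sketch below: detect the feature at runtime, use it if present, otherwise fall back to a plain path. The kernel's actual mechanisms are different - this uses GCC/Clang's x86-only __builtin_cpu_supports purely for illustration - but it shows the extra code each fast path drags along with it.

      ```cpp
      #include <cstdint>
      #include <cstddef>

      // Fast path: uses the POPCNT instruction where the CPU has it.
      __attribute__((target("popcnt")))
      static unsigned popcount_hw(const uint64_t* data, std::size_t n) {
          unsigned total = 0;
          for (std::size_t i = 0; i < n; ++i)
              total += __builtin_popcountll(data[i]);
          return total;
      }

      // Shim: the same result computed without the instruction, for older CPUs.
      static unsigned popcount_sw(const uint64_t* data, std::size_t n) {
          unsigned total = 0;
          for (std::size_t i = 0; i < n; ++i) {
              uint64_t v = data[i];
              while (v) { total += static_cast<unsigned>(v & 1u); v >>= 1; }
          }
          return total;
      }

      // Every new fast path drags a detect-and-dispatch step like this one along
      // with it, and the fallback has to be kept working and tested too.
      unsigned popcount(const uint64_t* data, std::size_t n) {
          if (__builtin_cpu_supports("popcnt"))
              return popcount_hw(data, n);
          return popcount_sw(data, n);
      }
      ```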

      Even without that cost, testing isn't a simple task. You have to have a lot of examples of old hardware lying around. I have some old hardware, but I don't use it to test every patch to the kernel and I'm guessing you don't do so either. I'm also not about to maintain a museum of archaic technology which I have configured to run tests every time a patch comes out in the hopes that someone is still using it with the latest kernel and thus benefits from that effort, because it's a lot of work for almost no benefit. I'm prepared to guess that the older hardware you've got is A) not so old that they've dropped it yet and B) doesn't run the latest kernel version anyway. If I'm wrong about both those assumptions and you value that, maybe you should assist them with running the tests and verifying functionality on your platform.

      1. RichardBarrell

        Re: Dropping Older Architectures

        > You have to have a lot of examples of old hardware lying around. I have some old hardware, but I don't use it to test every patch to the kernel

        Theo de Raadt did (still does probably? I haven't checked) this for OpenBSD for years. It cost a lot of money in electricity bills.

      2. An_Old_Dog Silver badge

        Re: Dropping Older Architectures

        For things that aren't supported by the hardware, the writer either has to implement a software shim to support it anyway or a bypass to disable the feature.

        Not being knowledgable enough to work on the kernel myself, I may have an incorrect mental model of how things are done, but aren't there already multiple kernels for similar-yet-different architectures: i386, i486, i586, i686, etc.? Wouldn't those different kernels sufficiently split out the incompatible differences you're talking about?

    3. localzuk

      Re: Dropping Older Architectures

      Seems like quite an entitled attitude to FOSS?

      What are you doing to keep that old kit supported? It takes time, effort and money to maintain every project.

  8. Downeaster

    Good article that covers a lot of ground!

    I agree with many points in the post. People have to give back to open source projects. I use LibreOffice a lot but I do not contribute code. I support them by giving them money each month; I try to give what an Office 365 subscription costs per month. I have no coding experience but see the value of the software and am a user. The subscription model for software annoys me. Having to pay to use something like MS Office or Adobe products is ridiculous. For many years, people were happy running older versions or couldn't see the value in upgrades. Subscriptions also run the risk of a user continuing to pay the bill for a product but not using it. I try to look for alternatives and support them. Using older hardware is fine; this is written on a 2009 Core 2 desktop that functions fine with an SSD upgrade and maxed-out RAM. Looking for alternative software takes a while, but once you find a few favourites you can use the software on different computers. Change and improvements drive the computer industry. The biggest factor is money, steady income, and profits. Software subscriptions provide a steady income to companies, as does having things in the cloud, which are often subscription-based services.

  9. Anonymous Coward
    Anonymous Coward

    Pantone

    I think Pantone changed their licensing conditions a couple of years ago to make it more difficult/expensive to bundle their software with third-party products. The Adobe change probably coincides with the end of their existing contract with Pantone and they've decided the new terms aren't worth the hassle/costs. I'd guess they have to black out the Pantone colours because the Adobe product is subscription-based.

  10. Barry Rueger

    Less is more

    This week I needed to fill in a form from the Alberta Government. It was, of course, a PDF - one of those somewhat older PDF types that refuses to even display itself on a Linux box, or even in the on-line Adobe version of Reader. Reader, of course, is no longer available for Linux, so I ultimately had to reboot to a Windows box to fill in the ten essential lines, then print it.

    All of which is to say I am very happy with Linux (Mint in my case), and with the Open Source programs that install with it, and with the fact that it all just rolls along year after year with few changes, and few problems like those described.

    Yes there are times when you need a specific application like PhotoShop, or a specific version of Adobe Reader, but beyond that I really don't understand why anyone sticks with that horrid, advertising jammed mess that is Windows. Or even (IMHO) that wall of vendor lock-in that is Apple. I just cannot see the advantages to any but those with a specific narrow use-case.

    All I know is that once I had that form printed yesterday I was very fast to reboot to Linux and safety, and thanked myself for having the good sense to spend fifteen minutes installing it immediately when I bought my system.

  11. Electronics'R'Us
    Holmes

    Memory vs. Data size

    There is no real hardware reason that a 32 bit processor has to have a 32 bit address bus. 8 bit devices almost universally had a 16 bit address bus. One practical reason is the size of the program counter (if it is 32 bit then the maximum size within a given space is 4GB) but that is an implementation detail.

    There is no consensus on what determines whether a device is 16 bit, 32 bit or whatever. The two most commonly used definitions are internal register size [1] or ALU data width.

    The PowerQuicc 3 series (from what was Motorola -> Freescale -> NXP) has an internal 36 / 40 bit address space (device dependent) and exposes a 32 bit address for non-SDRAM memory, but with multiple chip selects. Boot flash, static RAM and so forth come to mind. Peripheral mapping can also be done using that interface.

    SDRAM of all flavours is a different type of beast, as the address space can be much bigger than might be expected because a row and a column value are latched on different parts of a memory transaction. The data interface can be 72 bits (64 + ECC) for those devices (that has been true for some 32 bit devices for over 20 years). SDRAM (DDRx included) interfaces are always separate from the main (physical) address bus due to the hardware requirements.

    It can also assign a 32 bit address space for PCI/PCI express interfaces and so on which can then map much larger spaces behind each device (depending on device).

    So it wasn't just memory space.

    [1] Register access width, anyway. Many internal special function registers, although accessed as 32 bits, only expose a few bits with any meaning.

  12. Matthew "The Worst Writer on the Internet" Saroff

    If You Quote Myhrvold

    He should not be described as "Microsoft CTO Nathan Myhrvold"; he should be described as "Microsoft CTO and notorious patent troll Nathan Myhrvold".

    His company, Intellectual Ventures, is one of the most aggressive patent trolls in the world.

  13. Martyn Welch

    Sorry, but this seems like quite a bad take. There are aspects of this I can 100% agree with, and others that smack of opinion with a lack of research or knowledge.

    You make the assertion that "some freeloaders just use this stuff and don't contribute anything back. Many might not even realize that they're doing it. Apple's macOS is built on a basis of open source. Android is too, and ChromeOS". When it comes to Linux and hardware support, we are primarily talking about kernel level development. This is an area that I'm fairly familiar with. Let's just take a look at the development statistics for a few of the last kernel development cycles (data from LWN):

    6.0: Google, 85886 lines changed, 5.4% of total, 4th overall in this metric.

    5.19: Google, 30767 lines changed, 2.5% of total, 8th overall in this metric.

    5.18: Google, 103801 lines changed, 8.8% of total, 3rd overall in this metric.

    5.17: Google, 24971 lines changed, 4.1% of total, 6th overall in this metric.

    Broadly the same looking at changesets.

    That also probably misses a tonne of work that they finance in the kernel space alone through outside consultancies (which I know that they do).

    I know that their contributions also extend to at least some other FOSS components they use, though I have far less knowledge there as it's not my area of focus. There are probably broad swathes of the Linux ecosystem that they don't contribute to; however, there are large portions of the ecosystem that aren't used in the products they create. There certainly are vendors in the FOSS ecosystem that use and contribute nothing back, but (for all their sins) I don't feel it's remotely appropriate to try and paint Google in that light.

    (For transparency's sake: I don't work for Google; however, I do work at a consultancy that does work for Google, so I have some awareness of some of the work done for them, although I'm not currently involved in it.)
