@Northern Monkey & Others
"Just because computers are getting faster does not mean we should write our code less well, less efficiently 'because we have room to'. Write the same streamlined, efficient code you wrote for old, 'slow', memory-challenged machines."
This sort of thing shows you up as 'not a software developer', or if you are, please god don't let you be one that I have to work with.
Old 'streamlined' software was streamlined because it had to be; it sacrificed stability and maintainability to keep execution speed up and the memory footprint small. People don't write code that streamlined any more 'on purpose', as it leads to problems: it's often easier to re-write large chunks of it than to make the smallest updates. People wrote it like that because they absolutely had to, not because they wanted to!
Would you rather your software crashed constantly, or whenever it did just gave you an unhandled exception, like in the good old days? Or would you rather the coder used a few, extremely inefficient, checks on responses to make sure it either continued working or gave meaningful errors?
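To sketch what I mean (a hypothetical input-validation helper, not from any particular codebase): a couple of 'wasteful' checks turn a crash into a meaningful error.

```python
def parse_port(raw: str) -> int:
    """Return a TCP port from untrusted input, failing with a useful message."""
    # Two "inefficient" checks that an old-school streamlined version would skip:
    if not raw or not raw.strip().isdigit():
        raise ValueError(f"expected a numeric port, got {raw!r}")
    port = int(raw)
    if not 1 <= port <= 65535:
        raise ValueError(f"port {port} is out of range 1-65535")
    return port
```

The extra comparisons cost next to nothing on modern hardware, and the caller gets told exactly what was wrong instead of a bare crash further down the line.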
As a developer, I know it's much slower to read things dynamically from config files than to hard-code them, but I still read them dynamically, as it means I can update them in seconds, rather than searching for masses of hard-coded values and recompiling the whole lot.
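Something like this, say (the section and key names are made up for illustration): the value lives in a file, so changing it is an edit, not a rebuild.

```python
import configparser

# In real use this would be config.read("app.ini"); a literal string keeps
# the sketch self-contained.
config = configparser.ConfigParser()
config.read_string("""
[server]
timeout_seconds = 30
""")

# Slower than a hard-coded constant, but tweakable without touching the code.
timeout = config.getint("server", "timeout_seconds")
```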
Remember the Y2K bug: two-digit dates were used to streamline memory usage and processing power, as both were expensive back then! If that happened again with modern code (a significant change to the date format), I would hope it would just be a small update to the date type/class and a recompile.
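The idea, sketched with a hypothetical wrapper class: every caller goes through one type, so a change to how dates are stored or formatted touches a single class rather than the whole codebase.

```python
from datetime import date

class BusinessDate:
    """Single point of change for how dates are stored and formatted."""

    def __init__(self, year: int, month: int, day: int):
        # Four-digit years throughout; a format change would be edited here, once.
        self._d = date(year, month, day)

    def formatted(self) -> str:
        # Every caller uses this, so the output format lives in one place too.
        return self._d.strftime("%Y-%m-%d")
```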
Perhaps kernel code is different, but any halfway competent developer of most other software will be continually sacrificing execution speed and memory usage for maintainability and stability, not just because they feel like it. Functions? No chance, all that mucking about swapping values on and off the stack is inefficient! Try/catch blocks are horrifically inefficient, but they're the core of most error trapping!
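That kind of error trapping looks something like this (file name and function are illustrative): the try block costs a little at runtime, and in exchange a missing or corrupt file becomes a handled, logged failure instead of taking the program down.

```python
def read_count(path: str) -> int:
    """Read an integer from a file, falling back to 0 with a logged message."""
    try:
        with open(path) as f:
            return int(f.read())
    except (OSError, ValueError) as exc:
        # The "inefficient" part: catching the failure and carrying on.
        print(f"could not read count from {path}: {exc}")
        return 0
```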
There are reasons that companies/banks hire armies of assembler developers to make the smallest changes to their batch processing, as small inefficiencies can make hours of difference at those volumes, but it takes a long time to make negligibly small changes, and the code is mostly indecipherable to another person without a lot of time spent investigating. Not to mention that the smallest error causes the whole thing to fall over.
Yes, some software is inefficient without any need, and that should be eliminated, but don't just assume that because newer software has more overhead and runs slower than older stuff, there is necessarily anything wrong with it!