"Which of your bêtes noires did we miss?"
Internet of things. It's getting ridiculous how many household appliances are being hooked up to the internet, with associated apps (and terrible security) - e.g. dishwashers, fridges and toasters.
The real opponent of digital sovereignty is "enterprise IT" marketing, according to one Red Hat engineer who ranted entertainingly about the repeated waves of bullshit the industry hype cycle emits. During a coffee break at this year's CentOS Connect conference, The Reg FOSS desk paused for a chat with a developer who was …
In addition, it's how devops seems to think it can bypass change control. Same goes for security. The important thing about change control is that you get other eyes on the change, offload blame when things go wrong and, more importantly, deal with conflict management before the sh*t hits the fan.
I still maintain that at least some of devops was a scheme/scam by management and/or finance/payroll to reduce headcount 2:1 or more by making sysadmin types do developer jobs, or vice versa.
The concept may have sprung from nobler ambitions (though I have some doubts on that point as well), but once bosses and beancounters got hold of it all was lost.
Long ago there were such things as all-rounders. Developer, DB admin, system admin, user support. You didn't develop something that would give you grief running. You didn't have the DB admin waiting for the system admin to release more space.
It was a complete pain running into the segmentation of functions on a short contract to provide holiday cover. We needed to add more space to the Informix database and memory says I spent at least 2 weeks of it just doing the paperwork to get the system admins (who I never saw) to get the extra space to do that. Maybe it wasn't quite as bad as that, but it felt like it. The worst part was that this was in the same industry where I'd overseen a team (but also been hands-on) which developed the application and ran all the admin seamlessly.
Actually, most of the time I see DevOps being used to switch sysadmin responsibilities to developers - so the sysadmins can do even less.
Actually, the whole Ops part must be the sysadmins' responsibility - the Dev part must end at merging code (including automated tests); from there on, deployment (test/staging/production) and the related tests should not be their responsibility.
But the "it works!!! Dont touch anything, never!!!!!" crowd can't work that way.... they can't be bothered to install required patches, often....
There's no responsibility in devops. They do everything they want because marketing has decided to "move at the speed of the developer" & every twat thinks they work for Google. More importantly, they KNOW they can blame IT ops & security for any breaches.
The fact that they continue to spout the "move fast and break things" bull that Mark "I know we literally kill teenagers with our product but why doesn't Apple get shit" Zuckerberg said just shows what utter shite it is.
Go to any tech show at ExCeL & you'll see a plethora of firms that only exist to add guardrails to developers who shouldn't be allowed near a Speak & Spell.
DevOps as sketched in the DevOps Handbook, like ITIL and the unwritten sysadmin constitution from the LISA conference, is a framework. None of the movements mentioned says that to be a sysadmin, ITIL practitioner or DevOps operator you must do X in Y situation. These ideas invite you to build teams and processes that work for your organisation, as well as gathering useful information on how this might be done. The habit among some people implementing them of coming up with maxims - uptime is sacred, always raise a ticket before intervening even when the building is on fire, servers are cattle not pets - is on those people. You can't blame the authors who spent time trying to help, or the more flexible implementers, for their misdeeds, any more than you can blame the teams who plan and build roads and place signs, or careful drivers, for road rage or dangerous driving.
These ideas invite you to build teams and processes that work for your organisation as well as gathering useful information on how this might be done.
The best way of doing that, of course, is to pass the Dev/Ops skill sets on to your HR department, along with your hiring budget, and they will advertise for someone with at least 5 years' experience in all of them. In the unlikely event they are not flooded with CVs, they will take the candidates who claim the highest coverage of all areas and find which of them will work for the least money until they can meet some other company's requirements.
> Internet of things.
Excellent point. Added to the notional list.
I think RH isn't really in that market -- yet -- but it's getting very keen on car OSes. It has a new offering called RHIVOS or something.
Saying that, and with my sceptical face on, if they want to lend me a review car, I probably won't say no...
I think specifically the Smart Home IoT.
Almost before smart homes really were able to get going, the idea was enshittified by overlaying the internet on it. I'd been in love with the idea of smart homes since reading Ray Bradbury's "There Will Come Soft Rains" in about 1969 or so, but after seeing it inexorably tied in with the internet, my heart fell.
It's actually possible to have a smart home which runs on your own server and never has to touch the internet, but because of IoT, there is never any discussion of that fact. Whenever I'm settled into my forever home, I'm fiddling with the idea of doing that, and assuming it works, I might see if I can write about it and let other people know how to do it.
BTW, bêtes noire doesn't need an e on the end.
I'd toss in "Knowledge Engineering", something I could never quite figure out what it actually was, except maybe databases.
I'm also old enough to remember the "distributed computing" fad which was somehow going to solve all our problems once we managed to conquer the latencies by somehow sidestepping the speed of light.
Academia, where I toiled for lo those many years, had its own set of fads and fancies.
Ontologies were supposed to somehow organize our knowledge but seemed to only organize interminable seminars and short courses1 complete with Powerpoint decks of circles within circles interlocking in a grand mesh of... (slaps own face)... sorry, I got carried away there. Though I guess I shouldn't complain too much since it kept the grants coming in for a few years.
Data Mining was supposed to "federate"2 something something something data sharing something something but you get the idea.3
How objectively brilliant scientists seem to repeatedly fall for this rubbish is left as an exercise for the reader.
__________________
1 I spent a week at Stanford once trying to stay conscious while attending an "Ontologies" bamboozlefest that seemed to boil down to jiggery-pokery involving interlocking and interlaced XML files to some end or another. I never want to think about pizza toppings (their favorite example) ever again.
2 Another buzzword guaranteed to euchre a few more FTEs-worth of grant money out of the funding agency.
3 If you do, you're far smarter than I am, since all I saw was a giant hiring fest that collapsed two years later when we lost the grant to a collaborator. But that's another story.
household appliances that are being hooked up to the internet
That's not a corporate trend so much as a consumer trend. The kind of people who read The Register or Slashdot know what a bad idea this is. The average person, frustratingly enough for us, thinks that smart TVs, cameras outside (and often inside!) their home hooked up to the cloud, "smart" bulbs, garage door openers hooked up to the internet, even fridges are a fine idea. I know this because I see this stuff in the homes of many many many friends and I gave up long ago trying to caution them why this is a bad idea. Their eyes glaze over and they wonder how someone who has been using computers since the age of 13 or so and worked in IT his whole life can be, from their point of view, a Luddite.
Companies are, unfortunately and frustratingly for us, responding to consumer demand. They are able to charge a higher price for those "smart" appliances and it is difficult to avoid on the high-end models. Heck, we're such a minority you can't buy dumb TVs at any price from consumer lines. You have to buy either commercial TVs or monitors if you want to avoid the "smart". If we were a big enough market there would be some sort of crippled dumb TV available for special order direct from the manufacturer, on an SKU always sold at list price and never discounted, so they could profit handsomely from us. But we're too small a market even for that.
Like "We have an app for that!". "Basic life essential now available exclusively on a mobile phone near you!" And then the surprise that, if power goes, all grinds to a halt. (Here, have a sip of subscription water to swallow that.) But who cares? Like Liam and the "Grumpy Thoughtful Society™" conclude, it's all about the margin and bottom line of somebody else. But hey, you old fashioned git, you don't understand...
Yes, but for how long?
Aaaargh! Car parking apps. Why can't all car parks agree on using the same app! I get that these dodgy 'scan the QR code and pay' schemes are very easily open to abuse by the morally bankrupt, intent on ripping off innocent hard-working people.
However, I have 5 car park apps installed on my phone (so far!) just to be able to pay for parking at different locations.
And some of them are in places where I can't get a mobile signal, so the app doesn't work!
> "Heck we're such a minority you can't buy dumb TVs at any price from consumer lines".
You're right there. This weekend I had to repair our 90s stereo, whose sliding doors had stopped working...
The shock and awe: the thing has screws! I was able to open it! I could remove and replace the broken parts! Which I bought! And now are mine! My property! FOREVER!!!
"Companies are, unfortunately and frustratingly for us, responding to consumer demand. "
In a lot of cases, they aren't doing that - they are dictating what the consumer shall have! "Smart" everything provides a huge opportunity for data capture (and using that data for sales and marketing), so there is a massive incentive for everything they sell to be "smart".
It also allows them to dictate replacement cycles - "sorry, that's out of support and our app no longer works with it, and it won't work at all without the app so you'll have to buy our newest, shiniest offering (which will also go out of support in a fairly short timescale)".
"Companies are, unfortunately and frustratingly for us, responding to consumer demand. "
In a lot of cases, they aren't doing that - they are dictating what the consumer shall have!
That - I don't think any consumer, anywhere, decided they wanted an IoT toothbrush before it was flashed before their eyes. "Oooh, shiny!"
"they wonder how someone who has been using computers since the age of 13 or so and worked in IT his whole life can be, from their point of view, a Luddite."
I have an imperfect answer to that...
If you bang your head against the wall every day, you won't ever learn to like it, but eventually you will get used to it. If the day finally comes when you can stop, the idea of ever doing it again is unthinkable.
The corollary answer is, precisely because you've been in IT your whole life, you know how bad some tech really is, and it's best to have nothing to do with it when you can.
One particular sneaky one I've recently discovered is my Samsung washing machine, which has many functions which are not available on the machine itself, but only via an app, and *only* when the app is installed on a (post 2020) Samsung device are all the app's functions actually available...
Speed Queen, the folks who make the coin-op machines at laundromats, also make home machines that are just as bullet-proof as their commercial cousins. Better, they are NOT computer controlled, so you can actually fix them on the rare occasion that they break.
They are not inexpensive, but chances are you'll not have to replace it in your lifetime. If anything, they clean better than mass-market machines. If you care about such things, they are made in the USA. And no fucking "lid lock", either. Recommended.
Hint: Spend the extra money and get the big one. On the rare occasion you need it, you'll be glad you did ... and when you use it with lighter loads than it is designed for, you under-stress it, making it last longer before wear-points need replacing.
If the obvious escapes you, try speedqueen.com ...
Not affiliated, don't own stock, just a satisfied customer, yadda ...
Meh.
Their quality has gone down. I've seen LOTS of reports of Speed Queen failures recently. Turns out mechanical timers are ALL low quality garbage now.
Meanwhile my computer controlled Whirlpool front loader is about 20 years old now. It was a LOT cheaper than a Speed Queen, and has outlasted the current batch of Speed Queens with NO failures. Made in Germany, dunno if they still make them there or if quality has suffered like it has with German garbage cars.
Many white goods manufacturers were still making reasonably well-made and repairable items back then. Our 2002 Miele has seen three sprogs from nappies, through school (two runs a day, every day) and is now doing the same for a grand-sprog. Total spares needed: two sets of replacement motor brushes (15ish mins to swap out), one set of shock dampers (also a 15-min swap) & one motor relay on the PCB requiring a machine strip-down and attention from the soldering iron (£2.50 in parts to avoid a £250.00 replacement PCB assembly). I've no idea how long the drum bearing will last but I'll wager it's replaceable as well.
Samsung's large appliances/white goods have a generally abysmal reputation in terms of reliability anyway. Even allowing for the fact that they're a large company with a large market share, they seemed to get a disproportionately massive number of complaints in threads about that sort of thing a few years back when I was still on Reddit.
Samsung still rides the wave of the time when they were good in most areas, not just some. My microwave oven was produced in January 2000, bought 18th April 2000 for 129 DM (yes, before the €, or rather during the transition). Swapped the dead bulb for an LED replacement last year.
My personal opinion is that losing Microsoft products from the desktop (as most of it is now cloudy anyway, but that's another problem) would not have the significant impact Microsoft cult members predict it would, and would be a heck of a lot cheaper due to the lower impact on productivity that practically ANY alternative has. But hey, man-hours are a separate budget, right?
> As long as it's done right
It is hardly a condemnation of 2FA to add that as a rider, it literally applies to *anything*. And we tend to remember (with good reason) anything that is done badly/stupidly/frustratingly so that is what comes to mind first and colours our perceptions.
Drinking water is a Good Thing, so long as it is done right: forcing people's heads into an ice-cold butt is probably going to put people off.
Mutter dark oaths and raise your pitchforks at the purveyors of the "Chill Dunk 3000 Hydration Station (ducking stool add-on available)" but still enjoy your refreshing glass of perfectly potable aitch-two-oh.
The AC is not a lazy sysadmin but has dealt with computers as a hobby since 1987. He was head of IT for a couple of small businesses in the 90s. Now he's an (almost) retired attorney, hosting his own 2 servers, managing 5 domains (web, email, cloud, the usual stuff).
Yes, he uses the same 8 passwords, on a 30-day rotation. Yes, he uses the same password for useless sites that insist on registration. Yes, he uses different strong passwords for each bank (I even managed to get hold of a hardware thingie from one of them), important email accounts and the key pair for remote work on his servers.
Now please enlighten me why I should use https for a site which doesn't transmit sensitive information, or for connecting to my router from my internal IP, or for ...
Https has its uses, but mandating it for everything is just idiocy. Same with 2FA
BTW, how is your old equipment dealing with https? Oh, you threw it away and bought the new shiny?
And how do you manage 2FA when there's no internet connection available?
Just today I had your HTTP/HTTPS case. A UPS SNMP-ed an error into PRTG. Chrome refused HTTPS ("no supported cipher combination found"), but happily did HTTP...
> And how do you manage 2FA when there's no internet connection available?
You dial in with your direct line at 9600 baud and transmit the second factor this way. If you are good you can whistle your code, though that would be like 2 baud?
But seriously: the RSA stuff with hardware tokens still works, it simply got moved to the authenticator apps for convenience. They don't need internet either - just a well-synced clock :D.
Now please enlighten me why I should use https for a site which doesn't transmit sensitive information, or for connecting to my router from my internal IP, or for ...
Well. Let's hope no hilarious joker snaffles your admin password and locks you out of it / bricks the firmware for the giggles.
BTW, how is your old equipment dealing with https? Oh, you threw it away and bought the new shiny?
# vi /etc/ssl/openssl.cnf
No need to buy a whole new computer! But when you're running a kernel with more than zero local root escalations, or more than zero RCE vulnerabilities, and no upgrade option, it's time to thank it for a life well spent and switch off the life support.
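(For the curious, a minimal sketch of the same idea done per-connection rather than system-wide: lowering the OpenSSL security level and minimum TLS version so a modern client can still talk to a crusty old device. The address is made up, and skipping certificate checks is only sane for your own old kit on your own LAN.)

import ssl, urllib.request

# Hypothetical old device on the LAN; adjust to taste.
OLD_DEVICE = "https://192.168.1.50/"

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1   # allow the ancient TLS the device speaks
ctx.set_ciphers("DEFAULT@SECLEVEL=1")        # re-enable ciphers modern defaults reject
ctx.check_hostname = False                   # old kit usually has a self-signed cert,
ctx.verify_mode = ssl.CERT_NONE              # so skip verification for this one box

with urllib.request.urlopen(OLD_DEVICE, context=ctx) as resp:
    print(resp.status, resp.read(200))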
And how do you manage 2FA when there's no internet connection available?
yw.
"Now please enlighten me why should I use https for a site which doesn't transmits sensitive information"
Because anyone in the middle, the person you evidently don't mind receiving your non-sensitive information, can also write new information and send that to you instead of what you asked for. HTTPS prevents both reading and writing in the middle and comes with useful site identity verification.
"or for connecting to my router from my internal IP"
Probably not so important, which is why your router will often not require it, but it would let someone on your network get the password you're sending to access that router and therefore have that access themselves - depending on how you're communicating with the router, but that covers most cases.
"And how do you manage 2FA when there's no internet connection available?"
Synchronized clocks are the most common approach, or shared secrets without that, sometimes in hardware tokens designed to keep them secure and sometimes just in software. But although MFA often works without an internet connection, does the stuff you're using it on? If that's online, then even internet-dependent MFA should be fine.
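(For anyone curious what the synchronized-clock variety actually involves, here's a minimal TOTP sketch of the kind most authenticator apps implement: the only inputs are a shared secret and the current time, so no network connection is needed at all. The secret below is purely illustrative.)

import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # current time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical secret, of the sort handed out via QR code at enrolment time
print(totp("JBSWY3DPEHPK3PXP"))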
Typical 2FA implementations can be a monumental PITA if you don't have (reliable) cell phone reception. Maybe 2FA isn't a problem for you. But keep in mind that you are not everybody. I suspect that many 2FA implementations are actually unusable for some handicapped persons and other edge cases.
Or if you left home without that very phone surgically attached to one hand.
One relative who is into all this tech stuff (and is a CTO for a startup based in Santa Clara) recently got locked out of his $2.5M home because he stepped out to put some trash in a bin and a gust of wind slammed the door with him outside. His biometric access was useless without his phone for 2FA. Yes, he has 2FA on his home doors as well as retinal scans. It cost him $2000 to get access.
The hi-tech answer for such problems is presumably to require 2FA to access the trash bin. That way your relative won't forget to take his phone outside with him.
(Tangentially, what was said relative's plan for access if/when his phone battery died, was bricked by an ill crafted OTA update, broken, or stolen?)
"The hi-tech answer for such problems is presumably to require 2FA to access the trash bin."
The higher-tech option would be to create a trash robot that moves the trash bin to the sidewalk for you. It would require 2FA authentication prior to activating, as trash is important and needs to remain secure, plus it would IR and motion-scan the area for potential trash thieves before the automated trashrobot door on the side of the house opens.
This way the homeowner wouldn't even have to get up out of his chair to handle the trash. Also this way, after the wanker passes no one would care because they never saw his face day-to-day to begin with and nobody will notice his absence or miss him at all.
The MFA doesn't seem to be the problem in that example. I had a very similar thing happen to me. Let's see what technology was at fault:
1. I left a key in a jacket. I normally don't keep it there, so I assumed I had it on me.
2. I went outside without the jacket.
3. The door automatically locked.
When will we condemn the simple metal key for its unacceptable requirement that I carry it with me to open locks? As demonstrated, sometimes, if the lock locks automatically, you can get locked out that way. Let's fight against those who tell me that I should have a key surgically attached to me when I want to have a door locked.
In the specific circumstances cited, yes a key would have the same issues. However, there are many where it wouldn't - a key doesn't need internet access, it doesn't need its battery charging, it can be dropped / soaked without breaking, it isn't going to malfunction, and it won't be at risk of getting bricked by a dodgy update.
Some of those are true, but some of those are less important when we're talking about MFA access to online services. If your battery is dead, you can't authenticate, but either you were using the same device to access the service, in which case you couldn't get in without charging anyway, or you're using something else which can probably power your authenticator. There are also batteryless MFA tokens which are as simple as keys and just plug in to the device you're authenticating with or get powered over NFC or RFID.
Some of them just aren't true. Internet access, for example. The common TOTP and U2F methods both work completely fine offline, though TOTP will require that you keep the clock synchronized in some way, which is easier with internet access but does not require it. Updates are kind of a hybrid case because it's possible in principle for updates to break almost anything, but in practice I doubt you'd have an easy time finding a real example of an update having broken a TOTP app, since they rarely need changes and are somewhat easy to test before releasing the update.
If your 2FA setup needs to send you an SMS, you're doing it wrong - or rather they're doing it wrong. Authenticator apps are definitely the way forward; Bitwarden works for me.
Now if you'd said single sign-on, I would have agreed - it's a plague on the internet, and that goes 100-fold when you're trying to set up (e.g.) an Xbox account for a minor without an email account, or trying to grandfather in half a dozen legacy accounts inadvertently created over the last twenty years.
When the RBS introduced 2FA, the code they texted was valid for 5 minutes. To get mobile reception I had to leave the house, drive for two minutes, wait a minute for connection and text, drive back for two minutes and, usually, repeat. My record was six trips.
Thank god for wifi calling.
EXAMPLE: try completing the TrustID process on your phone, for a DBS check, without using Chrome.
"You sad loser, you're not using Chrome, so of course our wonderful app won't work with whatever poxy browser you have chosen to use instead. Nothing personal, just business."
I was going to say much the same. If we're still discussing this in the context of the article- i.e. contrasting the post-2008 era of "bullshit tech" with the supposedly halcyon era preceding it- then it has to be pointed out that this is *exactly* the position IE was in from the late 90s until the mid-2000s.
IE's nonstandard crap and general stagnation held back web standards for *years*.
A few years ago, I read about some self-pitying tossers at MS whining that they had to support the crappy old, dated, nonstandard features and IE versions people were still using, dating back to IE6 and earlier.
You know, the same stuff that *their* employer shoved into IE in the first place, which then became entrenched because IE had a de facto monopoly and because MS sat on their lazy backsides with IE6 for five years until it finally got some competition with Firefox. By which point many people were locked into systems and sites that relied on all that nonstandard and obsolete crap because everyone back then used IE anyway.
Urgh.
Anyway, yeah, fuck IE and any rose-tinted late-90s/early-2000s nostalgia associated with it- I'm glad that shitty, dated POS is dead.
Just not so happy that Chrome has now become its spiritual successor....
The 'Metaverse' hasn't yet become serious retail bullshit, but VR in general has a shockingly long history of hype that's definitely liberated a lot of money from investors. Current market revenue is always paltry, but just look at that 10-year hockey stick of projected demand!
(Pretty sure you're referring to Max Schrems / noyb; if not, please link/expand.)
Soooo.... you'd rather everyone cruise along oblivious to the unnecessary dataslurping?
(As far as I'm concerned, personally, if a cookie is not NECESSARY for proper function of the core site's purpose, it is therefore by definition UNNECESSARY and therefore rejectable without even asking.)
Schrems is still hammering away, this is hardly a done fight from his perspective.
The problem with the cookie issue is that it wasn't enforced hard enough - the yes/no options were supposed to have equal weight and prominence, but what actually happened was that most implementations have a massive 'I Agree!' button, and instead of a 'No, fuck off!' button have a link in small text (with often unclear wording), leading to a page of toggles and lots of text, with an obscure button to confirm once all the toggles are turned off.
And more recently I notice that a number of websites (particularly news ones) are telling you that you either need to agree or pay for a subscription to their site - very definitely not in the spirit of the legislation, but again there doesn't seem to be any enforcement action.
They are strongly tied to containers. We have a supplier so fixated with microservices that they deliver an application built from about 40 containers, each running a microservice with a separate Postgres database. They ask for an 8-node Kubernetes cluster with lots of cores and RAM, even though our telemetry says they don't use more than 10% of it - but hey, it makes their application look bbbiiiiiigeeeer!
Deploy early, rely on the end users to do your beta testing and fix the errors later if you can be bothered. If your bank deployed barely tested crud that messed up your account you'd be livid, so why is it OK for you to do it?
Full disclosure: I spent my entire computing career working on real-time operational support systems for the blue-light community. Everything we deployed had to work as expected from day one and go on working 24/7, because sometimes lives really did depend on it. Our test cycle was two or three times longer than the industry average, but our error rates in deployed code were only one tenth of the average.
Metaverse: hardly a technical term in the industry jargon, but it certainly made a (thankfully, very brief) splash in marketing newspeak a few years ago.
I also hear that some company out there actually went and changed their name to "Meta".
That would be the IT equivalent of the guy down the pub with the "Exploited" tattoo on their neck.
Oh, this wasn't meant to be a slur on The Exploited specifically; rather, it's about the kind of people who get so obsessed with their idols that they end up with some rather embarrassing body markings in later life.
For the purposes of my comment, it doesn't really matter whether the bloke we're talking about has an Exploited, Cliff Richard, Angelic Upstarts, Shakin' Stevens, The Human League, Family of Man, Iron Maiden, Depeche Mode, Coldplay, or Spice Girls tat.
While I wholeheartedly agree with almost all the points, I have to put my foot down at containers. The argument that "you can run it as easily without them" quickly breaks apart if you try to run a bunch of services (for your own needs, not the corporate bullshit they so much loathe, where you can have "one service per machine" deployment) that may have conflicting requirements... "oh, your distribution updated / failed to update the database to version X that we support and require? tough luck".
With containers I don't have to give a flying Duck about those and not fear that one would break the other.
But I also like things nice and tidy and organised...
> With containers I don't have to give a flying Duck about those and not fear that one would break the other.
Or one can use VMs for the same effect.
Now cue neverending series of arguments about why/when you should (not) use containers or VMs; more than a few of which will completely ignore the key line in the above, which is also all that I am concerned with these days:
> for own needs
Now, if an OS were able to properly partition and control application resources like some old mainframes did, containers and their whole overhead would not be necessary. Nor VMs. Maybe with some help from the CPU, which could *segment* memory properly so a bad application can't do damage by reading and writing memory that doesn't belong to it.
But since Red Hat is one of the culprits insisting on an OS design from the 1970s, unable to use more sophisticated CPU technology - and the company that doesn't deliver anything that isn't at least ten years old - here we are...
Fascinating.
Taking the time to follow your usual rant against extant OSes[1] *and* also referring us to a technology (memory segmentation) that does literally *nothing* to solve any of the issues that the OP raised![2]
A masterclass of the kmorwath style.
[1] still waiting to hear about your progress on your replacement OS that you believe works, btw
[2] conflicting dependency requirements which require copies of absolutely everything to be bundled with the application, just in case, OR the application only runs on one, very specific, OS version so be careful you downloaded the correct one from the 1719 different builds that were provided - or not.
Kmorwath will *never* miss even the most tangentially-related excuse^w opportunity to segue into a rant about either of the pet bees they have in their bonnet...!
Namely (a) FOSS being supposedly to blame for any and every failure of competition in the IT market and/or- as in this case- (b) Linux being a "70s OS" because it's based on Unix which first appeared back then and obviously hasn't changed one iota in fifty years!
Of course, you won't see them applying the same dubious logic to Windows, even though *that* too ultimately originated- via MS-DOS- as a workalike/ripoff of another even more basic 70s OS known as CP/M.
I have seen it argued that containerisation can increase security. Each service runs in its own VM, so if one gets pwned then the fallout is limited.
OTOH all the extra complications add to the attack surface and patching burden, increasing the likelihood that something or other /will/ get pwned.
Thoughts welcome.
I think it's a problem when a service is delivered as a container, and that is the only way to deploy it. It's an issue when the design is so fragile that's necessary.
On the other hand, I've been fiddling around with jails under FreeBSD and they provide several advantages. It could be argued that jails are unnecessary if services were better designed - in particular being able to easily support multiple instances, and config files. However that's not the situation, so I have a number of different jails each running unbound and/or other services, each with a separate IP address and all running on a standard port.
It makes it much easier to manage, and the jail and all the services and configuration inside it can be easily brought up and down, so yes, I can see the advantages of some containerisation.
If I *had* to run a service that uses 15,000 Python or other web components which are managed automagically by a dependency manager, it also seems safer to stick it in a container. Yes, obviously there are ways of sandboxing Python etc. too, but that's also another level of hassle. Of course my preference is not to run a service where you can't easily monitor and check all the dependencies.
> Each service runs in its own VM,
Containers only feel a bit like VMs. Technically they are just groups of processes on your host (VM or bare metal) locked into specific cgroup environments.
However, used correctly containers can in fact improve security by restricting e.g. a shell session spawned by an RCE to that cgroup environment and a mostly virtual file system, preventing infection of the host or other containers.
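(A tiny illustration of that point, assuming a Linux host with cgroups and /proc in their usual places: run this with no argument for your own shell, then from the host with the PID of a process inside a running container - same process table, different cgroup and mount namespace. You may need root to inspect another process's namespaces.)

import os, sys
from pathlib import Path

def cgroup_of(pid):
    # Which cgroup(s) the process has been placed in (by the container runtime, if any)
    return Path(f"/proc/{pid}/cgroup").read_text().strip()

def mount_ns_of(pid):
    # The mount namespace is what gives a containerised process its "mostly virtual" filesystem view
    return os.readlink(f"/proc/{pid}/ns/mnt")

pid = sys.argv[1] if len(sys.argv) > 1 else "self"
print("cgroup   :", cgroup_of(pid))
print("mount ns :", mount_ns_of(pid))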
Until your lazy developers download random shite from internet repositories or insist on never upgrading them & having 20 different versions of a container, because THIS code only works with THIS library on THIS version of a container with THESE random Chinese GitHub libraries.
Personally I would block all access to the internet for containerised applications and ONLY allow usage of minimised containers from a vendor like Chainguard that are regularly and automatically updated to stay CVE-free. If that breaks the app, the devs are to blame, and if, as a manager on the ITmanager said, "they just threaten to leave if we insist on security checks"... I would say "off you fuck then!"
Okay, but how would letting said lazy developer deploy to bare metal or a full fat VM make that situation any better?
At least if it's containerized their idiocy will (hopefully) be limited to a single cgroup and not allow lateral movement and escalation elsewhere in the system.
cookiecutter has essentially said that he would never deploy *anything* that hasn't been released by chainguard (or similar).
Which leads to the question that, if he doesn't trust any "developers", who or what does he think works on the material that chainguard packages - or even at chainguard themselves?
spot the developers....
There's a difference between trusting a vendor that creates containers 1/10th the size of the basic Docker download, whoever it is, & letting developers "move fast & break things" or run multiple versions of the same base OS or K8s because "this library only works with that version of the OS & that version of Python & that version of the salt we threw over our shoulders in the hope it works before chucking everything in the JIRA backlog queue never to be seen again".
Or letting them randomly download public VSCode extensions which have been shown to be exploits in disguise, or libraries on GitHub which have been shown to have backdoors.
If Devs want to be "fast moving rock stars" deploying code to production 1000 times a day then THEY should be the fuckers in the office at 4am because their shitty code or laptop with PLEX loaded on it ended up getting the company ransomwared
My favourite scrum adventure was working at a place where a veteran project manager (now scrum coach) held his morning standup.
15 minutes a day isn't much, per team. He had several teams and for efficiency he got them all in the same room so he could do them together.
The 15 minutes stretched to 30, then 45, then eventually 90 minutes each day with about 30 people explaining what they had done the day before and being told what to do for the rest of the day.
But it was still Agile because they all had to stay standing up.
The 1944 Simple Sabotage Field Manual (from the OSS, the CIA's wartime predecessor). See page 8ff, and specifically page 28, Organizations and Conferences, point three:
(3) When possible, refer all matters to committees, for “further study and consideration.” Attempt to make the committees as large as possible — never less than five.
https://en.wikisource.org/wiki/Simple_Sabotage_Field_Manual/Specific_Suggestions_for_Simple_Sabotage
Nowadays it is called the "Common management principles manual".
Yes * 100. Agile is an excuse to throw away the changelog and release notes for each release.
Now what do we do to install? Look at all the Jira tickets and execute the commands given in each release consideration, apart from when they've been repeated in this minor version (not the major version, though). Fun and games happen with reopens, which also change the fix version and wipe the previous fix version from the ticket. Who's going to remember all this a few months later when someone has to install an older version for testing, to compare against a newer version?
It's chaos and the installation engineers are rightly pissed off, developers are pissed off because it's just throw all the code in there on build day and hope the merge works, and QA are pissed off because they're only testing individual tickets in isolation.
All agile is good for is pretending that a terribly managed project is being managed well.
Hit in the early 00s, suddenly everything had to be remote. It was a relatively short hype cycle before things settled down, it started to be used properly, and the hype died.
The funniest part was some architectures where an application sent a huge print file up a not particularly fast broadband connection to the print server, which then sent the data back down the line to the local printer..
Second Life/Metaverse. More a consumer thing, but there was the first wave when everyone would supposedly spend all their time in a virtual environment, and it turns out the public don't actually want a permanently online avatar based cyberpunk existence.
Then VR hits, starts to get some actual traction beyond earlier experiments, and Meta think they can do Second Life but in VR, and that everyone will want to interact that way. Turns out again, they don't, and people fundamentally don't want to strap equipment to their head all the time.
There are other options - there's all the HTC Vive stuff, but that doesn't have a great price performance. There's the PSVR2 with PC adapter (it works well, but make sure you're using a Bluetooth adapter on the supported list, I'd recommend an Asus one). There's also the upcoming Valve Frame.
At the Microsoft end, WMR has been discontinued, but there is a WMR driver for Steam so it *shouldn't* need any Microsoft account at all.
If by 'tied hard' you mean 'requires Windows', then it is possible to get some of it working on Linux but you will be making life harder for yourself.
Perhaps best to wait out the Valve Frame and see how good it is.
Classic at morning stand up:
"Does the JIRA load for you guys?"
"Where is this ticket?"
"Could you let us know the ticket number? I can't find it!"
"It's in child tickets. Open this one and go down to list, 5th from the top"
"Shall I close the other two tickets as we seem to have duplicates?"
"Why these tickets are assigned to me?"
"Who moved that ticket?"
You are so spot on you’ve probably got measles.
I absolutely loathe Jira and its ilk. It's just a massive time waster. A little anecdote - I was once on a weekly Teams call to discuss Jira tickets with the usual suspects talking endless bollox about …bugger-knows. Anyway, the discussion descended into half an hour of utter nonsense about what to call the ticket I was working on. At the end of the half hour, I pointed out that while everyone (well, 3 of the 6? people) was talking, I'd finished the job. I completely ignored all the Jira meetings after this until I left the company.
A few companies ago, the devs we supported were allegedly using Jira for their work projects.
After a while, we noticed that none of them ever seemed to refer us to Jira ticket numbers (whatever Atlassian calls them) when they needed something, and when the odd manager would ask us how some project was going (perhaps because they'd heard nothing about it), we found they'd never seen a Jira "user story"(?) or the like either.
Turned out most of the devs were putting in the bare minimum mandatory Jira fields, presumably just enough to make a tick mark in some PM's dashboard, and then apparently never touching it again.
One wonders what the daily stand-ups were like, if they were even happening at all....
Browsers.
I hate browsers.
All of them.
They all have their own particular ways of irritating me but they all have them.
They all crash.
They're all vulnerable to thousands of exploits that we don't even know about until we read about them in el Reg.
I fucking hate browsers.
I know, I know, I'm using one now. It crashed yesterday for the first time in nearly a week (when I tried to look at the rainfall radar to see if I'd get wet when I went to the shop on my bike).
SeaMonkey is now just about the only one* that attempts to use icon themes so that it looks as if it belongs on just about any desktop. It works on fewer and fewer sites and the Grauniad crashes it instantly. So I'll nominate not browsers but web site developers who are too clever by half and not half clever enough.
* Possibly Falkon might, but I've only tried it on one desktop.
SeaMonkey is currently reduced to being the email client only, since WAY TOO MANY webpages don't work with it any more. Luckily there is the extension "standalone-seamonkey-mail" which opens the "system defined" browser when you click on a link in a mail.
I miss it though.
Oh, and I am still using sunbird...
I've been using Netscape browser+mail since 0.9something, which then went on to be Mozilla browser+mail, which then went on to be Firefox. But I refused Firefox and hated the dumbed-down-Outlook-Express-alike Thunderbird, and stayed with the Mozilla browser+mail, and then luckily SeaMonkey came up. SeaMonkey is still my main mail client, and the main browser is currently Waterfox. But a few secondary ones are needed now... Vivaldi for Fecesbook, since I don't trust any browser there, and Edge for the crap which requires Chrome but is not Fecesbook.
There's not a lot of difference between SaaS and normal services. Dividing them into groups, especially if one group is bad and the other one neutral, is a fool's errand. Webmail can be software as a service since it includes a mail client, a spam filter, and various other things which you get from the provider instead of running your choice of software and configuration on a rented server.
Often when the SaaS conversation comes up, the relevant question is lock in. Does the provider have a copy of your data which you can't get back in usable form if you don't want to use their service anymore? That potential is present no matter what it is you rely on this place for. If they were running a completely normal mail server but refused to give you a copy of your email, then you'd still be locked in, whereas if it's a subscription office program but all your documents were easily downloaded in an open format, then you're not. The latter is the more likely to get the SaaS label because the thing being rented does include a lot more software, but that's not the cause of the problem here.
The point is that if one already considers GMail a SaaS- as the article does when it points to it as the first example aimed at the general public- then there doesn't seem to be any fundamental distinction between it and Hotmail, which had already been around for the better part of a decade by then.
There are a lot of things that huge numbers of businesses and consumers need, but they do not have the time, experience or even the level of usage to run a dedicated server.
Email and basic webhosting are commodities. Every organisation and consumer needs a basic email service, and most organisations need a basic website. Hiring a specialist to do them is the best - perhaps the only sane - option for almost everyone.
I don't have the expertise to run an email server or the time to learn, so I pay someone else to do that. Same as I pay someone else to make the toilet paper.
Where it goes wrong is any time the service needs more than trivial customisation. If you can't "set and forget" simple customisation once and then leave everything to the provider for the next couple of years, then you shouldn't be buying it as SaaS.
And if you can't easily export your data and go elsewhere, limited only by bandwidth, then you absolutely must run away (screaming recommended).
So ERP SaaS is completely insane.
The Over-Hyping of any of these as "Everything and Everyone must use it" is the problem.
I also dislike the marketing terms: NextGen (we think we're the next hot thing), BigData (collect details you normally wouldn't into more storage, write the proper reports to get the data summary, and pay $$$$ to use BigData's collectors), Automation (fine, if you can learn the branded coding; many are just re-writing to brand Recipes as Cattle).
TrapC is just the latest in a long line of numerous attempts at solving this problem for C and/or C++. (It's not even the only current attempt (*)).
Their lack of memory safety- and the impetus to solve it- has *already* been an issue for decades. If any of those earlier attempts had resulted in an easy, universally-applicable solution, they'd have become standard long ago and we wouldn't be discussing this or bothering with Rust. But, of course, we are.
And honestly, if the issue was as straightforward as it appears superficially, it *would* already have happened by now. The fact it hasn't likely makes clear that anything resembling a universal solution is going to be very difficult to find if it's possible at all.
Regardless, I certainly wouldn't hold out hope that- even with the magic of LLM AI(!)- TrapC is going to be anything more than just another attempt.
(*) It's not even the only one in recent times; there's also something called "Fil-C".
"Containers simplify" - Not that much. They are still very dependent on their host, and especially in linux environments hosts can have a wide variation, so they are locked to a dist, down to the version. Just a wedge in between chroot and virtualization. Current trend is back again to NOT have tons of different docker images running in one linux installation, but to split it up again.
"Kubernetes" - just another layer of abstraction, tied to a quite narrow area of use where "cloud" as new buzzword for old technology fits in. So you need, in a real scenarion, at least 8 VMs for Kubernetes? Where is the efficiency?
*AAS: Yeah, buzzword for "Switching to subscription". Applied to a log of things it should not apply for. Mostly not an AV scanner which needs constant updates "As A Service" <- one of the rare cases where it applies, but was never called this until the propaganda department commanded a new buzzword.
Blockchain: Besides money laundering the idea of track-able document/contract changes with tamper protection is fine. But there is no good standard on how to do so, so meh, scamming and money schemes...
AI: New word for > 30 years old technology. Implementation details got improved, not the concept itself. Many companies (not MS obviously) are calling it automation for over a year now, and scientists still use it what it always has been good for: pattern recognition. The only actual improvement which appeared 'round the 2010 years was the pattern generation, that got much better than 30+ years ago.
We can attribute the wet dream of low code/no code to 4GLs.
I had the pleasure in the mid and late '80s of being called in to several places to debug/fix 4GL code at the raw level because the 4GL front-end provider had gone bye-bye, licences had expired and the 4GL wouldn't run.
Just a disgusting mess of spaghetti.
Related to this... me, I am a "top down" kinda guy (of course subroutines/functions for repeatedly called logic are perfectly fine/expected) and, 4GLs aside, I hate "micro fragmenting" of code: which is to say, you have some compound piece of work to do and each unit of work is made into a function call.
Each function call does a tiny bit of the solution and is never ever called by anything else, but the programming paradigm says you must devolve what would be a simple "top down" section of code into called functions because MAYBE you might need to reuse the function elsewhere.
But that never happens.
What gets missed in this "micro fragmenting" of code is that the guy/gal who comes after you needs to be able to easily read and parse what you are doing, and shotgunning logic here and there in multiple code locations is totally counter-productive. The time lost in tracing where all these fragments live (in many cases outside the mainline) is substantial.
I am not sure if this "micro fragmenting" of code is due to IDEs forcing it (I have never used an IDE) or whether it is something taught in unis or considered industry best practice, but IMHO it is counter-productive.
Bluck
> I hate "micro fragmenting" of code
I call that functionitis. I have to find a way to cure people of making a piece of code which is only used once into a function, or into several functions which then each have to be called once in a specific order.
EDIT: the famous Kevlin Henney speech must be inserted here - skip to 58:07 if you are impatient for a classic functionitis example...
It's supposed to be about encapsulation and making the intent clearer.
The function name says what it does, and the parameters and return value(s) specify everything it could touch.
This should mean that you can scan the upper level and quickly ignore all the functions except for the one or two that you may need to change. In a large codebase this is a lot faster than scanning a massive function that may change any variable at all at any point.
In a compiled language it makes no difference to release performance, because the toolchain will inline them by default. (Debug flags and explicit function decorations may disable this)
Every IDE has a trivial keyboard shortcut to jump to the implementation and back, whether in the same file or another. A few will even show it in-inline.
As with everything, it can be abused or used poorly.
> The function name says what it does, and the parameters and return value(s) specify everything it could touch.
> This should mean that you can scan the upper level and quickly ignore all the functions except for the one or two that you may need to change
You missed taking a copy of that "should" from the second of those two lines and inserting it into the first:
] The function name should say what it does, and the parameters and return value(s) should specify everything it could touch.
> Every IDE has a trivial keyboard shortcut to jump to the implementation and back, whether in the same file or another. A few will even show it in-inline.
Skipping over "every" (and, btw, that is the job of an editor; if it also has to be tied into an IDE then, well, hmm): if your first contention is correct then what need is there for such an immediate review of the called function's code? Especially in-line! That latter really starts to sound like the response to "functionitis" getting out of hand ("I have absolutely no trust in any function's author"). Perhaps, for real life work, which trumps "it should be" every time, the sentence ought to be:
] The function name hopefully suggests what it does, and the parameters and return value(s) possibly specify everything it could touch.
I am not sure if this "micro fragmenting" of code is due to IDEs forcing it (I have never used an IDE) or whether it is something taught in unis or considered industry best practice, but IMHO it is counter-productive.
Half-arsed coding standards with very low 'complexity' metrics, enforced by static analysis tools, which insist any function with a loop and a couple of if statements be split into 3.
3? Split into 3?
You can do better than that!
void loop93(loop93_parameters *p)
{
    if (loop93_can_even_start(p))
        for (int i = loop93_start_index(p);
             i <= loop93_end_index(p);
             i += loop93_increment_when_at(p, i))
            if (loop93_condition_ok(p, i))
                loop93_do_stuff(p, i);
}
The whole movement to an 'application' being a bunch of cats fighting in a sack, with weak dependency management, wildly inconsistent project structure and a state that defies any rational analysis or testing.
Though you could argue that a lot of the worst excesses are reactions to holes we've already dug ourselves. React as a response to the crude mashup of JavaScript and HTML, Containers as a reaction to dependency hell of Python-like environments, Agile as a response to unmanageable requirements and estimation demands etc. etc.
>Though you could argue that a lot of the worst excesses are reactions to holes we've already dug ourselves. React as a response to the crude mashup of JavaScript and HTML, Containers as a reaction to dependency hell of Python-like environments, Agile as a response to unmanageable requirements and estimation demands etc. etc.
Proof, if any was needed, that papering over fundamental problems doesn't fix them.
All that stuff existed before, including the "phone + photo + music player + internet + GPS" stuff. The innovation was the ease of use, the touch display, and the genius move was to enforce the Apple marketplace, like they did with the iPod. Oh, and most important: the BLING factor. The Fujitsu Loox 720 and Fujitsu Loox T810 / T830, for example, had all that as well, and other manufacturers too, but the touch and usability could not compare.
Back in the day (late '70s) one of the kids learning about virtualization on the Mainframe asked if self-virtualization was possible (i.e. running a VM in a VM). The answer was yes ... So naturally, someone (unnamed, to protect the guilty) decided to see how deep they could get the virtualization to go.
Turned out you could bring a very high-end IBM mainframe to its knees almost instantly ... much hilarity ensued.
I would nominate your computer being tied to an ecosystem, whether that ecosystem is Windows, Mac, iOS or Android, to name a few. I still would rather have the programs I use on my computer and not in the cloud. Also: sharing all your personal information with different companies for marketing purposes, in your car, on your TV, on your phone and on your computers. I also think local accounts on a computer still need to be available. Privacy is becoming a thing of the past.
A wonderful introduction to programming that even your 10yo child can master.
Still seeing production apps written in obvious VB4 or VB6. Scary.
Of course this applies to Logo, Turtle, Software through Pictures (https://arxiv.org/html/2403.08085v1)
I think Apple's Hypercard was actually pretty successful - must be why they killed it off.
Not the original architecture, which was well reasoned and seems fit for purpose, but the cult of brain-damaged zombies who reasoned that, because they couldn't figure out SOAP, and because their server used HTTP for client connections, they should adopt a website schema intended to address the massive scalability of global, multi-region hosted sites.
Hey, I'm going to defend clouds here. I've helped run a moderate-sized cloud myself, in a shared data center, oddly enough with a fair amount of help from Red Hat support.
There are two reasons for piling up lots of computers in a data center - buildings and people. Or maybe the cross product of the two.
Buildings built for people suck for computers, and vice versa. (ask me about working in a 100F hot aisle...) For a lot less than the cost of a multi-story office building you can build a 1-acre machine room floor, with 10+MW of power and efficient cooling, and put it a drive away where power is cheaper than in the city and real estate is virtually free. (e.g. a dying industrial town in our case) Most of the electrical and cooling equipment we have just wouldn't fit anywhere in a standard office building, and the smaller gear just isn't as efficient or good.
Perhaps more importantly, people come in integer units, each one knows a limited number of specialties, and you need to cover a lot of specialties to run a bunch of computers. (well, unless you restrict your app programmers to a very limited environment, e.g. the MSFT-centric IT shop of the 90s-00s) For a given library of services it only takes a few more people to run $100M of computers than $1M of computers, plus they'll be challenged rather than bored so it will be easier to hire and retain them.
You have correctly described the value of gathering hardware into a data centre, getting all your physical plant together and taking advantage of the physics in doing so. And the equivalent in human resources to keep the plant running efficiently.
HOWEVER
whilst Cloud implies data centre, DC does not imply Cloud.
Even your statement: you ran a cloud in a shared DC. It was a cloud to your clients; to you it was a bunch of hardware, either purchased & colocated or hired by the rack from someone else who decided to pop it into that DC. The software running on it then determines whether it (today) is to be "cloudy" or not.
my argument has always been pro DC. YOU run that infrastructure & YOU know who to slap when the cleaner unplugs a server to plug in the hoover.
There is literally nothing you can do when Microsoft hires minimum wage devops twats in some 3rd world country who run a script on the assumption that "no one will have any SQL snapshots in production" & take out the entirety of SQL for South America for 10 hours! Or when they let Chinese engineers troll around your Defense department Cloud for months (years) before someone mentions that it is probably a bad idea to do that
"Cloud" makes part of the software stack shared, too. If you don't have a vast amount of compute, the people are going to cost more than the computers. Plus they'll be bored, and they'll probably suck because you can't hire great people to run systems for a random mid-sized non-tech enterprise.
In the 20th century enterprises used to outsource that layer of the software stack to IBM, and later to Microsoft. Not anymore, and if you roll out a hundred servers in a data center there's a lot of "stuff" that needs to be done so your application programmers can do something on top of it.
It is the way of Register commenters to sneer at things, especially if they don’t know about it and especially if they see it as either a threat to their livelihood or something they will have to put effort into learning.
But everything in the list has its place, and where it is genuinely useful it survives. The problem is where it is used where it is not the appropriate solution.
People will often do it unnecessarily because they want to understand and learn it, which is opposite to my first paragraph but the better option of the two. Learning is better than sneering any time.
Grumpy miserable gits won’t like this, because of course they are experienced and wiser and always know better (superciliousness). In fact they have been dismissive without having taken the time to learn.
Great comment: just blithely state that the Register comments are negative because we can not be bothered to learn and are just scared of the upstarts coming to take away our jobs.
OR you are a shallow youth who is the one who can not be bothered to learn, to actually go back and read about what the grumpy old farts actually lived through and see how it really is just one damn stupid fad after another, making money for the hypesters. To heck with comments, that is the entire thrust of TFA you are responding to!
> where it is genuinely useful it survives
And fades into the background, needing no hype to survive and generating few, if any, emotional reactions aka rants. When did you last get worked up about the Turing Architecture?
> People will often do it unnecessarily because they want to understand and learn it
Now there is a fascinating statement.
If you are truly interested in learning about it, giving a technology a work-out is not unnecessary. So - contradiction?
Unless you are "doing it' so too early, without having first picked up enough background to comprehend what you are seeing (and by "work-out" I don't mean running the installer so that you are ready to try out the examples in chapter one). The worst case of which is the en masse move of your company to the New Shiny, with the need for increasing numbers of Memos From On High demanding that everyone has to use the Shiny in order to get the company's money's worth from it - does that one ring any bells from any Register comments you have so casually brushed aside?
Great, fight personal attacks with personal attacks. A good way to make sure you're both wrong. Five seconds could tell you that that user has been posting here since 2011, not so compatible with the "shallow youth" assumption. And your correct points about the blanket accusations are somewhat weakened by the blanket accusations you've made yourself.
Nor does your primary argument against theirs hold much water:
Them: "where it is genuinely useful it survives"
You: "And fades into the background, needing no hype to survive and generating few, if any, emotional reactions aka rants."
Using that logic, many of the fads called out in the article weren't fads at all. Cloud providers are used by tons of users without remark. Containers and Kubernetes are frequently put to use, whether or not they should be, without being the headline feature. Blockchain is an exception; everything I know that uses it does trumpet that they do so within the first two paragraphs. AI is both the newest re-fad in the list and poorly defined, so I'd exclude it from consideration until after the next AI winter has set in. The rest, not so much.
You might have a point if it was about learning, but it pretty much never is. It's almost always about making money by shoehorning technology into a situation where it's either not suited or not the ideal solution. 'LLMs' where an algorithm would be easier. NFTs/blockchain where in pretty much every situation a database would be better. Not to mention trying to hand wave away the social and environmental impact.
The alternative is the joy of CV padding, where an over complex solution is implemented because it gets the recipient kudos, something to stick on their CV, and a possible pay rise, even though it's not a good long term solution, or particularly efficient. Then the implementers move on, and some other schmuck has to fix their 'solution' properly.
Once the hype cycle ends, and the technology is used for the areas it's designed for, things are generally OK. Although then you'll get idiots saying a technology is useless because it hasn't taken over the world, ignoring the reality it works just fine in its specific niche.
"Grumpy miserable gits won’t like this, because of course they are experienced and wiser and always know better (superciliousness). In fact they have been dismissive without having taken the time to learn."
In a lot of cases the 'grumpy miserable gits' have been there, done that and had to sort out the resulting mess, so are actually pretty clued up on why something is shit!
E.g. containers absolutely have their place but they are now thrown everywhere. Browsers as a sandbox are great, but electron-every-app so we can save dev and support costs (with multiple real costs to the user that are gaslit away by marketing) is a real problem (just installing tens of these, each requiring over a gigabyte, not to mention that updating can fail spectacularly). It's more the misuse of tools on an extremely widespread scale, driven by the ability to sell more and push more of the real cost to the consumer without them realizing.
As someone who has to deal with the godawful SQL generated by ORMs (in this case Entity Framework), I am nominating ORMs.
ORMs let a lot of developers think that they don't need to learn SQL and, as a consequence, they cause their applications to work less efficiently on account of a lack of understanding of how RDBMS work.
Or better still, they let the ORM design the database for them.
Of course, developers have Claude and friends now. Like LLMs, an ORM only works well when you know what good database design looks like and what good SQL looks like.
It means that you can & will fix the output of your ORM.
Ooh, right. ORMs are a lot like LLMs, in that they look like they'll solve your problem - but, actually, that's only true for trivial problems. Every time I've tried to use an ORM, it either turned out to just be replacing a bunch of one-line SELECTs, or I ended up wasting more time wrangling the ORM than it would've taken me to hand-craft the correct SQL.
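For what it's worth, here is a minimal sketch of the classic "N+1" pattern that lazy-loading ORMs tend to emit, versus the JOIN a human would write. It uses Python's stdlib sqlite3 with made-up table names, so treat it as an illustration of the query shape rather than the output of any particular ORM:

# n_plus_one.py - the query pattern a lazy-loading ORM often generates,
# versus the single JOIN you'd write by hand. Schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO orders VALUES (1, 1, 9.99), (2, 1, 20.00), (3, 2, 5.00);
""")

# Lazy loading: one query for the parent rows...
customers = conn.execute("SELECT id, name FROM customers").fetchall()
for cid, name in customers:
    # ...then one extra query per parent row - the "+1" quietly becomes "+N".
    orders = conn.execute(
        "SELECT total FROM orders WHERE customer_id = ?", (cid,)
    ).fetchall()
    print(name, sum(t for (t,) in orders))

# Hand-written: one round trip, and the database does the aggregation.
for name, total in conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id
"""):
    print(name, total)

With two customers it hardly matters; with two million it is the difference between one round trip and two million and one.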
You’re also paying for access to your own software if you buy physical servers? So that argument doesn’t hold up. But it’s true that physical servers can be a lot more cost effective, if your workload doesn’t scale too much over the long term. If it does you’re spending significant time redesigning your infrastructure, whereas the cloud solves that already. I guess time is money in that sense.
In the world of business it is not the validity of the technology that counts - it is just the money. And if you look at the Hyperscalers, the Cloud, Kubernetes and basically everything on the list is moving a lot of money. So, yes, one might not need anything invented past 2008 to operate a service, but the business needs it. The biggest SCAM of all time is actually Microsoft, because they sell Office applications over and over with no tangible benefit other than that vast numbers of employees are able to entertain each other with work stuff.
Microsoft's .Net: a monument to the philosophy that if one abstraction layer is good, seventeen must be revolutionary. They've generously given us not one language, but a kaleidoscope of variants - because apparently C# wasn't confusing enough without F#, VB.NET, and whatever else they dreamed up on a Tuesday.
The framework itself reads like a phone book written by committee - thousands of classes, each with methods nested in methods, properties referencing properties, all interconnected in ways that are impossible to navigate and require a lifetime to master. It's complexity as a feature, not a bug.
They took the straightforward task of writing software and transformed it into an archeological expedition through documentation. Somewhere beneath all those layers is probably a simple solution, but good luck finding it before your deadline.
Once again, could you *please* say what you mean by "Fifth Generation Languages"?
The Wikipedia response is total bollocks[1] and has, of course, fatally poisoned every other result in my quick web search :-(
Are you using the term to refer to Rational Rose (as you mention *a* Rose), intended to generate code from diagrams etc?[2] We used that for a while (I moved on) but to talk about its output as being "unmaintainable" is about as sensible as talking about the assembly output from GCC as being unmaintainable: if you need to "maintain" the output, instead of modifying the input and re-running the compiler (Rational Rose is just another compiler[3]), then you are Doing It Wrong. And if in the end you can't make it do anything useful without trying to fiddle with its output then the "Doing It Wrong" means "trying to use this tool to solve your problem, go use something more sensible".
[1] confusing, as it does, "a language to be used by a proposed fifth generation of hardware" with "this is the fifth generation of programming language", and then not even being 1% accurate about constraint solvers; sigh, Wikipedia, what will we do with you.
[2] Rational Rose, and other (variously-arsed) "code generators" aren't "the fifth generation of programming language" by any sane[4] measure either
[3] btw, not trying to claim that it is/was a *good* compiler, or a good or even a vaguely useful input language in the first place - before I moved on from that project it didn't seem to be doing anything terribly useful for us.
[4] sane? Is anything from a hypemeister ever able to be considered "sane"?
I heard this in college in the 1980s and it’s still not true enough.
Same thing for DOS, Windows, and now any Linux distro (not so much about the kernel though…)
Everyone who wrote an Excel macro and decided they could fix a printer driver with the same effort ended up buying a different printer to actually get a printout.
An experienced programmer can probably adapt something well regardless of OS but knows how to keep focused on the problem involved (avoiding bloat) and that understanding seems to be as rare now as ever.
As a system engineer I have actually written a device driver in x86 assembly for DOS to collect real-time radar data, and never did it again. The experience, though, guided me through real-time control system creation for industrial R&D of rolling and casting, and transaction processing of financial data, where deterministic and highly available behavior is expected. But such knowledge and experience is not respected by managers who assume that anything Free - as UNIX was, and as the Cloud is assumed to be by them - must of course be better, since UNIX/Cloud programmers must be cheap and plentiful, right?
You can get off my grass anytime now…
.... that the path to migrate away from vSphere was to use Red Hat OpenStack on OpenShift - meaning you have to go through a poorly documented, cumbersome procedure to deploy it (hoping you also pay for their expensive services to have it installed for you, by someone poorly paid in India...) - my recent life would be far better. Even changing the Horizon web UI timeout requires writing and modifying Kubernetes manifests...
i've been railing against cloud for over a decade! you DON'T NEED IT! only devs like it because they generally don't know how to turn on a server or understand you don't eat fibre cables
suddenly everyone is worried about sovereignty & realising that if Microsoft can tell Barclays Bank "we'll fix it when WE DECIDE to fix it"... you've got no chance!
the fact that i've heard people start saying "well it's Cloud, you've got to accept some downtime" is insane.
It's going to be hilarious when MS, Amazon & Google start randomly putting up costs to pay for that $750 Billion of GPU that no one is using, and I'm going to LAUGH if Oracle goes to the wall with their huge legal promises to pay for stuff while relying on non-legal promises to use it from OpenAI & the UK Government suddenly realises that its Cloud First strategy of putting everything into US owned datacentres was probably a bad idea
Unfortunately XML was too much of an academic exercise, requiring many hoops to jump through in order to actually use it. By the time you get down to XSLT, transforming one XML structure to another, itself an XML based language, the whole ecosystem disappeared up its own arse
Yeah, but XML has an advantage over JSON (at least in DOTNET): The type definition is stored as well! With JSON everything is a string...
Real world example here from my solar stuff (except for the last two lines, I added the values temporarily before export to show off):
<I32 N="COMSpeed">115200</I32> = Int32 (singed)
<Db N="BatteryVoltage">52.54</Db> = double (i.e. 64 bit floating point)
<D N="Euros">12.341777777777</D> = decimal (i.e. 128 bit floating point - hey, this is about money, I want precision here!)
<By N="BatteryMode">2</By> = byte
(S N="Time">2026-02-08 01:46:12(/S> = Date-Time as string (I had to cheat with a ( in from of the first "S" here, else the html-strike-through kicks in).
<DT N="Test1">2026-02-08T01:46:12.0227828+01:00</DT> = DateTime object. Note the 100-nanosecond precision - and the ISO8601 style.
<U64 N="Test2">1234567890123</U64> = unsigned int 64
> type definition is stored as well
The type *can* be stored as well (or is well-defined by the DTD or schema or whatever meta-thingy you decide to declare "is being used" at the top of your XML file)[1]
But all too many uses of XML never got anywhere near dreaming about identifying that meta-data and relied on hard-coded assumptions and crap documentation: interoperability down the drain.
But I'll take XML over JSON every time. Especially using expat, where I can avoid cluttering memory with the entire file loaded as a DOM before even deciding it is any use.
Sod it, 99% of the time that I had JSON foisted on me at work, an INI format file would have done the job! Especially for embedded devices that just wanted to send a bit of info but were lumbered with a parser that would happily try to read into core a 500Kbyte file (embedded, remember) that some prat had sent before, again, getting round to spotting it didn't contain a field "fred" with a value between 0 and 255, which was all that the poor thing wanted! Mutter, mutter.
[1] note that your example looks reasonable, but unless you remembered to name your schema/thingy then maybe the B tag is really a boolean, how is the parser supposed to know? Yes, you told *us* but we're not interpreting the XML. Well, except for those times when the easiest way to progress was to get a human to translate the XML into something *really* "machine readable"!
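To make the expat point concrete, here is a minimal streaming-parse sketch using Python's stdlib expat binding, reusing the BatteryVoltage element from the example up-thread; stopping early by raising an exception is just one common trick, not a requirement of the library:

# expat_stream.py - pull one value out of an XML document without ever
# building a DOM. Element and attribute names follow the example above.
import xml.parsers.expat

class Found(Exception):
    pass

def find_battery_voltage(xml_bytes):
    value = []
    in_target = [False]

    def start(name, attrs):
        # Looking for <Db N="BatteryVoltage">...</Db>
        if name == "Db" and attrs.get("N") == "BatteryVoltage":
            in_target[0] = True

    def chars(data):
        if in_target[0]:
            value.append(data)

    def end(name):
        if in_target[0] and name == "Db":
            raise Found()  # we have what we came for; stop parsing here

    parser = xml.parsers.expat.ParserCreate()
    parser.StartElementHandler = start
    parser.CharacterDataHandler = chars
    parser.EndElementHandler = end
    try:
        parser.Parse(xml_bytes, True)
    except Found:
        pass
    return float("".join(value)) if value else None

print(find_battery_voltage(b'<Root><Db N="BatteryVoltage">52.54</Db></Root>'))

Nothing bigger than the current element ever sits in memory, which is rather the point on an embedded device.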
You are right.
$true | Export-Clixml test.xml
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04">
<B>true</B>
</Objs>
Bool. (The Register really should ignore HTML tags inside a code block.)
I don't need to remember, DOTNET-powershell import-clixml has to remember :D. And even complex objects survive the export-import (i.e. hashtable with arrays in "default powershell" or System.Collections.Generic.List or whatever...). And for "unknown schema" xml I just prepend [xml] to the string, and it digests quite a lot of crappy xml (for example: GPOs...) into a usable object.
I don't understand the competition. Both formats are ways of encoding some types of structures into a serialized format. You can use structures to store another document that explains what this all is, whether that's a DTD or similar for XML or something like JSON Schema, but if you don't, you just have a structure which you have to process in whatever way fits the use case you have for exchanging data. Neither can make all data portable, but it's likely no format ever will. They can solve problems like how you encode strings and numbers and lists in a portable way so you don't have to define your own format and write one or more parsers for it every time. Neither is magic, no matter how many people pretend one is.
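As a trivial illustration of that point (Python stdlib, field names invented for the example), here is the same little structure serialised both ways; in neither case does the wire format itself tell the receiver what "count" means - that is the job of a schema, if you bother to write one:

import json
import xml.etree.ElementTree as ET

record = {"name": "widget", "count": 3}

# JSON: the structure and the basic types come from the format itself.
print(json.dumps(record))  # {"name": "widget", "count": 3}

# XML: you invent the element names, and whether 3 is a number is your
# problem (or your schema's problem, if you have one).
root = ET.Element("record")
for key, val in record.items():
    ET.SubElement(root, key).text = str(val)
print(ET.tostring(root, encoding="unicode"))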
"Everything is XML" was indeed a problem, but really that just reflected what happened when you got a simple concept and gave it a committee to make it self-describing - see The Ascent of Ward. That's a trend that has run its day, fortunately - more power to JSON and CBOR.
As for XSLT, yes it's a pig - but consider it gave us XPath, which was a genuinely useful innovation. So much so it's been copied (poorly) by JMESPath and the clearly superior (cough) zpath
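For anyone who never touched it, a small taste of why XPath stuck around - Python's ElementTree supports a limited subset of it, and the document here is invented purely for the example:

import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<orders>
  <order id="1"><item sku="A1"/><item sku="B2"/></order>
  <order id="2"><item sku="A1"/></order>
</orders>
""")

# One expression instead of a hand-rolled tree walk:
# every sku attribute of every item under any order.
print([i.get("sku") for i in doc.findall("./order/item")])

# Predicates work too: the orders containing an item with sku A1.
print([o.get("id") for o in doc.findall("./order/item[@sku='A1']/..")])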
> self-describing ... That's a trend that has run its day, fortunately - more power to JSON and CBOR.
Bleugh! Giving up self-describing for something that is totally arbitrary and needn't be carrying anything of any use to the receiver at all: description by random hard-coded checks, assuming you've even bothered to code the checks at all.
Madness!
At risk of drawing this one out longer than necessary, there's a conceptual difference between blind exchange and exchange with context.
I agree that XML, with all its schemas and formality, is (possibly) better suited for blind exchange. But those schemas are also unnecessarily complex and brittle, and I'd argue that for most machine-to-machine comms on the internet there is context. XML just makes things harder than it needs to be.
XML is a good example, but it came out before 2008.
The thrust of the article was that everything from the early 90s until the 2008 crash was an improvement and from 2008 on, bullshit.
That might or might not have been broadly true, but the likes of XML- or rather, how it was overdone and oversold- make clear that the earlier era was still far from perfect in that respect.
Everything was better in the past. Compilers? Pah - in my day we handcrafted zeros and ones straight onto the HDD platters with tiny magnets!
Everything listed has some utility - and yes, theres hype, but to allow your dislike of hype to morph unto a blanket decision that the underlying tech is useless is the worst kind of short-sightedness. Especially for people who work in technology.
> Pah - in my day we handcrafted zeros and ones straight onto the HDD platters with tiny magnets!
An unholy trinity of shit implementations of shit ideas that they repeatedly try to fix with a shit wrapper du jour every couple of years to try and pretend that it isn't all shit. I'm not touching that stuff even with yours, and I don't care that I'm going to end up unemployed and destitute because of it; I'd rather keep what sanity and hair I have left and remember fondly when you could just write applications that did useful stuff, using programming languages.
From what I've read so far, I think I'll quit the tech business and sell melons. My measure of a great technology is to pretend you don't have it and see if you can survive without it. CPU - obvious, Database - pretty dicey, Ethernet - you're toast, compiler - bye bye, editor - gonzo, storage - obvious... the rest is pretty much what's been left to the imagination of bright people and suits. That said, people will pay for things that add value otherwise the tech biz wouldn't be where it is today. Granted we probably don't need half the crap that's been marketed down our throats. My point is, there's good tech and bad tech. The problem is determining which is which. Of course there's always that 9x12 cabin in the woods....
As a layman but long-time user of computers since the 1970s, when it was quite arcane (working for a financial institution, I remember trying to calculate NPVs on an ICL mainframe that took up a whole floor of the office, only to be cut off every 10 minutes when the foreign exchange department did a transaction), I have formed tentative opinions from the sidelines on the way things have developed, most of which you have confirmed in this piece, especially the bit on cloud computing. Really satisfying reading, as are the comments. Thanks tonnes.
I honestly can't see why you wouldn't consider containers useful.
Yes, they can be abused in overusage.
For example, many years ago, the world's lightest Python-based API was deployed in its own 'lightweight' container. There was no need in that instance - boom boom.
But aside from that, containers have given much faster cadence and fewer worries about conflicts - Python, I'm looking at you as a chief offender.
Substantially more pros than cons.
Kubernetes can go die quietly in a corner; it always seemed deployed 'just because', not 'because we need it'.
I'm afraid the marketing BS overload has been with us since the late 80s when my artificial intelligence 400-series class was told that Expert Systems would produce AGI in the next decade or two.
The only thing that has changed is the marketing dweebs at OpenAI et al. don't even bother with a delaying tactic for their predictions and claim AGI is "right around the corner, you don't want to miss out" to shamelessly scare CEOs and boards into spending 100s of millions on a failure.
I blame it all on the Americans, who let the slightest "reasonable doubt" be used to let fraudsters off the hook.
I lost count how many times a previous CIO told the team, make sure Cloud is on your resume.
Then all this AI BS came along and all it's really done is use up more water and power and push up RAM pricing. Certain people really don't like it when you point out that, until it has its own thinking brain, it's at best machine learning.
I'm late to the thread but here's my take on this to get it out of my mind.
From my perspective it's not about hype but about the inherent self-image of all the organizations and a lot of "power" thinking.
"We are not an IT company!" (practically the self-image of all the organizations I ever worked at).
Consequently practically no IT person ever gets promoted anywhere, instead only people who represent the "core business" get promoted (I'm curious if your experiences differ significantly, I'm from the continent btw).
But when the lot of IT-ignorant CIOs and CTOs were given the chance to outsource it all to GOOGLE, AWS etc (and nothing else means "going into the cloud" for these people), and thereby relegate all of us to irrelevancy, they fell to their knees and praised the lord. "It's other people's computers" was never an obstacle, it was the main selling point of the cloud. No more responsibilities, no more nagging by pesky Admins and old greybeards. SACK THEM ALL and sell the datacenter. Just a signature on the dotted line, so to speak, and a "problem" (YOU and me) less. It's just a few API calls now and a bit of clicking in the Web-interface. Easy-peasy, right?
Developers eagerly helped and stabbed us in the back, because containerization meant never having to deal with Admins about update cycles. No more "But it works on my machine" moments. Now "my machine" *is* the "production machine", in the form of a container that gets created once (bugs and CVEs and all) and only ever updated when the build pipeline creates another (don't believe me? https://www.theregister.com/2026/01/30/java_developers_container_security/ ). So they embraced it, and consequently so did all the organizations they wrote software for, redefining DEVOPS as "DEV now also does OPS".
And with all that knowledge lost in the companies, there is no way back; who in the C-suite would admit to having made a colossal error anyway?
Am I too negative?
I have been trying to "vibe code" (I didn't even know the term until a couple of weeks ago). It occurs to me that garbage output pays the same as useful output for AI companies. And if AI were an employee, it would last one week on the job. I remember Visio presenting the "cloud" in the 1990s. I remember thinking, "who is stupid enough to fall for this?" Soon people were losing all of their data when "cloud" companies failed. There seems to be a new "it" language every other week. I still write embedded systems code in C, and sometimes use C++ or assembler (rarely). This article hits the spot.
So, I retired [-EARLY-] as a senior AXP zDb2 zOS Database Admin, like, last January, 2025 - in that time, absolutely, NOTHING has changed - ALL a BUNCH of marketing B.S. Bull$H!T, Baby!!
BLOCKCHAIN was gonna be the end all, to all of mankind's problems, for ever and ever more - So, two techs and myself put up a BLOCKCHAIN on an IBM MAINFRAME and all the dist guys said, NO THANKS; cuz, they hate the MAINFRAME - cuz, they don't know the MAINFRAME, of course - FOOLz!!
Now comez "AI" - a topic I'd dealt with, since 1991 at Price Waterhouse - truth be told - So, it's gonna be the end all, to all of mankind's problems, for ever and ever more.....
Good luck, y'all sportsfanz; y'all gonna need it...
former unknown mainframe zBusiness expert!!
;-]]