Re: Percussive or gravitational maintenance.
The LART is reserved for Lusers whose Attitude merely needs Readjustment, though.
Meddling "IT" "professionals" get the spring-operated trap door, or the defenestration ejector.
My damn 2021-vintage HP printer STILL serves its web interface in HTTP only. It claims to take a certificate that will let it enable the HTTPS server, but I've seen no evidence of any truth to that claim.
Plus, even if I could give it a self-signed certificate, then I'd have the adventure of trying to convince my browser to trust that certificate.
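(Minting the self-signed cert itself isn't the hard part, mind. A rough sketch of that step using Python's "cryptography" package -- nothing HP-specific, hostname made up -- looks something like this:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # One key, one certificate, issuer == subject (that's all "self-signed" means).
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "printer.lan")])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=825))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("printer.lan")]),
                       critical=False)
        .sign(key, hashes.SHA256())
    )

    # The printer supposedly accepts these two blobs; the browser-trust
    # adventure is a separate problem entirely.
    with open("printer.key", "wb") as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.TraditionalOpenSSL,
                                  serialization.NoEncryption()))
    with open("printer.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

The hard parts are getting the printer to actually honor the upload, and then persuading the browser to trust the result.)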
It's also why I keep a few old, beige power cables around, from back in the day before everything turned black.
Those cables are reserved for my systems' case power supply sockets -- it's drilled into everyone's head that you can unplug any of the black or whatever cables, but you NEVER touch the beige ones.
So far, the only unplanned shutdowns any of those machines have had were when the power went out at the source.
When the first-gen AOLNet dial racks were built, they were all specced with these giant, honking 6-rack-unit UPSes at the bottom, because there were a lot of install locations where we couldn't rely on constant, clean power.
A few years in, all of those UPSes were filled with aging batteries that had to be replaced. So maintenance started a battery-refreshing tour. The UPSes were built as drawers: there was a nice handle on the front you could pull to slide the drawer open and expose the batteries for swapping.
Unfortunately, between the vibration of air-shipping and installing the cabinets, their age, the impressive weight of those UPSes, and the fact that they'd never been opened once since the racks were built, many of the drawer mechanisms were... not in the best of shape. All of the maintenance people ended up a little bit haunted by the memory of pulling open those drawers, an act that was far too frequently followed by the sound of dozens of ball bearings clattering onto the floor of the data center like a rainstorm.
(We left the UPSes out of future iterations of the design, deeming them more trouble than they were worth. Redundancy was observed to be the more effective insurance against downtime: If the power went down to an installation, customers could just dial in to a different one. That's why AOL had at least 2 or 3 local phone numbers in most areas.)
Here in the US, the machine's "AI" anti-theft system would not only tip it over on you, it would also automatically dial 9-1-1 to report the accident, ensuring you an expensive ambulance ride to an even more expensive ER for your troubles.
Admittedly, it's weird for bankruptcy to be employed as a theft deterrent. Kind of feels like someone's lost the plot.
Also, nobody broke into any of the baggage compartments on the train, and anyone who tells you different is a liar, liar, pants on fire! Never once did any unauthorized person access or remove any passenger belongings from any secure area of any train.
...Some clever bandits did make off with 6 million pieces of passenger luggage we'd temporarily piled on the siding overnight while the train's baggage facilities were being sprayed for Novell source code. (Can never be too careful.) So, really, that's on you irresponsible people for not being more careful about where we leave your luggage.
But never once was anything taken from the train, nor did any unauthorized persons succeed in boarding the train! (Heck, we didn't even find any Novell code. So, good news all around. You're supremely welcome.)
Unfortunately, unless they're identified by name in connection with the photo, the law is likely to consider the "subject" of that picture to be the tattoo itself, not the person on whose body it appears, so the only one with any rights regarding that image is the photographer. (Tattoos have no representation rights.)
Countless times I've had a cashier who's had to call a supervisor to bring a new roll of receipt paper, as none was kept at the till. How much did that cost in lost productivity?
If you define productivity as "completing the day's sales", and assuming customers were content to just wait longer or use a different register, the answer is "none" — which is probably the crux of the problem.
If the cost of a decision is merely to inconvenience (but not lose) customers, most businesses are more OK with that than you'd prefer, sadly.
I can totally see someone (and by "someone" I mean the 1980s workplace comedy-of-errors stereotype that this other kid is portrayed as) manually dragging cut-sheet paper through the feed mechanism, advancing it by hand as each line of output is printed. I'm sure the off-kilter output lines, swooping roller-coaster hills and valleys, and partially overlapped text just added to its... "charm".
One fateful night, Serge's pal forgot to get extra paper, panicked, and started shoving anything faintly paper-related into the machine – including cardboard.
WTH? Suddenly this story veers into bad sitcom territory. I can't really grasp the mindset of someone who panics THAT desperately, at any age. It doesn't seem like the sort of thing you grow out of, either. I shudder to think what that guy is up to these days. I'm sure the hijinx continue to be "hilarious".
That's great... until, of course, Apple rips out CUPS' (already-deprecated) support for printer drivers entirely, in favor of "IPP Everywhere or Bust". Which they're seriously talking about doing.
(Granted, they've been talking about it for years, and it's a very slow process, but that's kind of the point. After several years, they're still talking about it, and if they haven't let go of the idea by now then it's probably going to happen eventually.)
The BOFH definitely should also hate small consumer inkjets; they're just awful.
Over 15 years ago, it was already common knowledge that Your Printer is a Brat.
I once installed a macOS update on a friend's laptop. The damn thing went all the way through the update process, got to the whole "finishing up" part of the performance that comes right before it reboots into the updated OS... and then just didn't. Didn't finish up, didn't reboot, didn't do anything. Just sat there with its skinny little progress bar at 100%, chewing on something.
For obvious reasons I didn't want to risk interrupting the process of completing the OS install. So I sat there watching it do whatever it was farting around doing for a full 90 minutes before I finally threw in the towel and forcibly power-cycled it. At which point, it came right up in mere seconds.
Great and all, but man was I pissed off that I spent AN HOUR AND A HALF watching the thing do... absolutely nothing, as it turns out.
I've mentioned this before, but even worse are the new automated, scripted phone-support systems. Not only can they drag you through stupid, tedious troubleshooting steps, explained in plodding, laborious lowest-common-denominator detail, but (to the extent possible) they're able to check whether you're following their directions. And if there's any sign you are not, you are in for a world of hurt.
Here's an example: The last few times I've had to call my cable company for support, their first line of defense is now a "TellMe"-style automated "support tech" that walks you through the process of restarting your cable modem. (I know, my eyes practically rolled out of my head, too.) Here's how that should go:
Now, that's the ideal scenario. But here's how it can go:
At that point I think I just hung up. The line came back to life a few hours later.
I think we've seen coverage casting doubt on the "improvements" from soldered RAM.
I haven't seen anything of the sort. In fact, the coverage I've seen has all been pretty universal in acknowledging that it's a real need in these modern architectures, and offers significant performance gains (like, speed multiples, not just increments) -- the only debate is how much it really gains you in practice for different workloads, and whether the tradeoffs are worth it (both financially and in terms of future modifications).
But the effectiveness and the real benefits of soldering the RAM right next to the CPU... that part's pretty undeniable, from everything I've read.
The problem with CAMM2 is that, a year after the ThinkPad P1 Gen7 announcement and the touting of its new RAM modules as the way of the future, hardly any other systems have shipped with them.
It might be too soon to call CAMM2 a complete failure, but it's not looking too healthy.
I would bet money it's still active.
Money you have to spare, since presumably you billed them exorbitantly for this service?
"Leaving on good terms" doesn't mean they get to take advantage of you when they suddenly have a problem that needs solving. "Leaving on good terms" should mean that they're more than happy to compensate you reasonably, if not generously, for riding in on your white horse and extracting their fat from the frying pan. Heck, if they didn't offer right up front, then they're kind of being jerks.
Presumably Lu is of Chinese descent and speaks Chinese at least semi-fluently, so can we not describe it as "creative" when someone simply falls back to their native/second language when labeling things, relying on the fact that most English-speaking users won't be able to recognize the meaning behind the foreign words? I mean, he named applications "destruction" and "sleep" — would those be "creative" names if the English words were used?
I dunno, maybe I'm looking at it wrong, but it feels like there are two ways to present that information:
The article is only tangentially about this, but...
I absolutely cannot understand the mindset of Linux users who glom on to "Long-Term Support" distributions like Ubuntu and RHEL/CentOS/Rocky — systems where the library support and system interfaces are frozen in time for the entire 3, 5, or 10 years that the distro is supposedly supported — but then they want to run the latest, greatest software on them, and expect it to Just Work™ despite all of their system libraries being 7 years old. It's even worse when they expect brand-new code to support being BUILT on those ancient systems, using the crazy out-of-date dependency versions available.
It's not JUST that it seems like it's defeating the entire purpose of an LTS distro, to want to install unproven, bleeding-edge software on it. The thing that really bothers me is that it ends up holding back development of projects because they feel like they have to keep supporting all of these ancient environments.
Plenty of open-source C++ projects avoided using anything more modern than C++11 until earlier this decade, because Ubuntu 16.04 shipped with GCC 4.8 (no -std=c++14 support) and didn't reach end-of-life until May 2021. Those same projects no doubt have been avoiding any C++20 features, for fear of tripping up all of the GCC 7 users still running CentOS 7 (despite it reaching end-of-life last July).
A Python project I'm involved in still won't incorporate any code that requires Python 3.9 or higher without gating it behind version checks, because while 3.9 is the earliest release that's still officially supported by the Python project itself (until October), the package download statistics still show a significant percentage of Python 3.8 users, even more Python 3.7 users, and a continuing trickle of Python 3.6 users. Even though Python 3.6 has been out of official support since December 2021, it's the version that Ubuntu 18.04 shipped with, so there are going to be people clinging to their Python 3.6 installs until that distro finally ages out of our misery in 2028 (f---ing Extended Security Maintenance!) and we can finally pry it from their cold, gnarled, ancient hands.
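(The gating itself is nothing exotic. It looks roughly like this -- zoneinfo is just a stand-in here for whichever 3.9-only feature is being fenced off:

    import sys

    # zoneinfo joined the standard library in Python 3.9; anything older has
    # to fall back to the backports.zoneinfo package from PyPI.
    if sys.version_info >= (3, 9):
        from zoneinfo import ZoneInfo
    else:  # the 3.6-3.8 holdouts still showing up in the download stats
        from backports.zoneinfo import ZoneInfo

    print(ZoneInfo("Europe/London"))

Multiply that by every new feature you'd like to use, and the appeal of finally dropping the old interpreters becomes obvious.)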
My take is, if people want to run 10-year-old distributions, they need to resign themselves to running 10-year-old software on it. So that the developers of that software, who have continued development over the many years that the distro was frozen in time, can soar on the wings of a more modern codebase, without the LTS anchor dragging them back into the dirt. (Or whatever florid metaphor you prefer.)
The linked page opens (in infuriatingly center-justified text),
OBS Studio is officially distributed as a Flatpak on Flathub and a PPA for Ubuntu 24.04 and newer. For other installation methods, including unofficial packages, click here.
Flatpak Instructions
You can install OBS Studio from Flathub here.
If your distro is not already set up to install Flatpaks, you can follow the instructions here.
Ubuntu Instructions (24.04+)
It's clear that the primary distribution model is their official Flatpak build, though they maintain a PPA as well because Ubuntu users cling to their 10-year-old distribution installs. Bluntly characterizing a reasonable summary of the offerings as "poor reporting" is you being either disingenuous or unreasonable. Fortunately, it's also rude enough that I have no real desire to waste any more of my time figuring out which.
Those do seem awesome, and the pricing for the sub-256 GB models is very reasonable. (The jump from 32 GB to 64 GB to 128 GB, for instance, is $5 US each — from $20 to $25 to $30. The jump to 256 GB, OTOH, almost doubles the price to $55.)
But the design of that product page annoys the crap out of me. Here's how the options are ordered. See if you can spot the sorting algorithm used, and why it's exactly the WRONG one:
Do better, Corsair.
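(My guess, for anyone who doesn't want to puzzle it out from the page themselves: plain alphabetical sorting of the capacity labels, which is the classic way to get 1 TB filed between 128 GB and 256 GB. The capacities below are just illustrative, but in Python terms the failure mode looks like:

    # Hypothetical capacity labels, sorted the lazy way vs. by actual size.
    options = ["32GB", "64GB", "128GB", "256GB", "512GB", "1TB", "2TB"]

    print(sorted(options))
    # ['128GB', '1TB', '256GB', '2TB', '32GB', '512GB', '64GB']  <- string order

    UNITS = {"GB": 1, "TB": 1024}
    def capacity(label):
        # "256GB" -> 256, "1TB" -> 1024; crude, but it's only an illustration.
        for unit, factor in UNITS.items():
            if label.endswith(unit):
                return int(label[:-len(unit)]) * factor
        raise ValueError(label)

    print(sorted(options, key=capacity))
    # ['32GB', '64GB', '128GB', '256GB', '512GB', '1TB', '2TB']

Sorting by the number in front of the unit is not hard, is my point.)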
The USPTO is definitely not a rubber-stamp capitulator. They review every application they receive, often in excruciating detail, and regularly reject applications.
(Mostly because they get some WILD applications. Like the time Billy Joel's people tried to trademark "Billy Joel", in an astonishingly broad set of contexts. The dozens/hundreds of other living people named William Joel were brought up in the denial, as you'd hope/expect.)
In the end, LLM-written code is about as good as would be produced by some third world outsourcing shop
That may be true, which is why they're not using the LLM to write code. They're using it to modify existing legacy code. Two of the examples they gave are migrating Java 8 applications to Java 17, and migrating JUnit 3 tests to JUnit 4.
Unlike writing all-new code, these are tasks that are much better defined, much easier to train a model to perform... and much, much more tedious. They are high-effort, low-perceived-reward tasks. (Especially when success is defined as there being no discernible effect on the software itself. A successful migration is one where nobody can tell you've done it, because the post-migration code works exactly the same as before... except, you've sunk thousands of person-hours into making internal updates.)
Scouring hundreds of thousands of lines of code looking for deprecated or changed APIs, then updating the code to newer interfaces, is such a tedious job that many shops end up deciding to never do it at all. Legacy code is just kept limping along in legacy systems as long as possible, until it's eventually replaced with all-new code written to newer standards.
(That all-new code is very frequently a bare-bones affair. It eliminates deprecated APIs and other outdated internals, yes, but also strips away many of the features of the older codebase that — due to time, money, or design constraints — don't get reimplemented in the first versions of the new, replacement system. That's how you end up with software releases that hemorrhage functionality compared to their previous versions, because the "totally rewritten!" new codebase threw away everything that came before — including a ton of the features.)
If Google can use AI-assisted processes to avoid some of that write-bandage-discard loop, and break the ground-up rewrite cycle, I'm all for it.
Effectively, kind of yeah! But it's clever how they did it.
We validate the resulting changes automatically. The validations are configurable and often depend on the migration. The two most common validations are building the changed files and running their unit tests. Each of the failed validation steps can optionally run an ML-powered “repair”. The model has also been trained on a large set of failed builds and tests paired with the diffs that then fixed them. For each of the build/test failures that we encounter, we prompt the model with the changed files, the build/test error and a prompt that requests a fix. With this approach, we observe that in a significant number of cases the model is able to fix the code.
So they have a unit test process with an AI component that can react to failed tests by suggesting fixes. The two AI processes are totally independent of each other.
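(Stripped of the ML part, the shape of that pipeline is simple enough. A toy sketch -- not Google's actual code, and with propose_repair() and apply_repair() as stand-ins for the real model call and patch application -- might look like:

    import subprocess

    MAX_REPAIR_ATTEMPTS = 3

    def run_validation(changed_files):
        """Validation step: run the unit tests that cover the changed files."""
        result = subprocess.run(["pytest", *changed_files],
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def propose_repair(changed_files, failure_log):
        """Stand-in for the ML 'repair' step: the real thing prompts a model
        trained on (failure, fix) pairs with the changed files and the error."""
        return None  # no model wired up in this sketch

    def apply_repair(patch):
        """Stand-in for applying whatever diff the model proposed."""
        pass

    def validate_with_repair(changed_files):
        for _ in range(MAX_REPAIR_ATTEMPTS):
            ok, log = run_validation(changed_files)
            if ok:
                return True              # migration change passes as-is
            patch = propose_repair(changed_files, log)
            if patch is None:
                return False             # give up, flag for a human
            apply_repair(patch)          # then loop and re-validate

        return False

The clever bit, per the quote, is that the repair model was trained specifically on failed builds/tests paired with the diffs that fixed them, rather than on code in general.)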
I know big tech tends not to care as much about privacy as you or I would like them to
Ironically, though, they do still care about PRETENDING they care about privacy, which is what would prevent them from doing anything like that. They reserve breaches of privacy & trust for internal uses, where they can plausibly deny looking at sensitive information that people prefer to assume they don't go poking around in. (They also reserve it for money-making situations like marketing, not for actually benefiting society out of the goodness of their hearts; when it's for that, they suddenly become very privacy-conscious.)
"Well, Voyager 2 is the one that's been flying the longest, and Voyager 1 is the one that's furthest from Earth. So they both have a claim to fame."
Big "I love all my children equally" energy there.
I actually spent a semester and a summer off during my junior year of college, working at the JPL. This was back in 1995, and they very shrewdly didn't let me mess with the space systems — I spent half the time in the parallel computing lab, the other half in network security. Great place to work, though.
True, CentOS is dead, but Rocky Linux lives on. (It's basically CentOS reborn. Just independent from Red Hat.)
Cleaning up a codebase for release is a huge and difficult task,
That part's certainly true. Open-sourcing a project after the fact is always more difficult than just starting it in the open right from the beginning. Without the scrutiny of their code being public, developers (whether independent or corporate) tend to cut corners and do a poor job of managing their dependencies properly and in accordance with the licenses they were obtained under (assuming everything they're using was even legitimately obtained to begin with).
and if you're about to give the code away, that probably means it's not worth anything to you any more.
This statement, on the other hand, is painfully out of touch with both the commercial software industry and the open-source movement.
Source code, itself, is worth a lot less than you'd imagine, even for commercial applications. People don't buy source code. (Heck, these days they increasingly don't even buy software, but that's another discussion.) And open-sourcing code doesn't make it "worthless"; in fact it can greatly increase the value of the applications based on that code. Ask Mozilla. Open-sourcing (what was then) Netscape saved their business, and it proved worth it in the long run, despite the initial efforts going not much better than WinAmp's first attempt.
So why spend good money on paying your staff for the time it takes to clean it up?
Well, if you're serious about opening up your code, you spend good money cleaning it up because (a) you have to, if it's going to be subjected to public scrutiny — that's just table stakes for an open-source project, and the reason why it's a lot harder to do after-the-fact; (b) because all of that work is technical debt that accrued over the closed-source lifetime of the project, and in all likelihood was gonna have to be paid down someday regardless (even if it stayed closed-source); and (c) because an open, well-maintained codebase with hundreds, if not thousands, of eyes on it will ultimately lead to better code, and therefore better software. And better software is ultimately worth more than buggy, sloppy piles of outdated spaghetti code that's been slapped together in secret.
...Like I said, that's if you're serious about open-sourcing a project. Remains to be seen whether the LLamas really are, and just need to regroup and do better on their second try, or if this was all just a bit of theater they staged, hoping to slow their product's inexorable descent into total irrelevance.
See, the problem I see with this is limiting it to "when an LLM [...] deviates from fact".
A more useful model for dealing with the output of LLMs may be to keep in mind that they never "tell the truth", and they never deal in "fact". They synthesize their inputs into outputs in ways that may occasionally align with reality, but certainly aren't guaranteed to. Even when they do, it's not because the model is "telling the truth", except perhaps accidentally — it's still telling stories, whether or not they happen to be "based on a true...".
It's like how Michael Bay has, on rare occasion, accidentally made a good movie. (The Rock was great. Pain & Gain was also very well done.) These are stopped-clock outliers, occurrences that happen as much in spite of all his efforts as because of them. So even though lightning has occasionally struck, it's not as if we should expect that he'll repeat that performance with future films, or even improve his average.
LLMs don't ever "tell the truth". They generate stories. They'll probably get better at generating responses that more closely resemble reality more of the time, but I still wouldn't want to rely on one for my supply of facts.
The best manager I ever had at a technical job (or any job) was completely non-technical, an ex-military man in fact. He ran the network security group at Caltech's Jet Propulsion Lab (JPL) in the mid-1990s.
He knew how to do three things, and saw them as his job in managing the team: Hire good people, listen to what they tell him, and make sure they have what they need to do their jobs effectively. Whenever that meant running interference or kicking down roadblocks outside or inside the lab, it was always the same response: "Tell me what you need, and I'll make it happen."
Que? As far back as I can remember, Macs (which is what I assume you're talking about) used a Finder window to browse and launch applications, the same way Windows 3.1 and earlier did with Program Manager groups. The Apple menu has only ever provided access to a few core OS functions like Force Quit, Log Out, Shut Down, etc.
The primary innovation with the Windows 95 Start Menu wasn't that it provided links to log out of your account or restart the system; it was that it ALSO supplanted the Explorer-based application grid window by providing a dynamic menu of all of your installed software. Apple never seriously went that route at all; they went from Finder windows to -- what do they call it? -- Launchpad, I think? A mobile-style full-screen app grid.
Painful lesson of long experience messing with computers: the moment you decide something is junk and get rid of it because it's collecting dust, it will become the one object you need in a life & death situation. That flat ribbon hard drive / floppy drive / cd-rom ATA cable? The day you ditch it is the day you're going to need it. Fifty years that thing can sit in a box, looking malevolent, but the day you toss it will be the day it becomes life & death critical.
Well, sure. That's why I keep one of everything that might possibly ever again be of some use. Generally, the "best" one, by whatever criteria seem sensible at the time. I still have my best (read: longest) floppy ribbon cable, my best (read: thickest jacketing) parallel cable, my best (read: ...best) PCI sound card, etc.
But ONLY ONE of each. All the others get sent to that great parts bin in the sky.
OTOH, I'm looking at the 3 DL380s, the DL360, the Gen10, three Synologies, 2004 Mac Pro, 2009 Mac Pro, 2013 Mac Pro, Sun Blade 2500, shelves full of 64-bit PCI / PCI-X RAID cards, a few 32-bit PCI cards, a VL-Bus SVGA card.... floppy cables, HDs, even a 5.25" (half height) 20MB disk drive... and thinking "Do I have too much junk?"
Sounds like you definitely have too many associated controllers for your relative scarcity of tiny storage! Never mind "what are you going to plug all those PCI-X cards into?", but even if you found compatible motherboards, what would you possibly RAID with them?