* Posts by billdehaan

186 publicly visible posts • joined 6 Mar 2014

Client tells techie: You're not leaving the country until this printer is working

billdehaan

Thanks for the flashbacks, el Reg

the time he was despatched from his UK home to an African nation, where his client operated both a mining company and the national airline.

An unstable African nation (pick one) had a certain government agency "forget" to pay their internet bill. This made the ISP (a multinational) rather upset, so they sent out an expendable *cough* junior tech who "looked the part" to do support on site, and also to get the customer to pay its (huge) overdue bill.

When I say he looked the part, the criterion was essentially "Joe is black, he's less likely to get shot, so send him". That may or may not have been true, but "less" did not mean "zero", a fact not lost on Joe.

Joe cleaned up a lot of the bad processes and discovered that the nonpaying customer owed them not just for internet access, but for VoIP as well. So, when they refused to pay for the third time, he pulled the plug on the agency, disconnecting all of their phones.

Unknown to Joe, the agency was the parent of an "off the books" security agency. He was told this five minutes after he pulled the plug, by terrified local co-workers who had realized what he'd done, and who were fleeing the building before the pissed-off black-ops soldiers arrived.

Sure enough, ten minutes later, about 20 Toyota technicals arrived at the ISP building, and it was soon surrounded by a hundred armed guys in fatigues, AK-47s, and black sunglasses, while the immaculately dressed leader calmly entered the building with a dozen of his crack troops.

They went into the head office, where Joe was sitting behind the desk. The leader lit a cigar, placed his AK-47 on the desk, and calmly said "I am here to ask why you have disconnected all of my troops' telephones".

Oddly enough, the meeting was uneventful, even cordial, as Joe explained how much was owed, and how, if he couldn't recover at least some of it, head office would simply pull the plug and cut off the entire country.

He managed to get about 30% of the bill paid, and to not get shot in the process. The latter may not register in the company's financials, but Joe certainly appreciated it.

When the money transfer was confirmed, he turned the phones back on, and the soldiers left. So did Joe, the next day, to non-African pastures.

When he returned, he wasn't exactly covered in glory by his bosses. His direct manager wasn't impressed with the 30% payment he had negotiated, but begrudgingly told him that "you earned your ticket home", apparently thinking that was an option the company could choose to ignore rather than a requirement.

Reading the writing on the wall, Joe quit six weeks later. The execs may not have been impressed with the 30% payment he had negotiated, but it was 30% more than they ever collected from that customer again.

IBM Canada can't duck channel exec's systematic age discrimination claim

billdehaan
Big Brother

I worked at IBM Canada in the 1990s

I was there from 1990 to 1992, and I saw the changing of the guard in real time.

The old guard had been taught that *they* controlled the accounts, not the customers. The customers were expected to shut up and buy what they were told to buy. That had worked for 50 years, so why wouldn't it work now? The new generation thought the old guard were out of touch technically, and they were. The newcomers may have been more current with technology, but they still viewed customers as livestock expected to follow whatever IBM dictated.

The problem was that the world was changing, and the money wasn't all in multimillion dollar mainframes and the associated high-margin support contracts. Cheap PCs were the new thing (a $3,000 Compaq 386 could do about 75% of what a System/36 could do, at about a tenth of the cost). IBM viewed PC users as defective corporations, and treated them as such.

IBMers who tried to buck the trend were let go. IBMers who challenged orthodoxy were let go. The result was a corporate culture that not only stifled innovation, it punished it. Engineers were treated as interchangeable commodities. If the loaded cost of that commodity was $175 in Toronto and $1.38 in Mumbai (seriously, it was under $2 in 1997), they were going to do everything they could to replace Toronto resources with Mumbai ones. Some Toronto people were even apparently offered positions in India when they were told their local positions were being phased out.

LaMoreaux's comment that it's not IBM policy is technically correct. It's what is legally termed "constructive dismissal". There is no directive from senior management to fire older employees; instead, middle management is given targets that cannot be met any way other than by dismissing the older employees. They're not telling the manager to fire the most senior engineer; they're cutting his budget so that he has to fire either the senior engineer or two (or sometimes three, or even four) younger members of the team. Obviously, the middle manager will cut as few people as possible, and that just "happens" to be the oldest, and most expensive, member of the team.

Linux updates with an undo function? Some distros have that

billdehaan
Meh

Another argument for backups

I switched from Windows to Mint a few months back. Some things are better, some are the same, and some are worse.

I much prefer Linux's text-file-based configuration design over the Windows registry. It's so much easier to back up the config directory and edit human-readable configuration files than it is to go hunting through the Program Files, ProgramData, and %APPDATA% directories, never mind determining whether you should be looking in the Local, LocalLow, or Roaming profiles. And then there's the registry, which can be incredibly convoluted and difficult to deal with.
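
As a small illustration of what that buys you, a backup of the user-level configuration is a one-liner (the destination path is my own convention, so treat it as an assumption):

# Mirror the user config directory to an external drive; -a preserves
# permissions and timestamps, --delete keeps the copy an exact mirror
rsync -a --delete ~/.config/ /mnt/backup/config-backup/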

My overnight backup went from 90 minutes under Windows to 18 minutes under Linux. Score one point for Linux.

On the flip side, VSS under NTFS made things like live imaging of the boot partition/disk possible under Windows. Linux isn't there yet. I run Timeshift and back up regularly, but if I really want an image of my boot partition, I need to boot the PC from a Linux USB and use RescueZilla, dd, or GNOME Disks to image the boot drive. Score one for Windows there.
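
For the curious, the dd route from a live USB session looks something like this; the device name and backup path are assumptions, so check lsblk first, because dd will cheerfully overwrite the wrong disk if you let it:

# Image the internal drive (assumed here to be /dev/sda) while it is unmounted;
# status=progress shows throughput, conv=fsync flushes the image at the end
sudo dd if=/dev/sda of=/mnt/backup/boot-drive.img bs=4M status=progress conv=fsync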

I'm still happily running Mint 21.3, and from what I'm reading on the forums, it looks like 99% of Mint users upgrade with little or no problem. But, as always, the 1% can still bork their system, even if they do everything right. And even if the upgrade itself goes fine, you can still find the new OS version doesn't recognize your wifi card, even though the previous version did.

For whatever reason, people may want, or need, to go back to the previous release. Fortunately, with Mint, even the previous version gets security support until 2027, so there's no need to rush.

The moral of the story is: do a complete backup before you install, and don't rush to install on day one unless you need to. I'm quite happy to let the early adopters find all the bugs for me.

Secure Boot useless on hundreds of PCs from major vendors after key leak

billdehaan
Thumb Down

The only thing worse than bad security

is false security.

I ran the check (efi-readvar -v PK in Linux), and my systems are okay. Mind you, they're HP, Zotac, and Lenovo, which weren't among the vendors listed as exposed. But honestly, the idea that we need to protect the BIOS/UEFI in the first place is a design failure. Putting crap like programmable vendor logos and user-defined backgrounds into the boot sequence is just asking for trouble. And now that they've handed the bad guys a way to load undetectable code that executes before any defence or corrective action can be taken, the only solution is to replace the motherboard entirely, which more often than not means junking the PC.
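
For anyone who wants to run the same check, this is roughly it (efi-readvar comes from the efitools package; the "DO NOT TRUST" marker is what the PKfail researchers reported seeing in the leaked AMI test keys, so treat the exact string as their observation, not mine):

# Dump the Platform Key and look for the reported test-key marker
efi-readvar -v PK | grep -i "DO NOT TRUST" && echo "Leaked test PK present" || echo "PK looks sane"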

Thunderbird is go: 128 now out with revamped 'Nebula' UI

billdehaan
Meh

I think I'll wait a bit

I've been using Thunderbird for literally decades, and there are some annoyances that have never been addressed, even though they've been in the bug list for 15+ years. The one that's annoyed me more than anything is the search rules' lack of a simple "From or To" option. There's a From, a To, and a From/To/CC/BCC, but no plain From/To.

Oh, you can have rule 1 be From X, and rule 2 be To X, and OR them, but that only works if those are the only two rules. If you want to see only the mails between you and person X over the past year, there's no way to define [[From X OR To X] AND [Age less than 1 year]].

I tried Betterbird last year, and the first thing I noticed is that they've implemented search groups, which is exactly what I've been wanting for 15+ years. There are a lot of other fixes as well, but that was the big one for me.

If Thunderbird has finally implemented something like that, I might consider switching back, but unless there is some security issue in Betterbird that isn't being patched, I don't see this new Thunderbird version as a reason to go back.

I'm happy to see the project moving forward, though.

Microsoft makes it harder to avoid OneDrive during new Windows 11 installs

billdehaan
Facepalm

This isn't new, I dealt with it years ago

As I just posted in the "Windows: Insecure by design" story, I had a customer who was unable to back up to her external USB disk because OneDrive intercepted the drive-to-drive copy operation, saw that there wasn't enough space on OneDrive for the file, and aborted the disk operation.

Had she called MS for support, I'm sure they'd have told her she had to purchase 2TB of cloud storage to be allowed to use her local hard disk. Fortunately, she called me instead, and I defanged her Windows setup so that she didn't have to go through Microsoft servers to make a local backup.

During this investigation, we discovered that 5GB of her proprietary customer data was in the cloud on OneDrive, without her knowledge or consent.

When I played with setting up Linux last year, I tried almost a dozen distros before ultimately deciding on Mint. One of the marked differences between Linux and Windows is that while you still have to configure your desktop environment to suit you, with Linux, you're customizing the interface, not trying to defeat it.

Linux isn't perfect, by any means. In Mint, you still have to enable the firewall, which is disabled by default. But you don't have to jump through hoops to avoid associating your desktop computer with an internet account, you don't have to run a "decrappifier" or hunt through dozens of configuration screens to disable telemetry, and you don't have to reconfigure your machine after every operating system update (which in Windows can be weekly) because the update has reset all of the privacy settings and file associations to what Microsoft wants them to be, not what you set (and reset, and reset, and reset) them to be.
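
Enabling the firewall, at least, is painless; a minimal sketch for a stock Mint install, which ships with ufw present but inactive:

# Turn on the Uncomplicated Firewall and confirm the default deny-incoming policy
sudo ufw enable
sudo ufw status verbose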

Windows: Insecure by design

billdehaan
Facepalm

I can't wait for Recall to be backed up to OneDrive and then have OneDrive hacked

What I can't stand is that Microsoft automatically sets up OneDrive to back up my folders whether I want it to or not. Not cool, Microsoft! Not cool at all. If I want to back up my files, I'll decide where I want them to go – not you.

A few years ago, a customer called me up, frantic that her hard drive was failing. Although I'd set her up with an external backup disk, it apparently had died and she'd simply never bothered to mention it. So she had something like 17 years of her videos/photos/etc., almost 2TB of data, that was slowly dying. She'd bought an external 4TB USB disk, but it "was out of space", so she needed help.

Her 2TB of data should easily fit on a 4TB disk, and by definition, a newly purchased 4TB disk should have 4TB of free space. At first, I assumed it was improperly formatted, maybe with a MacOS format, but no, it was readable by Windows. I thought perhaps it could be FAT32 formatted, which would explain why files over 4GB (which described the majority of her work) wouldn't copy, but it wasn't that, either.

She showed me the error message. She opened two Explorer windows (that was her workflow), one with her 2TB C: drive, and one with the external 4TB D: drive. Drag a folder from C: to D:, and sure enough an "insufficient disk space" error appeared, along with a hex error code, in a popup.

Hmm.

I tried the disk with my laptop, and I could copy to it just fine. It also wasn't user permissions, or anything like that. In fact, she was able to copy some small files, but that was it.

Looking up the hex error code, I was surprised to see it was a OneDrive error code. WTF? She wasn't even using OneDrive, this was a local drive to local drive copy. Or at least, it was supposed to be.

Well, as it turns out, in Windows 10, if you copy a folder from one Windows Explorer window to another, Windows intercepts that, and copies the content to OneDrive, as well. And since she had only 2GB (or maybe it was 5GB, whatever the default is) of OneDrive space, it was completely stuffed, and couldn't take any more.

So, the copy aborted, with a "disk full" error. Not a "OneDrive disk is full" error, just "disk full".

She didn't know what OneDrive even was, Microsoft simply enabled it by default in the Windows installation process.

Thankfully, when I disabled it from starting up when she logged in, she was able to actually use her external drive and do a proper backup.

Luckily, she had called me for help, rather than Microsoft Support. I have no doubt that they would have told her the resolution was for her to buy 2TB (or 4TB) of OneDrive space, and then have her struggle with trying to back up 2TB to online storage with her (at the time) 1Mb ADSL upload speed. That would have taken months; in practice, her disk would have been dead long before the backup could finish.

And yes, she told me that after I disabled that "OneDrive virus", her file copies were "so much faster now". Imagine that.

Out of curiosity, she checked out what was on her OneDrive, and was horrified to discover that confidential customer files were online. They were uploaded without her knowledge or consent, simply because that's default behaviour. Default behaviour that slowed her PC to a crawl, prevented proper backups, and uploaded confidential data to the internet without her consent.

If Recall is enabled, it will take screenshots, with no regard for what's private and confidential, and depending on OneDrive configuration, those screenshots may end up online. Recall will be disabled by default, at first, but how many privacy settings "accidentally" get reset to the least private option after a Windows Update? Far too many.

And that, boys and girls, is why I switched to Mint last year, and why I've been fielding a surprising number of calls from people asking me how hard it would be for them to switch, and whether I can help them do it.

You're wrong, I'm right, and you're hiding the data that proves it

billdehaan

Re: Have you proven a colleague wrong...

Yes, I have. To both. A number of times, surprisingly.

The most extreme example was when a VIP executive was nearly in tears because his laptop didn't work on site. It worked fine in the office, but at the customer site (several flight hours away), it just would not connect to the mothership database, which was necessary for his job function. This was in the early 1990s, before the consumer internet was a thing; we're talking about a high end IBM laptop with a top of the line (for the time) modem.

The IT department tested it, blessed it, and he went into the field with it, only to have it fail to connect in the onsite meeting with the customer. He came back, yelled at IT, they tested it again, decreed it good again, and he went back into the field, only to have it fail in front of the customer. Again.

So, he went back to IT and escalated to the top of the IT food chain. He explained that he had a last-chance meeting with the customer on Monday, and he was flying out on the weekend, so it *had* to be fixed by end of day (this was Friday). The head of IT declared that the problem was the hard drive, and that it would be replaced. He gave it to a subordinate, and the VIP went away.

At 3pm on Friday, he called IT and got dead silence. He called my boss, who told me to look at it. It turned out that the IT dweeb had simply put the laptop on a shelf with a sticky note that said "look at on Monday, first thing", and left early.

Basically, VIP was being hung out to dry.

So, I, uh, "liberated" the laptop (in violation of company policy) to look at it. The idea that it was a hard drive issue made NO sense, since everything worked in the office, just not on site. I could call the database no problem. Since I couldn't test from an actual customer site, I called one of our offices overseas and had them set up a phone redirect to the database in my office. I dialled that number, so the connection came back in from overseas. And sure enough, it failed.

I did some digging, and found the issue was the dialing prefix. Basically, it was using the local profile for everything. I set up a remote profile, tried again, and tested it successfully. I called VIP, explained it to him, showed him how to toggle profiles, and gave him my home number.

On Monday morning the IT dweeb noticed the missing laptop, reported it stolen, and filed a formal complaint against me. He reported me to the head of IT, who also filed a second complaint against both me and my manager.

When the VIP returned from his successful trip, with a signed multi-million dollar contract because he'd finally had a working laptop in the field, he was shocked to learn that the IT screwups who'd failed to fix his PC - twice - had formally complained about the guys who had actually fixed his laptop.

The VIP was the boss of the boss of the boss of the department head who was the boss of my VP *and* the IT department VP. As it turned out, one of our two departments was scheduled to be moved to the sub-basement in the coming re-org, and at the time, the odds were 70-30 that it would be my group. After this little escapade, the IT group was sent to the dungeon, instead of us.

I'd call that triumphant.

billdehaan
Facepalm

Thumbnail, raw image, what's the diff?

I'm an embedded type, but I have occasionally been ordered to support web types, and it never goes well.

Back in the early 2000s, when a customer's site was reduced to a crawl, "my" back end was blamed for an astronomical increase in data usage. The web team (a different vendor) didn't bother to ask me to look at it, they went directly to the customer and reported that "the crappy database" was causing problems.

Since (a) it had been running fine previously, and (b) I hadn't even logged into the site, let alone made any updates in over two weeks, I suspected otherwise. When I contacted the web vendor to talk about it, my contact there (a decent enough chap, for a web type) didn't answer. I was informed by the customer that he had in fact left the company, and that his duties were taken over by a new hire.

In a surprising coincidence, his replacement had started only a day or two before the performance problem was reported.

Imagine that.

So before digging into the server side database, I decided, on a whim, to look at the client side HTML for the site.

The old site had workable, but very inefficient code, of the form:

[a href="D:\Site\Images\20MB_Image.jpg" target="_blank"][img src="D:\Site\Images\8kb_Image_thumbnail.jpg" width="128" height="64" alt="Image Text" /][/a]

(pseudo-HTML so el Reg's editor will allow it)

The new web designer was aghast that every single image was duplicated, one being the image itself, and another being a thumbnail.

Why keep two copies of every image, he reasoned, when all you have to do is use the "width" and "height" parameters? So, he recoded the thumbnail page accordingly:

[a href="D:\Site\Images\20MB_Image.jpg" target="_blank"][img src="D:\Site\Images\20MB_Image.jpg" width="128" height="64" alt="Image Text" /][/a]

This way, all of those silly "8kb_xxxxx_thumbnail.jpg" files could be deleted. And were.

This resulted in 1100 image files being reduced to 550, with the added benefit that there was no need to keep all those href and img tags in sync. Just use a single tag, and there was no chance of them getting out of sync!

What the web designer clearly didn't understand was that the raw "20MB" files were named that way because the images were, well, 20MB in size. That's why there was a downsized and compressed thumbnail that the previous web designer had tried (but not always succeeded) to keep under 8KB in size. The site had 500+ such images, split over something like 10 pages, so each page had 50 such thumbnails on average. That was about 400KB of data. That doesn't sound like much in 2024, but in 2002, when most people still had 56kb (or even 28kb) modems, downloading those thumbnails alone took 3-5 seconds. Downloading one of the 20MB files took more than 5 minutes.

The web developer seemed to think that the "width" and "height" parameters magically compressed the images, rather than just telling the browser to display the full-size download smaller. So his "optimization" resulted in 400KB of thumbnail data being replaced with something like 384MB (the raw pictures ranged in size from 500KB to 20MB). And, of course, since he tested it on a local network with a 100Mb connection rather than a 56kb modem, performance was not an issue for him.

Since the web team had gone directly to the customer with the issue, it was only proper that I do the same. Fortunately for me, if not for the web designers, the basic point that "the database didn't change, the web site did" was not lost on him.

Unsurprisingly, when the web designers' contract ended, I found myself working with a new web design company on the front end.

Your trainee just took down our business and has no idea how or why

billdehaan

Re: Sounds unlikely.

I worked at IBM on contract in the early OS/2 days (as in, OS/2 1.x days). And while I have many (many, many, oh so many) criticisms of IBM, one of the things they did right, and better (in my experience) than any other large organisation, was the on-boarding process for new hires (and in my case, contractors).

My first day, I was assigned a (very) small office, a phone, a short tour, and a map of the building floor I was on, highlighting the paths to most important things I needed to know: the fire exits, the washrooms, my group leader's office, the coffee machines, and the cafeteria.

Most importantly, I was given a huge (200+ page) 8.5x11 inch binder of paper. Each page was to be read, and initialed that I'd read it, and agreed to it. There was a scratchpad where I was to write out any questions and/or objections. The binder included not only job duties and responsibilities, but restrictions, processes, and how to escalate questions. The overall tone was "if you don't know or understand, don't guess, ask".

Being young, and this being early in my career, I thought this was silly, and overkill, as 90% of it was self-evident, or of the "well, duh" type of information that should be obvious.

Later in life, when I saw the disasters at other companies because they didn't have a good on-boarding process, I understood the importance of it. It may well have been that 95% of that initial on-boarding was redundant or useless, but the 5% that prevented new hire disasters more than paid for itself over time.

Of course, although everyone agrees that 95% of it is useless, no one can agree on which 95% to cut, so it stays. Today, whenever I see one of these new hire disaster stories, I look to see if any have hit IBM yet, but none seem to have (although IBM certainly takes plenty of other kinds of hits).

This was 30 years (or more... sigh) ago, so it could well have changed, but back in the day, IBM's new hire on-boarding was the gold standard.

billdehaan

Has an ignorant kid broken your boxes? Have they ever

I've worked in the defence, finance, energy, transportation, medical, food, and general IT sectors over the past few decades, and almost every one of them has some variation of an "unsupervised new hire brings the company to a halt" story.

Bank trading floor brought down by a new hire plugging in incompatible equipment? Check.

Server room and business center evacuated because a new hire thought the big red button in the server room was the "unlock the exit door" button, when it was really the HALON fire system? Check. "Fortunately", the HALON system malfunctioned, so the new hire wasn't killed, at least.

Run the "build a hex file from the source tree and copy it to the EMPROM programmer" scripts in wrong order, and accidentally overwrite the project's entire, and not recently backed up, source code base? Check.

Start the test bench sequence in the incorrect order and start a small fire? Check.

Send confidential (fortunately only embarrassing and not legally concerning) information out company wide by using REPLY-ALL and attaching the wrong file? Check.

The details all differ, but the common problem was that an untrained and, most importantly, unsupervised new employee was given duties, responsibilities, and access to resources far beyond their current state of knowledge and/or training, and was expected to have the same skill and knowledge as an experienced employee. In many cases, it wasn't even standard industry practice, but an in-house, usually arcane, process that the company was convinced should be obvious and intuitive when it was anything but.

In looking at the aftermath of some of these disasters, my reaction has been "well, what did you expect?". In one case, the poor new hire had to execute a script that included warnings like "Does the J: drive have enough free space?", and "Is the M: drive mapped correctly?". How the hell is a new hire going to know what is enough free space, and what the correct drive mappings are?

In one case, the FNG (fricking new guy) was told to "run the script on the G: drive". When he asked what the script was called, he was told he'd know it when he saw it. He saw the script directory had half a dozen scripts with extremely similar names, picked the most likely one, and caused a near-catastrophe. In the end, it turned out IT had incorrectly mapped his drive letters, so his G: drive was mapped to a completely different system than it should have been. There was literally no way the poor guy could have even accessed the script he needed, he had no idea what it was called, and when he asked, he not only got zero help, he was called an idiot for not being able to figure it out.

While most supervisors blame the new hire for not being omniscient and magically knowing undocumented corporate lore, there have been some good ones. The best response I ever saw was when a new hire, having caused a high-five-figure loss, fully expected to be fired by his manager. The manager's boss, the VP, interjected, and said "why should we fire you? Your manager just spent $80,000 training you!", clearly showing that he understood the real fault lay with the manager and the lack of guidance provided.

Trying out Microsoft's pre-release OS/2 2.0

billdehaan

Be careful with those rose coloured glasses

I worked at IBM (on contract) doing OS/2 work from 1990 to 1992. I take issue with this statement:

The surprise here is that we can see a glimpse of this world that never happened. The discovery of this pre-release OS shows how very nearly ready it was in 1990. IBM didn't release its solo version until April 1992, the same month as Windows 3.1 – but now, we can see it was nearly ready two years earlier.

The phrase "nearly ready" is completely untrue.

I was booting OS/2 2.0 and using it for my work from June of 1990 onwards. These were internal builds, from the same code tree as the MS version being discussed here. The OS was certainly bootable, and usable for testing, in 1990, but in no way could it be considered "ready" for consumer adoption.

It ran great on IBM PS/2 Model 80s, with the MCA bus, but that wasn't what consumers had. That early version of OS/2 2.0 was basically a 32 bit version of OS/2 1.3. It allowed multiple DOS boxes (or "penalty boxes"), where OS/2 v1.3 had only allowed one, and not being limited to the 80286 architecture, it had better memory management and virtualization.

It was, however, buggy as hell, driver support for non-IBM hardware was almost nonexistent, and the WPS (Workplace Shell) development had barely even started. SOM/DSOM (IBM's answer to Windows' COM/DCOM) was also in its infancy.

I could, and did, run OS/2 at work every day. And I was installing new builds at least twice a week. Stability improved, as did driver support. But it wasn't until the middle of 1991 that I could even successfully install one of those internal builds on my non-IBM home PC, even though it was a SCSI-based system with an Adaptec 1542-B controller. And even when I did manage it, I still couldn't get my ATI video card to go above 640x480 resolution until the GRE update of November 1992.

Yes, that 1990 build could run Windows programs, but it took almost 8 minutes for Notepad to start up (as opposed to 13 seconds on the same hardware with a 1992 OS/2 build). It didn't support SVGA. It didn't support WinModems. It didn't support EIDE drives properly. And don't even ask about Stacker, or tape drives.

What MS and IBM had in OS/2 in 1990 was a bootable kernel that was suitable for development. It was not even close to being "nearly ready" for commercial release.

It's like saying Windows 95 was nearly ready for release because there was a beta of Chicago (as it was then known) in 1993.

Fresh version of Windows user-friendly Zorin OS arrives to tempt the Linux-wary

billdehaan

Like a new bicycle with training wheels

I used a lot of Unixes (Nixdorf, Siemens, HP-UX, RS/6000, but mostly SunOS/Solaris) in the 1980s and 1990s, but I never really did much with Linux, other than set up servers in it.

Back then, it was a great OS to run on decommissioned Windows machines as headless servers. Firewalls, backup, FTP and NNTP servers, etc. were great, but it wasn't really a user-friendly daily driver. Despite all the claims otherwise, there was simply too much tinkering needed with obscure configuration files for the average user to bother with it.

Today, with Windows pushing people away with all of the unavoidable and intrusive telemetry and snooping, not to mention unwanted AI nonsense, more people are looking at Linux than before.

I've played with Zorin, and I like it. Although I run Mint as my daily driver, Zorin has a lot of things to recommend it, especially for new users.

Complaints that it's using a two year old kernel don't really mean much to potential users who prefer to stay on their existing OS. Microsoft usually has to drag people kicking and screaming to their new OS, by discontinuing security and support for their older OS. Zorin may be running a two year old kernel, but (a) it works just fine, (b) the differences between it and the latest version aren't likely to be even noticed by new users, and (c) it still receives updates and security patches.

It's entirely possible that new users may outgrow Zorin, and decide to move to Mint (or Ubuntu, or Manjaro, or Debian, or whatever), but in terms of friendliness for first-time users, I haven't seen anything close to it. Not only is it very approachable for Windows users, it's surprisingly similar to MacOS, depending on which skin you select during the installation.

In many ways, I prefer its UI over Mint's. Mint has a number of things that aren't available in Zorin (at least not easily), so Zorin's not really for me (although I may set up another machine with it soon), but for expat Windows and MacOS users, it's won over quite a number of friends in the "my PC can't run Windows 11 and we can't afford/don't want to buy a new PC, what do we do" camp. And for reasons I don't understand, there are Windows apps that run on Wine under Zorin without problems but give installation and setup faults on Wine under Mint.

For the average home user, where email, an office suite, and a good browser covers 95% of what they do on their machine, Zorin is a much cheaper solution than needlessly spending money on a new PC just to keep doing what they're doing because Windows 10 is expiring.

Zorin definitely has some weak spots, as do all Linuxes. I'm not a gamer, but I'm told gaming on it is still an issue. It's much improved from years past; gaming on Linux has gone from being impossible to merely difficult, but it's still not up to Windows' level. But for non-gamers, I think a huge percentage could switch without any loss in productivity.

Moving to Windows 11 is so easy! You just need to buy a PC that supports it!

billdehaan
Devil

Moving to Linux is easier

It actually is. I have four machines, with one (32 bit) laptop running Linux, and the other three running Windows 10.

At least I did, until one of them could no longer download Windows 10 updates because the 32GB boot drive couldn't handle a 105GB "patch" that was almost four times the size of the OS plus applications.

I read the Windows support voodoo, which recommended all sorts of nonsense that pushed OS management onto the end user (clear this cache, delete this subdirectory, change the ownership of this file, then edit this registry entry, then reboot to the USB disk and copy this file to this directory, then reboot again, blah blah blah), and even spent a couple of days battling with it, without much success.

Then, I downloaded a few Linux ISOs, booted off them, installed Mint on the machine, set up a Win10 VM on the other disk in case I needed to actually run any Windows app on it (I haven't in the four months since I switched), and left it that way.

The second machine I switched to Mint, and likewise haven't touched.

My primary machine is still Win10, but I'm slowly migrating things off, and will easily be finished before October 2025.

Not one of my PCs would run Windows 11, according to Microsoft. All four of them run some variant of Linux. And since I can run Windows 10 in a Linux VM, all I have to do is disable networking for the VM and there's no worry about security, either.
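
Cutting a VM off from the network is a one-liner; a sketch assuming a VirtualBox guest (the VM name is hypothetical, and the VM must be powered off when you change this):

# Keep the virtual NIC but attach it to nothing
VBoxManage modifyvm "Win10" --nic1 null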

Why should I throw away perfectly good working computers simply because Microsoft stops security updates for them?

Linux didn't pull me in; Windows pushed me out.

Top Linux distros drop fresh beats

billdehaan

Re: Preparing for October 2025

Well, the terminology of the time was a little vague.

The original SparcStations, like the IPC and IPX, didn't use the Sparc processors. I think the first pizza box we got was a SparcStation 10, which had used the Sparc I chipset. The IPC used a Fujitsu processor, and the IPX had a Weitek.

So, generally speaking, Sun (at least our Sun reps) referred to the IPC and IPX as such, and only used "Sparc" to describe stations that had a Sparc (later SuperSparc or HyperSparc) processor in it.

As for HP-UX, you're right. So many of the terms use slashes (OS/2, A/UX) that I forget which is which.

billdehaan

Re: Preparing for October 2025

I think the first Windows version I bought was Windows 95, although you could argue that the OS/2 v2 copy I bought (for $1) that included WinOS2 might qualify. I only ran the earlier versions of Windows at work, and after Windows 95, I was a beta tester for NT4, so that was free, and I think my Win2000 and XP copies were from MSDN, so I got those for free, too.

Like you, I paid for Win7 when XP was deprecated (or defenestrated), and it was a good upgrade from XP, although if XP support had continued, I'd probably have stuck with it. The same was true for Win7 to Win10, but at least that upgrade was free. But even if Win11 worked on my machines (which it supposedly won't), and even if the upgrade were free (which it isn't), I doubt I'd upgrade even if I could. The data collection, the privacy issues, the mandatory online account, and the move towards AI integration are not improvements but downgrades, in my opinion. And since they aren't optional, and cannot be disabled, there's simply no reason for me to support it with my hard-earned money.

billdehaan

Re: Preparing for October 2025

I have no illusions that there is going to be a mass migration off of Windows onto Linux (or Mac) in 2025. I expect some, certainly, but people who are expecting to see an 80%, or 60%, or whatever drop in the number of Windows machines, to be replaced with a huge adoption of Linux (whatever flavour) are going to be disappointed.

On the other hand, I expect to start seeing lots of really good deals in terms of used computers as perfectly good Windows 10 machines that cannot run Windows 11 are thrown into the market. Some companies will continue to pay for support for Windows 10, and apparently, even consumers will be able to for the first time, but most will just buy new machines. Since all those machines will be Linux capable, there will be some great deals to be had.

billdehaan

Preparing for October 2025

I'm pretty much the textbook example of the type of user this appeals to.

I've been an on and off again Unix user since 1983. I've booted Nixdorf and Siemens boxes, I spent five years developing on pre-Oracle Sun machines (the ipcs and ipxs that predated Sparc), HP/UX, and a number of others, and I migrated SCO Xenix stuff to Red Hat and Mandrake in the late 1990s.

Although I frequently ran my backup PCs on some Linux flavour over the past 20 years (whether Mandriva, Ubuntu, or something else), my primary machines were always Windows (XP/7/10). But while Linux was fantastic for firewalls, backup servers, NTP servers, download boxes, FTP transfers, etc., the desktop experience simply wasn't enough to justify a switch, especially since I was working in Windows at the office.

That's not to say that Linux was bad, or incapable. It wasn't. But there really was nothing to justify switching away from a working Windows system. If it was "just as good", or even a little better, that didn't warrant the effort of switching; there was not enough benefit to justify it.

Until now.

The sheer amount of telemetry and spying that Windows 10 does, and the amount of effort required to neuter all the data collection is absurd, and unacceptable. As the saying goes, you're either the customer or the product. But with Windows, you're now both.

With free online services, you expect, and accept, that they will collect some data and/or provide advertisements. With a commercial operating system that you pay money for, the vendor should not be collecting your data, or shoving advertisements onto your machine, but Microsoft is doing both.

That alone sours the desire to stay on Windows. Fortunately, there are lots of free "decrapifiers" that make Windows less intolerable (if not great) on the privacy front, and ways to get past the MS requirement that you have an online Windows account to use your PC. But why on earth should users be fighting against their OS vendor, trying to defeat OS functions that they don't want in the first place? And not only that, they pay for the privilege.

Add to that the fact that many fully functional Windows 10 PCs won't run Windows 11 (mine say they won't), and that means in October 2025, people must either run an insecure and unsupported operating system (a bad idea), throw out perfectly good hardware (just as bad an idea), or switch to Linux.

So, I've switched one PC to Mint, with the other dual booting Zorin and Windows. And although I've tried MX, I wasn't really that enthralled with it. Zorin wins in terms of easy migration off of Windows, Mint wins in terms of customization, and both are excellent choices. Unlike 15 years ago, the software available for Linux is largely on par with Windows (at least for home users), so it really won't be that difficult to turn off Windows next year (and if necessary, it can be run in a VM with network connectivity disabled on a Linux box).

The sad thing is that it's not so much that Linux made a compelling case for people to move to it, but that Microsoft made a compelling case to move away from Windows.

Snow day in corporate world thanks to another frustrating Microsoft Teams outage

billdehaan

Re: I was wondering why things were so quiet today

Oh, there are SLAs (Service Level Agreements) all over the place.

The problem isn't just the outages themselves, it's that things that shouldn't be moved to the cloud in the first place are being moved there anyway.

Before the cloud, there were internal backup servers, where users' Office documents were backed up. If there was an outage of the backup server on Wednesday night, it meant that the most recent backup was Tuesday's. If it didn't come back up until Thursday, that meant users were working without a net for two days. Not great, but work was still getting done.

With the move to the cloud, when the net connection goes down, that's it. No more Office access until it comes back on. Customers don't just lose backup capability, they lose access to everything, hence the term single point of failure.

billdehaan

I was wondering why things were so quiet today

Also, much more productive.

I can't help but be amused by all of these outages. IT and IS departments convinced CTOs to spend massive amounts of money to outsource all of their infrastructure to the cloud, so that it would be more reliable, and yet many companies are experiencing more downtime and data loss.

It reminds me of the time some execs ordered us to save money by getting rid of those "pointless" co-located backup servers and the "useless" in-house redundant server, and just put everything into one really big box. Simple, clean, none of that "replication" nonsense that slowed things down.

It wasn't until it was fully in production (a move we made under protest) that I was asked what the machine name spof.companyname.com meant. When I explained that SPOF meant "single point of failure", the CEO (the CTO's boss) went white as a sheet, and wanted us to explain what would happen if it were to fail.

One rendition of Monty Python's dead parrot sketch ("it's pinin' for the fjords; it's ceased to be; it shall be an ex-server") later, he demanded we explain and justify "our" decision to do this. Several CYA emails were displayed, and the new CTO that arrived the next month promptly reversed the decision, and we were able to restore multi-site before there was any disaster.

Today, "SPOF" is becoming synonymous with "the cloud". AWS, Office 365, and the like mean that if your net connection goes down, so do you.

40 years of Turbo Pascal, the coding dinosaur that revolutionized IDEs

billdehaan

I'm not sure what school you went to, but when I was learning Pascal, it was not done in an IDE, and we did compile binary executables.

As for why it wasn't used commercially, the short answer is performance. Compared to the other languages of the time (BASIC, Fortran, Lisp, C, Forth, and a few others), Pascal was extremely inefficient. It was great for teaching concepts, but enforcing type and bounds safety at runtime, rather than trusting the developer to do it, meant lots of processor time spent on bounds checking and other things that the other languages' compilers simply didn't do.

In an education setting, performance is less important than the ability to teach the concepts. In industry, the reverse is true. I once did some timing tests comparing an application that had been written (very elegantly, I might add) in Microsoft Pascal for the PC against a hacked version written in Microsoft C (this was at the time when Microsoft C was just rebranded Lattice C). The Pascal code was much easier to follow, the logic flow was cleaner, and it would have been a lot easier to maintain than the C version, which was doing address arithmetic and never checking array bounds. However, the Pascal version required 50% more memory to run (at a time when memory was very expensive) and was about 30% slower.

Since time is money, Pascal lost out.

There were several attempts to make Pascal more commercially viable, but with every attempt, they ended up needing to make certain tweaks to the syntax to meet the performance goals, at which point it wasn't Pascal anymore, it was Modula-2, or Modula-3, or even Ada. Of course, Pascal itself had the same relationship to Algol, so it was just continuing the trend.

Microsoft gives unexpected tutorial on how to install Linux

billdehaan

Tacit admission, or hardware reality?

Could it be a tacit admission that you might need a free-of-charge OS for your PC?

I recently discovered that Windows Update was no longer working on my 2018 Zotac PC. Update blared the usual hysterical "YOUR PC IS UNPROTECTED AND AT RISK", because Windows security patches were missing. Of course, it ignored the fact that Malwarebytes and the like were running just fine; only the native Windows stuff was out of date, because Windows couldn't, or wouldn't, download its own updates.

After shrieking "update me! update me!", it started the download of the necessary components, and stalled at... 0%. A quick search showed this was a very common problem, with Microsoft's official solutions involving steps to clean out the SoftwareDistribution directories, running DISM and SFC a lot, killing various tasks, disabling and re-enabling core Windows services, messing with TrustedInstaller, and removing the WinSXS\Temp\InFlight directory.

I'm a software dev myself, and my eyes glazed over at all the support voodoo Microsoft was expecting end users to perform in order to make the update process work. Someone pointed me to a GitHub update tool which, hilariously, could download the Windows updates from Microsoft's servers that Microsoft's own Windows Update could not. The mind boggles.

One of the reasons is that updates have ballooned to ridiculous sizes. The PC in question has a 32GB SSD, and although the security updates were only a few megabytes, the Windows feature patches were over 100GB in size. Each. And so Windows Update refused to download anything.

There are a lot of utility PCs like this, with small (32GB/64GB) SSDs that are fixed in the machine and aren't going to be upgraded. Although I eventually got the update going, as an aside I tried a few Linux installs, which (unlike Windows) would cheerfully install onto the 1TB hard drive rather than the SSD. I installed Zorin OS, booted it, and configured it in less time than it had taken to run the Windows updates.

When Windows 10 hits end of life, am I going to spend money on a Windows 11 licence for those machines? Even assuming that they could run Windows 11 (which is unlikely), there are Linuxes out there (like Zorin Lite) that explicitly support legacy PCs that Windows doesn't, are currently maintained and secure, and are free to download. Even the professional editions that cost money are still half to a third the price of a retail Windows licence.

So showing users how to install a Linux setup might be a way for Microsoft to relieve themselves of "problematic" end users that are not cost effective to support.

IBM Software tells workers: Get back to the office three days a week

billdehaan

Re: Hilarious

How things have progressed.

You have no idea.

A friend in Toronto was in a team working with their Texas office, and it was decreed that teleconferences were insufficient, in-person meetings were essential. So, the Toronto office packed up their entire team and flew them down to the Texas office to spend 3-4 weeks in person with their colleagues.

Upon arrival, they discovered the Texas team couldn't put them in the same building as the group they were working with; there simply wasn't enough space. But that was okay: there was another building, about a mile away, that they'd rented, and it had high-speed connectivity, so they could just teleconference. Unfortunately, hotel space was tight, so the team had to stay in a hotel about 30 miles away.

So, for about a month, my friend stayed in a hotel, took a 45-minute cab ride to the building where his Toronto team was located, teleconferenced with the Texas team in the building a mile away, then took a 45-minute cab ride back to the hotel. This apparently was a much better solution than staying at home, commuting 15 minutes to the Toronto office, and teleconferencing with the Texas team remotely from 1,500 miles away.

At bonus time, the cupboard was bare, because the company had "unexpectedly" spent so much money on flights and accommodations for that trip that it was in dire straits financially. Did anyone have any cost-cutting ideas, they were asked?

Many of the team subsequently, and "unexpectedly", left for saner pastures.

billdehaan

I worked on a contract where the project manager decided that such productivity could be measured by counting the number of semicolons in the source code, and got someone to write him a script to do so.

The term for this is Goodhart's Law, and I've seen it hundreds of times over the years.
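
For the curious, the semicolon metric needs about one line of shell; a minimal sketch of that sort of counter (the source path is made up):

# Count every semicolon in the tree: one number, zero insight
find src -name '*.c' -print0 | xargs -0 cat | tr -cd ';' | wc -c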

In an example almost identical to yours above, when I worked at one large, unnamed (*cough* mentioned in the article title here *cough*) Fortune 1 company decades ago, the metric was kLoc, or "thousands of lines of code". Yes, they measured software productivity by weight, essentially.

Management had a parser that went through each developer's code, including all included headers and so on, and counted the number of lines. There was a particularly huge database module, comprising hundreds, if not thousands, of 3-5 line functions that handled the numerous permutations of data. It was completely unnecessary to every subsystem but the database module. One week, every subsystem (graphics, communications, and all the rest) suddenly included the root header file for the database, because doing so dragged in about 8,000 lines of headers.
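
That header trick still works today; here's a minimal sketch of a counter in the same naive spirit (not their actual parser, and the paths are made up), where running each file through the preprocessor means every #include'd header counts as the developer's output:

# Count post-preprocessing lines per file; #include one fat header
# and your "productivity" jumps by thousands of lines
for f in src/*.c; do
    printf '%s: %s lines\n' "$f" "$(gcc -E "$f" 2>/dev/null | wc -l)"
done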

Build times went up from 90 minutes to about four hours. Ugh.

When I asked what was going on, I was told "next week is audit week". Sure enough, after the audit was completed, the code was "re-factored", and through brilliant optimization, the developers were able to cut the four-hour build time by more than half, back to about 90 minutes. Management was extremely impressed, and I believe one developer even got an award for his brilliant optimization work of, err, removing the pointless header files that they'd only inserted a week earlier to make the kLoc audit look good.

You're too dumb to use click-to-cancel, Big Biz says with straight face

billdehaan
Meh

Always check cancellation procedures before signing up

Like most people, I've had horror stories about the difficulty, and sometimes near impossibility, of cancelling a service. Bell Canada stands out as one where the stores, phone support, and web site all pointed to each other as being responsible for cancellations. Despite their contracts clearly stating that the consumer must "contact Bell" to terminate the contract, no one could actually explain whom to contact or how a cancellation could be achieved.

Despite no one in Bell having a clue how to cancel an account, once I did successfully manage it, I received a phone call from their retentions department less than 20 minutes later, and three follow-up calls within a week trying to get me to sign back up.

Of course, that's nothing compared to the guy who spent 20 minutes on the phone with his phone company repeatedly saying "cancel the account, cancel the account, cancel the account" to a service rep who simply refused to cancel it. Once he posted it to the internet and it went viral, he was able to cancel it, but the company had to be publicly bullied into cancelling an account. That's absurd.

Ever since my dealings with Bell, I've made a point of checking out cancellation procedures when I've considered signing up for any recurring service. I do a search for "cancel $SERVICE Canada", and it's surprising how many of those searches link to long lists of horror stories. I'm sure it's saved me money, as I've skipped signing up for a lot of things.

There are definitely reasons to not make it too easy to terminate an account, because it could be done accidentally (any service rep can tell you customer horror stories), but it should be no more difficult to terminate than it was to sign up for in the first place.

Amazon Prime too easy to join, too hard to quit, says FTC lawsuit

billdehaan

Maybe it's different in Canada

I signed up for Prime when I reached the point that I was ordering enough that shipping costs would be more than the Prime membership. I didn't care about Prime Music (which I never ended up using) or Prime Video (which I actually did).

I've seen dark patterns before (shout out to Bell Canada here, where Bell reps have actually been unable to find a way to cancel an account, both in store and on the phone), and I always check how to cancel before I sign up for something. I've avoided signing up for some services because I've read the horror stories online about difficulty in cancelling the service.

Generally speaking, if a search of "cancel $SERVICE Canada" brings up nothing but links to horror stories, I avoid it.

I checked the Amazon sign up and cancellation procedures, and at least in Canada, they're clear and pretty straightforward. The link here (at time of writing) goes to the page called "End your Amazon Prime Membership", which has a one-click "End Membership" button, and another link with a two step explanation of how to terminate automated billing.

Maybe the graphics have changed, but the page doesn't look any different than when I first looked for it.

I have no doubt that it could be different, and more difficult, in other countries. But at least in Canada, "too hard to quit" isn't accurate.

Fed up with slammed servers, IT replaced iTunes backups with a cow of a file

billdehaan
Big Brother

And before iPods, there was Usenet

In the mid 1990s, I worked in the USENET admin team for a fairly large (5,000+ employees) Canadian-based corporation.

Bandwidth was metered in those days, and rates differed by usage and time of day. Much of what our team did was automate tasks to optimize bandwidth usage so that our Toronto to New York backup ran at 3AM, when the bandwidth cost was under 10% of what it would cost to run at 3PM, that sort of thing.
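
The scheduling side of that was nothing exotic; a hypothetical crontab entry of the sort we relied on (the script name is made up):

# Kick off the Toronto-to-New-York transfer at 03:00, when the metered
# rate was a fraction of the daytime price
0 3 * * * /usr/local/bin/nightly-backup.sh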

Although the web existed, it was still in its infancy. Most of the traffic was FTP, and there were many Gopher, Archie, and Veronica users within the company. There was also USENET.

Originally, USENET was brought in as a programming resource for the software developers. We brought in the Comp, News, and Sci hierarchies to start with. Then Misc got added, and Rec, because (a) a lot of the users were book nerds, and (b) using Rec groups got them interested in USENET enough to learn it, and then they started using the News and Sci groups for their actual jobs.

But then, there was Alt. The Alt.* hierarchy was (and is) a snake pit that we tried to keep far, far away from. Unfortunately, it was inevitable that some particular newsgroup would be needed from it, and it wasn't easy to bring one in without bringing in the entire hierarchy, although we tried. We wanted to restrict it as much as possible, but we were overruled, and so the entire thing came in.

USENET bandwidth usage exploded. It went from being ~3% of company bandwidth to ~70% in about two weeks. What was worse was that it didn't just increase by a factor of 25, it was also being used during peak business hours, when bandwidth rates were highest. Our $800-$1200 a month bandwidth cost, that we'd been trying to get under $500 a month, ballooned to something like $18,000 one month, and was projected to be over $30,000 the next month.

Management freaked and ordered an investigation.

What I found, not surprisingly, was that the bandwidth was almost all USENET, from the Alt.* hierarchy. Specifically, the Alt.sex.* hierarchy. Yup, people were downloading porn videos. Instead of 2kb text articles, lots of 40MB video clips were being downloaded. Repeatedly.

The next step was to identify who. And lo and behold, 90% of the usage was attributable to six users. Five of them were responsible for about 20% in total, with the other 70% coming from a single user. A single executive. An executive of the "reports directly to the Board of Directors" variety.

Awkward....

So, without naming names, we sent out an email blast to world+dog in the company, with a USENET traffic graph, showing how much was being transferred from the different groups.

Of the top 20 groups, 18 were alt.sex groups. Fortunately, there was nothing illegal, and we didn't have to deal with child porn or anything like that, thank god.

Unsurprisingly, once the users realized that we could see the login credentials and knew exactly who was transferring what, our bandwidth bill went back down to about $1,100 a month again.

Learn the art of malicious compliance: doing exactly what you were asked, even when it's wrong

billdehaan
IT Angle

Working with defence contractors teaches you life skills

I wasn't military, but I worked with defence contractors, so in modern terms, I would be called "military adjacent" or somesuch.

The most basic skill when dealing with the military (any military) is CYA, or "Cover Your A**".

One of the reasons that the military has a reputation for staggeringly (over)complete documentation is the culture of CYA that developed, of necessity. In militaries where disobeying orders can get you executed, it's a good idea to have it recorded, repeatedly and in several different documents and locations, that you "were just following orders" when you did what you did (sorting by first name, in Steve's case).

I personally had a team lead who was notorious for telling you to do X (sort by first name), confirming it, double confirming it, triple confirming it, and then, when it hit the fan, denying to upper management that he had ever said it, claiming that I (or another member of the team) had done X on our own initiative. If someone refused to do the stupid thing (because it was stupid), he would tattle to senior management that the person was disobeying orders. If they had proof that he'd ordered them to do X, he'd say that the person had misunderstood his instructions. No matter what, the subordinate was always the one to blame.

As you can imagine, he was not beloved within the team for these reasons (and many others).

So, when he one day decided to order me to do something particularly stupid, I confirmed that he meant it. And doubly confirmed. But I waited until the meeting with the big brass that was scheduled for the next day to triple confirm it. I took the "explain it to me like I'm five years old" approach, and he condescendingly spelled out, step by step, exactly what he wanted done. And so I did it, exactly in the sequence he'd laid out.

The results were glorious. Not only did they invalidate a flight test and miss a ship date, they put the entire project at risk of cancellation. Senior executives got involved. First he tried the "I never told him to do that" approach, except there were several members of the brass who'd been present to see him do just that. They didn't understand the implications of the orders, but they remembered damned well that he'd not only ordered me to do it, he'd done so repeatedly.

Likewise, the "well, he misunderstood" argument went nowhere, because I'd repeatedly asked for clarification, he'd provided it, and it matched letter for letter what I'd done, and what had caused the situation we were in.

But the chef's kiss was his statement that "if I was really telling people to do things that stupid, people would be complaining all the time", apparently unaware that at least six team members (although I was not one of them) had made formal complaints both to management, and in two cases to HR, about being backstabbed exactly like this. When they checked, there were something like 38 such complaints over a period of 3 years.

Unsurprisingly, in the next re-org shuffle about six weeks later, he was moved into the newly-created "Special Projects" group, where he would be leading the team (which at the moment was just him) in said special projects, which were yet to be defined. Internally, this was later referred to as the "Ice Floe" group, named after the practice of some Inuit tribes of putting sick and elderly members who were a drain on the tribe onto an ice floe so that they'd float away and die.

No more free love: Netflix expands account sharing restrictions

billdehaan

Re: "woke crap" your definition is displaying your bias

HBO's "Velma" has to be the touchstone of this phenomena.

Netflix Originals were uneven. They weren't guaranteed to be great, but their track record was good enough that people would start watching things like their MCU shows just on the fact that Netflix recommended them. Sure, there were misses (Iron Fist comes to mind), but generally speaking, even the duds were still watchable.

Then they started to change direction. Back in 2019, Netflix promoted the hell out of their Another Life series. It was not only laughably bad (a YouTube video mocking it actually got more views and feedback than Netflix's official marketing of it), it was absurd. If it had been marketed as a satire, and the ship had been named the "USS Diversity Hire", it might have been accepted as a comedy, but Netflix was pushing it as their vision of serious science fiction. Then they followed it up with The I-Land, and people I know stopped watching "Netflix Originals" at that point.

And then there was Cuties. The controversy over that was enough to damage the brand significantly. They even had to issue a memo to employees telling them to knock it off with all the heavy handed politics, because it was costing them viewers.

There are still good shows being produced, and breakout hits like Stranger Things, Squid Game, and Wednesday, but those are the exceptions. Netflix's reputation for quality just isn't there any more, and it's been replaced with a reputation of producing propaganda instead of entertainment. Not surprisingly, that hasn't helped their revenue numbers.

billdehaan

Still haven't flipped the switch in Canada

A friend didn't so much share his password with me as enter it on my Xbox when he was visiting one day and wanted the gang to see some movie. He had one of those accounts that allowed four simultaneous logins, and there was no mention of households at the time, so he just set it up at three friends' houses and told us we were welcome to use it.

Now that he's getting emails from Netflix warning him of impending restrictions, he's asking me to test it to see if it still works. It did, as of Feb 10th, although apparently that ends on Feb 21st.

Personally, between Amazon Prime and Youtube, I doubt if I've actually watched more than 10 hours of Netflix in the past year. The only series I can think that I deliberately used it for was to watch Wednesday. I certainly won't be buying a subscription, and I doubt I'll miss it.

Back in 2015, Netflix was the one and only streaming service, and it had all of the legacy movies and television series. Outside of live concerts and sports, it was the one-stop shopping location for all people's entertainment needs. Today, the streaming landscape is fractured. On the commercial side, there are Apple TV, Hulu, Amazon Prime, Disney+, and HBO Max. Then there is the freeware/ad-supported tier, like Tubi, Plex.tv, Crackle, and Roku. And then of course there are torrents and the pirate streamers.

So, Netflix has more competition than before, its prices are higher than before, and it's taking action to disable one of the strongest reasons for its growth.

I'm sure that some people will sign up for their own accounts, but I have no doubt that this will cost them many subscribers, as well. Whether it will be a net gain or a net loss in terms of subscriber numbers, I don't know. I suspect it will be a loss, but that's just a guess. I'm sure it will save them bandwidth costs, but whether it will restore profitability is anyone's guess.

Arca Noae is modernizing OS/2 Warp for 21st century PCs

billdehaan
Thumb Up

Modernize, shomdernize, as long as it runs

I did OS/2 applications work on contract from about 1990 (with OS/2 v1.1) to 1992 (with OS/2 v1.3 and 2.0). I leveraged those OS/2 skills to get contracts at companies in the back half of the 1990s, although sadly, that often meant decommissioning OS/2 in favour of something else (Linux, Windows NT 3.1 or 3.5, or even... Windows 95).

I still work with companies that have an installed base of OS/2 deployments 25 years later. They are usually turnkey solutions for control systems. eComStation and now Arca Noae have been lifesavers.

Lots of companies have clients with OS/2 software that can't be replaced or upgraded. Often the original vendor no longer exists. If it does exist, it often doesn't have the source for the legacy system, and even if they do, they don't have the skills to maintain it, let alone upgrade it. You want an upgrade to our 25 year old OS/2 product? Here's our Windows 10 version.

Unfortunately, for some clients, that's not an option. They're perfectly happy to keep running the original code, which works as well today as it did in 1997. That's especially true when talking about PCs that weren't networked. Unfortunately, when that 1997 era Pentium Pro dies, finding a replacement that will even load OS/2 is the problem.

Sure, there are virtual machines, but a lot of OS/2 device drivers are fussy about running in VMs.

That's where Arca Noae has been a godsend. I've had customers with $25M installations with six OS/2 boxes that had a box die, then another, then another, and then another. The cost of upgrading the software is often quoted at $1M or more, when all they need are four replacement PCs that can run OS/2 v4 (or v3, or even v2.1).

No version of OS/2 will install on a modern PC. It doesn't know about USB, or SATA, or pretty much any other innovation that took place after 1996.

But an Arca Noae installation that gets the customer up and running again is worth its weight in gold. Does it handle quad cores properly? We don't care. As long as it runs as fast as that 1998 era Pentium, and it can handle the custom device driver for the weird 128 port output controller, it spares the customer spending $1M in needless upgrades.

I can't see myself ever doing any new development in C/Set2, and I doubt many others do, either. But just keeping existing OS/2 installs alive makes Arca Noae a viable business.

LastPass admits attackers have a copy of customers’ password vaults

billdehaan

Re: The cloud is just someone else's computer

You're right, I was reading an article about 1Password prior to this and conflated the two. Mea Culpa.

billdehaan

Re: Someone Else's Password

Well, I did say that it was a definition like that. And no, that's not my actual password.

Even if I had entered my actual AutoHotkey definition, it wouldn't work. Things like the plus sign can't be entered directly; they have to be written as {+}. So even cutting and pasting the string literal from my editor into the password field won't work.

billdehaan

Re: Someone Else's Password

Using a password manager with a hard to guess master password is way superior to using weak passwords

On a Windows PC, I've found AutoHotkey's abbreviation (hotstring) feature very useful for storing strong passwords and assigning them to short keyboard strings.

So, a definition like

::bwpw::QchauTQ<[Mkzg[RPR8<k3d!58wQ8Kw-svajzygG>awsHjR[Kr<9XLJakyGZmKR!

Allows you to type "bwpw" (BitWarden password) and have it expand to the 64-character password you need to get into the actual password manager.

Unfortunately, I haven't figured out how to manage long passwords like that on mobile devices.
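
If you need to mint a password like that in the first place, a few lines of Python will do it (a minimal sketch; the alphabet choice is mine, not anything BitWarden or AutoHotkey require):

    import secrets
    import string

    # 64 characters drawn from a large alphabet; remember that anything
    # AutoHotkey's Send treats as special (such as + or !) must be wrapped
    # in braces, e.g. {+}, when pasted into the hotstring definition
    ALPHABET = string.ascii_letters + string.digits + "<>[]-!"
    print("".join(secrets.choice(ALPHABET) for _ in range(64)))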

billdehaan

The cloud is just someone else's computer

Unless 1Password is doing things like enforcing unique passwords (i.e. warning users who try to save a password for website X that it's already used on website Y), and I doubt such a thing is even possible without accessing the passwords themselves, people are going to continue to use, and re-use, weak passwords.

I've seen some websites which report things like "the password you entered is one of the 10,000 most commonly used passwords; please select another", but then all most people do is tack their pet's name or something in front of it (which is still better than nothing, but hardly ideal).

Personally, I've been using Keepass for literally decades. It may not be the most convenient thing to use, but it succeeds at the most important thing a password manager should be good at: it's secure.

For low-risk passwords (like, er, el Reg here), I use Bitwarden. It's a zero-knowledge system, I'm using a 48 byte master password, and frankly, if someone wants to take the effort to crack Bitwarden and my master password, they deserve to get my Register, Slashdot, and Ars Technica passwords, for all the good it will do them.

For things like banking, taxes, and online shopping accounts, they're all in a Keepass hive on a VeraCrypt volume that includes a portable browser instance with no plugins or extensions.

The funny/sad thing is, the effort I take with my low value accounts (hi, el Reg) on Bitwarden is actually more than many of my friends' SOs and family members use for their high value accounts. I had to resuscitate a PC with a nearly dead SATA drive a while back, and it came with a sticky note that had "USERNAME=xxx PASSWORD=yyy" for the windows login account. I cloned the drive, and when testing the replacement in the PC, I brought up the browser. There were a dozen tabs with various accounts, and sure enough, I was either already logged in to the owner's account, or the "yyy" password would get me in.

I've never been a big fan of the cloud, because of things like this 1Pass breach. But seeing how most people treat security on their own, it's a question of which is worse.

Know the difference between a bin and /bin unless you want a new doorstop

billdehaan
Go

And then there was OS/2

The UK has bins, the US prefers trashcans, and computers like their /bin

Due to the various look and feel lawsuits, after Apple put their "trashcan" on the Mac desktop, Microsoft had to use a "recycle bin" for theirs, to be different.

So, when IBM released OS/2 v2, they could use neither, and settled on a "shredder". The desktop was complete with a paper shredder WAV file that made the appropriate grinding noises when any object was dropped on it.

Since IBM was new to the desktop, or at least they believed their customers were, they populated it with all sorts of instructional games and the like to start with. They included a chess game (GNU chess, which they neglected to credit), and an entertainment program called Neko that helped users get used to the mouse interface.

Many of the studious, steadfast IBM employees had little use for humour, however, and their first reaction was to remove the cat from the desktop. But Neko was a system object, not a user object, and so it couldn't actually be deleted.

This led to much amusement as the OS/2 bug reports contained several variations of "I dropped the cat into the shredder but it didn't die". This became even more amusing when Very High Level Executives visited the site. These were not just Executives but Senior Executives. They were far too Senior to be familiar with (ugh) PC desktop software.

And one of these Senior Executives, or possibly his wife, was on the board of directors of a state ASPCA (American Society for the Prevention of Cruelty to Animals) chapter. The phrase "dropped the cat in the shredder and it didn't die" was not as amusing to the Senior Executives as it was to the rank and file, oddly enough.

billdehaan
Facepalm

Mounter, automounter, what's the diff

I worked at a large bank where there was a culture clash between the whitebeards who used the ancient, grotty old IBM mainframes and the younger generation of Sun worshippers (Hail to the Sun god, he is a fun god, Ra Ra Ra) who basked in the glory that was Unix. Specifically, SunOS.

One of the major benefits, they explained to the whitebeards, was that, unlike mainframe terminals, individual Unix workstations would keep running if the network died. As they were saying this, someone upgraded the Sun server's automounter, meaning that the workstations could not mount anything from it. At which point, every Sun workstation on the floor also stopped working. To save space on the individual workstations, /bin was loaded from the file server. The local workstations did have /usr/local/bin, but in a fit of overly aggressive optimization, that too was loaded from the server.

This led to the bizarre admission that "SunOS can't run $BUSINESSAPP until it can load /usr/local/bin from the file server", which led management to ask the fair question of why one had to load a "local" object from the remote server.
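
For the youngsters: the culprit was the automounter's direct map. From memory (server and export names invented for illustration), the entries looked something like:

    # /etc/auto_direct - every one of these "local" paths actually lives on the server
    /bin             sunserver:/export/exec/bin
    /usr/local/bin   sunserver:/export/local/bin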

Mind, it's not always the person deleting the directory who is at fault.

I worked on a shared lab machine in the 1980s where space was tight. I was working on a video project, so I created a directory \DEVTEAMS (this was during the days of DOS 3 and 8.3 file names), and under it, I created VIDPROJ. Another group was doing a transputer project, so I also created TRANSPTR. That way, anyone who backed up the shared machine's \DEVTEAMS directory would back up both projects, \DEVTEAMS\TRANSPTR and \DEVTEAMS\VIDPROJ.

The transputer team had different ideas, though. They used their team's initials. So Charley, Uri, Norman and Kevin put their development work in the directory C:\JUNK.

Well, one day, a team member found that there was only 2KB of disk space left on the 10MB (or maybe it was 20MB) drive. So, the first thing he did was run a backup of \DEVTEAMS, and then he went around seeing what he could get rid of to free up space. He found a directory C:\JUNK, and it was filled with thousands of small (10 to 50 byte) files with names like $d217.d78 and $aix7.7$a. They were obviously cache files of some kind, so he wiped them and freed up about 1.5MB of disk space. The machine now had enough space to run his build.

The next day, the head of the transputer group was livid. His team's entire project had been destroyed on the lab machine. There were months' worth of work gone! It turned out that instead of saving source files, the transputer development tool saved the source as hundreds of individual source objects, and maintained a database of how to link them. In other words, hundreds of little files with autogenerated names, like $d217.d78 and $aix7.7$a.

Yes, on a shared machine, they set up a directory called JUNK, filled it with binary files with names like $3j5a1.d7x, and were upset that people looking to clean out dead files didn't realize that those were critical project files.

Although they weren't so critical that their team ever bothered to back them up once in the span of six months, of course.

Whatever you do, don't show initiative if you value your job

billdehaan

Newbies can screw up but to do real damage you need management buy-in

Several years ago, I worked in a lab that was pretty much a free for all. The software was tested in the local lab and then deployed in machines with dedicated hardware all over the world. When there was a field issue, support people would often bring in the machine (if they thought it was a hardware issue) or just the hard drive (if they thought it was only a software problem) from the field back to the lab to debug it.

Of course, many of these machines would be absolutely riddled with viruses and malware, which would then run rampant over the lab network. Management's response was not to install antivirus on the lab machines, however. That was too expensive, and frankly, the McAfee software they had standardized on made computers so disgustingly slow that they were unusable. That was perfectly fine for employee machines, of course; no one cared about them. But customers visited the lab, so the machines there had to be presentable.

Management ordered IT to install a process on every lab machine that would intercept every USB media connection attempt and verify that the media had a specific file in its root directory. That file held an MD5 checksum proving the media had been checked for viruses by the lab anti-virus machine. The idea was that you brought your USB thumb drive to the lab and plugged it into the virus scanner machine, which would write this time-stamped credential file. Then you connected the USB drive to a lab PC, and the antivirus process would check that the credential file was current and correct. If it wasn't, or if it was missing, the process would eject the USB drive.
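
The lab-PC side of that check amounts to a few lines. Here's a rough Python sketch of the idea (the file name, shared secret, and validity window are my inventions, not the actual implementation):

    import hashlib
    import os
    import time

    CRED_FILE = "USBSCAN.SIG"         # invented name for the credential file
    SECRET = b"lab-scanner-secret"    # known to the scanner and the lab PCs
    MAX_AGE = 24 * 60 * 60            # a scan is considered current for one day

    def credential_ok(usb_root):
        """True if the USB stick carries a current, untampered credential."""
        try:
            with open(os.path.join(usb_root, CRED_FILE)) as f:
                stamp, digest = f.read().split()
        except (OSError, ValueError):
            return False              # missing or malformed file: eject
        if time.time() - float(stamp) > MAX_AGE:
            return False              # scan too old: eject
        expected = hashlib.md5(stamp.encode() + SECRET).hexdigest()
        return digest == expected     # wrong checksum: eject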

Naturally, neither management nor IT told the engineering staff about this. Projects ground to a halt as engineers took test builds down to the lab and spent hours struggling unsuccessfully to install them on the lab PCs, only to have their USB media ejected as soon as it was connected. Dozens of problem tickets were raised against both lab support and IT, but since only the upper castes in those groups were aware of the cause, the rank and file IT and lab support people were left trying to debug the issue blind.

Engineers whose lab machines had a CD reader burned their builds to CDs and installed them that way. Of course, they couldn't get logs back, but they could at least be partially productive. Others tried dozens of different USB keys, without success. A third group discovered that if you started a batch job on the PC that continually copied a big file to the USB drive letter in a loop before the USB drive was connected, then once it was connected, the job would establish a file handle and the disk wouldn't be ejected. That workaround was mailed around by engineering leads to their teams, and it became the de facto resolution.
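
In essence (Python standing in for the original batch file; the drive letter and file name are invented), the hack was just:

    import time

    # keep a write handle open on the USB stick; with a handle open,
    # the AV watchdog's forced eject couldn't dismount the drive
    with open(r"E:\keepalive.tmp", "wb") as handle:
        while True:
            handle.write(b"x")
            handle.flush()
            time.sleep(1)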

Delivery dates were missed, customers screamed, and finally, someone in management realized that maybe they should have, oh, announced this change or something. Then they came up with the brilliant idea that instead of just silently ejecting the USB drive, the software on the PC could put up a message on the screen telling the user why it had been rejected.

Ingenious! The software was updated, and an IT person was tasked with upgrading it on all the lab machines. All the machines. Several hundred of them, in multiple labs, on several floors.

The IT person realized that there was a better way. Rather than manually doing 1500+ installs, why not have the users do it? Since users were plugging USB disks into the lab machines themselves, have the virus scanner install the newer version on their USB drive and enable the drive's autorun; then, when anyone scanned their USB drive (which at least some engineers were doing by now, since a month after this started there had been a company-wide notice about the change) and plugged it into a new machine, it would install the upgrade! Any machine that was used would be upgraded, and any that weren't upgraded weren't being used in the first place, right? This was a great time saver.

So, the approved solution was to install executable software on the USB and put it in the USB's autorun. And it worked. The first time such a USB was connected to a lab machine, it would update the software, and all was good.

However, on the second connection, the updated software would look at the USB drive, notice that it had autorun enabled, and that it was trying to install software on the PC. Oh my god! That's a virus! Quick! Force a disconnect and lock the machine!

People who didn't use the virus scanner and were running the batch file hack to lock the USB didn't have any problem. People who did use the mandated (and mandatory) virus scanner found that after doing so, as soon as they connected to a lab machine, it not only ejected the USB, it immediately locked up. And since only lab personnel had the machine passwords, work stopped until a lab person could be found.

In other words, you were fine as long as you didn't follow company policy. There's a great incentive structure for you.

Even better, when people then plugged their USB drives back into their desktop machines, it spread the "virus" into the corporate network at large.

That time-saving optimization disabled the entire lab for weeks; IS took three days to clean up the corporate network, and IT spent a month scouring the remnants of it off of the 1500+ lab machines.

By the way, there never was a case of a virus being transmitted via USB in the lab. Ever. All of the viruses had been transmitted by field personnel bringing hard disks in from the field and installing them in the lab, which completely bypassed the USB checking nonsense. In other words, in addition to all the chaos that it caused, the entire exercise was completely useless at its stated goal.

When management went nuclear on an innocent software engineer

billdehaan
Facepalm

Been there, done that

No problem. The US tech had simply grabbed the disk from the PC running the testing and copied it to the other computers. "Obviously it worked because they are all up and running," he said.

I had a somewhat similar experience, although thankfully lesser in scope.

As in the article, I was working on a testbed simulator for a safety-critical system that was currently running in production. To be clear, this was a simulator to test an upgrade which had not been deployed yet.

We had multiple client X machines that communicated with server Y. Many of the X machines were mobile, and field testing of the upgrade showed that many X machines intermittently either lost communication, or had communication errors (duplicated, out of sequence, or lost messages). Sometimes.

The stationary clients were fine, but no one could determine the cause of the intermittent failures. The systems integrator said it was the hardware; the hardware vendor produced test results proving the communications systems worked when stationary, so it had to be the mobility aspect; the mobility supplier ran pings and traceroutes showing it couldn't be them, so obviously it had to be our proprietary protocol. Our protocol guys pointed out that we didn't even know whether an X was mobile or stationary, so how could the protocol fail only on the mobile clients?

To investigate the problem, I essentially wrote a customized ping command. My ping didn't use ICMP, though; it used our proprietary protocol. I made it a master/slave application, where the master would timestamp a message packet, give it a master sequence number, and send it to the client. The client would in turn timestamp the arrival time, give it a slave sequence number, and send it back. When the master received it, it would log it to disk. The operators could configure the protocol's send rate, packet size, etc.

The idea was to try it in the field, identify which clients/mobility locations were the problem, and play with the frequency and protocol payload to narrow down what was going on.
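
The skeleton of such a tool is simple enough. A rough Python sketch of the idea, with UDP standing in for our proprietary protocol (all names and the packet layout are invented for illustration):

    import socket
    import struct
    import time

    # packet layout: master seq, master timestamp, slave seq, slave timestamp
    PKT = struct.Struct("!IdId")

    def slave(port=9999):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", port))
        sseq = 0
        while True:
            data, addr = s.recvfrom(64)
            mseq, mts, _, _ = PKT.unpack(data)
            # echo back with our own sequence number and arrival timestamp
            s.sendto(PKT.pack(mseq, mts, sseq, time.time()), addr)
            sseq += 1

    def master(host, port=9999, count=100, logfile="pinglog.txt"):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(2.0)
        with open(logfile, "a") as log:
            for mseq in range(count):
                s.sendto(PKT.pack(mseq, time.time(), 0, 0.0), (host, port))
                try:
                    reply, _ = s.recvfrom(64)
                    log.write("%d %f %d %f %f\n" % (*PKT.unpack(reply), time.time()))
                except socket.timeout:
                    log.write("%d lost\n" % mseq)

Duplicated, dropped, and out-of-sequence messages all fall out of comparing the two sequence number streams in the log afterwards.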

Of course, with all the time stamping and optimizations going on, there were several issues (time syncing was particularly troublesome), and although version 1 worked as a proof of concept, they wanted more features, so I got a budget for version 2.

And then a field report came in. My tool had reported a catastrophic loss of communication in an entire range of client machines where there was actually no problem with the upgrade application itself. So obviously my tool was crap, the customer had no faith in us because we had no clue what we were doing, how could we be trusted with safety-critical systems, etc. I had endangered the entire multi-million dollar project, we would all be out on the street, our children would starve, and all because of me. The project manager wanted blood, specifically mine.

Hmm. Send me the logs, I said.

Looking at the logs, I noted that the master and slave packet formats were slightly different, which made no sense, because masters and slaves were paired. I was extending the packet format for version 2, but it hadn't been released yet. This looked almost like a version 1 master trying to communicate with a version 2 slave. So I asked them to confirm the MD5 checksums of the master and all slave versions in the field.

Sure enough, there was a version 1 master, and the offending slaves didn't match version 1. What on earth had happened?

It turned out that the project manager had been in our test lab, and had seen a couple of the machines where I'd been testing version 2. He liked what he saw, so he made a copy of the software (the unreleased, work-in-progress software, which was still being debugged) from the lab machine, and when he returned to the site, he gave it to the field testers, who then ran with it.

Not surprisingly, when this came to light (internally, at least; the customer was never told), there was a reshuffling, and a new project manager.

I promptly added two more features to the software. First, during the initial handshake, masters and slaves verified that they were running compatible versions, or they stopped talking. Second, I added a time/date check: the software would only work for five days after compilation, after which it would post an onscreen message saying it was expired beta software. I disabled that check in the formal release, but it put a stop to project-ending reports from the field.
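
Neither check is complicated; together they're about a dozen lines. A Python sketch of the shape of it (the constants are invented; the real build timestamp was stamped in at compile time):

    import sys
    import time

    PROTO_VERSION = 2                  # bumped whenever the packet format changes
    BUILD_TIME = 1234567890            # epoch seconds, stamped in by the build
    MAX_BETA_AGE = 5 * 24 * 60 * 60    # five days, then the beta refuses to run

    def check_beta_expiry():
        if time.time() - BUILD_TIME > MAX_BETA_AGE:
            sys.exit("Expired beta software - please obtain a current build")

    def handshake_ok(peer_version):
        # master and slave refuse to talk across incompatible versions
        return peer_version == PROTO_VERSION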

What do you do when all your source walks out the door?

billdehaan
Facepalm

I didn't need to do it. Management did it for me.

In the early 1980s, I was on contract for a Fortune 500 company. My manager was notoriously, almost pathologically miserly. So while other teams had IBM ATs (80286 machines with 20MB hard disks) or at worst IBM XTs (8088 PCs with 10MB hard disks), our team had IBM PCs (8088 machines with no hard disk, only floppy drives), in order to save money.

Floppy disks cost about $3 each, or $25 for a box of 10. That was if you got the good ones (Dysan or Maxell), and of course, we didn't. Our manager got something like 10 boxes of BASF for $89. As you can imagine, these disks were cheaper for a reason, and data loss was very high.

Being the paranoid sort, I kept redundant backups. My source and tools grew to about 8 diskettes, so I just allocated one box to the project. One backup box at home and one at work, plus the originals, meant I was using 3 boxes.

When my contract ended, in my exit interview, I turned over all 3 boxes. My manager very angrily said "so you're the one who's been hogging all the disks". He summoned another team member, handed him the two boxes of backups, and ordered him to reformat them and put them in the general diskette pool, "where they should have been in the first place".

I left the company, and life went on.

A few months later, I got a call from them. The manager had gone to another office to install the component that I'd been working on. Rather than "waste" two more floppies by making copies of the executables to install, he took the entire box of disks, including the source and tools, with him. So when he slipped on the ice getting off the streetcar and dropped the box of disks, which were promptly run over by a car, everything I'd worked on for them for over a year was lost. Source, executables, tools, documentation, everything.

Did I... by chance... happen to have any copies, even partial ones, at home?

I pointed out that would have been a violation of NDA (it would have), so no, I didn't.

Fortunately, for me, if not the company, I still had the copy of my exit interview, where the manager had put in writing the fact that I'd been "wasting" valuable floppies by "hoarding" floppy disks from the rest of the team. So if they wanted to claim that I'd been the one at fault for the loss, I had proof that I'd provided complete source when I left, and they'd accepted it.

The person on the phone sounded like I wasn't the first person to tell him that, and this wasn't the first time this had happened.

Internet backbone provider Lumen quits Russia

billdehaan

The goal is to return Russia's economy to its 1989 borders.

Given the huge lineups in front of stores and the pictures of empty store shelves, I'd say it's being quite successful at that.

Your app deleted all my files. And my wallpaper too!

billdehaan
FAIL

I had a customer with almost the exact opposite

We have a couple of graphic designer/artist/musician types as clients. They're brilliant in their fields, but as far as computers go, they require that their computers be, to quote one of them, "blonde proof".

Last year, one of them had their external USB backup disk die on them. So, they got a shiny new 4TB MyBook as a replacement, and they said things were good.

A few months later, things were less good. Their C: drive was starting to die, making hideous belt sander noises, and losing critical files. But when they tried to back up to the USB disk, it was out of space. Panic ensued, as the primary disk wasn't backed up, and the backup disk had no space.

I had them send me screenshots; the external disk was 4TB, had 3.98TB free, and they seemed to be able to copy some files over, but only about 1% of them.

So, I went over and checked it. My first assumption was that it was formatted as FAT32, and any files over 4GB wouldn't fit. Nope, that wasn't it.

I had her show me exactly what she did, so I could see if it was a PEBCAK error, as it usually was.

She double clicked on the icon of the USB disk, which opened a Windows Explorer window, as expected. She then dragged a folder from her desktop to it. And sure enough, an "insufficient space" error with a Windows error code number appeared.

She, of course, was panicking that she was going to lose years of work, and rightly so. I tried various things, but could find no issue. I put the external USB disk on my laptop, and there was no problem writing to it. The disk wasn't write protected, it didn't need administrator rights, it wasn't FAT32, and in fact it could copy about 300MB of files before it gave the "insufficient space" error. What the hell was it?

So, I researched the Windows error number. Strangely, it was not a file system error; it was a OneDrive error code. WTF? She wasn't even using OneDrive. Or was she?

Sure enough, OneDrive was enabled. She hadn't configured it; she had no idea what OneDrive even was. This was one of those "it came that way when I bought it" things. Either the box store had configured Windows for her, or it was done when she set up Windows the first time. Since she was a "click yes to everything" type user, it would be whatever Microsoft sets as defaults.

In the end, it turned out to be one of the more malicious things Microsoft has done. When OneDrive is enabled in Windows 10 and you copy from one drive to another using Windows Explorer, it backs up the destination disk on OneDrive. It's totally seamless and transparent.

Of course, if there isn't space on OneDrive, the copy is aborted. And that aborts the local copy to the USB disk, too.

Yes, that's right. Her backup was failing because she was trying to back up about 2TB of data to a 4TB disk, but OneDrive only had 5GB or so, so Windows would only allow her to copy 5GB to the disk.

This idiocy could be bypassed by using the command line, or a third party tool, or another file manager, but this was her workflow.

By logging her out of OneDrive, disabling it, and then removing it so it didn't restart at boot time, she was actually able to use her 4TB disk.

Another "improvement" that makes things worse.

Chromium-adjacent Otter browser targets OS/2

billdehaan

I was an OS/2 developer from 1990 to 1992, and I used it as my home OS from 1990 to some time in 1994, whenever it was that NT 4 came out. I would dual boot between them, planning to decide which was better as my home OS. I was using Solaris at work by then, so I didn't need to worry about compatibility between home and work environments.

I expected it would take three or four months to compare the two. I ended up switching over almost completely within about three weeks.

OS/2 had a lot of good things going for it, but Presentation Manager's SIQ (single input queue) was its Achilles' heel. Yes, the OS was robust, but if the SIQ was blocked, as it often was, all mouse and keyboard input was ignored. It didn't crash the way Windows 95 so often did, but in practical terms, the difference wasn't that significant.

It was terrific as a server. If you wanted a home file server and didn't touch the GUI, the HPFS was much more robust than FAT or later FAT32 under Windows, but as a workstation, it was far too problematic. If you had native OS/2 software, it was very robust; the problem was that native OS/2 software was rare, and for the most part, if it did exist, it was immature and missing features compared to the competitive Windows offerings. The result was that people were always trying to run Windows applications under OS/2, which had all sorts of problems. And when things went sideways, the vendor wouldn't help, because they didn't support OS/2, IBM wouldn't help, because it was a Windows application, not their software, and Microsoft wouldn't help, because they weren't supporting OS/2. A lot of stuff worked fine, some worked partially, and some worked not at all. It really was a crapshoot.

By the time OS/2 v4 (Merlin) finally officially addressed the SIQ issue that had been there for years, it was too little, too late. NT4 matched it for stability, and while the GUI was nowhere near as elegant or mature as the PM, it was functional, and there was no SIQ issue. There's a strong argument that SOM and DSOM were better than Windows' COM and DCOM, but so little OS/2 software actually took advantage of it that it really didn't matter.

When my dual boot machine blew out a hard drive six months later and I had to regen the system from backups, I realized I hadn't booted OS/2 in months, so I only installed the NT4 partition. I left the space for OS/2 unused and figured I'd see if I had a need for it. A few months later, I just partitioned the unused space for NT, and never looked back.

I enjoyed using OS/2, but there isn't really much in it that isn't in the Linux, MacOS, or Windows offerings today. The GUI shell was very innovative, but even there, there's not a lot that can't be done with the modern OSes.

Apple is about to start scanning iPhone users' devices for banned content, professor warns

billdehaan
Stop

People went to digital photography to get AWAY from this

Back in the 1990s and early 2000s, there was a "think of the children" panic in Canada, and crusaders went on a tear to get the police and government to "do something" to stop it.

In that climate, I know of three cases where people ended up being visited by police investigating them for alleged child pornography.

One case was a Japanese anime, as in, a cartoon, with no actual humans being filmed, let alone children.

The other two were the result of photo development. Those old enough to remember actual film cameras know that unless you had a darkroom, chemicals, and skill, you needed to go to a photo developer to convert your raw film into actual snapshots. Camera stores did it, of course, as well as specialty outlets like Fotomat, but one of the most common photo development places was, oddly enough, the pharmacy. And it was pharmacies that called the cops on two people getting their photos developed.

The first case showed the shocking picture of a nude 5 year old boy on the sidewalk, swimsuit around his ankles, with a scantily clad 3 year old girl next to him. In other words, a 3 year old girl had snuck up on her big brother and pantsed him. Mom happened to be taking pictures of her kids in the pool, and couldn't resist getting a snap of her kids pranking each other.

The second case was similar, with a grown woman in a bathtub with a 2 year old boy, who decided to make an obscene gesture to shock his mommy just as Daddy walked in. In other words, a typical "Jim, get in here and see what your son is doing" family moment.

Fortunately, in both cases, the police officers were parents themselves and not idiots, and when they visited the families and saw that the kids photographed were the children of the photographers, they realized that the photo developers had completely overreacted. But as you can imagine, those families stopped sending their film out to be developed, and went to digital photography.

Now, you don't even have to drop your film off to have busybodies report you to the cops; your camera vendor will do it as soon as you take the picture.

There's no way that this won't be abused, both by companies, and governments.

Apple's Steve Jobs: Visionary, dreamweaver... and the kind of fellow who might tell a porky or two on his job application

billdehaan

Re: Expertise in self promotion

The reason Woz didn't get credit was that he didn't seek it, or claim it.

Steve Jobs is a lot like Stan Lee of Marvel Comics, another icon who is a household name, but whose co-contributors are not well known (to the general public, at least).

People like Jobs and Lee are first and foremost salesmen. Everyone knows them because they're always talking about themselves. When Apple had an announcement to make, it was Jobs who was calling up journalists and going to the trade shows. Woz didn't.

It's not money, but personality, that got Jobs noticed. Love him or (quite commonly) hate him, Jobs was memorable. Memorable people get talked about. People like Woz who are quietly competent, don't.

Bloated middle age beckons: Windows 1.0 turns 35 and is dealing with its mid-life crisis, just about

billdehaan

Re: Breakthrough/turning point

MacOS's marketshare was growing strongly despite the massive You can't get fired if you buy IBM syndrome

MacOS was growing in several markets, but Windows was growing faster. The entire market was growing; home computers were becoming less of a curiosity and more acceptable. They still weren't necessities, by any means, but they were no longer oddities.

A PC in 1987 was still the price of a used car, and Macs were considerably more expensive. And that's what did MacOS in.

A person with a $3,000 PC might invest $99 to buy Windows to try it out, but he wasn't going to spend $5,000 to try out a Mac.

Even then, that $99 was a hurdle, and Microsoft recognized it. That's why they included a copy of Windows with pretty much everything they sold. You bought a Microsoft Mouse? Here's a runtime copy of Windows with some graphics programs. You bought a Microsoft game? Here's the DOS and the Windows versions, together. Oh, you don't have Windows? That's okay, it comes with a Windows runtime.

Sure, those runtime copies of Windows ran like a dog and needed more memory than the user had, but he could afford to buy more memory far more easily than he could afford a Mac.

There was a push at the time to get MacOS on PCs, one that Jobs fought against. He railed against "crap PC hardware", and he was right. PC hardware quality was all over the map, as was pricing. Macs were stable, because they were single-source. But that came at a cost that most people weren't willing to pay.

And by the same token, if you spent the same amount of money a Mac cost and put it into a high end PC, the gap between MacOS and Windows narrowed considerably.

Mac still won a lot of markets, notably education and graphics, but the expected educational followup never happened. There was a lot of talk that parents of kids with Macs at school would buy a Mac for home, and some did. But most walked into the computer section of the department store, saw the Mac and the PC next to each other, got sticker shock at the cost of the Mac, and went home with a PC.

MacOS made some inroads with the "Hackintoshes", as they were known, but one of Jobs' first actions when returning to Apple was to kill them, so that was that.

IT Marie Kondo asks: Does this noisy PC spark joy? Alas, no. So under the desk it goes

billdehaan

Re: Insert mandatory story about Miniscribe disks

Yeah, I believe it.

I had a friend who custom built systems for offices back then. When Miniscribe came out with their next generation of hard drives, they released a lot of promotional material to people like my friend beforehand, to try to get them to start buying Miniscribe again.

He showed me some of it. One of the blurbs stated that Miniscribe guaranteed that the sound level of their new drives was 30dB or lower. I don't remember the actual number, it may have been lower than 30dB, but it was still high. The key point was that they were actually promising a maximum noise level from their drives. They were the only hard drive manufacturer to specify that, and they'd never done it before. That alone was enough to tell me that the noise had been hurting their sales, no matter what their salesmen were claiming at the time.

billdehaan

Insert mandatory story about Miniscribe disks

As anyone who worked back in the 1980s can attest, the hard drives produced by Miniscribe were monsters. Like Godzilla or Mothra, they were big, heavy, and especially noisy.

In 1985 or so, I had a service call to a small law office. It was a legal factory, with one lawyer and seven or eight public notaries. They each had a PC and a primitive network (parallel port based, as I recall). The centrepiece of the office was the extremely large, extremely impressive, and extremely loud legal line printer. It was essentially a networked (by the standards of the day) typewriter that printed out 14" legal sized documents all day, every day. It sounded like a light submachine gun when it was running, which was most of the time.

As a result, it was enclosed in a sound box, which was essentially a plexiglass cage with lots of soundproof padding inside to muffle the noise. It didn't stop the noise, of course, but it brought the office down from about 60dB to about 35dB. Still not great, but better.

The way the "lan" worked was a point to point network. No one could print directly, but each notary could copy the file to the file server PC next to the printer, which had a job that just printed everything in a particularly directory every minute. It was a kludge, but it worked.

One day, the file server died. So they got another one. The new one had a Miniscribe hard drive. Running a Miniscribe hard drive is like having a bag of metal ball bearings dropped on a skillet, 24/7. The noise was not only loud, it was excruciating.

The next time I visited that office, the printer was in the middle of the room, completely exposed. The sound baffle was now over the file server, because the hard drive was louder than the printer.

'One rule for me, another for them' is all well and good until it sinks the entire company's ability to receive emails

billdehaan

my kids (teenagers) initially didn't believe that when I was their age, the internet basically didn't exist

Show them the movie Soylent Green. Made in 1973, it showed the far future world of ... 2022.

In the film's climax, the hero is being chased, and desperately has to get the information he learned out to the world. It's a major plot point that during the chase, he can't find a telephone booth to use :-)

billdehaan

All my apps are essential

I was once tasked with cutting costs in an office of about a hundred users that paid vendors for custom data feeds. Some feeds were cheap, some were expensive, and some were astronomically expensive.

The system had built up over several years, both the original users and the support staff had moved on, and the only documentation that was guaranteed to be accurate was the current invoices. Looking back at the pricing logs over the years looked like "Hollywood accounting", as vendors gave special deals to compete with one another, the undocumented specifics of which were lost to time.

As explained to me, the job was to look at the network logs, see what data feeds a user was using, ask him if he really needed them, then cancel the unneeded data feeds to save money.

In a lot of cases, a user would be accessing a high cost data feed but only using a portion of the data, and what they were using was available from another vendor on a different (and cheaper) feed, and we could switch them over.

The theory was that the users would tell us when they weren't using something, and we could cut it.

The theory was, of course, incorrect. Every user was adamant that they needed what they currently had. No changes would be tolerated. Everything was essential, including the user who was using the fourth most expensive feed despite covering everything but the feed's clock with other windows. Why did she need this particular feed? Because she "liked the way the clock looked".

I actually quoted that in my report.

Obviously, the users weren't going to be any help, so we resorted to a more effective method of analysis: brute force.

Now, our network analysis showed which feeds were really being used all the time, and we left those alone. It was the "blue moon" ones - things that were only accessed once a day, or weekly - that were the best candidates for culling. So, we simply unplugged them, and waited to hear screams.

If we unplugged something and a user screeched that his feed was down less than 30 seconds later, that one was obviously in use, and stayed in the "keep, analyse later" pile. If no one complained, we'd wait to see how long it was before there was a complaint, and after a while, downgrade the speed/usage rate (some of these feeds had metered options rather than just unlimited), or cancel it altogether.

We saved enough money to meet the goal, and life went on.

I left the company and went on to greener pastures after Christmas. One day, in mid-July, I got a phone call from my replacement. It turned out that user X had tried to access his "favourite screen", the one he used every damned day, and had discovered that the feed had been cancelled. By me. Five months earlier. And yet it had taken him half a year to notice that his favourite screen had been empty for several months...
