* Posts by billdehaan

164 publicly visible posts • joined 6 Mar 2014


IBM Software tells workers: Get back to the office three days a week


Re: Hilarious

How things have progressed.

You have no idea.

A friend in Toronto was in a team working with their Texas office, and it was decreed that teleconferences were insufficient, in-person meetings were essential. So, the Toronto office packed up their entire team and flew them down to the Texas office to spend 3-4 weeks in person with their colleagues.

Upon arrival, they discovered the Texas team couldn't put them in the same building as the group they were working with; there simply wasn't enough space. But that was okay, there was another building, about a mile away, that they'd rented, and it had high-speed connectivity, so they could just teleconference. Unfortunately, hotel space was tight, so the team had to stay in a hotel about 30 miles away.

So, for about a month, my friend stayed in a hotel, took a 45-minute cab ride into the building where his Toronto team was located, teleconferenced with the Texas team in the building a mile away, then took a 45-minute cab ride back to the hotel. This apparently was a much better solution than staying at home, commuting 15 minutes to the Toronto office, and teleconferencing with the Texas team remotely from 1,500 miles away.

At bonus time, the cupboard was bare, because the company had "unexpectedly" spent so much money on flights and accommodations on that trip that they were in dire straits financially. Did anyone have any cost-cutting ideas, they were asked?

Many of the team subsequently, and "unexpectedly", left for saner pastures.


I worked on a contract where the project manager decided that such productivity could be measured by counting the number of semicolons in the source code, and got someone to write him a script to do so.

The term for this is Goodhart's Law, and I've seen it hundreds of times over the years.

In an example almost identical to yours above, when I worked at one large, unnamed (*cough* mentioned in the article title here *cough*) Fortune 1 company decades ago, the metric was kLoc, or "thousands of lines of code". Yes, they measured software productivity by weight, essentially.

Management had a parser that went through each developer's code, including all included headers and so on, and counted the number of lines. There was a database module that was particularly huge, including hundreds, if not thousands, of 3-5 line modules that handled the numerous permutations of data. Its headers were completely and totally unnecessary to every subsystem but the database module itself. One week, every subsystem, including the graphics, communication, and all others, suddenly included the root header file for the database, because doing so dragged in about 8,000 lines of headers.

Build times went up from 90 minutes to about four hours. Ugh.

When I asked what was going on, I was told "next week is audit week". Sure enough, after the audit was completed, the code was "re-factored", and through brilliant optimization, the developers were able to bring the four-hour build time back down to about 90 minutes. Management was extremely impressed, and I believe one developer even got an award for his brilliant optimization work of, err, removing the pointless header files that they'd only inserted a week earlier to make the kLoc audit look good.

You're too dumb to use click-to-cancel, Big Biz says with straight face


Always check cancellation procedures before signing up

Like most people, I've had horror stories about the difficulty, and sometimes near impossibility, of cancelling a service. Bell Canada stands out as one where their stores, phone support, and web site all pointed to each other as being responsible for cancellations. Despite their contracts clearly stating that the consumer must "contact Bell" to terminate the contract, no one could actually explain whom to contact or how a cancellation could be achieved.

Despite no one at Bell having a clue how to cancel an account, once I did successfully manage to do it, I received a phone call from their retentions department less than 20 minutes later, and three followup calls within a week trying to get me to sign back up.

Of course, that's nothing compared to the guy who spent 20 minutes on the phone with his phone company repeatedly saying "cancel the account, cancel the account, cancel the account" to a service rep who simply refused to cancel it. Once he posted it to the internet and it went viral, he was able to cancel it, but the company had to be publicly bullied into cancelling an account. That's absurd.

Ever since my dealings with Bell, I've made a point of checking out cancellation procedures when I've considered signing up for any recurring service. I do a search for "cancel $SERVICE Canada", and it's surprising how many of those searches link to long lists of horror stories. I'm sure it's saved me money, as I've skipped signing up for a lot of things.

There are definitely reasons to not make it too easy to terminate an account, because it could be done accidentally (any service rep can tell you customer horror stories), but it should be no more difficult to terminate than it was to sign up for in the first place.

Amazon Prime too easy to join, too hard to quit, says FTC lawsuit


Maybe it's different in Canada

I signed up for Prime when I reached the point that I was ordering enough that shipping costs would be more than the Prime membership. I didn't care about Prime Music (which I never ended up using) or Prime Video (which I actually did).

I've seen dark patterns before (shout out to Bell Canada here, where Bell reps have actually been unable to find a way to cancel an account, both in store and on the phone), and I always check how to cancel before I sign up for something. I've avoided signing up for some services because I've read the horror stories online about difficulty in cancelling the service.

Generally speaking, if a search of "cancel $SERVICE Canada" brings up nothing but links to horror stories, I avoid it.

I checked the Amazon sign up and cancellation procedures, and at least in Canada, they're clear and pretty straightforward. The link here (at time of writing) goes to the page called "End your Amazon Prime Membership", which has a one-click "End Membership" button, and another link with a two step explanation of how to terminate automated billing.

Maybe the graphics have changed, but the page doesn't look any different than when I first looked for it.

I have no doubt that it could be different, and more difficult, in other countries. But at least in Canada, "too hard to quit" isn't accurate.

Fed up with slammed servers, IT replaced iTunes backups with a cow of a file

Big Brother

And before iPods, there was Usenet

In the mid 1990s, I worked in the USENET admin team for a fairly large (5,000+ employees) Canadian-based corporation.

Bandwidth was metered in those days, and rates differed by usage and time of day. Much of what our team did was automate tasks to optimize bandwidth usage so that our Toronto to New York backup ran at 3AM, when the bandwidth cost was under 10% of what it would cost to run at 3PM, that sort of thing.

Although the web existed, it was still in its infancy. Most of the traffic was FTP, and there were many Gopher, Archie, and Veronica users within the company. There was also USENET.

Originally, USENET was brought in as a programming resource for the software developers. We brought in the Comp, News, and Sci hierarchies to start with. Then Misc got added, and Rec, because (a) a lot of the users were book nerds, and (b) using Rec groups got them interested in USENET enough to learn it, and then they started using the News and Sci groups for their actual jobs.

But then, there was Alt. The Alt.* hierarchy was (and is) a snake pit that we tried to stay far, far away from. Unfortunately, it was inevitable that some particular newsgroup would be needed from it, and it wasn't easy to bring one in without bringing in the entire hierarchy, although we tried. We wanted to restrict it as much as possible, but were overruled, and so the entire thing came in.

USENET bandwidth usage exploded. It went from being ~3% of company bandwidth to ~70% in about two weeks. What was worse was that it didn't just increase by a factor of 25, it was also being used during peak business hours, when bandwidth rates were highest. Our $800-$1200 a month bandwidth cost, which we'd been trying to get under $500 a month, ballooned to something like $18,000 one month, and was projected to be over $30,000 the next month.

Management freaked and ordered an investigation.

What I found, not surprisingly, was that the bandwidth was almost all USENET, from the Alt.* hierarchy. Specifically, the Alt.sex.* hierarchy. Yup, people were downloading porn videos. Instead of 2kb text articles, lots of 40MB video clips were being downloaded. Repeatedly.

The next step was to identify who. And lo and behold, 90% of the usage was attributable to six users. Five users were responsible for about 20% in total, with the other 70% coming from a single user. A single executive. An executive of the "reports directly to the Board of Directors" variety.


So, without naming names, we sent out an email blast to world+dog in the company, with a USENET traffic graph, showing how much was being transferred from the different groups.

Of the top 20 groups, 18 were alt.sex groups. Fortunately, there was nothing illegal, and we didn't have to deal with child porn or anything like that, thank god.

Unsurprisingly, when the users realized that we could see the login credentials and knew exactly who was transferring what, our bandwidth bill went back down to about $1,100 a month.

Learn the art of malicious compliance: doing exactly what you were asked, even when it's wrong

IT Angle

Working with defence contractors teaches you life skills

I wasn't military, but I worked with defence contractors, so in modern terms, I would be called "military adjacent" or somesuch.

The most basic skill when dealing with the military (any military) is CYA, or "Cover Your A**".

One of the reasons the military has a reputation for staggeringly (over)complete documentation is the culture of CYA that developed, of necessity. In militaries where disobeying orders can get you executed, it's a good idea to have it recorded, repeatedly and in several different documents and locations, that you "were just following orders" when you did what you did (sorting by first name, in Steve's case).

I personally had a team lead who was notorious for telling you to do X (sort by first name), confirming it, double confirming it, triple confirming it, and then, when it hit the fan, denying to upper management that he had ever said it, claiming that I (or another member of the team) had done X on our own initiative. If someone refused to do the stupid thing (because it was stupid), he would tattle to senior management that the person was disobeying orders. If they had proof that he'd ordered them to do X, he'd say that the person had misunderstood his instructions. No matter what, the subordinate was always the one to blame.

As you can imagine, he was not beloved within the team for these reasons (and many others).

So, when he one day decided to order me to do something particularly stupid, I confirmed that he meant it. And doubly confirmed. But I waited until the meeting with the big brass that was scheduled for the next day to triple confirm it. I took the "explain it to me like I'm five years old" approach, and he condescendingly spelled out, step by step, exactly what he wanted me to do. And so I did, exactly in the sequence he'd laid out.

The results were glorious. They not only invalidated a flight test and caused a missed ship date, they put the entire project at risk of cancellation. Senior executives got involved. First he tried the "I never told him to do that" approach, except there were several members of the brass who'd been present to see him do just that. They hadn't understood the implications of the orders, but they remembered damned well that he'd not only ordered me to do it, he'd done so repeatedly.

Likewise, the "well, he misunderstood" argument went nowhere, because I'd repeatedly asked for clarification, he'd provided it, and it matched letter for letter what I'd done, and what had caused the situation we were in.

But the chef's kiss was his statement that "if I was really telling people to do things that stupid, people would be complaining all the time", apparently unaware that at least six team members (although I was not one of them) had made formal complaints both to management, and in two cases to HR, about being backstabbed exactly like this. When they checked, there were something like 38 such complaints over a period of 3 years.

Unsurprisingly, in the next re-org shuffle about six weeks later, he was moved into the newly-created "Special Projects" group where he would be leading the team (which at the moment was just him) in said special projects, which were yet to be defined. Internally, this was later referred to as the "Ice Floe" group, named after the practice of some Inuit tribes to put their sick and elderly members who were a drain on the tribe onto an ice floe so that they'd float away and die.

No more free love: Netflix expands account sharing restrictions


Re: "woke crap" your definition is displaying your bias

HBO's "Velma" has to be the touchstone of this phenomenon.

Netflix Originals were uneven. They weren't guaranteed to be great, but their track record was good enough that people would start watching things like their MCU shows just on the fact that Netflix recommended them. Sure, there were misses (Iron Fist comes to mind), but generally speaking, even the duds were still watchable.

Then they started to change direction. Back in 2019, Netflix promoted the hell out of their Another Life series. It was not only laughably bad (a YouTube video mocking it actually got more views and feedback than Netflix's official marketing of it), it was absurd. If it had been marketed as a satire, and had called the ship the "USS Diversity Hire", it might have been accepted as a comedy, but Netflix was pushing it as their vision of serious science fiction. Then they followed it up with The I-Land, and people I know stopped watching "Netflix Originals" at that point.

And then there was Cuties. The controversy over that was enough to damage the brand significantly. They even had to issue a memo to employees telling them to knock it off with all the heavy handed politics, because it was costing them viewers.

There are still good shows being produced, and breakout hits like Stranger Things, Squid Game, and Wednesday, but those are the exceptions. Netflix's reputation for quality just isn't there any more, and it's been replaced with a reputation of producing propaganda instead of entertainment. Not surprisingly, that hasn't helped their revenue numbers.


Still haven't flipped the switch in Canada

A friend didn't so much share his password with me as enter it on my Xbox when he was visiting one day and wanted the gang to see some movie. He had one of those accounts which allowed four simultaneous logins, and there was no mention of households at the time, so he just set it up at three friends' houses and told us we were welcome to use it.

Now that he's getting emails from Netflix warning him of impending restrictions, he's asking me to test it to see if it still works. It did, as of Feb 10th, although apparently that ends on Feb 21st.

Personally, between Amazon Prime and Youtube, I doubt if I've actually watched more than 10 hours of Netflix in the past year. The only series I can think of that I deliberately used it for was Wednesday. I certainly won't be buying a subscription, and I doubt I'll miss it.

Back in 2015, Netflix was the one and only streaming service, and it had all of the legacy movies and television series. Outside of live concerts and sports, it was the one-stop shopping location for all people's entertainment needs. Today, the streaming landscape is fractured. On the commercial side, there are Apple TV, Hulu, Amazon Prime, Disney+, and HBO Max. Then there is the freeware/ad-supported tier, like Tubi, Plex.tv, Crackle, and Roku. And then of course there are torrents and the pirate streamers.

So, Netflix has more competition than before, its prices are higher than before, and it's taking action to disable one of the strongest reasons for its growth.

I'm sure that some people will sign up for their own accounts, but I have no doubt that this will cost them many subscribers, as well. Whether it will be a net gain or a net loss in terms of subscriber numbers, I don't know. I suspect it will be a loss, but that's just a guess. I'm sure it will save them bandwidth costs, but whether it will restore profitability is anyone's guess.

Arca Noae is modernizing OS/2 Warp for 21st century PCs

Thumb Up

Modernize, shomdernize, as long as it runs

I did OS/2 applications work on contract from about 1990 (with OS/2 v1.1) to 1992 (with OS/2 v1.3 and 2.0). I leveraged those OS/2 skills to get contracts at companies in the back half of the 1990s, although sadly, that often meant decommissioning OS/2 in favour of something else (Linux, Windows NT 3.1 or 3.5, or even... Windows 95).

I still work with companies that have an installed base of OS/2 deployments 25 years later. They are usually turnkey solutions for control systems. eComStation and now Arca Noae have been lifesavers.

Lots of companies have clients with OS/2 software that can't be replaced or upgraded. Often the original vendor no longer exists. If it does exist, it often doesn't have the source for the legacy system, and even if they do, they don't have the skills to maintain it, let alone upgrade it. You want an upgrade to our 25 year old OS/2 product? Here's our Windows 10 version.

Unfortunately, for some clients, that's not an option. They're perfectly happy to keep running the original code, which works as well today as it did in 1997. That's especially true when talking about PCs that weren't networked. Unfortunately, when that 1997 era Pentium Pro dies, finding a replacement that will even load OS/2 is the problem.

Sure, there are virtual boxes, but a lot of OS/2 device drivers are fussy about running in VMs.

That's where Arca Noae has been a godsend. I've had customers with $25M installations with six OS/2 boxes that had a box die, then another, then another, and then another. The cost of upgrading the software is often quoted at $1M or more, when all they need are four replacement PCs that can run OS/2 v4 (or v3, or even v2.1).

No version of OS/2 will install on a modern PC. It doesn't know about USB, or SATA, or pretty much any other innovation that took place after 1996.

But an Arca Noae installation that gets the customer up and running again is worth its weight in gold. Does it handle quad cores properly? We don't care. As long as it runs as fast as that 1998-era Pentium, and it can handle the custom device driver for the weird 128-port output controller, it spares the customer spending $1M in needless upgrades.

I can't see myself ever doing any new development in C/Set2, and I doubt many others do, either. But just keeping existing OS/2 installs alive makes Arca Noae a viable business.

LastPass admits attackers have a copy of customers’ password vaults


Re: The cloud is just someone else's computer

You're right, I was reading an article about 1Password prior to this and conflated the two. Mea Culpa.


Re: Someone Else's Password

Well, I did say that it was a definition like that. And no, that's not my actual password.

Even if I had entered my actual AutoHotkey definition, it wouldn't work. Things like the plus sign can't be entered directly; they have to be escaped as {+}. So even cutting and pasting the string literal from my editor into the password field won't work.


Re: Someone Else's Password

Using a password manager with a hard to guess master password is way superior to using weak passwords

On a Windows PC, I've found the AutoHotkey hotstrings (abbreviations) feature very useful to store strong passwords and assign them to keyboard strings.

So, a definition like
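(The following is a made-up placeholder to show the shape of it, obviously not the real definition:)

```autohotkey
; Hotstring: typing "bwpw" followed by an ending character (space, Enter, etc.)
; replaces it with the master password. Special characters such as the
; plus sign must be escaped in braces, e.g. {+}.
::bwpw::Tr0ub4dor{+}3xample{+}N0t{+}My{+}Real{+}Passw0rd
```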


Allows you to type "bwpw" (BitWarden password), and have it expand to the 64-character password you need to get into the actual password manager.

Unfortunately, I haven't figured out how to manage long passwords like that on mobile devices.


The cloud is just someone else's computer

Unless 1Password is doing things like forcing users to have different passwords (i.e. warning them when they try to save a password for website X that it's already used on website Y), and I doubt such a thing is even possible without accessing the passwords themselves, people are going to continue to use, and re-use, weak passwords.

I've seen some websites which report things like "the password you entered is one of the 10,000 most commonly used passwords; please select another", but then all most people do is tack their pet's name or something in front of it (which is still better than nothing, but hardly ideal).

Personally, I've been using KeePass for literally decades. It may not be the most convenient thing to use, but it succeeds at the most important thing a password manager should be good at: it's secure.

For low-risk passwords (like, er, el Reg here), I use Bitwarden. It's a zero-knowledge system, I'm using a 48 byte master password, and frankly, if someone wants to take the effort to crack Bitwarden and my master password, they deserve to get my Register, Slashdot, and Ars Technica passwords, for all the good it will do them.

Passwords for things like banking, taxes, and online shopping accounts are all in a KeePass hive on a VeraCrypt volume that includes a portable browser instance with no plugins or extensions.

The funny/sad thing is, the effort I take with my low value accounts (hi, el Reg) on Bitwarden is actually more than many of my friends' SOs and family members use for their high value accounts. I had to resuscitate a PC with a nearly dead SATA drive a while back, and it came with a sticky note that had "USERNAME=xxx PASSWORD=yyy" for the Windows login account. I cloned the drive, and when testing the replacement in the PC, I brought up the browser. There were a dozen tabs with various accounts, and sure enough, I was either already logged in to the owner's account, or the "yyy" password would get me in.

I've never been a big fan of the cloud, because of things like this 1Pass breach. But seeing how most people treat security on their own, it's a question of which is worse.

Know the difference between a bin and /bin unless you want a new doorstop


And then there was OS/2

The UK has bins, the US prefers trashcans, and computers like their /bin

Due to the various look-and-feel lawsuits, after Apple put the "trashcan" on the Mac desktop, Microsoft had to use "recycle bin" for theirs, to be different.

So, when IBM released OS/2 v2, they could use neither. So they settled on "shredder". The desktop was complete with a paper shredder WAV file that made the appropriate grinding noises when any object was dropped on it.

Since IBM was new to the desktop, or at least they believed their customers were, they populated it with all sorts of instructional games and the like to start with. They included a chess game (GNU chess, which they neglected to credit), and an entertainment program called Neko that helped users get used to the mouse interface.

However, many of the studious, steadfast IBM employees had little use for humour, and their first reaction was to remove Neko from the desktop. But it was a system object, not a user object, and so it couldn't actually be deleted.

This led to much amusement as the OS/2 bug reports contained several variations of "I dropped the cat into the shredder but it didn't die". This became even more amusing when Very High Level Executives visited the site. These were not just Executives but Senior Executives. They were far too Senior to be familiar with (ugh) PC desktop software.

And one of these Senior Executives either sat on, or his wife sat on, a state ASPCA (American Society for the Prevention of Cruelty to Animals) board of directors. The phrase "dropped the cat in the shredder and it didn't die" was not as amusing to the Senior Executives as it was to the rank and file, oddly enough.


Mounter, automounter, what's the diff

I worked at a large bank where there was a culture clash between the ancient whitebeards who used the grotty old IBM mainframes and the younger generation of Sun worshippers (Hail to the Sun god, he is a fun god, Ra Ra Ra) who basked in the glory that was Unix. Specifically, SunOS.

One of the major benefits, they explained to the whitebeards, was that, unlike with the mainframe, if the network died, individual Unix workstations would keep running. As they were saying this, someone upgraded the Sun server's automounter, meaning that the workstations could not mount anything from it. At which point, every Sun workstation on the floor also stopped working. To save space on the individual workstations, the /bin directory was loaded from the file server. Fortunately, the local workstations also had /usr/local/bin, but in a fit of overly aggressive optimization, that too was loaded from the server.

This led to the bizarre admission that "SparcOS can't run $BUSINESSAPP until it can load /usr/local/bin from the file server", which led to management asking the fair question of why one had to load a "local" object from the remote server.

Mind, it's not always the person deleting the directory who is at fault.

I worked on a shared lab machine in the 1980s where space was tight. I was working on a video project, so I created a directory \DEVTEAMS (this was during the days of DOS 3 and 8.3 file names), and under it, I created VIDPROJ. Another group was doing a transputer project, so I also created TRANSPTR. That way, anyone who backed up the shared machine's \DEVTEAMS directory would back up both projects, \DEVTEAMS\TRANSPTR and \DEVTEAMS\VIDPROJ.

The transputer team had different ideas, though. They used their team's initials. So Charley, Uri, Norman and Kevin put their development work in the directory C:\JUNK.

Well, one day, a team member found that there was only 2kb of disk space left on the 10MB (or maybe it was 20MB) drive. So, the first thing he did was run a backup of \DEVTEAMS, and then he went around seeing what he could get rid of to free up space. He found a directory C:\JUNK, and it was filled with thousands of small (10 to 50 byte) files with names like $d217.d78 and $aix7.7$a. They were obviously cache files of some kind, so he wiped them and freed up about 1.5MB of disk space. The machine now had enough space to run his build.

The next day, the head of the transputer group was livid. His team's entire project had been destroyed on the lab machine. There were months' worth of work gone! It turned out that instead of saving source files, the transputer development tool saved the source as hundreds of individual source objects, and maintained a database of how to link them. In other words, hundreds of little files with autogenerated names, like $d217.d78 and $aix7.7$a.

Yes, on a shared machine, they set up a directory called JUNK, and filled it with binary files with names like $3j5a1.d7x, and were upset people looking to clean out dead files didn't realize that those were critical project files.

Although they weren't so critical that their team ever bothered to back them up once in the span of six months, of course.

Whatever you do, don't show initiative if you value your job


Newbies can screw up but to do real damage you need management buy-in

Several years ago, I worked in a lab that was pretty much a free for all. The software was tested in the local lab and then deployed in machines with dedicated hardware all over the world. When there was a field issue, support people would often bring in the machine (if they thought it was a hardware issue) or just the hard drive (if they thought it was only a software problem) from the field back to the lab to debug it.

Of course, many of these machines would be absolutely riddled with viruses and malware, which would then run rampant over the lab network. Management's response was not to install antivirus on the lab machines, however. That was too expensive, and frankly, the McAfee software that they had standardized on made computers so disgustingly slow that they were unusable. That was perfectly fine for employee machines, of course. No one cares about them. But customers visited the lab, so the machines there had to be presentable.

Management ordered IT to install a process on every lab machine that would check every USB media connection attempt and check it had a specific file in the root directory. That file would have an MD5 checksum proving it had been checked for viruses by the lab anti-virus machine. The idea was that you brought your USB thumb drive to the lab and plugged it into the virus scanner machine, which would write this time-stamped USB credential file. Then you connected the USB to a lab PC, and the antivirus process would check that the credential file was current and correct. If it wasn't, or if it was missing, the process would eject the USB.

Naturally, neither management nor IT told the engineering staff about this. Projects ground to a halt as engineers took test builds down to the lab and spent hours struggling unsuccessfully to install them on the lab PCs, only to have their USB media ejected as soon as it was connected. Dozens of problem tickets were raised against both lab support and IT, but since only the upper castes in those groups were aware of the cause, several IT and lab support people were trying to debug the issue.

Engineers whose lab machines had a CD reader burned their builds to CDs and were able to install them that way. Of course, they couldn't get logs back, but they could at least be partially productive. Others tried dozens of different USB keys, without success. A third group discovered that if you started a batch job on the PC that continually copied a big file to the USB drive letter in a loop before the USB was connected, then once the USB was connected, it would establish a file handle and the disk wouldn't be ejected. That solution was mailed around by engineering leaders to their teams, and it became the de facto resolution.

Delivery dates were missed, customers screamed, and finally, someone in management realized that maybe they should have, oh, announced this change or something. Then they came up with the brilliant idea that instead of just silently ejecting the USB, the software on the PC could put up a message on the screen telling the user why the USB had been rejected, or something.

Ingenious! The software was updated, and an IT person was tasked with upgrading it on all the lab machines. All the machines. Several hundred of them, in multiple labs, on several floors.

The IT person realized that there was a better way. Rather than manually doing 1500+ installs, why not have the users do it? Since they were plugging in USB disks to the lab machines themselves, have the virus scanner install the newer version on their USB, enable the USB's autorun, and then when anyone scanned their USB (which at least some engineers were doing now, since a month after this started, there was a company-wide notice about the change) and plugged it into a new machine, it would install the upgrade! Any machine that was used would be upgraded, and any that weren't upgraded weren't being used in the first place, right? This was a great time saver.

So, the approved solution was to install executable software on the USB and put it in the USB's autorun. And it worked. The first time such a USB was connected to a lab machine, it would update the software, and all was good.

However, on the second connection, the updated software would look at the USB, notice that it had autorun enabled, and was trying to install software on the PC. Oh my god! That's a virus! Quick! Force a disconnect and lock the machine!

People who didn't use the virus scanner and were running the batch file hack to lock the USB didn't have any problem. People who did use the mandated (and mandatory) virus scanner found that after doing so, as soon as they connected to a lab machine, it not only ejected the USB, it immediately locked up. And since only lab personnel had the machine passwords, work stopped until a lab person could be found.

In other words, you were fine as long as you didn't follow company policy. There's a great incentive structure for you.

Even better, when people then plugged their USBs back on their desktop machines, it then spread the "virus" into the corporate network at large.

That time-saving optimization disabled the entire lab for weeks, took IS three days to clean up the corporate network, and IT spent a month scouring all of the remnants of it off of the 1500+ lab machines.

By the way, there never was a case of a virus being transmitted via USB in the lab. Ever. All of the viruses had been transmitted by field personnel bringing hard disks in from the field and installing them in the lab, which completely bypassed the USB checking nonsense. In other words, in addition to all the chaos that it caused, the entire exercise was completely useless at its stated goal.

When management went nuclear on an innocent software engineer


Been there, done that

No problem. The US tech had simply grabbed the disk from the PC running the testing and copied it to the other computers. "Obviously it worked because they are all up and running," he said.

I had a somewhat similar experience, although thankfully lesser in scope.

As in the article, I was working on a testbed simulator for a safety-critical system that was currently running in production. To be clear, this was a simulator to test an upgrade which had not been deployed yet.

We had multiple client X machines that communicated with server Y. Many of the X machines were mobile, and field testing of the upgrade showed that many X machines intermittently either lost communication, or had communication errors (duplicated, out of sequence, or lost messages). Sometimes.

The stationary clients were fine, but no one could determine the cause of the intermittent failures. The systems integrator said it was the hardware. The hardware vendor showed hardware tests proving the communications systems worked when stationary, so it had to be the mobility aspect. The mobility supplier ran pings and traceroutes showing it couldn't be them, so obviously, it had to be our proprietary protocol. Our protocol guys pointed out that we didn't even know whether an X was mobile or stationary, so how could the protocol fail only on the mobile clients?

To investigate the problem, I essentially wrote a customized ping command to test. My ping didn't use ICMP, though, it used our proprietary protocol. I made it a master/slave application, where the master would timestamp a message packet, give it a master sequence number, and send it to the client. The client would in turn timestamp the arrival time, give it a slave sequence number, and send it back. When the master received it, it would log it to disk. The operators could configure the protocol's send rate, packet size, etc.
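A minimal sketch of that master/slave probe follows. The class and field names are mine, and the network transport is replaced by a direct function call (the real tool ran over our proprietary protocol); the point is the dual sequence numbers and dual timestamps that made lost, duplicated, and out-of-order messages visible in the logs:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Probe:
    master_seq: int                    # sequence number stamped by the master
    master_ts: float                   # send timestamp (master clock)
    payload: bytes
    slave_seq: Optional[int] = None    # sequence number stamped by the slave
    slave_ts: Optional[float] = None   # arrival timestamp (slave clock)

class Master:
    def __init__(self) -> None:
        self.seq = 0
        self.log = []                  # (master_seq, slave_seq, apparent latency)

    def send(self, size: int = 64) -> Probe:
        self.seq += 1
        return Probe(self.seq, time.time(), b"\x00" * size)

    def receive(self, p: Probe) -> None:
        # Gaps or reordering in either sequence column expose lost,
        # duplicated, or out-of-sequence messages when the log is analysed.
        self.log.append((p.master_seq, p.slave_seq, p.slave_ts - p.master_ts))

class Slave:
    def __init__(self) -> None:
        self.seq = 0

    def echo(self, p: Probe) -> Probe:
        self.seq += 1
        p.slave_seq = self.seq
        p.slave_ts = time.time()
        return p
```

Comparing the two sequence columns distinguishes losses on the outbound leg from losses on the return leg, which an ordinary ICMP ping can't do.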

The idea was to try it in the field, identify which clients/mobility locations were the problem, and play with the frequency and protocol payload to narrow down what was going on.

Of course, with all the time stamping and optimizations going on, there were several issues (time syncing was particularly troublesome), and although version 1 worked as a proof of concept, they wanted more features, so I got a budget for version 2.

And then a field report came in. My tool had reported a catastrophic loss of communication in an entire range of client machines where there was actually no problem with the actual upgrade application. So obviously my tool was crap, the customer had no faith in us because we had no clue what we were doing, how could we be trusted with safety-critical systems, etc. I had endangered the entire multi-million dollar project, we would all be out on the street, our children would starve, and all because of me. The project manager wanted blood, specifically mine.

Hmm. Send me the logs, I said.

Looking at the logs, I noted that the master and slave packet formats were slightly different, which made no sense, because masters and slaves were paired. I was extending the packet format for version 2, but it hadn't been released yet. This looked almost like a version 1 master trying to communicate with a version 2 slave. So I asked them to confirm the MD5 checksums of the master and all slave versions in the field.

Sure enough, there was a version 1 master, and the offending slaves didn't match version 1. What on earth had happened?

It turned out that the project manager had been in our test lab, and seen a couple of the machines where I'd been testing version 2. He liked what he saw, so he made a copy of the software (the unreleased, work in progress software, which was being debugged) from the lab machine, and when he returned to site, he gave it to the field testers, who then ran with it.

Not surprisingly, when this came to light internally (the customer, at least, was never told), there was a reshuffling, and there was a new project manager.

I promptly added two more features to the software. First, during the initial handshake, masters and slaves verified that they were each running compatible versions, or they stopped talking. Secondly, I added a time/date check, and the software would only work for 5 days after compilation, after which it would post an onscreen message saying it was expired beta software. I disabled that check in the formal release, but it put a stop to project-ending reports from the field.
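Those two guards amount to only a few lines. This is a sketch under my own naming (the version numbers and the 5-day window come from the story; the function names are hypothetical):

```python
from datetime import date

PROTOCOL_VERSION = 2
COMPATIBLE_VERSIONS = {2}   # peers this build agrees to talk to
BETA_LIFETIME_DAYS = 5

def handshake_ok(peer_version: int) -> bool:
    """During the initial handshake, refuse peers whose packet format
    doesn't match ours -- so a v1 master can no longer drive a v2 slave."""
    return peer_version in COMPATIBLE_VERSIONS

def beta_expired(build_date: date, today: date, is_release: bool = False) -> bool:
    """Beta builds stop working 5 days after compilation; the check is
    disabled in the formal release."""
    if is_release:
        return False
    return (today - build_date).days > BETA_LIFETIME_DAYS
```

The expiry check is the cheap insurance: even if someone grabs a work-in-progress build off a lab machine again, it turns itself off before it can generate field reports.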

What do you do when all your source walks out the door?


I didn't need to do it. Management did it for me.

In the early 1980s, I was on contract for a Fortune 500 company. My manager was notoriously, almost pathologically miserly. So while other teams had IBM ATs (80286 machines with 20MB hard disks) or at worst IBM XTs (8088 PCs with 10MB hard disks), our team had IBM PCs (8088 machines with no hard disk, only floppy drives), in order to save money.

Floppy disks cost about $3 each, or $25 for a box of 10. That was if you got the good ones (Dysan or Maxell), and of course, we didn't. Our manager got something like 10 boxes of BASF for $89. As you can imagine, these disks were cheaper for a reason, and data loss was very high.

Being the paranoid sort, I kept redundant backups. My source and tools grew to about 8 diskettes, so I just allocated one box to the project. One at home, and one at work, plus the original source meant I was using 3 boxes.

When my contract ended, in my exit interview, I turned over all 3 boxes. My manager very angrily said "so you're the one who's been hogging all the disks". He summoned another team member, and handed him the two boxes of backups, and ordered him to reformat them and put them in the general diskette pool, "where they should have been in the first place".

I left the company, and life went on.

A few months later, I got a call from them. The manager had gone to another office to install the component that I'd been working on. Rather than "waste" two more floppies and make copies of the executables to install, he took the entire box of disks, including the source and tools, with him. So when he slipped on the ice getting out of the streetcar, dropped the box of disks, and they were promptly run over by a car, that meant everything I'd worked on for them for over a year was lost. Source, executable, tools, documentation, everything.

Did I... by chance... happen to have any copies, even partial ones, at home?

I pointed out that would have been a violation of NDA (it would have), so no, I didn't.

Fortunately, for me, if not the company, I still had the copy of my exit interview, where the manager had put in writing the fact that I'd been "wasting" valuable floppies by "hoarding" floppy disks from the rest of the team. So if they wanted to claim that I'd been the one at fault for the loss, I had proof that I'd provided complete source when I left, and they'd accepted it.

The person on the phone sounded as though I wasn't the first to tell him that, and this wasn't the first time it had happened.

Internet backbone provider Lumen quits Russia


The goal is to return Russia's economy to its 1989 borders.

Given the huge lineups in front of stores and the pictures of empty store shelves, I'd say it's being quite successful at that.

Your app deleted all my files. And my wallpaper too!


I had a customer with almost the exact opposite

We have a couple of graphic designer/artist/musician types as clients. They're brilliant in their fields, but as far as computers go, they require that their computers be, to quote one of them, "blonde proof".

Last year, one of them had their external USB backup disk die on them. So, they got a shiny new 4TB MyBook as a replacement, and they said things were good.

A few months later, things were less good. Their C: drive was starting to die, making hideous belt sander noises, and losing critical files. But when they tried to back up to the USB disk, it was out of space. Panic ensued, as the primary disk wasn't backed up, and the backup disk had no space.

I had them send me screenshots, but the external disk was 4TB, had 3.98TB free, and they seemed to be able to copy some files over, but only about 1% of them.

So, I went over and checked it. My first assumption was that it was formatted as FAT32, and any files over 4GB wouldn't fit. Nope, that wasn't it.

I had her show me exactly what she did, so I could see if it was a PEBCAK error, as it usually was.

She double clicked on the icon of the USB disk, which opened a Windows Explorer window, as expected. She then dragged a folder from her desktop to it. And sure enough, an "insufficient space" error, with a Windows error code number, appeared.

She, of course, was panicking that she was going to lose years of work, and rightly so. I tried various things, but could find no issue. I put the external USB on my laptop, and there was no problem writing to it. The disk wasn't write protected, it didn't need administrator rights, it wasn't FAT32, and in fact it could copy about 300MB of files before it gave the "insufficient space" error. What the hell was it?

So, I researched the Windows error number. Strangely, it was not a file system error, it was a OneDrive error code. WTF? She wasn't even using OneDrive. Or was she?

Sure enough, OneDrive was enabled. She hadn't configured it; she had no idea what OneDrive even was. This was one of those "it came that way when I bought it" things. Either the box store had configured Windows for her, or it was done when she set up Windows the first time. Since she was a "click yes to everything" type user, it would be whatever Microsoft sets as defaults.

In the end, it turned out to be one of the more malicious things Microsoft has done. When OneDrive is enabled in Windows 10, and you copy from one drive to another using Windows Explorer, it backs up the destination disk on OneDrive. It's totally seamless and transparent.

Of course, if there isn't space on OneDrive, the copy is aborted. And that also aborts the local copy to the USB disk, too.

Yes, that's right. Her backup was failing because she was trying to back up about 2TB of data to a 4TB disk, but OneDrive only had 5GB or so, so Windows would only allow her to copy 5GB to the disk.

This idiocy could be bypassed by using the command line, or a third party tool, or another file manager, but this was her workflow.

By logging her out of OneDrive, disabling it, and then removing it so it didn't restart at boot time, she was finally able to use her 4TB disk.

Another "improvement" that makes things worse.

Chromium-adjacent Otter browser targets OS/2


I was an OS/2 developer from 1990 to 1992, and I used it as my home OS from 1990 until some time in 1996, when NT 4 came out. I would dual boot between them and decide over time which was better as my home OS. I was using Solaris at work by then, so I didn't need to worry about compatibility between home and work environments.

I expected it would take three or four months to compare the two. I ended up switching over almost completely within about three weeks.

OS/2 had a lot of good things going for it, but the PM's SIQ (Presentation Manager's single input queue) was its Achilles' heel. Yes, the OS was robust, but if the SIQ was blocked, as it often was, all mouse and keyboard input was ignored. It didn't crash the way Windows 95 so often did, but in practical terms, the difference wasn't that significant.

It was terrific as a server. If you wanted a home file server and didn't touch the GUI, HPFS was much more robust than FAT or, later, FAT32 under Windows. As a workstation, though, it was far too problematic.

Native OS/2 software was very robust; the problem was that native OS/2 software was rare, and for the most part, where it did exist, it was immature and missing features compared to the competing Windows offerings. The result was that people were always trying to run Windows applications under OS/2, which had all sorts of problems. And when things went sideways, the vendor wouldn't help, because they didn't support OS/2; IBM wouldn't help, because it was a Windows application, not their software; and Microsoft wouldn't help, because they weren't supporting OS/2. A lot of stuff worked fine, some worked partially, and some worked not at all. It really was a crapshoot.

By the time OS/2 v4 (Merlin) finally officially addressed the SIQ issue that had been there for years, it was too little, too late. NT4 matched it for stability, and while the GUI was nowhere near as elegant or mature as the PM, it was functional, and there was no SIQ issue. There's a strong argument that SOM and DSOM were better than Windows' COM and DCOM, but so little OS/2 software actually took advantage of it that it really didn't matter.

When my dual boot machine blew out a hard drive six months later and I had to regen the system from backups, I realized I hadn't booted OS/2 in months, so I only installed the NT4 partition. I left the space for OS/2 unused and figured I'd see if I had a need for it. A few months later, I just partitioned the unused space for NT, and never looked back.

I enjoyed using OS/2, but there isn't really much in it that isn't in the Linux, MacOS, or Windows offerings today. The GUI shell is very innovative, but even there, there's not a lot that can't be done with the modern OSes.

Apple is about to start scanning iPhone users' devices for banned content, professor warns


People went to digital photography to get AWAY from this

Back in the 1990s and early 2000s, there was a "think of the children" panic in Canada, and crusaders went on the tear to get the police and government to "do something" to stop it.

In the middle of this climate, I know of three cases where people ended up getting visited by police investigating them for alleged child pornography.

One case was a Japanese anime, as in, a cartoon, with no actual humans being filmed, let alone children.

The other two were the result of photo development. Those old enough to remember actual film cameras know that unless you had a darkroom, chemicals, and skill, you needed to go to a photo developer to convert your raw film into actual snapshots. Camera stores did it, of course, as well as specialty outlets like Fotomat, but one of the most common photo development places was, oddly enough, the pharmacy. And it was pharmacies that called the cops on two people getting their photos developed.

The first case showed the shocking picture of a nude 5 year old boy, his swimsuit around his ankles, on the sidewalk with a scantily clad 3 year old girl next to him. In other words, a 3 year old girl had snuck up on her big brother and pantsed him. Mom happened to be taking pictures of her kids in the pool, and couldn't resist getting a snap of her kids pranking each other.

The second case was similar, with a grown woman in a bathtub with a 2 year old boy, who decided to make an obscene gesture to shock his mommy just as Daddy walked in. In other words, a typical "Jim, get in here and see what your son is doing" family moment.

Fortunately, in both cases, the police officers were parents themselves and not idiots, and when they visited the families and saw that the kids photographed were the children of the photographers, they realized that the photo developers had completely overreacted. But as you can imagine, those families stopped sending their film out to be developed, and went to digital photography.

Now, you don't even have to drop your film off to have busybodies report you to the cops, your camera vendor will do it as soon as you take your picture.

There's no way that this won't be abused, both by companies, and governments.

Apple's Steve Jobs: Visionary, dreamweaver... and the kind of fellow who might tell a porky or two on his job application


Re: Expertise in self promotion

The reason Woz didn't get credit was because he didn't seek it, or claim it.

Steve Jobs is a lot like Stan Lee of Marvel Comics, another icon who is a household name, but whose co-contributors are not well known (to the general public, at least).

People like Jobs and Lee are first and foremost salesmen. Everyone knows them because they're always talking about themselves. When Apple had an announcement to make, it was Jobs who was calling up journalists and going to the trade shows. Woz didn't.

It's not money, but personality, that got Jobs noticed. Love him or (quite commonly) hate him, Jobs was memorable. Memorable people get talked about. People like Woz who are quietly competent, don't.

Bloated middle age beckons: Windows 1.0 turns 35 and is dealing with its mid-life crisis, just about


Re: Breakthrough/turning point

MacOS's marketshare was growing strongly despite the massive "you can't get fired if you buy IBM" syndrome

MacOS was growing in several markets, but Windows was growing faster. The entire market was growing; home computers were becoming less of a curiosity and more acceptable. They still weren't necessities, by any means, but they were no longer oddities.

A PC in 1987 was still the price of a used car, and Macs were considerably more expensive. And that's what did MacOS in.

A person with a $3,000 PC might invest $99 to buy Windows to try it out, but he wasn't going to spend $5,000 to try out a Mac.

Even then, that $99 was a hurdle, and Microsoft recognized it. That's why they included a copy of Windows with pretty much everything they sold. You bought a Microsoft Mouse? Here's a runtime copy of Windows with some graphics programs. You bought a Microsoft game? Here's the DOS and the Windows versions, together. Oh, you don't have Windows? That's okay, it comes with a Windows runtime.

Sure, those runtime Windows ran like a dog, and needed more memory than the user had, but he could afford to buy more memory a lot more than he could afford a Mac.

There was a push at the time to get MacOS on PCs, one that Jobs fought against. He railed against "crap PC hardware", and he was right. PC hardware quality was all over the map, as was pricing. Macs were stable, because they were single-source. But that came at a cost that most people weren't willing to pay.

And by the same token, if you spend the same amount of money a Mac cost and put it into a high end PC, then the gap between MacOS and Windows narrowed considerably.

Mac still won a lot of markets, notably education and graphics, but the expected educational followup never happened. There was a lot of talk that parents of kids with Macs at school would buy a Mac for home, and some did. But most walked into the computer section of the department store, saw the Mac and the PC next to each other, got sticker shock at the cost of the Mac, and went home with a PC.

MacOS made some inroads with the "Hackintoshes", as they were known, but one of Jobs' first actions when returning to Apple was to kill them, so that was that.

IT Marie Kondo asks: Does this noisy PC spark joy? Alas, no. So under the desk it goes


Re: Insert mandatory story about Miniscribe disks

Yeah, I believe it.

I had a friend who custom built systems for offices back then. When Miniscribe came out with their next generation of hard drives, they released a lot of promotional material to people like my friend beforehand, to try to get them to start buying Miniscribe again.

He showed me some of it. One of the blurbs stated that Miniscribe guaranteed that the sound level of their new drives was 30db or lower. I don't remember the actual number, it may have been lower than 30db, but it was still high. But the key point was that they were actually promising a maximum noise level from their drives. They were the only hard drive manufacturer to specify that, and they'd never done it before. That alone was enough to tell me that the noise had been hurting their sales, no matter what their salesmen were claiming at the time.


Insert mandatory story about Miniscribe disks

As anyone who worked back in the 1980s can attest, the hard drives produced by Miniscribe were monsters. Like Godzilla or Mothra, they were big, heavy, and especially noisy.

In 1985 or so, I had a service call to a small law office. It was a legal factory, with one lawyer and seven or eight public notaries. They each had a PC and a primitive network (parallel port based, as I recall). The centrepiece of the office was the extremely large, extremely impressive, and extremely loud legal line printer. It was essentially a networked (by the standards of the day) typewriter that printed out 14" legal sized documents all day, every day. It sounded like a light submachine gun when it was running, which was most of the time.

As a result of that, it was enclosed in a sound box, which was essentially a plexiglass cage with lots of soundproof padding inside to muffle the noise. It didn't stop the noise, of course, but it brought the office down from about 60db to about 35db. Still not great, but better.

The "LAN" was a point to point network. No one could print directly, but each notary could copy the file to the file server PC next to the printer, which had a job that simply printed everything in a particular directory every minute. It was a kludge, but it worked.

One day, the file server died. So they got another one. The new one had a Miniscribe hard drive. Running a Miniscribe hard drive is like having a bag of metal ball bearings dropped onto a skillet, 24/7. The noise was not only loud, it was excruciating.

The next time I visited that office, the printer was in the middle of the room, completely exposed. The sound baffle was now over the file server, because the hard drive was louder than the printer.

'One rule for me, another for them' is all well and good until it sinks the entire company's ability to receive emails


my kids (teenagers) initially didn't believe that when I was their age, the internet basically didn't exist

Show them the movie Soylent Green. Made in 1973, it showed the far future world of ... 2022.

In the film's climax, the hero is being chased, and desperately has to get the information he learned out to the world. It's a major plot point that during the chase, he can't find a telephone booth to use :-)


All my apps are essential

I was once tasked with cutting costs in an office that had about a hundred users, which paid vendors for custom data feeds. Some feeds were cheap, some were expensive, and some were astronomically expensive.

The system had built up over several years, both the original users and the support staff had moved on over the years, and the only documentation that was guaranteed to be accurate were the current invoices. Looking back at the pricing logs over the years looked like "Hollywood accounting", as vendors gave special deals to compete with one other, the undocumented specifics of which were lost to time.

As explained to me, the job was to look at the network logs, see what data feeds a user was using, ask him if he really needed them, then cancel the unneeded data feeds to save money.

In a lot of cases, a user would be accessing a high cost data feed but only using a portion of the data, and what they were using was available from another vendor on a different (and cheaper) feed, and we could switch them over.

The theory was that the users would tell us when they weren't using something, and we could cut it.

The theory was, of course, incorrect. Every user was adamant that they needed what they currently had. No changes would be tolerated. Everything was essential, including the user who was using the fourth most expensive feed despite covering everything but the feed's clock with other windows. Why did she need this particular feed? Because she "liked the way the clock looked".

I actually quoted that in my report.

Obviously, the users weren't going to be any help, so we resorted to a more effective method of analysis: brute force.

Now, our network analysis showed which feeds were really being used all the time, and we left those alone. It was the "blue moon" ones - things that were only accessed once a day, or weekly - that were the best candidates for culling. So, we simply unplugged them, and waited to hear screams.

If we unplugged something and a user screeched that his feed was down less than 30 seconds later, that one was obviously in use, and stayed in the "keep, analyse later" pile. If no one complained, we'd wait to see how long it was before there was a complaint, and after a while, downgrade the speed/usage rate (some of these feeds had metered options rather than just unlimited), or cancel it altogether.

We saved enough money to meet the goal, and life went on.

I left the company and went on to greener pastures after Christmas. One day, in mid-July, I got a phone call from my replacement. It turned out that user X had tried to access his "favourite screen", the one he used every damned day, and it turned out that the feed had been cancelled. By me. Five months earlier. And yet it had taken him half a year to notice that his favourite screen had been empty for several months...


HMD Global revamps infamous commuter-botherer, the Nokia 5310 XpressMusic


Shut up and take my money

If you're tempted, the handset will be available in March. Bus tickets and c are sold separately.

Could someone explain to a poor Canadian what the "c" in "and c" refers to? Probably a party drug reference, I'd guess, but I don't know the lingo.

Oddly enough, I still have my Nokia 5130, the thicker, cheaper version of the 5310. It was my daily driver from 2009-2010, and it's been a solid backup performer/emergency car phone since. The battery just died in January via overcharge, and I got a replacement Feb 2nd. Instructions said to charge it up and let it run down full three times. I've done so, having charged it on Feb 2nd, Feb 17th, March 2nd, and now... today.

Yes, two weeks plus a day or three on a charge. Ah, the good old days.

The only real complaint was that the PC/phone software is, by modern standards, pretty horrific. And as I recall, it could only take 1GB or 2GB microSD cards; 4GB or larger weren't recognized. So a 32GB capacity would be welcomed.

As I recall, it happily supported Bluetooth, but no wifi. There wasn't much in the S40 that needed the net back then, to be honest. This was not a phone to email or browse on. And there was no GPS for similar reasons.

The camera was tolerable for car accident photos, but little else (there was a green tinge on everything). But it ran for weeks on a charge, got reception when no other mobile would, and matched MP3 players that cost more than the phone did. And, being a Nokia, it was nearly indestructible: a friend's daughter put hers through the wash cycle and was annoyed that the 5 key was "a bit fiddly" after the wash, but that was about it.

I can definitely see a market for this.

Hello, support? What do I click if I want some cash?


Re: This wouldn't have happened

No, but it would have opened up an entirely different set of problems.

We're currently dealing with some legacy OS/2 applications in the field. The major problem is finding replacement hardware that OS/2 runs on. We've been trying to migrate to ArcaOS, but finding programmers with the skills to maintain and develop OS/2 3.x drivers has proven just as challenging for us as it was for our customers to find 8GB IDE hard drives and Pentium processors.

And given that many of the ATMs were running OS/2 1.2 and later 1.3, and never updated to OS/2 v2.0, let alone Warp or Merlin, that problem would only grow.

I worked at IBM from 1990-1992, and I ran it as my home system until 1996, when NT4 came along and beat the stuffing out of it. I liked it for what it was, but I don't have any illusions about it being a cure-all for modern problems.

Cache me if you can: HDD PC sales collapse in Europe as shoppers say yes siree to SSD


Re: Define primary

A local chain (Canada Computers) sells the low end 120GB SSD for $40, a 240GB for $60, and 500GB SSDs for $90-$120 depending on brand; there's a 1TB for $340, and $900-$1,300 will buy you a 4TB, depending on the performance you want.

SATA drives don't have anything in the $40 range, but $60 gets you 1TB, $70 gets you 2TB, $110 buys you 4TB, and I see 16TB for $700.

So, SATA storage gives four to eight times (on average) the capacity for the same price as SSD, and of course, the maximum SSD capacity is much lower than that of hard disks.
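Working the quoted prices out to cost per gigabyte makes the gap explicit (using midpoints where a price range was given):

```python
# CAD prices quoted above, keyed by capacity in GB
ssd_prices = {120: 40, 240: 60, 500: 105, 1000: 340, 4000: 1100}
hdd_prices = {1000: 60, 2000: 70, 4000: 110, 16000: 700}

def dollars_per_gb(prices: dict) -> dict:
    """Convert a {capacity_gb: price} table to {capacity_gb: $/GB}."""
    return {gb: price / gb for gb, price in prices.items()}

ssd_cost = dollars_per_gb(ssd_prices)   # 1TB SSD works out to $0.34/GB
hdd_cost = dollars_per_gb(hdd_prices)   # 1TB HDD works out to $0.06/GB
```

At the 1TB point the SSD costs roughly 5.7 times as much per gigabyte, squarely inside the four-to-eight-times range; at 4TB the multiple is closer to ten.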

I definitely want my bootable OS drive to be a fast SSD, and my 128GB SSD does the job nicely. But I've got 8TB of data (and another 12TB of backup media) for things that don't require that performance. That Minute Waltz MP3 doesn't need to be on an SSD, for example; it's gonna take a minute no matter what.

I don't see disks disappearing any time soon. Also, as anyone who's debugged one knows, data from a dying drive can often be recovered, even if only partially and at great cost. When an SSD dies, it's gone, pining for the fjords, and that's all she wrote. One bad experience with an SSD failure has already spooked at least three people I know of and made them shy of the things.


Re: And..

As the old line in the Unix fortunes file said, SCSI is not magic. There are fundamental technical reasons why it is necessary to use a black dagger and an onyx throne when you sacrifice a young goat at midnight to get the chain working.

I remember the big lie: "with SCSI, there's none of that interrupt or memory addressing nonsense; all you have to do is set the LUNs". Of course, the reality was that terminating the SCSI chain was just as much magic as on the other buses of the time. If anything, it could be worse, because it was so simple: you had a terminator at both ends, and as long as no LUN in the chain was duplicated, you were supposed to be okay.

But when it didn't work, what was there to debug? We all saw things like a 0-1-4 chain that simply wouldn't work, but a 0-1-5 would, despite there being no difference logically. Then a new device was added, and 0-1-5-2 failed, but set the original device back to the 4, and 0-1-4-2 worked just fine.

Now, when it worked, SCSI was a charmer. I ran OS/2 systems with SCSI that, once configured and left alone, ran rings around friends' systems with IDE. If they tried to burn a CD, their machine went to 98% CPU, and if an email popup appeared during the burn, the CD writer could time out and they lost the disk. So burning a CD meant turning off the modem, shutting down all programs other than the writer software, etc. before trying it. Meanwhile, I could burn a disk with my SCSI writer while copying a huge file between two SCSI disks and formatting a (non-SCSI) floppy all at the same time, my CPU never went above 35%, and I never lost a disk.

Of course, once EIDE showed up, SCSI disks became too expensive, at least for consumers. It was one thing when a 4GB SCSI disk was the same as an 8GB EIDE, but then the EIDE prices dropped so that same 4GB SCSI cost what a 10GB EIDE disk did. Then a 12GB, then a 20GB, then a 40GB, etc. Eventually, SCSI disk prices were ten times or more what EIDE cost. Yes, the EIDE were slower, and used more CPU. For corporate data, SCSI was still the way to go. But for consumers who could buy two 20GB EIDE disks (one as a backup) for less than a 4GB SCSI, it became difficult to justify.

And then SATA showed up and kicked over the apple cart yet again.

The BlackBerry in your junk drawer is now a collectors' item: TCL says no more new keyboard-clad phones


Re: That's a shame

There's already quite a wide selection of these already.

The average price is about C$40; they've been around for years.

Thunderbird is go: Mozilla's email client lands in a new nest


Re: I've stopped using it

I've been de-Googling myself over the past year or so, and I replaced my Gmail with a more privacy-focused provider (in my case, Mailbox.org, though there are numerous others). However, even when I was using Gmail and Google calendar, I ran Thunderbird and Lightning to maintain local copies of my mail and calendar data, dating back to about 2004. Prior to that, I was using The Bat!, which I'd recommend, except that it was locked out by some local ISPs, which made it no longer viable, sadly.

For home users, TB remains a great archival tool, assuming you have POP3 access and not just IMAP. Whether or not you want to use TB for your mail is another story. For non-technical users, web mail is probably a better option nowadays, simply because someone else is doing all of the configuration for them.

TB still has some annoyances, including compose-window fonts that look nothing like what's actually sent, and a UI that's decidedly dated. But the archival abilities, along with the search capabilities, still make it a tremendously useful tool.

Free Software Foundation suggests Microsoft 'upcycles' Windows 7... as open source


OS/2 all over again

When IBM tossed OS/2 onto the scrap heap of history, there was considerable hue and cry from the OS/2 user base, asking (or in the case of Team OS/2, demanding), that IBM release OS/2 as an open source project.

That was 20 years ago, and it still hasn't happened. It didn't happen then, and it won't happen now.

All other considerations notwithstanding, there is one major reason IBM didn't open source OS/2, and why Microsoft won't open source Windows 7.

Never mind the proprietary intellectual property that's embedded in all that code, which would have to be audited before release. Ignore the fact that they'd be giving away a code base that cost hundreds of millions of dollars to develop, over the past forty years. That doesn't even matter.

What does matter is that Microsoft doesn't own Windows, any more than IBM owned OS/2. Oh, it owns a huge percentage of it, to be sure. It's developed it over the past four decades, and the overwhelming majority is no doubt Microsoft spawned, for good or ill. But there are components in there, lots of components, that were purchased with conditions from other vendors, and/or licensed. Microsoft would have to get free and clear title to the dozens (hundreds? thousands?) of licensed pieces of code they've acquired and incorporated into Windows over the years before they could release it to the public.

That would be a major cost, and that's not even counting the potential legal exposure if they got sued for releasing something that they didn't have full legal claim to release. Parts may wind up on github, but lots of things are tied up in cross licensing agreements that will never be released to the light of day.

LastPass stores passwords so securely, not even its users can access them


Re: KeePass


For one thing, you can use Keepass on cell phone/tablet/other computers without lugging the portable drive around.

Other benefits include things like auto timeout. If you leave your password file open and walk away from your PC, someone could copy your passwords. With Keepass, 30 seconds (configurable) after it gets the last keystroke/mouse action, it automatically locks itself, and you have to re-enter the password.

Other features include things like clearing the copy/paste buffer after 30 seconds (again, configurable), so that if, for example, a malicious javascript looks in your paste buffer, it won't see your last password.


Re: KeePass

I'm not sure why you're having a problem; Keepass has had CSV import for decades. I remember bringing in a CSV password file from the Blackberry password manager of a customer I worked for back around 2005, without problem.

Of course, there's no CSV standard for password files (that I know of), but Keepass lets you pick the order. In the worst case, you might have to write a script and/or manually edit the field order so that LastPass and Keepass understand each other, but it shouldn't be hard.
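If you do end up writing that script, it's only a few lines of Python; the column names here are hypothetical, since every password manager's export names them differently, so adjust them to match your actual files:

```python
import csv

# Hypothetical target column order for a Keepass generic CSV import profile.
# Rename these to whatever your LastPass export and Keepass profile actually use.
FIELD_ORDER = ["name", "username", "password", "url", "extra"]

def reorder(infile, outfile):
    """Rewrite a CSV so its columns match FIELD_ORDER, dropping any extras."""
    with open(infile, newline="") as src, open(outfile, "w", newline="") as dst:
        reader = csv.DictReader(src)  # uses the header row for field names
        writer = csv.DictWriter(dst, fieldnames=FIELD_ORDER, extrasaction="ignore")
        writer.writeheader()
        for row in reader:
            writer.writerow(row)
```

The `DictReader`/`DictWriter` pairing means the script doesn't care what order the source columns arrive in, only that the header names match.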

A quick search finds that there's a tutorial by a user who's done a LastPass to Keepass migration.


Re: KeePass

I've been using Keepass since the early 2000s, if not the late 1990s.

It may be a local DB, but one of the benefits of it being so established, and open source, is that there's always been a port or a version (a 1.x version, anyway) available on every platform I've ever needed, whether it's Windows, Linux, Windows Phone, IOS, or Android.

The other nice feature is that you can have as many DBs as you like. My general password DB, for things like El Reg and various forums, is on my home PC, my phone, and a keychain thumbdrive. My financial passwords DB is on a VeraCrypt volume on my home PC. Even if someone stole my phone, and somehow got my phone password and my Keepass password both, my banking info isn't exposed, because it's not there to take.

I've also got a BitWarden account where I copy some of the less important passwords (again, like El Reg) for forums and the like, for when I'm in places where USB can't be connected, and I'd rather not look at my phone to manually type in a 32 byte password of random characters. But it's just a convenience, not a necessity, and they aren't passwords I'd worry about if they were compromised. But my Amazon or eBay passwords? Forget it. Even with Authy authentication, I'm not putting those passwords in a cloud-based manager.

That code that could never run? Well, guess what. Now Windows thinks it's Batman


Re: Assume the worst

Less good work putting your personal landline number in the error message!

That was absolutely an error. When my group did a similar thing back in 1989, the burned-in-rom telephone number was that of our much-disliked group leader. He was the one who said it was ready for production and couldn't fail, let him deal with the fallout.

Remember the 1980s? Oversized shoulder pads, Metal Mickey and... sticky keyboards?


IBM AT Keyboard for the win

I still have a couple of my IBM AT era keyboards from 1986. I actually used to have a 1981 IBM PC Model 5150, but the 5 pin DIN to 6 pin mini-DIN converter didn't work with the 6 pin mini-DIN to USB converter very well, so I donated those keyboards.

The IBM AT keyboards from before 2000 were tanks. Some of them weighed as much as a modern laptop. I have successfully run one through a dishwasher to clean it (placing the key caps in the closed cutlery container), and in the years before I had a dishwasher, the bathtub did the job. Remove the key caps, fill the tub to about six inches of water, and insert keyboard. For stubborn stains, use the shower head to blast them off. Dishwashing soap with warm water and a soft sponge (no abrasive copper or steel pads, please) can restore even the most aged and grotty looking keyboard to like-new condition.

Quick!! The! top! five! things! you! want! to! see! from! Yahoo! – what! are! they!?


Free, and worth what you pay for

I set up a Yahoo account, back around 2004, and in many ways, it was better than Google's and Microsoft's offerings at the time.

The email was okay (GMail didn't exist), the calendar and contact manager were pretty good, and they also incorporated a number of useful features, like an RSS reader, bookmark manager, yahoo group interface, and file briefcase into your account.

Unfortunately, they ignored it, and it was surpassed by competitors. Still, the functionality itself held up. The interface was dated, and missing features found in competitors' offerings, but it remained usable.

But then the security breaches hit. And hit. And hit. And Yahoo either ignored them, hid them, or outright lied about them.

By that point, I'd pretty much given up on it, and was already using other systems. Some mailing lists still used my Yahoo account, so I checked in every once in a while to keep up. That is, until one day, I couldn't.

For some reason, my password wasn't accepted. I'd logged in the previous day at home in a portable browser instance, but I couldn't log in anywhere else. There was no reason given, and I was told use the password reset facility to get a new password.

The password reset facility said it couldn't proceed for "an invalid reason", and that I should go to Yahoo help. Yahoo help told me to call the Yahoo hotline. The Yahoo hotline told me to try the password reset facility. It was a perfect loop.

Now, when I'd set up the account, I'd put in secondary email addresses and phone number information, in case of an event like this. It didn't matter; Yahoo doesn't even look at that. I used the browser that was still logged in to send emails to support from within Yahoo, and they sent me to the support forums. I went to the forums, and the Yahoo reps there told me to just use the password reset facility, which is where the problem had started.

Fortunately, since I was logged in in that one instance, I was able to scrub the account clean. I deleted everything. Every email, contact, calendar entry, briefcase file, RSS feed, the works.

Three months later, I got an email in one of my secondary accounts from Yahoo support, who had just discovered a server issue that had affected "a small number of users" and had locked them out of their accounts. This had apparently happened 90 days ago, and they had fixed it, so I could start using the account again, no problem.

I checked, and yes, I could reset my password. I still have it, but it's been idle for a decade. When I was locked out of my account, I was completely hung out to dry by Yahoo. Despite setting up secondary emails and phone numbers, they were ignored. Yahoo reps were unreachable. Forum posts and help requests were utterly useless. If not for the fact that a tech stumbled across the problem on their side by accident, my account would still be locked out. So, why on earth would I want to rely on a service like that?

I now use a paid service (mailbox.org) that charges me 1 euro a month, and actually has a financial interest in not locking me out of my service, because if they do, they don't get paid. Sometimes free really is worth what you pay for it.

We asked for your Fitbit horror stories and, oh wow, did you deliver: Readers sync their teeth into 'junk' gizmos


Re: Pebble

Now, if only someone would make another similar smart watch

No one has yet reached the Holy Grail of Pebbleness, but the nearest one, in terms of battery life, always-on screen, notifications, and music control (though with no API for apps), is the Amazfit Bip. The other one that's getting attention now is the PineTime, but it's still at the design stage, which is where most Pebble wannabes go to die, sadly.


Re: Pebble

Yep they bought Pebble and then cut the loyal users adrift...

This is a fairly common misconception. Fitbit didn't buy Pebble and cut users loose; Pebble went out of business, and Fitbit bought some of Pebble's assets (mostly software IP), and hired a bunch of (now ex-)Pebble software people.

It wasn't a merger, or a takeover, it was a garage sale. Many people think that Pebble was a healthy company, and Fitbit killed it. The reality was that Pebble crashed and burned, sadly. What we got from Fitbit was about a year and a half of hosting service, which is about a year and a half more than the nothing we were owed.

I'm not impressed with what Fitbit has to offer, but I can't fault them for their handling of the Pebble assets.

Can you download it to me – in an envelope with a stamp?


Persuading software

Does anyone here remember the tedious regularity of having to persuade your terminal software to pick up an interrupted upload/download from where it left off rather than starting it all over again from the beginning?

You mean replacing XModem and YModem with TeLink, and later ZModem?

Yes, yes, we do.

As for downloading via snailmail, remember the old maxim to "never underestimate the bandwidth of a station wagon filled with magtapes barrelling down the freeway".

MAMR Mia! Western Digital's 18TB and 20TB microwave-energy hard drives out soon


Re: I bought a drive on the weekend

When floppy storage was $1/MB


Back in 1984, when PCs were big, and the AT hadn't even been released yet, it was a big deal that DOS 2.x allowed you to reformat those DSDD 320KB floppies as 360KB. That allowed a third of a megabyte on a single disk. That 12% increase doesn't sound like much, but compared to other media, like 8" floppies at 88KB, and "high density" single sided media of 180KB, 360KB was huge.

Slow, but huge.

The going rate was something like $12 a disk, so a MB was about $35. If you bought a box of 10 Dysans, I remember they were "only" $99, so 20% cheaper than buying individual floppies.

So, 20MB of floppies would be 60 disks, or about $720 ($600 if buying in bulk). But given the horrible 80ms speeds, as opposed to 20ms for the hard drive, not to mention not having to flip through dozens of floppies, split files, etc., the hard drive was well worth the 35% price increase over the floppies.

When the AT came out, and 1.2MB floppies appeared, they cost $99 for a box of ten, and the 360KB prices dropped significantly. But as anyone who ever tried to back up a 20MB disk onto them knows (hello, FastBack), they were still a pain to use.

I bought a drive on the weekend

Over the weekend, I did my monthly backup. This is the "take the 3TB disk out of the fireproof safe, do the backup, put it back in the safe for another month" backup, not the daily backup, or the offsite one.

It failed for the first time, ever. Apparently, my 4TB disk in my PC now has 3.09TB of data, so it could not be backed up onto the 3TB backup disk. So, I went to the shop, bought a 4TB WD Blue for C$99 (about 68 Euros), and did a backup. So, I've now got a spare 3TB lying around.

It occurred to me to do some math. That 4TB disk is 200,000 times the storage capacity of my first 20MB disk in 1985. And at $99, not even counting inflation, it was one tenth the cost. Going by price, storage capacity has increased two million fold over the past three decades. I picked up an 8TB three years ago on sale for $160 or so, and the 10TB, 12TB, and even 14TB have been on sale for a while now.
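That back-of-the-envelope math checks out; here's the sketch, assuming roughly C$1,000 for the 1985 drive (the post says the new one cost about a tenth as much), and ignoring inflation:

```python
# Capacity growth: a 4TB drive versus a 20MB drive, both in megabytes.
old_capacity_mb = 20
new_capacity_mb = 4_000_000  # 4TB

capacity_growth = new_capacity_mb / old_capacity_mb
print(capacity_growth)  # 200000.0 -- 200,000 times the storage

# Price per capacity improved even more: the new drive also costs
# about a tenth of the old one (C$99 versus roughly C$1,000).
price_ratio = 1000 / 99
per_mb_improvement = capacity_growth * price_ratio
print(round(per_mb_improvement))  # about 2 million -- the "two million fold"
```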

So, as glad as I am to hear it, increases to 18TB or 20TB don't really shock me nowadays. When they stop quoting TB and start quoting PB, then we'll have reached the next level.

Twice in one month: Microsoft updates new-style Terminal preview


Re: Bless 'em

I went to a Microsoft DevDays conference back in the early 2000s when .Net was just starting up. Our company wanted both insider and outsider views, so they sent two of us. I was the Unix guy, the other guy on the team was the Windows guy.

I would say at least half of what was presented was GUI wrappers for standard Unix functionality that had existed for decades. After one presentation, where my Windows compadre had written about three pages of notes on Microsoft's exciting new interoperability product, he noticed that my Unix-centric notes were somewhat terse. I'd written "Microsoft just invented UUCP". That was pretty much the tone of the conference.

One of the biggest things that MS was pushing was Intellisense, and their adaptive spell checking technology that was going to really make MS Word super powerful. The example they gave was how Word automatically changed teh to the without any user interaction. My Windows colleague said, "I doubt Unix has anything like that", at which point I reminded him that whenever he asked why I continued to use my "ancient" emacs editor, one of the reasons was "abbrev mode"...

Don't get me wrong, I think it's great MS includes these improvements. They're welcome. But I just don't see them as being all that brilliant.

I don't see anything in this new terminal that's not already in the freeware Cmder, which itself takes its console from the ConEmu project, so I'm not getting excited about it. On the other hand, making it standard will mean it's available on all machines, and I won't have to carry a portable copy of Cmder around everywhere I go, so that's welcome.

Zapped from the Play store: Another developer gets no sense from Google, appeals to the public


Re: The Register has asked Google to comment...

I had a friend working on a Windows Phone app. He even got it in their store, except you couldn't find it.

It was a local app for Toronto-based consumers looking for a specific type of arts and crafts, but searching for "Toronto", "arts and crafts", or anything else came back empty. Only if you explicitly entered his app name would it come up.

So, he mentioned this in the feedback form. The result was silence.

A month later, he got a boilerplate "Hi, we've noticed your app in the app store hasn't had any hits. Have you considered that you need to advertise more? Is there anything we can do to help?". And, naturally, he copied back that the problem was that his (free) app couldn't be found unless people already knew it existed. Maybe if their search engine actually looked at, you know, the keywords he'd entered when he submitted the app, people might find it?

People could actually find his app in the Windows Phone Store, but not using the WPS search. If you used Bing or Google outside of the app store, it would point you to the app inside.

"We're sorry", MS said, "we cannot change the search algorithm to give preferential ranking to specific applications".

Of course, he wasn't asking for preferential ranking, just for the search engine to actually work. So, he said thank you, but he wasn't going to bother updating the app, it wasn't worth it.

A month later, MS sent him the "Hi, we've noticed your app in the app store hasn't had any hits recently, is there anything we can do?" email again.

And then they wonder why people bail on them.

Canadian ISP Telus launches novel solution to deal with excess email: Crash your servers and wipe it all


Re: It's somebody else's computer

I have always pulled my email down by POP3 and saved it in a local archive. Prior to Mozilla releasing Thunderbird, I used a couple of other options, but since then, it's been Thunderbird all the way.

I pay a third party (mailbox.org) 1 Euro a month for a secure email, rather than use the free Hotmail/Gmail/Outlook, because (a) I don't trust Big Email, and (b) being a paying customer, I have a right to complain. You're either the customer or the product, and I'd rather be a customer.

But even then, Mailbox is just a server with a POP3 and web interface. Although they offer to archive the mail, I don't rely on them to; I do that myself.

Unfortunately, that's not for everyone. Too many mundane/nontechnical people just assume email storage is some magical thing that's handled by "the internet". I've had clients with defective hard drives say it was fine to wipe the hard drive and re-image it, confirm it five separate times, say that there's nothing of value and they can redownload anything they need, and then totally freak out when you tell them that wiping their hard drive means losing all the emails they downloaded.


Was the placement accidental

I'm sure it was accidental. Adverts are paid for and planned long in advance. And the news guys don't know what commercials are playing in what order, so they can't really time the stories to match the ads.

I worked for a newspaper for a bit, and while that editorial/advertising firewall was very real and very necessary, it also produced a number of embarrassments that I can remember. The most vivid was a Gazprom ad in a German magazine (possibly Der Spiegel). On the left, a full page advertisement for a Russian gas company, proudly talking about the rich shared history of Germany and Russia in using gas together. On the right, page one of a historical retrospective of Auschwitz, and the Molotov-Ribbentrop pact. Awkward...