* Posts by billdehaan

211 publicly visible posts • joined 6 Mar 2014


Dilettante dev wrote rubbish, left no logs, and had no idea why his app wasn't working

billdehaan
WTF?

Least qualified I've cleaned up after? Too many candidates to choose from

I've done contract work since the 1980s. We didn't have the terms then, but I was what today would be called a "duct tape programmer", and most of my work was fixing/recovering from/undoing the work of what we now call "architecture astronauts".

There was the $BIGCORP who bought a $SMALLCORP and inhaled their codebase. The $SMALLCORP had a self-taught BASIC programmer, and a $BIGCORP Pascal coder was ordered to recode the BASIC into Pascal. Unfortunately, it looked like the $BIGCORP programmer understood neither BASIC nor Pascal.

The original BASIC coder had tried, I'll give him that. GW-BASIC ("gee whiz") didn't have data structures or boolean variables, so the BASIC coder had added int FALSE=0 and int TRUE=1 at the head of his 8,000 line program, and there were lots of int ValidInput = FALSE; and if (ValidInput = TRUE) then constructs. It was actually quite readable, given the limits of the language.

The Pascal code, however, was not. The coder had basically just copied the BASIC over into Pascal, compiled it, watched it fail, and cut out whatever didn't compile. Because Pascal does have boolean variables, int FALSE=0 failed to compile, so the coder "fixed" it to be int NO=0 and int YES=1. And so all of the boolean logic was done in integers, with if ( booleancondition = YES ) code to get around the pesky problem of not being able to use TRUE or FALSE "properly".

The coder also "improved" the BASIC code by using negative logic. This was "hurts my brain to read" negative logic. Where normal humans would say if ( x or y ) then z();, he would write if ( not x and not y ) then do nothing; else z();. He also didn't realize Pascal can do character and string operations, so he did everything in ASCII numerics. With negative logic.

Here's a simple function. Forgive my Pascal syntax, this is from a 40+ year old memory. Also, apologies for the formatting being a mess, el Reg's HTML editor throws away all of the code formatting, no matter what I do.

function validchar( c : char ) : integer;
var
  i  : integer;
  rc : integer;
begin
  i := ord(c);
  rc := NO;
  if ( not ((i >= 65) and (i <= 90)) and not ((i >= 97) and (i <= 122)) ) then
    (* do nothing *)
  else
    rc := YES;
  validchar := rc;
end;

For those whose eyes glaze over at that, all it's doing is this:

function validchar( c : char ) : boolean;
const
  ValidChars = ['A' .. 'Z', 'a' .. 'z'];
begin
  if c in ValidChars then
    validchar := true
  else
    validchar := false;
end;

And I'm not doing his original code justice, as the if/then/else nesting reached a depth of 40 levels, so trying to pair the else with the if that was 13 screen pages above was a nightmare.

The original BASIC was 8,000 lines. The original Pascal code was over 40,000 lines, and he'd spent five months of the six month project on it, and it didn't work. I ignored the Pascal, took the original BASIC, and wrote a clean Pascal version of it in about a day and a half. It was under 1,200 lines.

Then there was the architect who'd written the company's custom database. His code was already legendary. I once spent two days debugging two lines of a switch statement. The lead coder asked me why I was taking so long, and I showed him the code:

switch(tecsrch(*(TEC**)(id[*f].ttype),id[*f].var,&s,s==str?(*str?str:="[a-z]"):NULL,func1==F02?1:0)){
case -1 :
    s=(*id[*f].tecfnc)(*(TEC**)(id[*f].ttype),id[*f].var);
    if(*s)
        ...

Thankfully, this was C, not C++. He hadn't started using templates yet. However, he did make extensive use of the SCO Xenix C compiler, which was a problem, as we were trying to convert his code to Linux (Red Hat) gcc. It turns out that gcc is (or was) "far too limited" to handle his code. For example, do you know that if you make a function call in gcc, it only passes the first 128 parameters? Everything after that is not passed on the stack! Do you have any idea how much code that breaks? For most people, the answer would be none, but he had over 800 function calls that used between 140 and 200 parameters.

I was able to migrate some of his utilities to readable, coherent code, but much of it was unintelligible, and there were no requirements as to what it was even supposed to do. Fortunately, he was still with the company, so execs made him clean up the worst of it. He estimated it would take 2-3 weeks; when I left the company, he'd been at it for almost three years, and wasn't even half finished.

The scary thing is that those two examples may be the most vivid, but I can name at least a dozen others.

Microsoft proposes sweeping global concessions to Teams for up to a decade

billdehaan

Video chat groups are a commodity now

Twenty years ago, Skype was revolutionary. Things like Zoom followed, as did WebEx, Google Chat, Jitsi, Element, Jami, and a ton of others.

Teams is no longer a revolutionary "must have" product. It's just Microsoft's implementation in a competitive space. Group chats are no longer a technical differentiator, everyone has them. Teams offers security, which most of the free chats lack, but which all of the commercial ones offer, as well. So what argument is there for Teams?

Integration with the Microsoft ecosystem. That benefits Microsoft as much as, if not more than, the user. So it's not surprising that Microsoft is acting conciliatory about this with the EU. Customers can replace Teams with WebEx or Google offerings pretty easily without much loss of function. That would hurt MS more than the customers, so it's in MS's best interest to suddenly pretend to care about customers.

More's the pity that similar competitive forces can't get them to disable telemetry and AI integration, which is what most companies and users really care about.

Microsoft set to pull the plug on Bing Search APIs in favor of AI alternative

billdehaan

For those trying to avoid Bing, and AI

Brave, Qwant, and Mojeek all have their own search indexes, and do not rely on Google or Bing, the way DDG and StartPage do.

Of course none of their indexes are anywhere near as big as Bing's or Google's, but at least so far, none of them seems to have been taken over by AI.

The 'End of 10' is nigh, but don't bury your PC just yet

billdehaan
Meh

I'm not expecting much

The reality is that pretty much everyone who is or was going to migrate to Linux already has.

The technical arguments, which everyone makes, really don't matter. The reality is that when October comes, the majority of users on Windows 10 will:

1. Switch to Windows 11 if they can

2. If they can't, they'll buy a new Windows 11 PC

3. If they can't buy a new Windows 11 PC, they might buy extended Windows 10 support

4. If they can't buy Windows 10 support, or don't know about it, they'll just keep running Windows 10

5. Failing, or refusing, all of the above, they will migrate to Linux or Mac OS

What percentage each option will get is still TBD, but I expect that #5 will be the least likely option. Security conscious people who understand why running an unsupported Windows OS is risky are more likely to choose options 1, 2, or 3 than 5, and those who don't understand will just keep running Windows 10. Unless Microsoft starts hitting people with popups, most people won't even know that Windows 10 is expiring. Hell, to be honest, given how intrusive and disruptive most Windows updates are, many will welcome it.

I've been running Mint for over a year, and it does everything I need. But I know that's not what everyone needs. Telling Photoshop users to use Gimp simply isn't going to cut it.

I think that there will definitely be some pickup, but honestly, I think the PewDiePie YouTube video created more interest in Linux than this will.

VPN Secure parent company CEO explains why he had to axe thousands of 'lifetime' deals

billdehaan

Re: What does lifetime mean?

Lifetime can mean golden handcuffs.

When I did contract work at banks, interest rates were 8%-10%. The bank's employees were offered credit cards at the rate of 5%. Likewise, they were given excellent mortgage rates. Of course, those rates were employee benefits, which they lost if they left. When rates rose, many employees were essentially locked in to the company because of them.

A co-worker was offered a job at another firm at 10% higher salary. He did the math, and discovered that he'd actually lose money if he took it, because his 5% mortgage would have to be renegotiated at something like 13%.

Savvier co-workers got their bank to give them the mortgage approval in writing, then went to a competing bank and applied there. When the competing bank offered them 9%, they'd show the 5% offer they had in hand, and the competing bank would often match it. Not only did that prevent them from being tied to their employer, it turned out that if they took the discounted mortgage at their bank, the government considered it a benefit of employment. Specifically, a taxable benefit of employment. So going to the competitor made all sorts of sense.

billdehaan
Devil

They have altered the deal. Pray they do not alter it further.

Regardless, it appears the decision is final, and no refunds will be offered to customers. Instead, former lifetime users will be offered discounted plans.

Fool me once, shame on you; fool me twice, shame on me.

For years, I've told people that "lifetime" deals don't mean your lifetime, but the lifetime of the company. If you pay a lifetime subscription fee to VPN Service 2025 Ltd., and they fold, a new corporation VPN Service 2026 Ltd. can magically appear out of the ashes, and offer you the same service, with absolutely no obligations to you.

VPN Secure could be an excellent service on technical merit, but their blaming the customers for their failure to do proper due diligence puts them firmly in the "companies to not deal with" bin right off the bat.

Europe plots escape hatch from enshittification of search

billdehaan

Fortunately, there are options

I stopped using Google search about a decade ago. Not as a protest or anything, I was doing a lot of coding work, and I found DuckDuckGo's bangs so useful, I just ended up drifting over to DDG full time. So, I pretty much missed the enshittification of Google Search that everyone is complaining about now.

I did discover a lot of other non-Google search tools in my travels, however, and not all of them are just wrappers for Bing.

For those interested, you can take a look at:

- Andi

- Brave Search

- Dogpile

- Ecosia

- eTools.ch

- Gibiru

- MetaGer

- Mojeek

- OneLook

- Peekier

- Qwant

- SearXNG

- SwissCows

- Whoogle

- Yippy

Some are wrappers, some are aggregators, some are politically slanted (which doesn't really affect searches for C++ template rules, but does matter if you're looking up politics and/or current events), but if you're not using Google, and Bing goes offline, there are lots of options for you to try.

OS-busting bug so bad that Microsoft blocks Windows Insider release

billdehaan
FAIL

Users versus Microsoft - round 3.1

While making "even basic things difficult" might sound like a mission statement for the Windows team nowadays

I switched away from Windows 10 last year, initially because the OS was screaming at me to install a security update on my backup PC which, of course, it then refused to download. After a week of farcical Microsoft support voodoo (modify this registry entry, reboot, then clear this cache, then set these environment variables, then run this, then...), I said the hell with it, and downloaded and installed Linux in less time than it took the 32GB Windows install to download the 104GB "patch" file it needed. Yes, a patch almost four times the size of the entire operating system. Ye gods.

I spent 2024 migrating all my other PCs to Linux. I used mostly Mint, but I've toyed with PopOS, Debian, Fedora, Zorin, and most recently Tuxedo OS. They're all different, and they all have different ways of doing things, but one thing they all have in common is that they don't fight the user the way Windows did. I don't fear a Linux update the way I did with Windows, because after a Linux update, the system doesn't turn all security and privacy settings back to Microsoft defaults, again, the way it does with Windows.

the company just recently tipped several AI "enhancements" into the formerly pristine Notepad

A friend still stuck on Windows 11 despises CoPilot and AI both. He is constantly turning both of them off, and they just keep turning themselves on again. The thing that drives him crazy is that there is no global "turn this crap off system wide" setting for any of it. He had to turn off AI in Word. And then in Excel. And then in PowerPoint. Every single application needs to be defanged separately. Just last week, he discovered that AI had "infested" (his word) Notepad. What next? Calculator? Calendar?

I think the busiest Windows 11 projects on github nowadays are the "decrapifiers" and "debloaters" that do nothing more than disable and remove crap that Microsoft shoves into the OS against the explicit wishes of the users.

It's not like Microsoft doesn't know people hate this nonsense. When the operating system asks users if they want something installed, and the only options are (a) Yes, and (b) Ask me again in three days, and no "NO" option is offered, it's clear that they don't consider the PC to belong to the users, but to Microsoft.

Windows isn't an OS, it's a bad habit that wants to become an addiction

billdehaan

Re: I have several friends contemplating Windows 11

There are several parts to it.

First is the definition of "easily". It entails running a script and then following a multipage procedure of clicking on various buttons in dialogs that may be renamed, or moved, and include phrases like "if the dialog is no longer in the Control Panel, look in the Settings".

Second is the fact that this moving-target procedure has to be re-run every time Microsoft re-enables telemetry after the user has explicitly disabled it.

And third, as you point out, it only disables the telemetry that Microsoft allows the operator to disable. There's a nice walkthrough here. Even after the "debloating", there's still a lot of stuff that's sent.

billdehaan
Meh

I have several friends contemplating Windows 11

Several of my friends are looking at the October Windows 10-pocalypse, and the reactions cover pretty much the entire spectrum.

I personally switched to Linux last year. I started about 18 months ago in October of 2023, and reformatted the Windows machine to Mint last May. A few others have also switched. Some run Ubuntu, some run LMDE, but all are quite pleased with their choice.

Others simply accept the inevitable, and stick with Windows because they don't really have any options not to.

The funniest, however, is a friend who set up a Windows 11 rig he's quite happy with. More power to him, but he dismisses the Windows 11 criticisms as silly, because "all you have to do" is change a few settings, so it's no big deal. The taskbar can't be moved like it could in Windows 10? "I never moved it anyway". The telemetry? "That can be disabled easily". He even thoughtfully made a simple "how to" document to show people how they can easily configure Windows 11.

It was 21 pages long.

I had my Mint PC blow out from a power failure, so I bought a replacement. To restore Mint, I installed the OS, copied over the /home/$USER directory from the old hard disk, then ran mintbackup, selected all packages, and reloaded them, and my system was restored, complete with desktop settings (panels, wallpaper, icons, applets, etc.) just as it had been before. There were, I think, three packages (Proton VPN was one of them) that didn't restore from the backup and had to be manually installed from the Software Store or via apt-get, and then I was up and running.

It took less time for me to install Mint on a new PC from scratch than it takes some of my friends to tweak their systems after a weekly Windows Update.

After you get used to your system not fighting you, and just working in the background, you really notice just how much modern Windows works against the user. I have friends that have scripts like "unfxck.bat" that they run after every Windows Update to reset the OS, because every update just "happens" to reset all of the privacy settings in the system to the most permissive, user choice be damned.

It's nice to work on a system that's not a moving target.

Microsoft Copilot shows up even when it's not wanted

billdehaan

CoPilot reminds me of the Windows 10 forced upgrade

Back in 2015 or so, when Windows 10 was released (or perhaps I should say, it escaped), Windows 7 and Windows 8 users had to remain vigilant in order to keep swatting away the constant popups from the operating system to upgrade to Windows 10, whether they wanted it or not.

I had several frantic customers in small shops who came into the office one morning to discover half their machines had upgraded to Windows 10, despite the shop explicitly rejecting it, repeatedly.

CoPilot is infesting Windows in the same way. A friend discovered you can't just say "no CoPilot" at the global level, you have to say no to every single Microsoft app. No, you don't want it in Outlook. No, you don't want it in Excel. No, you don't want it in Visual Studio. No, you don't want it in PowerPoint, etc.

After thinking he'd finally defanged the beast, he was shocked to see that it had snuck into... Notepad. Yes, frigging Notepad had an AI assistant, because there's nothing a lightweight editor used for quick and dirty text entry needs more than an embedded large language model.

Remember when people used to look forward to operating systems adding features? Now all the chatter is about how to disable unwanted and unwelcome operating system tools.

Why did the Windows 95 setup use Windows 3.1?

billdehaan

Re: Marketing

Whilst that was one of OS/2's marketing points, it really wasn't the major one.

It depends on where you were. I was working at IBM at the time on an OS/2 1.x product (part of AD/Cycle, if you remember that), and at all the trade shows, the IBM presenters pushed the "you don't need DOS" and "it doesn't sit on top of DOS" lines constantly.

OS/2 was 90% of the way there to being better than Windows for a huge number of users. What was really annoying was that instead of putting in the effort to address that 10% (the SIQ in particular), IBM just blamed the press, the retailers, and the end users for not appreciating the fact that it was better. They finally fixed the SIQ in Merlin in 1996, four years later, but if they'd done that, and addressed a number of other issues with the WPS in 1992, it would have lasted a lot longer.

OS/2's biggest problem wasn't competition from Microsoft, it was (mis)management within IBM. Some of the internal communications I saw were just jaw dropping in their delusions.

billdehaan

Marketing

Windows 95 was competing with OS/2 at the time.

OS/2's claim to fame was that it was a true operating system, unlike Windows, which was just a graphical user interface that sat on top of DOS.

Microsoft countered that Windows 95 was a standalone operating system that, unlike Windows 3.1, did not require DOS to run.

The fact was that Windows 95 did run on top of a DOS boot loader and kernel. However, unlike Windows 3.1, it was not sold separately from DOS. Windows 95 was sold as a package that included the DOS boot loader and the GUI.

So while at a technical level, Windows 95 required a DOS kernel to boot, at a marketing level, it was an integrated package, so MS could claim that it did not require DOS to run, since users did not need to buy it separately.

After going through those conniptions to convince the world that Windows 95 was not just a "clown suit for DOS" (as the joke went), having it boot to DOS to install would contradict the messaging. So, it used Windows 3.1 instead.

Garmin pulls a CrowdStrike, turns smartwatches into fancy bracelets

billdehaan

Propitious timing for the competition

Only two days ago, it was announced that the late, much lamented Pebble smartwatch will be returning, as Google has open-sourced the code for it, and the original creator plans to restart production.

Since it's been gone for almost 8 years now, one of the biggest questions people are asking is: in the year 2025, with much smarter competition like the Apple Watch, why would anyone want a Pebble?

The answer is that it's simple, and "just works".

Having a competitor that's smarter by an order of magnitude get bricked remotely makes the case for a simple, standalone device far more effectively than any salesman could.

How the OS/2 flop went on to shape modern software

billdehaan
Facepalm

I remember reading Letwin's post

I also remember agreeing with most of it.

When looking at the corpse of OS/2, everyone sees the bullet holes in the body. IBM points to the bullets in the head that Microsoft put there, like a sniper. They ignore the many, many more bullets in the feet that were put there by IBM. There are so many it looks like IBM used a Gatling gun.

I worked at IBM (on contract) doing OS/2 applications from 1990-1992. I didn't work on OS/2 itself, although I had friends that did. I did get to see, from within IBM, the breakdown of the JDA with Microsoft. The JDA was the IBM/Microsoft Joint Development Agreement. It basically stated that IBM and Microsoft shared the OS/2 kernel, that Microsoft owned the GUI, and IBM owned the database and networking (what was known as the Extended Edition) features.

When the JDA broke down, IBM's internal attitude was that OS/2's new goal was to be "not Microsoft". I saw numerous instances of OS/2 being changed, usually needlessly, and far too frequently to its detriment, simply to be different from Windows. Working functionality would be scrapped when a necessary component was changed, solely for the purpose of making it different from Windows.

The belief from upper management seemed to be that the corporate market drove the personal market (I disagreed), and since corporations trusted IBM more than Microsoft (I agreed with that), they would standardize on OS/2 (which many did), leaving Windows to die. By making OS/2 incompatible with Windows (except for a WinOS2 layer) it would make migrating OS/2 applications to Windows extremely difficult. That would starve Windows of application development, and kill Microsoft.

"Kill Microsoft" was clearly a goal of many at IBM, especially the marketing and business direction types, who'd been stung by the failure of the JDA.

The problem was that the corporate market didn't dictate the market in 1992 the way it had in decades past. IBM management was told that repeatedly, but they refused to believe it. The IBM internal fora (like Usenet, but internal only) were absolutely filled with rank and file employees screaming at the top of their lungs that it wasn't 1980 any more. Parents were not buying PC 5150 DOS machines and awing their children with this majestic new technology. In fact, the kids were often the ones explaining to their parents what an Apple ][, or Atari, or Commodore 64 was. That may not have been true in households with a parent working at IBM, but the vast majority of households cared less about what computers their company used, and more about what their kids' school used, and what they saw for sale at Sears, local electronics stores, and Circuit City.

Developers were not going to develop a massive application for OS/2 and then carve away functionality to make it run on Windows, the way IBM (executives) believed they would. They'd start from the bottom up, making it work for the easy case of Windows first, and then expand and extend it for OS/2. Or they would have, if IBM hadn't deliberately done everything they could to make that as difficult as possible.

I had a small DOS application that I'd written in 1988 and had sold to a number of local law firms. Many were curious and asked about Windows and OS/2 versions. When I asked Microsoft, they sent me a WIN32 Developer Kit, for free. It was a beast, and incredibly klunky to work with, but it worked. When I tried to talk to IBM about OS/2, I was sent a price list that showed C/Set2 tools, starting at $500, and that was it.

Microsoft went out of its way to court developers. Often they went too far, to the point where they were practically bribing people to develop Windows apps. In contrast, IBM held non-corporate developers in contempt. As one magazine at the time put it, "IBM would garner a lot more support for their OS/2 operating system if they stopped treating potential developers for it like child molesters".

OS/2 was technologically far ahead of Windows, especially version 3.x. It was still technically better than Windows 95. But in real world terms, for consumers and developers, IBM was simply too difficult to deal with.

At home, I ran OS/2 1.x from 1990 to 1991, and OS/2 2.x from 1991 (beta versions) to 1996, when Windows NT 4.0 came out. I dual booted between them. Remember MOST, the Multiple Operating System Tool, that IBM included with OS/2? Long before GRUB, we had MOST. But once NT 4.0 came out, with the stability of NT and much of the application base of Windows 95, OS/2 was simply too far behind to ever catch up.

Both KDE and GNOME to offer official distros

billdehaan

Number of Linux distributions soon to exceed number of Linux users

It's the logical conclusion, when you think about it.

Microsoft flashes Win10 users with more full-screen ads for Windows 11

billdehaan

Re: Ten years ago

I played around with Linux from the time you had to ftp 10+ diskette images from tsx-11.mit.edu

Oh, that sounds familiar. We had a computer bookstore literally across the street from one workplace, and they stocked all of the O'Reilly books. We were a Solaris shop, and basically any O'Reilly book that came out, we bought the next day. So we were using the GNU tools on DOS, Solaris, and even OS/2 for years. Most of our Linux distributions we got from there on CD. I think it was either late 1996 or early 1997 that we were comfortable enough with Linux to replace a Xenix machine with it. Of course, it was just an NNTP and FTP server to begin with, something that wouldn't disrupt the business if it failed. As management confidence grew, we started migrating more and more things over.

We had Star Office (later Open Office) some time around 1998, when the licensing made it free. This was at a time when Office suites were $500 or more, which is why Microsoft, Lotus, Apple and others sold lower-cost (and lower function) "Works" packages for about $100.

And absolutely, if your use case is web surfing, playing movies, mail, and editing Office documents, Linux can do all that, and then some. There's still custom, Windows-only software that holds people back, and as good as it is, Wine isn't a panacea, so Windows isn't going anywhere. But it no longer has the stranglehold at the consumer level that it once did.

billdehaan

Re: Ten years ago

Ten years ago you'd have been laughed at for suggesting Linux.

That's both because ten years ago, the Linux desktop was far less mature than it is today, and because Windows wasn't the intrusive system it is today.

I booted Yggdrasil in the mid 1990s, and migrated a ton of Xenix servers over to Red Hat, then Mandrake, then Mandriva, in the late 1990s and early 2000s. Linux was, and is, an awesome server OS. It made a great backup server, FTP server, NNTP server, firewall, etc. But the desktops simply weren't ready for end users.

KDE, originally just a knockoff of CDE, was a great improvement over Motif and OpenLook, to be sure. But expecting end users to edit .xinitrc in vi? Forget it.

Fortunately, few apps required a GUI to install, so the headless machines could easily be administered via SSH. And once installed, many services allowed remote administration via a web page.

But unless you were talking to a gearhead who was happy to edit config files in vi, and understood xfontset, yes, you'd be laughed at if you suggested Linux. Few people wanted to tinker with /dev/audio just to get sound working, or futz in 640x480 video trying to find the proper video settings for their 1920x1080 video card. Even setting up networking could be a royal pain.

Today, it's not just that Windows is declining in usability, security, privacy, and pretty much every other way that's driving people away. It's that Linux is now much easier to install, to configure, and to use. I downloaded a Mint ISO to a thumbdrive, and installed it on a new machine in about half an hour. Mint recognized the sound card, the 1920x1080 video, the web cam, the microphone, the SSD, the gigabit ethernet, and the USB sound card and speakers right out of the box.

So unlike a decade ago, Linux really is a valid choice for a lot of people who don't require specific Windows-only software to run.

Given that so much of what we used to run locally (POP3 and IMAP email, downloading news in RSS, etc.) has been replaced by web apps running remotely, just being able to run a modern browser covers a lot of use cases of modern users.

Undergrad thought he had mastered Unix in weeks. Then he discovered rm -rf

billdehaan

Undergrad? Pfft. Try system administrator

In the 1980s, I contracted at $COMPANY, a defence contractor. It was like a Dilbert strip, co-written by Franz Kafka.

They worked exclusively on closed tender cost plus contracts. For those unfamiliar, "cost plus" means the DOD/DND says "build this", $COMPANY says "we don't know how, or what it will cost", and DOD/DND says "build it anyway, we'll pay for all costs plus 15% profit". It's zero risk, and guaranteed profit.

If the company spends 10 hours doing a job, they get paid for 11.5. If they spend 100 hours, they get paid for 115. If they buy an Ada compiler for $20,000, they get reimbursed $23,000 for it. If they buy a different Ada compiler for $100,000, they get reimbursed $115,000, etc.

You can see the incentive structure that formed. The company was filled with lifelong employees who considered this all perfectly natural. The company culture reflected this, and inefficiency was not only not punished, it was rewarded. Competent employees actually cost the company money. A few were required so that at least some projects shipped, but incompetent employees were the company's bread and butter.

Enter... the system manager. Or "System Mangler", as he was commonly referred to.

It was a VAX shop, and the SM administered it. His administration skills were already legendary. It was perfectly normal to log in and discover that the C compiler had disappeared. Or that the disk pack had only 3kb of disk space (for 40+ developers). Or that you no longer had write, and sometimes even read, access to your own files. There were two printers, one being a laser and the other a typewriter ball "high quality" printer. Files sent to one printer routinely went to the other, resulting in dozens of dead trees worth of printed gibberish.

The SM did the weekly backup at high priority during the workday on Friday. This meant locking all files, so users could do no work until he completed the multi-hour backup. Of the swap volume. And if the backup didn't fit, he'd delete user files to free up space. When users demanded their deleted files back, the SM would explain that he had deleted them before the backup, because "otherwise the backups take too much tape", and there were only a small number of installation tapes to do the backup on.

So when we got a contract that stipulated Unix, we got an Apollo Unix machine for users to telnet into. And clearly, the SM was the man to administer it.

Work proceeded smoothly for about six weeks, as the unix-capable devs just coded away quietly. But then the SM decided it was time for him to learn unix. He logged in, as root of course, and started running a "how to unix" introductory tutorial.

It took him 17 minutes before he reached the "rm -rf /*" step of his introduction. All user files were gone. Unix was gone. The entire disk was wiped clean, as he repeatedly answered "y" to all "are you sure?" prompts. Naturally, he blamed the OS for allowing him to do it, and management was happy with that answer.

So, when the replacement Unix installation tapes arrived two weeks later (he'd overwritten the originals with VAX backups, of course) and the Apollo had a clean install, the first thing the lead dev did was change the root admin passwords, and locked the SM out of the system.

When the SM complained that he wouldn't be able to administer the machine, he was told that was the point.

When the SM pointed out that meant there would be no backups, the lead dev said "we had a meeting about that. The team voted unanimously that we'd rather work without backups than let you near the system again".

Your computer's not working? Sure, I can fix that problem – which I caused

billdehaan

Re: I've done something similar - but without the evil

Being in the right doesn't always help.

Oh, absolutely. One of the adages my lawyer friend (referenced above) mentions a lot is "when the law is against you, pound on the facts; when the facts are against you, pound on the law; when the facts and the law are both against you, pound on the table".

I'm well aware that a corporation with billions of assets can keep me tied up in court, pounding away on the table far longer than any disputed contract is worth, regardless of whether I am in the right or not. They know it, too. As the saying goes, "the process is the punishment".

That's why I deployed the logic guards in the first place. Doing so makes any time in court just as painful for the customer as for me, and likely a lot more painful.

I'm not going to sue them and spend six months in court for $20,000 and they know it. But if they refuse to pay, and the software stops working, it's going to cost them a lot more than $20,000 in lost sales and other business costs if they try to take me to court for six months, too. That's especially true if they're going to try to argue that the failure to pay is unrelated to the paid-for product being disabled. It's not a good argument.

As I said, for the great majority of customers I've dealt with, it's never an issue. But even good, reputable companies can have sleazebag employees who try to stiff suppliers.

billdehaan

I've done something similar - but without the evil

I learned early on that most customers are reasonable, honest people. I also learned that a minority... aren't.

One thing that a friend who sold hardware taught me early on was to add the line "Title and ownership transferred only upon receipt of final payment". He learned that the hard way when a customer went bankrupt. All of the customer's assets, including those my friend had sold to him, were liquidated at 15 cents on the dollar. Fortunately, it only resulted in something like a $300 loss, but it could have been a lot worse. From then on, by explicitly retaining ownership until payment was complete, hardware sold to later customers that went bankrupt could be recovered without going through the receiver or liquidator.

Software and licenced intellectual property are a little more difficult to repossess, however.

So, I always programmed an expiry date into any custom application I wrote for customers. It wasn't for blackmail purposes; after final payment was made, the date check was removed in the final build. But if the customer decided to renege on that payment, and/or pass along a copy to other users in violation of the contract terms (and a few have), they'd eventually see a popup that stated "The software licence for this software has expired. To extend or purchase a licence, please contact XYZ at 416-xxx-yyyy".
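A minimal sketch of that kind of date check (the date, the function names, and the popup hook here are illustrative, not the code I actually shipped):

```cpp
#include <ctime>

// Hypothetical expiry date compiled into the interim builds. The final
// build, made after final payment clears, simply omits this check.
bool licence_expired(std::time_t now) {
    std::tm expiry = {};
    expiry.tm_year = 2024 - 1900;  // tm_year counts from 1900
    expiry.tm_mon  = 0;            // January (months are 0-based)
    expiry.tm_mday = 1;
    return now >= std::mktime(&expiry);
}

// At startup, something like:
//   if (licence_expired(std::time(nullptr)))
//       show_expiry_popup();  // hypothetical: "licence expired, contact XYZ"
```

The point is that it fails politely with a contact message, not destructively.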

I had a few shady customers basically laugh in my face when I presented the final invoice and tell me to go fornicate myself, because they had the code deployed and "there isn't a damned thing you [sic] can do about it". And they weren't all fly by night operators, by any means. A couple were in the Fortune 100.

When dealing with a Schedule A bank with billions in assets, it's a lot easier to get them to pay their bills when not doing so affects their business.

I will always cherish the look on the face of the MegaBankCorp lawyer who was initially full of bluster when the private banking side of the bank had sicced him on me for "sabotaging" their customers with my logic bomb. The lawyer was in full bore "sue you into the ground" mode. But when I asked them to provide proof of payment, the bank's - the bank's - own finance person at the meeting said "umm, actually, we haven't been able to find any cheques or money transfers, but we're sure we paid him", and the lawyer's face went white. The finance guy could show the first two payments via money transfer just fine, but the third one, the biggest one, had no record of being paid. Then he said "oh, we must have paid him in cash, and forgotten to ask for a receipt". He paused, and asked the lawyer if that was a problem. The lawyer just looked at him, and in an absolute deadpan voice said "big time".

And just like that, the problem went away. When I invoiced them for my time with their lawyer, they paid without even challenging it. As a lawyer friend put it, "suing people for not doing work they haven't been paid for is rarely a successful argument in court".

The empire of C++ strikes back with Safe C++ blueprint

billdehaan

Re: Rational Innovation

Oh, I agree, absolutely.

I started with C++ in 1987 or 1988. We were doing C work using Lattice and later Microsoft C (the early versions of Microsoft C were just rebranded Lattice; then they went their separate ways). We got a Zortech C++ compiler and played around to see what we could do with it.

At least back then, C++ was basically "smarter C". Most of the structure that C++ imposed consisted of things we were already doing with our coding guidelines. If you think of classes as structures with associated functions, it's just a cleaner syntax.

I've used inheritance and templates myself. But I've used them sparingly, with simple base classes.

But I routinely see code that inherits from 47 base classes, which are passed into triply-templated layers of abstraction. The average coder can't understand that, and the average architect gets snotty about it when questioned, and usually tells management the problem is that the developers are too stupid to understand it.

In bad companies, management agrees, and they end up with unmaintainable systems.

In good companies (and I've seen many), management calls the architect's bluff, and makes him clean up his own mess. It's always amusing to see an architect who's said that any developer who takes more than two weeks to implement an XYZ function with his gee-whiz framework should be fired, then get ordered to implement XYZ himself, and struggle with it for months. I once gave an estimate of 1,000 hours, ie. six months, to do something that the architect assured the PM could be done in a week, easy. After four months on it, when it still didn't work, the architect upgraded his estimate from 40 hours to 6,500 - more than three years.

billdehaan

Re: Closing the barn door

I was a contractor for 20 years. I am a duct-tape programmer type, and I would say that at least 80% of my work was undoing the damage of architecture astronauts.

The problem is that many companies mistake complexity for intelligence, and equate buzzwords with expertise. So the more complex and abstract something is, the more impressed they are by it. They reward complexity, and the result is that they end up with systems so complicated that they can't be understood. And often, that complexity is completely unnecessary.

I've replaced 3,500 lines of C++ inheritance with 30 lines of code. I've replaced 18 pages of Pascal code with a one line set definition and a four line boolean function. And in both cases, the architects fought tooth and nail to keep their existing megabytes of navel-gazing code that did absolutely nothing that my half page routines didn't do.

I joked at one company that their architects couldn't write "hello world" without using parameterized templates and code generation. The PM (project manager) I was talking with told me not to exaggerate. Three weeks later, as he was reviewing bug fixes made to projects to determine whether he should approve their being ported to the main product, he read a bug fix that had a title like "Enhancement: automate adaptable functor generation via variadic template to allow for polymorphic reflection". Other than generation, he had no idea what any of those words meant. He walked up to me with a printout of it, and said "I kind of thought you were kidding". Would that I were.

And yet, he approved it, because he was afraid not to.

And that's the problem with architecture astronauts. They get away with their nonsense because everyone is afraid to touch it because they don't understand it.

billdehaan
Unhappy

Closing the barn door

I pretty much gave up on the idea of C++ ever being safe when I heard two architects debating, seriously, the difference between "protected abstract virtual base pure const virtual private" destructors and "protected virtual abstract base pure virtual private const" destructors.

When you see phrases like "transflective binodal surrogate", and dragging things through reinterpret_casts of dynamic_pointer_casts of static_pointer_casts, you've reached a level of complexity and abstraction where it's pretty much impossible to account for memory safety.
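For illustration, here's a contrived (and compilable) chain of that sort - my own sketch, not the architects' actual code. Each cast is legal on its own, but by the end the compiler can no longer vouch for anything:

```cpp
#include <memory>

struct Base    { virtual ~Base() = default; };
struct Derived : Base { int payload = 42; };

// Walk a pointer down, back up, and then through reinterpret_cast;
// returns the payload recovered at the far end, or -1 if the
// runtime-checked cast fails.
int chained_cast_payload() {
    std::shared_ptr<Base> b = std::make_shared<Derived>();
    auto d = std::dynamic_pointer_cast<Derived>(b);  // runtime-checked downcast
    if (!d) return -1;
    auto b2 = std::static_pointer_cast<Base>(d);     // unchecked upcast
    // reinterpret_cast discards what little type safety was left:
    auto* raw = reinterpret_cast<Derived*>(b2.get());
    return raw->payload;
}
```

It "works", in the sense that the bytes come out the right way round on mainstream ABIs - which is exactly why nobody ever gets told to clean it up.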

The funny thing is that C++ was supposed to make C programming cleaner and easier, but in many ways, it's done the opposite.

At least in the days of C, we had lint to keep us honest.

The end is in sight for Windows 10, but Microsoft keeps pushing out fixes

billdehaan
Meh

I joined the Dark Side, they don't have cookies. Or drama.

Late last year, one of my three Windows 10 PCs demanded I run a Windows Update, which refused to run. A week of futzing about later, the MS solution was "wipe your PC and reinstall Windows from scratch".

Looking at my re-install options for it, I discovered that (a) Windows 10 would expire in October 2025, about 18 months away (at the time), and (b) the PC was not Windows 11 compatible.

Since I had to reinstall an OS anyway, and there was little point in installing Windows only to have to do it again in a year and a half, I fooled around with a half dozen Linux distros, picked Mint, and the machine has been boringly productive ever since. There was a learning curve, and it took about six months to properly migrate everything over and get up to speed, but the end result was a stabler and faster PC that's supported until at least 2029. The other two PCs followed suit and were migrated over as well.

What's been most notable about running Linux for these past few months is how little drama there is with it. I still use Windows at work, and there is constant drama of bad updates being pushed out, anti-virus software messing things up, cloud outages, the security nightmare that is Recall, the dependency on having an Outlook account, privacy settings constantly being reset to the least private values possible, and on and on.

Windows itself is fine. But the most common thing I hear from users is "just leave it alone and stop changing things all the time". My Mint machines are boring in comparison, honestly. All of them offer updates, which I as the user can approve, deny, or delay, and none of them require me to reconfigure things after an update.

What's amusing is that it actually takes a while to get used to having a stable system, when you've become so used to the OS vendor constantly changing it on you all the time.

Client tells techie: You're not leaving the country until this printer is working

billdehaan

Thanks for the flashbacks, el Reg

the time he was despatched from his UK home to an African nation, where his client operated both a mining company and the national airline.

An unstable African nation (pick one) had a certain government agency "forget" to pay their internet bill. This made the ISP (a multinational) rather upset, so they sent out an expendable *cough* junior tech who "looked the part" to do support on site. And also, to get the customer to pay its (huge) overdue bill.

When I say he looked the part, the criteria was essentially "Joe is black, he's less likely to get shot, so send him". That may or may not have been true, but "less" did not mean "zero", a fact not lost on Joe.

Joe cleaned up a lot of the bad processes and discovered that the nonpaying customer was paying them (or not paying them) not only for internet access, but also for VoIP. So, when they refused to pay for the third time, he pulled the plug on the agency, disconnecting all of their phones.

Unknown to Joe, the agency was the parent agency of an "off the books" security agency. He was told this 5 minutes after he pulled the plug, by terrified local co-workers who realized what Joe had done, as they were fleeing the building before the pissed off Black Ops soldiers arrived.

Sure enough, ten minutes later, about 20 Toyota Technicals arrived at the ISP building, and it was soon surrounded by a hundred armed guys with fatigues, AK-47s, and black sunglasses, as the immaculately dressed leader calmly entered the building with a dozen of his crack troops.

They went into the head office, where Joe was sitting behind the desk. The leader lit a cigar, placed his AK-47 on the desk, and calmly said "I am here to ask why you have disconnected all of my troops' telephones".

Oddly enough, the meeting was strangely uneventful, and even cordial, as Joe explained how much was owed, and how, if he couldn't get some of the money owed back, the head office would simply pull the plug and cut off the entire country.

He was successfully able to get about 30% of the bill paid, and also not get shot in the process. That may not register in the company's financials, but Joe certainly appreciated it.

When the money transfer was confirmed, he turned the phones back on, and the soldiers left. So did Joe, the next day, to non-African pastures.

When he returned, he wasn't exactly covered in glory by his bosses. His direct manager wasn't impressed with the 30% payment he had negotiated, but begrudgingly told him that "you earned your ticket home", apparently thinking that was an option the company could choose to ignore rather than a requirement.

Reading the writing on the wall, Joe quit six weeks later. While the execs were not impressed with the 30% payment he had negotiated, it was 30% more than anything they ever collected from that customer again.

IBM Canada can't duck channel exec's systematic age discrimination claim

billdehaan
Big Brother

I worked at IBM Canada in the 1990s

I was there from 1990 to 1992, and I saw the changing of the guard in real time.

The old guard had been taught that *they* controlled the accounts, not the customers. The customers were expected to shut up and buy what they were told to buy. That had worked for 50 years, why wouldn't it work now? The new generation thought that the old guard were out of touch technically, and they were. The new generation may have been more current with technology, but they still viewed customers as livestock who were expected to follow whatever IBM dictated.

The problem was that the world was changing, and the money wasn't all in multimillion dollar mainframes and the associated high-margin support contracts. Cheap PCs were the new thing (a $3,000 Compaq 386 could do about 75% of what a System/36 could do, at about a tenth of the cost). IBM viewed PC users as defective corporations, and treated them as such.

IBMers who tried to buck the trend were let go. IBMers who challenged orthodoxy were let go. The result was a corporate culture that not only stifled innovation, it punished it. Engineers were treated as interchangeable commodities. If the loaded cost of that commodity is $175 in Toronto and $1.38 in Mumbai (seriously, it was under $2 in 1997), they were going to do everything they could to replace Toronto resources with ones in Mumbai. Some Toronto people were even apparently offered positions in India when they were told their local positions were being phased out.

LaMoreaux's comment that it's not IBM policy is technically correct. It's what is legally termed "Constructive Dismissal". There is no directive from senior management to fire older employees, but there are targets given to middle management that cannot be met any other way except through the dismissal of the older employees. They're not telling the manager to fire the most senior engineer; they're cutting his budget so that he either has to fire the senior engineer or two (or sometimes three, or even four) younger members of the team. Obviously, the middle manager will cut as few people as possible, and that just "happens" to be the oldest, and most expensive, member of the team.

Linux updates with an undo function? Some distros have that

billdehaan
Meh

Another argument for backups

I switched from Windows to Mint a few months back. Some things are better, some are the same, and some are worse.

I much prefer Linux's text file based configuration design over the Windows registry. It's so much easier to back up the config directory and edit human-readable configuration files than it is to go hunting through the Program Files, ProgramData, and %APPDATA% directories, never mind determining whether you should be looking in the Local, LocalData, or Roaming profiles. And then there's the registry, which can be incredibly convoluted and difficult to deal with.

My overnight backup went from 90 minutes under Windows to 18 minutes under Linux. Score one point for Linux.

On the flip side, VSS under NTFS made things like live imaging of the boot partition/disk possible under Windows. Linux isn't there yet. I run Timeshift for backups, but if I really want to back up an image of my boot partition, I need to boot my PC from a Linux USB in order to use RescueZilla, dd, or the Gnome Disk Manager and back up the boot drive. Score one for Windows there.

I'm still happily running Mint 21.3, and from what I'm reading on the forums, it looks like 99% of Mint users upgrade with little or no problem. But, as always, the 1% can still bork their system, even if they do everything right. And even if they don't screw up the upgrade, you can still find the new OS version doesn't recognize your wifi card, even though the previous version did.

For whatever reason, people may want, or need, to go back to the previous release. Fortunately, with Mint, even the previous version gets security support until 2027, so there's no need to rush.

The moral of the story is: do a complete backup before you install, and don't rush to install on day one unless you need to. I'm quite happy to let the early adopters find all the bugs for me.

Secure Boot useless on hundreds of PCs from major vendors after key leak

billdehaan
Thumb Down

The only thing worse than bad security

is false security.

I ran the check (efi-readvar -v PK in Linux), and my systems are okay. Mind you, they're HP, Zotac, and Lenovo, who weren't listed in the exposed systems. But honestly, the idea that we need to protect the BIOS / UEFI in the first place is a design failure. Putting crap like programmable vendor logos and user-defined backgrounds into the boot sequence is just asking for trouble. And now that they've handed the bad guys a way to load undetectable code that executes before any defence or corrective action can be taken, the only solution is to replace the motherboard entirely, which more often than not means junking the PC.

Thunderbird is go: 128 now out with revamped 'Nebula' UI

billdehaan
Meh

I think I'll wait a bit

I've been using Thunderbird for literally decades, and there are some annoyances that have never been addressed, even though they've been in the bug list for 15+ years. The one that's annoyed me more than anything is the search rules' lack of a simple "From or To" option. There's a From, a To, and a From/To/CC/BCC, but no From/To.

Oh, you can have rule 1 be From X, and rule 2 be To X, and OR them, but that only works if those are the only two rules. If you want to see only the mails between you and person X over the past year, there's no way to define [[From X OR To X] AND [Age less than 1 year]].
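Written as a predicate, the rule I want is trivial - which is what makes its absence so irritating. A sketch (the field names are illustrative, not Thunderbird's internals):

```cpp
#include <string>

// A message, reduced to the fields the rule cares about.
struct Mail {
    std::string from, to;
    int age_days;
};

// [[From X OR To X] AND [Age less than 1 year]] -- the grouped
// condition that a flat list of OR'd rules can't express.
bool conversation_with(const Mail& m, const std::string& person) {
    return (m.from == person || m.to == person) && m.age_days < 365;
}
```

The problem with the flat rule list is that the OR over From/To swallows the AND on age; grouping is the whole point.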

I tried Betterbird last year, and the first thing I noticed is that they've implemented search groups, which is exactly what I've been wanting for 15+ years. There are a lot of other fixes as well, but that was the big one for me.

If Thunderbird has finally implemented something like that, I might consider switching back, but unless there is some security issue in Betterbird that isn't being patched, I don't see this new Thunderbird version as a reason to go back.

I'm happy to see the project moving forward, though.

Microsoft makes it harder to avoid OneDrive during new Windows 11 installs

billdehaan
Facepalm

This isn't new, I dealt with it years ago

As I just posted in the "Windows: Insecure by design" story, I had a customer who was unable to back up to her external USB disk because OneDrive intercepted the drive-to-drive copy operation, saw that there wasn't enough space on OneDrive for the file, and aborted the disk operation.

Had she called MS for support, I'm sure they'd have told her she had to purchase 2TB of cloud storage to be allowed to use her local hard disk. Fortunately, she called me instead, and I defanged her Windows setup so that she didn't have to go through Microsoft servers to make a local backup.

During this investigation, we discovered that 5GB of her proprietary customer data was in the cloud on OneDrive, without her knowledge or consent.

When I played with setting up Linux last year, I tried almost a dozen distros before ultimately deciding on Mint. One of the marked differences between Linux and Windows is that while you still have to configure your desktop environment to suit you, with Linux, you're customizing the interface, not trying to defeat it.

Linux isn't perfect, by any means. In Mint, you still have to enable the firewall, which is disabled by default. But you don't have to jump through hoops to not associate your desktop computer with an internet account, you don't have to run a "decrappifier" or hunt through dozens of configuration screens to disable telemetry, and you don't have to reconfigure your machine after every operating system update (which in Windows, can be weekly) because the update has reset all of the privacy settings and file associations to what Microsoft wants them to be, not what you set (and reset, and reset, and reset) them to be.

Windows: Insecure by design

billdehaan
Facepalm

I can't wait for Recall to be backed up to OneDrive and then have OneDrive hacked

What I can't stand is Microsoft automatically sets up OneDrive to back up my folders whether I want it to or not. Not cool, Microsoft! Not cool at all. If I want to back up my files, I'll decide where I want them to go – not you.

A few years ago, a customer called me up, frantic that her hard drive was failing. Although I'd set her up with an external backup disk, it apparently had died and she'd simply never bothered to mention it. So she had something like 17 years of her videos/photos/etc., almost 2TB of data, that was slowly dying. She'd bought an external 4TB USB disk, but it "was out of space", so she needed help.

Her 2TB of data should easily fit on a 4TB disk, and by definition, a newly purchased 4TB disk should have 4TB of free space. At first, I assumed it was improperly formatted, maybe with a MacOS format, but no, it was readable by Windows. I thought perhaps it could be FAT32 formatted, which would explain why files over 4GB (which described the majority of her work) wouldn't copy, but it wasn't that, either.

She showed me the error message. She opened two Explorer windows (that was her workflow), one with her 2TB C: drive, and one with the external 4TB D: drive. Drag a folder from C: to D:, and sure enough an "insufficient disk space" error appeared, along with a hex error code, in a popup.

Hmm.

I tried the disk with my laptop, and I could copy to it just fine. It also wasn't user permissions, or anything like that. In fact, she was able to copy some small files, but that was it.

Looking up the hex error code, I was surprised to see it was a OneDrive error code. WTF? She wasn't even using OneDrive, this was a local drive to local drive copy. Or at least, it was supposed to be.

Well, as it turns out, in Windows 10, if you copy a folder from one Windows Explorer window to another, Windows intercepts that, and copies the content to OneDrive, as well. And since she had only 2GB (or maybe it was 5GB, whatever the default is) of OneDrive space, it was completely stuffed, and couldn't take any more.

So, the copy aborted, with a "disk full" error. Not a "OneDrive disk is full" error, just "disk full".

She didn't know what OneDrive even was, Microsoft simply enabled it by default in the Windows installation process.

Thankfully, when I disabled it from starting up when she logged in, she was able to actually use her external drive and do a proper backup.

Luckily, she had called me for help, rather than Microsoft Support. I have no doubt that they would have told her the resolution was to buy 2TB (or 4TB) of OneDrive space, and then have her struggle with trying to back up 2TB to online storage with her (at the time) 1Mb ADSL upload speed. That would have taken months - or rather, it wouldn't have, because her disk would have died long before the backup could finish.

And yes, she told me that after I disabled that "OneDrive virus", her file copies were "so much faster now". Imagine that.

Out of curiosity, she checked out what was on her OneDrive, and was horrified to discover that confidential customer files were online. They were uploaded without her knowledge or consent, simply because that's default behaviour. Default behaviour that slowed her PC to a crawl, prevented proper backups, and uploaded confidential data to the internet without her consent.

If Recall is enabled, it will take screenshots, with no regard for what's private and confidential, and depending on OneDrive configuration, those screenshots may end up online. By default, Recall will be disabled, at first, but how many privacy-breaking settings "accidentally" get reset to the least private after a Windows Update? Far too many.

And that, boys and girls, is why I switched to Mint last year, and why I've been fielding a surprising number of calls from people asking me how hard it would be for them to switch, and whether I can help them do it.

You're wrong, I'm right, and you're hiding the data that proves it

billdehaan

Re: Have you proven a colleague wrong...

Yes, I have. To both. A number of times, surprisingly.

The most extreme example was when a VIP executive was nearly in tears because his laptop didn't work on site. It worked fine in the office, but at the customer site (several flight hours away), it just would not connect to the mothership database, which was necessary for his job function. This was in the early 1990s, before the consumer internet was a thing; we're talking about a high end IBM laptop with a top of the line (for the time) modem.

The IT department tested it, blessed it, and he went to the field with it, only to have it fail to connect in the onsite meeting with the customer. He came back, yelled at IT, they tested it again, decreed it good, again, and he went to the field, only to have it fail in front of the customer. Again.

So, he went back to IT and escalated to the top of the IT food chain. He explained that he had a last-chance meeting with the customer on Monday, and he was flying out on the weekend, so it *had* to be fixed by end of day (this was Friday). The head of IT declared that the problem was the hard drive, and that it would be replaced. He gave it to a subordinate, and the VIP went away.

At 3pm on Friday, he called IT and got dead silence. He called my boss, who told me to look at it. It turned out that the IT dweeb had simply put the laptop on a shelf with a sticky note that said "look at on Monday, first thing", and left early.

Basically, VIP was being hung out to dry.

So, I, uh, "liberated" the laptop (in violation of company policy) to look at it. The idea that it was a hard drive issue made NO sense, since everything worked in the office, just not on site. I could call the database no problem, but since I wasn't on site, I couldn't reproduce the failure. So I called one of our offices overseas, and had them set up a phone redirect to the database in my office. I dialled the overseas number, and it called back, connecting from overseas. And sure enough, it failed.

I did some digging, and found the issue was the dialing prefix. Basically, it was using the local profile for everything. I set up a remote profile, tried again, and tested it successfully. I called VIP, explained it to him, showed him how to toggle profiles, and gave him my home number.

On Monday morning the IT dweeb noticed the missing laptop, reported it stolen, and filed a formal complaint against me. He reported me to the head of IT, who also filed a second complaint against both me and my manager.

When the VIP returned from his successful trip, with a signed multi-million dollar contract because he'd finally had a working laptop in the field, he was shocked to learn that the IT screwups who'd failed to fix his PC - twice - had formally complained about the guys who had actually fixed his laptop.

The VIP was the boss of the boss of the boss of the department head who was the boss of my VP *and* the IT department VP. As it turned out, one of our two departments was scheduled to be moved to the sub-basement in the coming re-org, and at the time, the odds were 70-30 that it would be my group. After this little escapade, the IT group was sent to the dungeon, instead of us.

I'd call that triumphant.

billdehaan
Facepalm

Thumbnail, raw image, what's the diff?

I'm an embedded type, but I have occasionally been ordered to support web types, and it never goes well.

Back in the early 2000s, when a customer's site was reduced to a crawl, "my" back end was blamed for an astronomical increase in data usage. The web team (a different vendor) didn't bother to ask me to look at it, they went directly to the customer and reported that "the crappy database" was causing problems.

Since (a) it had been running fine previously, and (b) I hadn't even logged into the site, let alone made any updates in over two weeks, I suspected otherwise. When I contacted the web vendor to talk about it, my contact there (a decent enough chap, for a web type) didn't answer. I was informed by the customer that he had in fact left the company, and that his duties were taken over by a new hire.

In a surprising coincidence, his replacement had started only a day or two before the performance problem was reported.

Imagine that.

So before digging into the server side database, I decided, on a whim, to look at the client side HTML for the site.

The old site had workable, but very inefficient code, of the form:

[a href="D:\Site\Images\20MB_Image.jpg" target="_blank"][img src="D:\Site\Images\8kb_Image_thumbnail.jpg" width="128" height="64" alt="Image Text" /][/a]

(pseudo-HTML so el Reg's editor will allow it)

The new web designer was aghast that every single image was duplicated, one being the image itself, and another being a thumbnail.

To him, the fix was obvious and extremely efficient: all you have to do is use the "width" and "height" parameters. So, he recoded the thumbnail page accordingly:

[a href="D:\Site\Images\20MB_Image.jpg" target="_blank"][img src="D:\Site\Images\20MB_Image.jpg" width="128" height="64" alt="Image Text" /][/a]

This way, all of those silly "8kb_xxxxx_thumbnail.jpg" files could be deleted. And were.

This resulted in 1100 image files being reduced to 550, with the added benefit that there was no need to keep all those href and img tags in sync. Just use a single tag, and there was no chance of them getting out of sync!

What the web designer clearly didn't understand was that the raw "20MB" files were named that way because the images were, well, 20MB in size. That's why there was a downsized and compressed thumbnail that the previous web designer had tried (but not always succeeded) to keep under 8KB in size. The site had 500+ such images, split over something like 10 pages, so each page had 50 such thumbnails on average. That was about 400KB of data. That doesn't sound like much in 2024, but in 2002, when most people still had 56kb (or even 28kb) modems, downloading those thumbnails alone took 3-5 seconds. Downloading one of the 20MB files took more than 5 minutes.

The web developer seemed to think that the "width" and "height" parameters magically compressed the images, rather than merely downscaling them in the browser. So his "optimization" resulted in 400KB of thumbnail data being replaced with something like 384MB (raw pictures ranged in size from 500KB to 20MB). And, of course, since he tested it on a local network with a 100Mbps connection rather than a 56kb modem, performance was not an issue for him.
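The scale of the mistake is easy to sketch with back-of-envelope arithmetic (a rough illustration using the approximate figures above, not exact measurements):

```python
# Back-of-envelope numbers for the thumbnail swap described above.
# All sizes are the approximate figures from the story, not measurements.

KB, MB = 1024, 1024 * 1024
THUMBS_PER_PAGE = 50          # ~50 thumbnails per page
THUMB_SIZE = 8 * KB           # the old, compressed thumbnails
RAW_SIZE = 20 * MB            # worst-case raw image used "as" a thumbnail

def download_seconds(nbytes, bps=56_000):
    """Transfer time for nbytes on a bps-bits-per-second link (56k modem)."""
    return nbytes * 8 / bps

old_page = THUMB_SIZE * THUMBS_PER_PAGE   # what the old page shipped
new_page = RAW_SIZE * THUMBS_PER_PAGE     # what the "optimized" page ships

print(old_page // KB, "KB vs", new_page // MB, "MB per page")
print("blowup factor:", new_page // old_page)
print(round(download_seconds(RAW_SIZE) / 60), "minutes per raw image at 56k")
```

A single 20MB "thumbnail" is 2,560 times the size of the 8KB file it replaced, which is why the site was reduced to a crawl.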

Since the web team had gone directly to the customer with the issue, it was only proper that I do the same. Fortunately for me, if not the web designers, the point that "the database didn't change, the web site did" was not lost on him.

Unsurprisingly, when the web designers' contract ended, I found myself working with a new web design company on the front end.

Your trainee just took down our business and has no idea how or why

billdehaan

Re: Sounds unlikely.

I worked at IBM on contract in the early OS/2 days (as in, OS/2 1.x days). And while I have many (many, many, oh so many) criticisms of IBM, one of the things they did right, and better (in my experience) than any other large organisation, was the on-boarding process for new hires (and in my case, contractors).

My first day, I was assigned a (very) small office and a phone, given a short tour, and handed a map of the building floor I was on, highlighting the paths to the most important things I needed to know: the fire exits, the washrooms, my group leader's office, the coffee machines, and the cafeteria.

Most importantly, I was given a huge (200+ page) 8.5x11 inch binder of paper. Each page was to be read, and initialed that I'd read it, and agreed to it. There was a scratchpad where I was to write out any questions and/or objections. The binder included not only job duties and responsibilities, but restrictions, processes, and how to escalate questions. The overall tone was "if you don't know or understand, don't guess, ask".

Being young, and this being early in my career, I thought this was silly, and overkill, as 90% of it was self-evident, or of the "well, duh" type of information that should be obvious.

Later in life, when I saw the disasters at other companies because they didn't have a good on-boarding process, I understood the importance of it. It may well have been that 95% of that initial on-boarding was redundant or useless, but the 5% that prevented new hire disasters more than paid for itself over time.

Of course, although everyone agrees that 95% of it is useless, no one can agree on which 95% can be cut, so it stays. Today, whenever I see one of these new hire disaster stories, I keep looking to see if any have hit IBM yet, but they don't seem to have (although many other types of hits occur, certainly).

This was 30 years (or more... sigh) ago, so it could well have changed, but back in the day, IBM's new hire on-boarding was the gold standard.

billdehaan

Has an ignorant kid broken your boxes? Have they ever

I've worked in the defence, finance, energy, transportation, medical, food, and general IT sectors over the past few decades, and almost every one of them has some variation of an "unsupervised new hire brings the company to a halt" story.

Bank trading floor brought down by a new hire plugging in incompatible equipment? Check.

Server room and business center evacuated because a new hire thought the big red button in the server room was the "unlock the exit door" button, when it was really the Halon fire suppression system? Check. "Fortunately", the Halon actually malfunctioned, and the new hire wasn't killed, at least.

Run the "build a hex file from the source tree and copy it to the EPROM programmer" scripts in the wrong order, and accidentally overwrite the project's entire, and not recently backed up, source code base? Check.

Start the test bench sequence in the incorrect order and start a small fire? Check.

Send confidential (fortunately only embarrassing and not legally concerning) information out company wide by using REPLY-ALL and attaching the wrong file? Check.

The details all differ, but the common problem was that an untrained and, most importantly, unsupervised new employee was given duties/responsibilities/access to resources far beyond their current state of knowledge and/or training, and expected to have the same skill and knowledge as an experienced employee. In many cases, it wasn't even standard industry practice, but an in-house, and usually arcane, process that the company was convinced should be obvious and intuitive when it was anything but.

In looking at the aftermath of some of these disasters, my reaction has been "well, what did you expect?". In one case, the poor new hire had to execute a script that included warnings like "Does the J: drive have enough free space?", and "Is the M: drive mapped correctly?". How the hell is a new hire going to know what is enough free space, and what the correct drive mappings are?

In one case, the FNG (fricking new guy) was told to "run the script on the G: drive". When he asked what the script was called, he was told he'd know it when he saw it. He saw the script directory had half a dozen scripts with extremely similar names, picked the most likely one, and caused a near-catastrophe. In the end, it turned out IT had incorrectly mapped his drive letters, so his G: drive was mapped to a completely different system than it should have been. There was literally no way the poor guy could have even accessed the script he needed, he had no idea what it was called, and when he asked, he not only got zero help, he was called an idiot for not being able to figure it out.

While most supervisors blame the new hire for not being omniscient and magically knowing undocumented corporate lore, there have been some good ones. The best response I ever saw came after a new hire, having caused high five figures of loss, fully expected to be fired by his manager. The manager's boss, the VP, interjected, and said "why should we fire you? Your manager just spent $80,000 training you!", clearly showing that he understood the real fault lay with the manager and the lack of guidance provided.

Trying out Microsoft's pre-release OS/2 2.0

billdehaan

Be careful with those rose coloured glasses

I worked at IBM (on contract) doing OS/2 work from 1990 to 1992. I take issue with this statement:

The surprise here is that we can see a glimpse of this world that never happened. The discovery of this pre-release OS shows how very nearly ready it was in 1990. IBM didn't release its solo version until April 1992, the same month as Windows 3.1 – but now, we can see it was nearly ready two years earlier.

The phrase "nearly ready" is completely untrue.

I was booting OS/2 2.0 and using it for my work from June of 1990 onwards. These were internal builds, from the same code tree as the MS version being discussed here. The OS was certainly bootable, and usable for testing, in 1990, but in no way could it be considered "ready" for consumer adoption.

It ran great on IBM PS/2 Model 80s, with the MCA bus, but that wasn't what consumers had. That early version of OS/2 2.0 was basically a 32 bit version of OS/2 1.3. It allowed multiple DOS boxes (or "penalty boxes"), where OS/2 v1.3 had only allowed one, and not being limited to the 80286 architecture, it had better memory management and virtualization.

It was, however, buggy as hell, driver support for non-IBM hardware was almost nonexistent, and the WPS (Workplace Shell) development had barely even started. SOM/DSOM (the replacement for Windows' COM/DCOM) was also in its infancy.

I could, and did, run OS/2 at work every day. And I was installing new builds at least twice a week. Stability improved, as did driver support. But it wasn't until the middle of 1991 that I could successfully install one of those internal builds on my non-IBM home PC, even though it was a SCSI-based system with an Adaptec 1542-B controller. And even when I did manage it, I still couldn't get my ATI video card to go above 640x480 resolution until the GRE update of November 1992.

Yes, that 1990 build could run Windows programs, but it took almost 8 minutes for Notepad to start up (as opposed to 13 seconds on the same hardware with a 1992 OS/2 build). It didn't support SVGA. It didn't support WinModems. It didn't support EIDE drives properly. And don't even ask about Stacker, or tape drives.

What MS and IBM had in OS/2 in 1990 was a bootable kernel that was suitable for development. It was not even close to being "nearly ready" for commercial release.

It's like saying that Windows 95 was nearly ready for release because there was a beta of Chicago (as it was then known) in 1993.

Fresh version of Windows user-friendly Zorin OS arrives to tempt the Linux-wary

billdehaan

Like a new bicycle with training wheels

I used a lot of Unixes (Nixdorf, Siemens, HP-UX, RS/6000, but mostly SunOS/Solaris) in the 1980s and 1990s, but I never really did much with Linux, other than set up servers in it.

Back then, it was a great OS to run on decommissioned Windows machines as headless servers. Firewalls, backup, FTP and NNTP servers, etc. were great, but it wasn't really a user-friendly daily driver. Despite all the claims otherwise, there was simply too much tinkering needed with obscure configuration files for the average user to bother with it.

Today, with Windows pushing people away with all of the unavoidable and intrusive telemetry and snooping, not to mention unwanted AI nonsense, more people are looking at Linux than before.

I've played with Zorin, and I like it. Although I run Mint as my daily driver, Zorin has a lot of things to recommend it, especially for new users.

Complaints that it's using a two year old kernel don't really mean much to potential users who prefer to stay on their existing OS. Microsoft usually has to drag people kicking and screaming to their new OS, by discontinuing security and support for their older OS. Zorin may be running a two year old kernel, but (a) it works just fine, (b) the differences between it and the latest version aren't likely to be even noticed by new users, and (c) it still receives updates and security patches.

It's entirely possible that new users may outgrow Zorin, and decide to move to Mint (or Ubuntu, or Manjaro, or Debian, or whatever), but in terms of friendliness for first time users, I haven't seen anything close to it. Not only is it very approachable for Windows users, it's surprisingly similar to MacOS, depending on which skin you select during the installation.

In many ways, I prefer the UI over Mint. Mint has a number of things that aren't available in Zorin (at least not easily), so it's not really for me (although I may set up another machine with it soon), but for expat Windows and MacOS users, it's won over quite a number of friends who are in the "my PC can't run Windows 11 and we can't afford/don't want to buy a new PC, what do we do" camp. And for reasons I don't understand, there are Windows apps that run on Wine under Zorin without problem that give installation and setup faults on Wine under Mint.

For the average home user, where email, an office suite, and a good browser covers 95% of what they do on their machine, Zorin is a much cheaper solution than needlessly spending money on a new PC just to keep doing what they're doing because Windows 10 is expiring.

Zorin definitely has some weak spots, as do all Linuxes. I'm not a gamer, but I'm told gaming on it is still an issue. It's much improved from years past; gaming on Linux has gone from being impossible to merely difficult, but it's still not up to Windows' level. But for non-gamers, I think a huge percentage could switch without any loss in productivity.

Moving to Windows 11 is so easy! You just need to buy a PC that supports it!

billdehaan
Devil

Moving the Linux is easier

It actually is. I have four machines, with one (32 bit) laptop running Linux, and the other three running Windows 10.

At least I did, until one of them could no longer download Windows 10 updates because the 32GB boot drive couldn't handle a 105GB "patch" that was almost four times the size of the OS plus applications.

I read the Windows support voodoo, which recommended all sorts of nonsense that pushed OS management onto the end user (clear this cache, delete this subdirectory, change the ownership of this file, then edit this registry entry, then reboot to the USB disk and copy this file to this directory, then reboot again, blah blah blah), and even spent a couple of days batting it around, without much success.

Then, I downloaded a few Linux ISOs, booted off them, installed Mint on the machine, set up a Win10 VM on the other disk in case I needed to actually run any Windows app on it (I haven't in the four months since I switched), and left it that way.

The second machine I switched to Mint, and likewise haven't touched.

My primary machine is still Win10, but I'm slowly migrating things off, and will easily be finished before October 2025.

Not one of my PCs would run Windows 11, according to Microsoft. All four of them run some variant of Linux. And since I can run Windows 10 in a Linux VM, all I have to do is disable networking for the VM and there's no worry about security, either.
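Cutting the VM off from the network is a one-liner in most hypervisors. With VirtualBox, for instance (the VM name here is hypothetical), it's something like:

```shell
# Set the VM's first (and typically only) virtual NIC to "not attached",
# so the Windows 10 guest has no network path at all.
VBoxManage modifyvm "Win10-legacy" --nic1 null
```

Other hypervisors have equivalent settings; the point is that an air-gapped guest doesn't need Microsoft's security updates to be safe.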

Why should I throw away perfectly good working computers simply because Microsoft stops security updates for them?

Linux didn't pull me in; Windows pushed me out.

Top Linux distros drop fresh beats

billdehaan

Re: Preparing for October 2025

Well, the terminology of the time was a little vague.

The original SparcStations, like the IPC and IPX, didn't use the Sparc processors. I think the first pizza box we got was a SparcStation 10, which used the Sparc I chipset. The IPC used a Fujitsu processor, and the IPX had a Weitek.

So, generally speaking, Sun (at least our Sun reps) referred to the IPC and IPX as such, and only used "Sparc" to describe stations that had a Sparc (later SuperSparc or HyperSparc) processor in them.

As for HP-UX, you're right. So many of the terms use slashes (OS/2, A/UX) that I forget which is which.

billdehaan

Re: Preparing for October 2025

I think the first Windows version I bought was Windows 95, although you could argue that the OS/2 v2 copy I bought (for $1) that included WinOS2 might qualify. I only ran the earlier versions of Windows at work, and after Windows 95, I was a beta tester for NT4, so that was free, and I think my Win2000 and XP copies were from MSDN, so I got those for free, too.

Like you, I paid for Win7 when XP was deprecated (or defenestrated), and it was a good upgrade from XP, although if XP support had continued, I'd probably have stuck with it. The same was true for Win7 to Win10, but at least that upgrade was free. But even if Win11 worked on my machines (which it supposedly won't), and even if the upgrade was free (which it isn't), I doubt I'd upgrade. The data collection, the privacy issues, the mandatory online account, and the move towards AI integration are not improvements, but downgrades, in my opinion. And since they aren't optional, and cannot be disabled, there's simply no reason for me to support it with my hard-earned money.

billdehaan

Re: Preparing for October 2025

I have no illusions that there is going to be a mass migration off of Windows onto Linux (or Mac) in 2025. I expect some, certainly, but people who are expecting to see an 80%, or 60%, or whatever drop in the number of Windows machines, to be replaced with a huge adoption of Linux (whatever flavour) are going to be disappointed.

On the other hand, I expect to start seeing lots of really good deals in terms of used computers as perfectly good Windows 10 machines that cannot run Windows 11 are thrown into the market. Some companies will continue to pay for support for Windows 10, and apparently, even consumers will be able to for the first time, but most will just buy new machines. Since all those machines will be Linux capable, there will be some great deals to be had.

billdehaan

Preparing for October 2025

I'm pretty much the textbook example of the type of user this appeals to.

I've been an on-again, off-again Unix user since 1983. I've booted Nixdorf and Siemens boxes, I spent five years developing on pre-Oracle Sun machines (the IPCs and IPXs that predated Sparc), HP-UX, and a number of others, and I migrated SCO Xenix stuff to Red Hat and Mandrake in the late 1990s.

Although I frequently ran my backup PCs on some Linux flavour over the past 20 years (whether Mandriva, Ubuntu, or something else), my primary machines were always Windows (XP/7/10). But while Linux was fantastic for firewalls, backup servers, NTP servers, download boxes, FTP transfers, etc., the desktop experience simply wasn't enough to justify a switch, especially since I was working in Windows at the office.

That's not to say that Linux was bad, or incapable. It wasn't. But there really was nothing to justify switching away from a working Windows system. If it was "just as good", or even a little better, that didn't warrant the effort of switching; there was not enough benefit to justify it.

Until now.

The sheer amount of telemetry and spying that Windows 10 does, and the amount of effort required to neuter all the data collection is absurd, and unacceptable. As the saying goes, you're either the customer or the product. But with Windows, you're now both.

With free online services, you expect, and accept, that they will collect some data and/or provide advertisements. With a commercial operating system that you pay money for, the vendor should not be collecting your data, or shoving advertisements onto your machine, but Microsoft is doing both.

That alone sours the desire to stay on Windows. Fortunately, there are lots of free "decrapifiers" that make Windows less intolerable (if not great) on the privacy front, and ways to get past the MS requirement that you have an online Windows account to use your PC. But why on earth should users be fighting against their OS vendor, trying to defeat OS functions that they don't want in the first place? And not only that, they pay for the privilege.

Add to that the fact that many fully functional Windows 10 PCs won't run Windows 11 (mine say they won't), and that means in October 2025, people must either run an insecure and unsupported operating system (a bad idea), throw out perfectly good hardware (just as bad an idea), or switch to Linux.

So, I've switched one PC to Mint, with the other dual booting Zorin and Windows. And although I've tried MX, I wasn't really that enthralled with it. Zorin wins in terms of easy migration off of Windows, Mint wins in terms of customization, and both are excellent choices. Unlike 15 years ago, the software available for Linux is largely on par with Windows (at least for home users), so it really won't be that difficult to turn off Windows next year (and if necessary, it can be run in a VM with network connectivity disabled on a Linux box).

The sad thing is that it's not so much that Linux made a compelling case for people to move to it, but that Microsoft made a compelling case to move away from Windows.

Snow day in corporate world thanks to another frustrating Microsoft Teams outage

billdehaan

Re: I was wondering why things were so quiet today

Oh, there are SLAs (Service Level Agreements) all over the place.

The problem isn't just the outages themselves; it's that things that shouldn't be moved to the cloud in the first place have been.

Before the cloud, there were internal backup servers, where users' Office documents were backed up. If there was an outage of the backup server on Wednesday night, it meant that the most recent backup was Tuesday's. If it didn't come back up until Thursday, that meant users were working without a net for two days. Not great, but work was still getting done.

With the move to the cloud, when the net connection goes down, that's it. No more Office access until it comes back on. Customers don't just lose backup capability, they lose access to everything, hence the term single point of failure.

billdehaan

I was wondering why things were so quiet today

Also, much more productive.

I can't help but be amused by all of these outages. IT and IS departments convinced CTOs to spend massive amounts of money to outsource all of their infrastructure to the cloud, so that it would be more reliable, and yet many companies are experiencing more downtime and data loss.

It reminds me of the time some execs ordered us to save money by getting rid of those "pointless" co-located backup servers and the "useless" in-house redundant server, and just put everything into one really big box. Simple, clean, none of that "replication" nonsense that slowed things down.

It wasn't until it was fully in production (which we did under protest) that I was asked what the machine name spof.companyname.com meant. When I explained that SPOF meant "single point of failure", the CEO (the CTO's boss) went white as a sheet, and wanted us to explain what would happen if it were to fail.

One rendition of Monty Python's dead parrot sketch ("it's pinin' for the fjords; it's ceased to be; it shall be an ex-server") later, he demanded we explain and justify "our" decision to do this. Several CYA emails were displayed, and the new CTO that arrived the next month promptly reversed the decision, and we were able to restore multi-site before there was any disaster.

Today, "SPOF" is becoming synonymous with "the cloud". AWS, Office 365, and the like mean that if your net connection goes down, so do you.

40 years of Turbo Pascal, the coding dinosaur that revolutionized IDEs

billdehaan

I'm not sure what school you went to, but when I was learning Pascal, it was not done in an IDE, and we did compile binary executables.

As for why it wasn't used commercially, the short answer is performance. Compared to the other languages at the time (BASIC, Fortran, Lisp, C, Forth, and a few others), Pascal was extremely inefficient. It was great for teaching concepts, but having the compiler enforce type safety at runtime, rather than relying on the developer to do it, resulted in lots of processor time spent on bounds checking and other things that the other languages' compilers simply didn't do.

In an education setting, performance is less important than the ability to teach the concepts. In industry, the reverse is true. I once did some timing tests of an application that had been written (very elegantly, I might add) in Microsoft Pascal for the PC, against a hacked version that had been written in Microsoft C (this was at the time when Microsoft C was just rebranded Lattice C). The Pascal code was much easier to follow, the logic flow was cleaner, and it would have been a lot easier to maintain than the C version, which was doing address arithmetic and never checking array bounds. However, the Pascal version required 50% more memory to run (at a time when memory was very expensive) and was about 30% slower.

Since time is money, Pascal lost out.

There were several attempts to make Pascal more commercially viable, but with every attempt, they ended up needing to make certain tweaks to the syntax to meet the performance goals, at which point it wasn't Pascal anymore, it was Modula-2, or Modula-3, or even Ada. Of course, Pascal itself had the same relationship to Algol, so it was just continuing the trend.

Microsoft gives unexpected tutorial on how to install Linux

billdehaan

Tacit admission, or hardware reality?

Could it be a tacit admission that you might need a free-of-charge OS for your PC?

I recently discovered that Windows Update was no longer working on my 2018 Zotac PC. Update blared the usual hysterical "YOUR PC IS UNPROTECTED AND AT RISK", because Windows security patches were missing. Of course, it ignored the fact that Malwarebytes and the like were running just fine; only the native Windows stuff was out of date, because Windows couldn't, or wouldn't, update itself.

After shrieking "update me! update me!", it started the download of the necessary components, and stalled at... 0%. A quick search showed this was a very common problem, with Microsoft's official solutions involving steps to clean out the SoftwareDistribution directories, running DISM and SFC a lot, killing various tasks, disabling and re-enabling core Windows services, messing with TrustedInstaller, and removing the WinSXS\Temp\InFlight directory.
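For reference, the commonly documented ritual looks roughly like this, run from an elevated command prompt (a sketch of the usual steps, not an endorsement of expecting end users to do any of it):

```shell
:: Stop the update services and discard the download cache
net stop wuauserv
net stop bits
ren %windir%\SoftwareDistribution SoftwareDistribution.old
net start bits
net start wuauserv

:: Then repair the component store and the system files
DISM /Online /Cleanup-Image /RestoreHealth
sfc /scannow
```

That an OS needs its own users to manually flush its update cache and repair its component store just to receive patches says it all.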

I'm a software dev myself, and my eyes glazed over at all the support voodoo that Microsoft was expecting end users to perform in order to make the update process work. Someone pointed me to a GitHub update tool which, hilariously, could download the Windows updates from Microsoft's servers that Microsoft's own Windows Update could not. The mind boggles.

One of the reasons is that updates have ballooned to ridiculous sizes. The PC in question has a 32GB SSD, and although the security updates were only a few megabytes, the Windows feature patches were over 100GB in size. Each. And so Windows Update refused to download anything.

There are a lot of utility PCs like this, with small (32GB/64GB) SSDs that are fixed in the machine, and aren't going to be upgraded. Although I got the update going, as an aside, I tried a few Linux installs, which (unlike Windows) would cheerfully load off of the 1TB hard drive rather than the SSD. I installed Zorin OS, booted it, and configured it in less time than it had taken to run the Windows updates.

When Windows 10 hits end of life, am I going to spend money on a Windows 11 licence for those machines? Even assuming that they could run Windows 11 (which is unlikely), there are Linuxes out there (like Zorin Lite) that explicitly support legacy PCs that Windows doesn't, are currently maintained and secure, and are free to download. Even the professional editions, which do cost money, are still a half to a third of the price of a retail Windows licence.

So showing users how to install a Linux setup might be a way for Microsoft to relieve themselves of "problematic" end users that are not cost effective to support.

IBM Software tells workers: Get back to the office three days a week

billdehaan

Re: Hilarious

How things have progressed.

You have no idea.

A friend in Toronto was on a team working with their Texas office, and it was decreed that teleconferences were insufficient and in-person meetings were essential. So, the Toronto office packed up the entire team and flew them down to the Texas office to spend 3-4 weeks in person with their colleagues.

Upon arrival, they discovered the Texas team couldn't put them in the same building as the group they were working with; there simply wasn't enough space. But that was okay, there was another building, about a mile away, that they'd rented, and it had high-speed connectivity, so they could just teleconference. Unfortunately, hotel space was tight, so the team had to stay in a hotel about 30 miles away.

So, for about a month, my friend stayed in a hotel, took a 45-minute cab ride into the building where his Toronto team was located, teleconferenced with the Texas team in the building a mile away, then took a 45-minute cab ride back to the hotel. This apparently was a much better solution than staying at home, commuting 15 minutes to the Toronto office, and teleconferencing with the Texas team remotely from 1,500 miles away.

At bonus time, the cupboard was bare, because the company had "unexpectedly" spent so much money on flights and accommodations on that trip that they were in dire straits financially. Did anyone have any cost-cutting ideas, they were asked?

Many of the team subsequently, and "unexpectedly", left for saner pastures.

billdehaan

I worked on a contract where the project manager decided that such productivity could be measured by counting the number of semicolons in the source code, and got someone to write him a script to do so.

The term for this is Goodhart's Law, and I've seen it hundreds of times over the years.

In an example almost identical to yours above, when I worked at one large, unnamed (*cough* mentioned in the article title here *cough*) Fortune 1 company decades ago, the metric was kLoc, or "thousands of lines of code". Yes, they measured software productivity by weight, essentially.

Management had a parser that went through each developer's code, including all included headers, and counted the number of lines. There was a database module that was particularly huge, including hundreds, if not thousands, of 3-5 line modules that handled the numerous permutations of data. It was completely unnecessary to every subsystem but the database module. One week, every subsystem, including graphics, communications, and all the others, suddenly included the root header file for the database, because doing so dragged in about 8,000 lines of headers.

Build times went up from 90 minutes to about four hours. Ugh.

When I asked what was going on, I was told "next week is audit week". Sure enough, after the audit was completed, the code was "re-factored", and through brilliant optimization, the developers were able to cut the four hour build time by more than half, back to about 90 minutes. Management was extremely impressed, and I believe one developer even got an award for his brilliant optimization work of, err, removing the pointless header files that they'd only inserted a week earlier to make the kLoc audit look good.
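A toy version of that kLoc parser (all file names and contents here are invented for illustration) shows just how cheap the trick was:

```python
# Toy kLoc audit: count a source file's lines, pulling in everything it
# transitively #includes, the way the parser in the story did.
import os
import re
import tempfile

INCLUDE_RE = re.compile(r'#include\s+"([^"]+)"')

def kloc_count(path, seen=None):
    """Line count of `path` plus all transitively included local headers."""
    seen = set() if seen is None else seen
    if path in seen:          # don't double-count a header included twice
        return 0
    seen.add(path)
    total = 0
    root = os.path.dirname(path)
    with open(path) as f:
        for line in f:
            total += 1
            m = INCLUDE_RE.match(line.strip())
            if m:
                total += kloc_count(os.path.join(root, m.group(1)), seen)
    return total

# A 10-line module audits at 10... until it includes the database headers.
d = tempfile.mkdtemp()
with open(os.path.join(d, "db.h"), "w") as f:
    f.write("int db_row(int);\n" * 8000)       # the 8,000-line header tree
with open(os.path.join(d, "graphics.c"), "w") as f:
    f.write("int draw(void);\n" * 10)

before = kloc_count(os.path.join(d, "graphics.c"))

with open(os.path.join(d, "graphics.c"), "w") as f:
    f.write('#include "db.h"\n' + "int draw(void);\n" * 10)

after = kloc_count(os.path.join(d, "graphics.c"))
print(before, "->", after)
```

One pointless include line turns a 10-line module into an 8,011-line one as far as the audit is concerned, which is Goodhart's Law in a nutshell.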

You're too dumb to use click-to-cancel, Big Biz says with straight face

billdehaan
Meh

Always check cancellation procedures before signing up

Like most people, I've had horror stories about the difficulty, and sometimes near impossibility, of cancelling a service. Bell Canada stands out as one where their stores, phone support, and web site all pointed to each other as being responsible for cancellations. Despite their contracts clearly stating that the consumer must "contact Bell" to terminate the contract, no one could actually explain whom to contact or how a cancellation could be achieved.

Despite no one in Bell having a clue how to cancel an account, once I did successfully manage to do it, I received a phone call from their retentions department less than 20 minutes later, and three followup calls within a week trying to get me to sign back up.

Of course, that's nothing compared to the guy who spent 20 minutes on the phone with his phone company repeatedly saying "cancel the account, cancel the account, cancel the account" to a service rep who simply refused to cancel it. Once he posted it to the internet and it went viral, he was able to cancel it, but the company had to be publicly bullied into cancelling an account. That's absurd.

Ever since my dealings with Bell, I've made a point of checking out cancellation procedures when I've considered signing up for any recurring service. I do a search for "cancel $SERVICE Canada", and it's surprising how many of those searches link to long lists of horror stories. I'm sure it's saved me money, as I've skipped signing up for a lot of things.

There are definitely reasons to not make it too easy to terminate an account, because it could be done accidentally (any service rep can tell you customer horror stories), but it should be no more difficult to terminate than it was to sign up for in the first place.

Page: