Aaaaargh!
I hate METRICS!
So often I have seen metrics being introduced to improve the performance of a job/team/group/whatever! But all that happens is that people stop doing the job and start working the metrics!
Welcome once again, dear reader, to On-Call, The Register's Friday feature in which we share readers' tales of being asked to address avoidable annoyances. This week, meet a reader we'll Regomize as "Tom" who once worked as the sysadmin supporting a large software development team. "We had an in-house developed problem ticket …
Safety-related projects often have a requirement to achieve 100% code coverage. I've seen some where automation is used to generate the test vectors, and they are blindly accepted without verifying that the results are actually correct ("but we've got 100% coverage") - all that has been proven is that the code works as it has been written, not that it works correctly.
Just get ChatGPT to write your unit tests. I mean, they'll look like proper unit tests and might even be testing the right things a high % of the time, but since, after checking the first few, a human will never lay eyes on them, the purpose and scope of what you are testing will slowly drift over time, until the software is doing something completely different to the design.
Do you want Skynet? Because that's how you get Skynet.
"all that has been proven is that the code works as it has been written, not that it works correctly."
Not just on safety-related projects. My experience is that this is almost universal. The most common responses when a functional deficiency is pointed out are along the lines of either "that can't be done in [language X]" (usually because there isn't a ready-made library method for it) or "that's due to the way [some mechanism such as garbage collection] interacts with the code". Almost always, on investigation these arguments are found to be groundless, the required functionality being possible to implement with a little independent thought.
The bottom line is that we have a development culture based on "look, it runs. My job's done" but this merely reflects a societal culture of least effort for maximum reward so there's little chance of changing it.
I was once on a (waterfall, ahem) project where some management bod decided that system test would be completed when we had reached 500 passing test cases.
This number became a fixation, despite protestations. We achieved this dubious milestone, proclaimed success...
...and unsurprisingly, we found tons of bugs during UAT. No heads rolled for this one...
Another part of the MS model is that documentation for (say) the flunkwot button is something like "Activate this button to invoke the flunkwot function." Undoubtedly this will pass the unit test and the documentation requirement. Never mind that nobody knows what the flunkwot does, as long as it does it when you press that button.
Totally not in a company that didn't do this, recently acquired by a company that does.
Totally not infuriated by dealing with people in said other company.
Me: "Oh, Edna has a problem that affects their next release and which involves my module. I will drop everything and spend time working with them to fix it, with no ticket."
Droid: "Sorry, I can't give you access to project B because this ticket was only for project A. Please close this ticket and create a new one for project B." - after they tried to close the ticket and it was reopened by my boss 3 times.
You might see people focusing on that metric rather than the actual job at hand?
Worked with one outsourcer - we noticed that there was a considerable uptick on ticket closures the day before the metrics were due with a subsequent re-opening of the tickets the day after.
We amended the metric so that tickets closed and then re-opened like that counted as open tickets, and modified the KPI to penalise anything over a certain number of tickets that required re-opening.
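The amended metric is simple enough to sketch. Here's a hypothetical version (all class and method names are invented for illustration, not from the actual contract): a ticket closed just before the reporting deadline and reopened just after counts as still open, and too many such reopens breaches the KPI.

```java
import java.util.List;

// Hypothetical sketch of the amended KPI described above. A ticket that
// was closed just before the reporting deadline and reopened just after
// counts as still open, and too many such reopens breaches the KPI.
public class TicketMetrics {
    public record Ticket(boolean closedBeforeDeadline, boolean reopenedAfterDeadline) {}

    // Tickets that are effectively open: never closed, or closed-then-reopened.
    public static long effectiveOpenCount(List<Ticket> tickets) {
        return tickets.stream()
                .filter(t -> !t.closedBeforeDeadline() || t.reopenedAfterDeadline())
                .count();
    }

    // The KPI penalty kicks in once reopened tickets pass a threshold.
    public static boolean kpiBreached(List<Ticket> tickets, long maxReopened) {
        long reopened = tickets.stream()
                .filter(t -> t.closedBeforeDeadline() && t.reopenedAfterDeadline())
                .count();
        return reopened > maxReopened;
    }
}
```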
Next contract break we ditched them - by which point (because of metric and KPI failures) we were basically paying them nothing.
Sadly, while their behaviour was more blatant than most, that sort of behaviour is pretty rife in the outsource world. Which is why we've brought a lot of the functions back in-house (service desk and networks being the main two still outsourced).
At one place I worked, the helldesk boosted their stats with this little wheeze: any time I had to chase them about a job that was taking longer than expected (like getting me the access I needed to do my job when I first joined), they would open a ticket for my enquiry (about the first ticket, I hope you're keeping up with this) and then close it as successfully completed as soon as I hung up the phone. Well, they had successfully answered my question hadn't they? Even if the answer was "we have no idea" ten days in a row!
For us, it was me noticing the number of times they extended the "required completion date" because they hadn't gotten the work done on time. In some cases, the work was a week late because they didn't do anything at all for over a week - and then expected me to approve their request to extend the required due date. Uh, no. Then I caught them changing it without asking, and I notified the manager in charge of that contract...
People will definitely appreciate how management treats them as numbers in these metrics instead of as actual people. Just a firm reminder that metrics get a bit worse when someone has big problems in their life will definitely help. A lot.
But you gotta do what you gotta do in order to make it easier for certain people to fire, say, 50,000 employees at once.
Just the kind of place where people will innovate and go the extra mile on a weekly basis ;)
Likewise, when people within the corporate structure refer to others as "resources", especially PMs.. eg Please can a couple of resources be made available for Project X?
I'm tempted to send a photo of a pile of coal and some iron ore and ask what other "resources" they may need.
My first indication of this was being told "your code will also be analyzed automatically, and one of the metrics will be how many comments you have per line of code" for a college class. I topped the class list on that one, with significantly more than one comment per line.
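A comments-per-line metric like that is trivially computable, which is exactly why it's trivially gameable. A hypothetical sketch (class and method names invented for illustration) of what such an analyser boils down to:

```java
// A naive comments-per-line metric of the kind described above:
// trivially gamed by padding every line with a comment.
public class CommentMetric {
    // Ratio of lines containing a // comment to total non-blank lines.
    public static double commentRatio(String source) {
        String[] lines = source.split("\n");
        long nonBlank = 0, commented = 0;
        for (String line : lines) {
            if (line.isBlank()) continue; // blank lines don't count either way
            nonBlank++;
            if (line.contains("//")) commented++;
        }
        return nonBlank == 0 ? 0.0 : (double) commented / nonBlank;
    }
}
```

Pad every statement with `// set x`-style noise and you top the class list without improving the code one bit.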
When I worked for a large blue company, they decided to have a "wellness" push, and decided to issue pedometers and reward individuals and teams with the highest number of steps.
Some ultra-competitive teams had members walking at lunch time, stopping every hour and doing laps of the office, etc. My team had all the lazy bastards, so we had our pedometers set up on metronomes, and another member rigged a Meccano mechanism to create a thrusting motion, and we put all our pedometers on it. We won hands down.
Every technical service manager loves metrics. Metrics is how their yearly bonus is calculated.
There is this multi-billion dollar American company that introduced a metric measuring how long each ticket sits in "Waiting for Customer", "Waiting for the Tech", etc. In the eyes of the VP for Operations, "Waiting for Tech" is really bad (for the company) because it makes it look like the tickets are not being looked after. So the challenge was handed down to the technical support managers to bring those minutes down.
One ingenious way was for the tech handling the case to send an email to the customer. The internal system would immediately flag the status of the case as "Waiting for Customer". Another method was to send RFI emails when it was night time (for the customer) or approaching beer o'clock on a Friday. No matter how good the information the customer had provided, the tech agents will always find a way to send the case into Waiting for Customer.
At the end of the day, all the executives got their bonuses at the cost of customers' experience.
"At the end of the day, all the executives got their bonuses at the cost of customers' experience."
Indeed, some years ago, the Veritas basic support was absolutely useless, pushing every customer to buy premium, which was the only tier that was any use.
The basic one had a very particular way to make sure every case stayed in "customer info wanted":
1- ask logs X
2- ask logs Y
3- reply some gibberish and request some more logs
4- go to step 1
How can you beat that? We went nuts about this... They were simply bots running a loop.
"the tech agents will always find a way to send the case into Waiting for Customer."
McAfee (those bastards) would go one further. Send a reply that includes a request to do a combination of things that included rebooting the server. That was back when I was supporting a small shop running SBS Server (ick). With a single all-in-one server, I wasn't going to reboot at random times, so I'd schedule downtime when I was available and nobody else was working. McAfee (those bastards) would auto close the ticket after something like 48 hours "due to no customer response".
It didn't take long before I made sure to ping them every day, replying to any open tickets with a canned response of "This is a production server, I will schedule downtime for the reboot and will update when I do so, do not close this ticket".
It took a bit longer to ditch McAfee (those bastards) and even longer than that to migrate off of SBS and on to something with a bit more redundancy and resiliency.
Several big retail companies have a metric of "number of cash registers open", so one of them wound up with managers signing in to a register, and putting it in sleep "waiting for customer" mode. Later, one of the managers asked me if I'd called the district manager. Apparently they got yelled at for poor security practice, and falsifying metrics. Judging by the physics of the process, most of the goal metrics they'd come up with were physically improbable.
Least probable was likely the requirement to "Log in to register within 5 minutes of clocking in." Both time clocks were about 1/4 mile away in the back room.
Experienced this issue with a couple of our suppliers here. No matter what we're waiting on, the support agents at the supplier will always reply and mark the ticket as "waiting for customer response". Infuriating, as it makes it difficult from the customer's side to know the status of things at a glance!
And teachers throw a wobbly when they are actually appraised by OFSTED!
My business unit has to justify to an external (group level) organisation its performance and continued existence every quarter.
At least Schools don't get closed down if they don't meet every metric per quarter (or per 10-15 years if they were previously rated outstanding!)
I'm sure I will be flamed by the spouses of teachers who don't like their establishment being appraised.
But those of us in private industry will know what I mean.
Yup. Most schools have a good number of staff who've worked in industry for a while; probably most have had some such experience, even if only for a few weeks here or there. I used to have a print union card (and could probably, in the late 70s, have earned a shed load more money by not going into teaching).
But the world is full of idiots who think they know all about teaching without having ever set foot in a school since they sat their last exam.
"Hint: it isn't about metrics."
Exactly that. If OFSTED are so clever at defining what good education is, why does every child not leave school with a single rating across their entire educational performance, with a certificate marked Outstanding, Good, Needs Improvement or Inadequate? Why so many grades for so many subjects, when clearly a single word from a list of four is the optimum way of assessing the students' academic abilities, just as it apparently is for the even larger complexities of a school? Not to mention that they have only a limited number of metrics to assemble into this one-word rating, which doesn't take into account the many other things a school does to make it a good place.
It's not that no one gives a damn about them. Within a narrow range they can be useful guides. But that's the most they can be. And the inspection regime is 100% focussed on compliance with governmental regulations rather than the quality of education the kids are getting.
@Terry 6 and anyone else who has an interest
"Gove told Sky News the so-called “limiting judgment,” which means that a failure in one area means failure everywhere else, should be examined."
Politico newsletter quote above is actually quite hopeful. Gove and Cummings are responsible for the current omnishambles in education. Perhaps this is the dawning of insight?
For those 'not in the system': basically it is an inclusive OR at present, across about 15 strands. If your school/college/nursery gets an 'unsatisfactory' on any one of them then the overall grade is 'unsatisfactory', irrespective of the grades on the other 14, which could have been outstanding. Dig a little deeper, and you find that an 'unsatisfactory' grade can come down to one student saying the wrong thing in one group interview once.
Now, I'd like to apologise to the vast majority of commenters who came into this thread for some light-hearted fun and to share war stories.
At one company, I wrote a dashboard that showed time by engineer. Everyone soon got hold of it and just fiddled the missing time.
At another, the bosses loved to see ticket stats. Some of us had low stats as we got the "stinkers" that no one else would cover. One person used to get really high ratings for one simple reason. The company had lots of ancient old print servers, and they would frequently fail, so a lot of tickets would come in very quickly. They'd grab them, reboot the server, and close the tickets. Overall time, around 10 minutes; number of tickets, almost a day's work.
They hated it when I got rid of all of the old print servers and put in a freshly commissioned cluster (repurposed old kit), just for printing. No more print calls after that.
And THAT is the only thing you're supposed to look for when doing statistics on tickets; all the easily fixed 'nuisance tickets' with the same cause.
Find the worst offender, and fix the root cause, then the next worst offender and so on.
Anything else in the system is not relevant and can probably be faked anyway.
Which means, of course, that you have less time to work on the stinkers that you're an important part of resolving.
Don't get me wrong - I completely understand that it could well be the case that you're thinking about the difficult tickets while resolving the easy ones or just taking them as downtime between moments where you really need to concentrate, but the fact that you have to hit ticket count numbers rather than just being allowed to get on with your job is A Problem.
We had problems getting kits to the shop floor due to component shortages. The bottleneck was in goods Inwards and inspection so they were given some "help" and part of the "solution" was a huge metrics board showing their processing rates against targets. Sure enough, the metrics got better and better, pats on the back all round - except kits still weren't getting to the shop floor due to shortages. Broadly, there are two types of components that come into the company - simple stuff, nuts, bolts, resistors, ICs, etc, that come with a CofC and need no inspection other than counting and checking in. Then there are more complex parts - PCB assemblies, amplifiers, filters and such - that have to be tested. In the time it takes to put a card in a rig and confirm it works you can book in a few thousand M3 nuts and resistors, so, to keep the metrics looking good, that's what they were doing - putting the hard stuff to one side and booking in the easy stuff.
You get what you measure.
Source: Reality...
Considering around 400 fatal road collisions are recorded as being due to speeding in the UK each year, and a total of about 500-600 murders happen each year in the UK, it would be beyond ridiculous to think that burglars are killing more than 75-80% of all people that are murdered each year...
We had a big metrics drive and I was told to get a metrics board up that the high-paid help could come and look at so they could understand what was going on. I knew this would end up sucking in tons of my team's time for little benefit, so I told the boss that I'd create a metrics plan so that everyone knew what was going on. We liked plans in the company, so he agreed. I knocked something up and then added a section called "Leadership Response" and asked the boss to work with his peers to complete it. 'What is it?" he asked. I said that we're creating these metrics for the leadership team to come and look at, so we need to know what they're going to do with them, particularly when things aren't going well. What are the limits at which they do something? What will they do? How quickly will they do it? What metrics should we capture to track their responses? Once he'd agreed all this I'd write it up and they'd get their own section on the board to show how well they responded to the metrics.
He couldn't really argue - it was a perfectly reasonable request. The metrics board never went up.
I can get behind them if they actually are meaningful - but then they are often harder for the manglement to interpret. They, on the other hand, like strange, stupid, meaningless metrics. And then there are colleagues who have no f'ing clue about statistics or data analysis, and pull definitions (to stretch that term) out of some nether regions - which then show "interesting results", but in the end are just tracking... something.
I need more alcohol.
And I am glad I am no longer part of that team. Should said person try to take over my team I'll just quit.
Before retiring, much of my business was with SMEs and bits of government. Mostly database work. I usually came up with two front ends - one for the workers at the coalface and one for the manglement. The workers' one was designed for quick data entry and retrieval; the other produced lots of nice reports.
A Director at one place was really pleased when I gave him an "Export to Excel" button with suitable data filters. After a while he got bored and said that he needed more detail - he was ecstatic when I set up a SQL Server OLAP database that pulled the real data out of the production system. The company accountant said that he almost stopped bothering her, as he now spent most of his time playing with the data - I'm not sure that he actually made too many changes based on what he saw...
Where they go wrong is in giving the teacher the test at the start of the year, when they should give the teacher the textbook the test will cover.
I have no idea why people think everything but students should be tested. Not testing students is how we get high school graduates who can't even spell their own names. But admittedly, this is a simplistic answer to a very complex problem, for which the Japanese have the answer.
> for which the Japanese have the answer
I don't get that - why, what difference are you referring to? The list of differences is so long; for example, the students do a daily cleanup of the classroom (something some countries, including my own, should copy - it might teach more respect for common property).
"Where they go wrong is in giving the teacher the test at the start of the year, when they should give the teacher the textbook the test will cover."
Not sure what you're replying to here. The Aaaaargh! thread is mostly about support tickets. There's another small part of the thread about school inspections and metrics, but that is mostly still about exam testing metrics distorting teaching. And where do they give the teachers access to the test paper before the start of the year - or indeed use just the (one) textbook? Teachers have a curriculum to teach, usually. And in any exam system I'm aware of, the actual exam paper is held under conditions of secrecy until the exam starts. Maybe in the USA they do things differently, in some states, or something.
Whether those tests are worth anything, or whether they actually test what they need to test is another issue entirely. Which does bring us back to the subject of metrics. i.e that schools teach kids how to pass the exams, rather than giving a good feel for the subject.
Back in the 60s I was lucky enough to be taught the "Nuffield physics" A level rather than the regular syllabus.
This was based on the wonderful books by Richard Feynman and the whole point was to make you "think like a scientist". I can still remember one of the exam questions - "Design an experiment to measure the drag on a ship's hull". We'd never covered anything about that in the syllabus, the idea was that we'd been provided with the skills to tackle the problem.
A well-known effect in yacht racing. Handicap rules are devised to ensure that yachts of different kinds can compete on level terms by applying a handicap. Designers then start to work out how they can get a better handicap without reducing the speed of the resulting yacht. This was particularly evident in the 1920s, when long overhangs at bow and stern became fashionable. The handicap rating was worked out on the waterline length when stationary, so because they had a short waterline length relative to the overall length, they got a favourable handicap. But when being sailed, they heeled over, lengthening the actual waterline length. It is well known that the maximum speed of a displacement vessel is related to the waterline length, so the longer waterline length when heeled made them faster than the static waterline length would indicate!
That is exactly how I do my job. I don't really care if the work gets done so long as I meet my metrics. Should I fix a circuit or two, that's a happy accident as far as I'm concerned. And I meet my metrics, plus 10 percent, every month, which puts me ahead of the layoff fodder
But - if you don't measure it, how will you know how much work you're not doing because you're measuring things?
Any timesheeting system that requires "filling in timesheets" as a loggable category is clearly too cumbersome. Similarly any monitoring system where useful metrics or alerts are buried under noise or false-positives.
God yes. A number of years back my NHS-employed colleagues used to have to fill out a detailed online log of how they used their time. Which took a lot of time. Even if it had made sense in their context - working in a multi-disciplinary service, chats with non-NHS colleagues in the staffroom could just have been put down as "multi-disciplinary meetings" if they'd wanted to - none of their work made any sense in terms of NHS bureaucracy, so it was a stupid activity. The reality is that if anyone was swinging the lead (and they were all dedicated professionals who put in more hours than they were paid for) that could be picked up by their team leader, because they each had a case load to get through. But the cumulative time wasted filling out the poxy time sheet far outweighed anything that could be wasted in the unlikely event that someone was a bit work-shy.
Worked in a company where the product was *really* buggy, esp late in the dev cycle when we were approaching release. Not a problem for management though. They'd review the bug list and reclassify most of the serious bugs as not so serious. That brought the metrics down enough so that we were good to go. We shipped on time and everyone got their bonus. Everyone was happy and no one got hurt. Except the customers.
I managed one project where this approach was taken: as long as there was a documented workaround, a critical bug fix could be deferred. A month prior to go-live there were 1,500 documented workarounds plus as many unfixed less-critical bugs; the system was completely unusable. They had effectively given up fixing bugs on the phase 1 delivery to start developing phase 2. I finally managed to get some focus by showing the senior stakeholder the huge paper file of workarounds that every single system user was supposed to have on their desk and use every time a transaction failed. It's amazing what impact telling a vendor they are about to miss a milestone payment can have. Also, because there had been no work done analysing root causes, fixing 300 bugs resolved the vast majority of the workarounds; once the devs were allowed to take time out to really look at issues, the system stability and usability improved hugely.
In the ancient lands of Redmond, the Lords of Microsoft brought forth Word (for Windows) 3.0 and 3.1, and it was buggy. So buggy, the Users and the Pundits were sorely vexed and, lo, word of their displeasure was heard in all corners of the computer magazines of the time, reaching unto those who might buy a word processor. And so it was that the Developers retreated to their hermitage and toiled for many months on 3.2.
In the time before the blessed day of release, a Mr. G made words far and wide that 3.2 would be bug-free. No bugs. Zero. Perhaps if pressed he might have said it wouldn't have _critical_ bugs. And certainly no loss-of-data bugs, for those were known as "show stoppers".
I knew someone with live access to the bug list.
Put Word into Show Page Break mode. Select text across a break. Hit delete. Boom! Uncontrolled program exit, no save. Since this was pre-NT days, probably took down all other apps and the OS.
It shipped with that.
WORD still ( at least up to my Office 10 version but I doubt anything has been changed) has an annoying "feature" (i.e. bug) that if you start with a table at the top of the first page you can't insert anything above it. i.e. Like going back afterwards to add the explanation. (The workaround- assuming you haven't remembered to add a blank line at the start- is to add a page break ). It's not a problem in LibreOffice Writer by the way.
But then Windows itself still has an annoying bug that has been there since the year dot. If you set custom icons for the recycle bin they don't change when you add/empty it. At least not until you edit the registry and add a ,0 (comma zero) to the end of the paths. Any time you change the icons you need to go back and do this- I have it in my regedit favourites.
As in
B:\Icons\Windows 3\3000 icons - 2492_0_0.ico,0
Surely they could have fixed this??
BTW does anyone know what that mysterious ,0 does at the end of the path and why it's needed?
ICO and PE (exe, dll) files can contain multiple different icons. The ",0" says to use the one at index zero.
This is even documented in a few places.
Note that you can even specify an EXE file and use some random icons out of it, eg if you want the recycle bin to look like an Excel document...
Interesting. I knew that files could contain multiple icons - I even use icon dll libraries etc. sometimes. But not how the registry specifies it like that. In anything like normal use all it does is break the functionality they've built in. Users select a pair of icons, visually. Those icons display. But they don't change dynamically until the screen is refreshed, unless that index is applied manually in the registry. Since this is breaking the functionality they've provided it is 100% a stupid bug.
Insert page break does it quite easily- if you know about it.
The problem with work arounds for functionality bugs is that, by definition, they work by doing something that the functionality shouldn't require and the user is unlikely to know about. As with the recycle bin icons bug. No ordinary user will be expected to know about the registry, let alone meddle with it.
Several nuts wielding keyboards at my place of work. Clearly other people are more adept than me at solving errors in the live environment when I see this type of code:
...
try {
    // Some SQL
    ...
    // Throw exception if SQL failed
}
catch (blah) {
    return SUCCESS;
}
...
return SUCCESS;
Ah ah, good one.
One day, I was admin of the main SMTP relay in a big company. I didn't know some idiot had hard-coded it into a widely deployed PC app - some thousands of users.
This email system was always up as part of our worldwide DC.
I happened to be quite annoyed at the hundreds of SMTP connections coming in from all over that shouldn't have been there, and which were creating havoc on the system.
Then I started to blacklist some clients, over and over.
After one month, a little midget came to me, explaining his issues: his app was failing to issue orders. Me: yeah, why? Him: we tracked it to SMTP connections failing. Me: then retry later if there is an error code. Him: hmmmm, we don't track the error code, so all orders are lost. Me: AH AH AH, please go elsewhere, thanks.
Really no sympathy for nuts let loose in coding ...
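The advice given - retry later if there is an error code - is easy enough to sketch. A hypothetical, minimal Java version (all names invented; a real client would also queue unsendable orders for later delivery rather than dropping them):

```java
import java.util.function.Supplier;

// Hypothetical sketch of "retry later if there is an error code": try a
// flaky operation a few times with a growing delay instead of silently
// dropping the order on the first failure.
public class RetrySend {
    // Returns true if op eventually succeeded within maxAttempts.
    public static boolean retry(Supplier<Boolean> op, int maxAttempts, long baseDelayMs) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (op.get()) return true; // success: stop retrying
            if (attempt < maxAttempts) {
                try {
                    Thread.sleep(baseDelayMs * attempt); // linear backoff
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // preserve interrupt status
                    return false;
                }
            }
        }
        return false; // caller should queue the order, not drop it
    }
}
```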
> Because he used an old-fashioned pejorative term for small people, perhaps?
Oh yeah, leftpondians are overly sensitive when it comes to "political correctness" - though there is a lot of double standard when I see what mind-blowing invented nonsense is told "about others" by the extremists on the right and left. Currently the right are winning on double standards with finger-pointing and whataboutism; it might switch in a decade or two.
Ah yes, the modern-day language equivalent of "ON ERROR CONTINUE".
To be fair, there's some code I've written recently that does follow the above pattern somewhat, but it's a special case, where it's trying to log errors that have occurred to a database. In the situation where you can't do that (for example if the original error was due to the database not being reachable), you really do want to catch the exception, and not re-throw it. You probably want to at least log it as an error to something like Serilog though, so someone can see it in a log file (assuming the software can actually write those as well).
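In Java terms, that special case might look something like the following hedged sketch - all names here are hypothetical, and an in-memory list stands in for the fallback log file:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Java analogue of the pattern described above: if writing
// an error report to the database fails, swallow that exception (don't
// re-throw) and fall back to a local log so the report isn't lost.
public class ErrorReporter {
    public interface ErrorStore { void save(String message); } // e.g. the database
    private final ErrorStore store;
    private final List<String> fallbackLog = new ArrayList<>(); // stand-in for a log file

    public ErrorReporter(ErrorStore store) { this.store = store; }

    public void report(String message) {
        try {
            store.save(message);
        } catch (RuntimeException dbFailure) {
            // The DB may be the very thing that's down; re-throwing here
            // would lose the original error. Log locally instead.
            fallbackLog.add(message + " (db write failed: " + dbFailure.getMessage() + ")");
        }
    }

    public List<String> fallbackLog() { return fallbackLog; }
}
```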
"What to do if the software doesn't perform as expected" can, on occasion, be the main part of the work. What distinguishes the decent developer from the shoddy one is the existence of that try...catch in the first place, and not just assuming that code which might throw an exception won't throw an exception.
Yeah, it feels like someone left in the debug code after working out that the error didn't actually matter.
Classic example is some sort of log file being locked - you might not want to crash the entire app because of that.
Personally I think it's far better to continue only on _specific_, anticipated errors - otherwise you never know if you're about to accidentally skip over a completely unexpected one - but that's not always practical. You do need to think about how much work you should spend chasing down edge cases that might not happen often enough to notice - though 'not that often' is a different question for a five-user app vs a five-million-user one!
It's best not to come up with a rational reason to excuse this unless it's truly exceptional. I've seen codebases where essentially the entire source is just a long series of try-catch-continue anywhere that a bug was hit at some point. Especially in Java, some people simply cannot wrap their heads around checked exceptions and just default to this pattern for everything instead. (Granted, Java's checked exceptions are badly designed and often misused; then there are the libraries that throw for basic flow control and need to be chucked into a molten pool of steel.)
Yep... the incompetence of some of the libraries that I have to use is stunning. Expected errors are thrown as exceptions rather than returning a useful status value. Short-sighted developers will claim that this is not a problem because, after all, the developer can track each exception type and handle it as necessary... which is true if one doesn't mind having to murder the program flow at the same time, but more critically this relies on every damn layer in the entire code stack working in this fundamentally broken way. In the end what really happens is that you open a file and, rather than getting a sensible message such as "file locked", you instead get a low-level exception cascaded from somewhere deep down, mired in a dependent library, which reports something like "an error has happened", and you are left with no clue as to what the hell is going wrong.
Exceptions are for exceptional, unhandled cases. Anything expected should be handled gracefully. It's not difficult but there are plenty of troll-wars in certain forums regarding this. In short, if you want an application to have a useful lifetime longer than a week in the real world, handle errors gracefully and log the things that go wrong. An application that compiles is not a tested application.
I don't know if it applies in Java so much, but in C#, the idea that an exception should only be thrown for something exceptional, and not be used as a means of an "early return" stems largely from the fact that throwing an exception is much more computationally expensive than returning a result.
You see some situations where certain code-style choices lead to worse code than you would otherwise find. For example, if someone is enforcing a rule that all methods should only have a single return statement, which in itself isn't necessarily a bad thing, you might find resulting code that throws exceptions rather than returning early. That, or such a chevron of nested if statements that the actual code is somewhere off to the right of you, in the next room. Sometimes rules need to be broken, you just need to be able to justify why you are breaking them, and if it is non-obvious, leave a fucking comment explaining why!
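The distinction the comments above keep circling can be sketched in a few lines. This is a hedged illustration only - the class, enum, and method names (`FileOpener`, `Status`, `open`) are all hypothetical, not from any real library. Expected outcomes ("file locked") travel back as values the caller can branch on; an exception is reserved for the genuinely exceptional case, here a programming error:

```java
import java.util.Optional;

public class FileOpener {

    // Expected outcomes modelled as data, not as thrown exceptions.
    enum Status { OK, NOT_FOUND, LOCKED }

    record OpenResult(Status status, Optional<String> handle) {
        static OpenResult ok(String handle) {
            return new OpenResult(Status.OK, Optional.of(handle));
        }
        static OpenResult failed(Status status) {
            return new OpenResult(status, Optional.empty());
        }
    }

    static OpenResult open(String path) {
        if (path == null) {
            // Truly exceptional: a programming error, so throw.
            throw new IllegalArgumentException("path must not be null");
        }
        if (path.endsWith(".lock")) {
            // Expected condition: report it as a value the caller
            // can handle without murdering the program flow.
            return OpenResult.failed(Status.LOCKED);
        }
        return OpenResult.ok("handle:" + path);
    }

    public static void main(String[] args) {
        OpenResult r = open("report.lock");
        // The caller gets "file locked", not a stack trace cascaded
        // from three layers down in a dependent library.
        System.out.println(r.status() == Status.LOCKED ? "file locked" : "opened");
    }
}
```

The caller sees every possible outcome in the return type, which also sidesteps the C# performance point below: no stack unwinding for a condition that happens on every locked file.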
"though 'not that often' is a different question for a five-user app vs a five million user one!"
Like the infamous "some users were affected" when it's a cloudy system used by millions and "some" is a significantly large number when stated on its own, even if it's "only" 1 or 2% of the total users.
Worked for a company who outsourced some of their work to a very large, well known outsourcer. Their devs were expected to write 200 lines of code per day and were disciplined if they did not meet this goal. Thus, their motivation was not to write good, clean, extensible code but to write 200 lines of code regardless. Needless to say, it was the *worst* code I have ever seen in 30 years of professional development. It had every anti-pattern you could think of and a few I'd never seen before.
We took the code inhouse where our devs' motivations were the exact opposite. The code was eventually refuctored enough that it started to improve but it took a *long* time to sort out the mess.
I have actually been in the situation of refactoring the code from a previous (engineer) developer, and it was when my boss was monitoring LOC as a metric. I asked him if LOC was more important than bug reduction. After several months the code base was half the original size, and bugs had been reduced by nearly 90%.
That was when I learned my hate of metrics. I nearly didn't get a pay rise the following year because I hadn't hit my LOC target.
Using bug fix metrics to incentivise developers doesn't work when the developers write the code, fix the bugs, but can also enter the defects. Number of fixed bugs = bonus for the developer. Pretty soon a buddy system develops where bugs introduced by Alice, are then fixed by Bob. And Bob badly codes a defect, that gets fixed by Alice. I couldn't possibly comment on how obvious those code defects were, or how easy to fix.
By that metric I've already doomed myself for the year...
One PR: +45k LoC and -5.5MLoC!
Reduced the build time in our CI agent from >60min (and often timing out) to <2min. But by a LoC-only metric that is never going to look good.
(I hate tooling that accumulates more and more generated code that will never be run...)
Refuctoring. The art of taking existing code and fucking it up by hacking it to pieces, adding in large amounts of cut and paste code and introducing hundreds of new bugs, due to laziness, lack of knowledge, and complete disinterest in design principles.
I was a lowly tech support guy but had access to the product source code. I could see the correlation between the number of customer calls regarding networking issues and the appalling bag of shite that was the network listener thread. Thousands of lines of code in a single case statement. I got bored with continually entering networking defects, and suggested the whole code should be rewritten from scratch. I was told that the code was good and that I should continue to enter individual defects that would each require a code fix that would continue to expand that massive case statement. In the end I rewrote the whole thing as a state machine and silently created customer patch releases with this new code. Reports of networking defects dropped significantly. Eventually the development group gave up on the not invented here mentality and suddenly, something very similar to my code, replaced the horrendous spaghetti monster from before.
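The rewrite described above - a thousands-of-lines case statement replaced by an explicit state machine - can be sketched as follows. All names here (`ListenerStateMachine`, the states and events) are hypothetical stand-ins, not the actual product code; the point is that each (state, event) pair becomes one small, testable transition instead of another branch bolted onto a monster switch:

```java
public class ListenerStateMachine {

    enum State { IDLE, CONNECTED, RECEIVING, CLOSING }
    enum Event { CONNECT, DATA, EOF, ERROR }

    private State state = State.IDLE;

    // One row of the transition table per line, instead of thousands
    // of lines inside a single case statement.
    State handle(Event event) {
        state = switch (state) {
            case IDLE      -> (event == Event.CONNECT) ? State.CONNECTED : State.IDLE;
            case CONNECTED -> (event == Event.DATA)  ? State.RECEIVING
                            : (event == Event.ERROR) ? State.CLOSING
                            : State.CONNECTED;
            case RECEIVING -> (event == Event.EOF || event == Event.ERROR)
                            ? State.CLOSING : State.RECEIVING;
            case CLOSING   -> State.IDLE; // any event completes the close
        };
        return state;
    }
}
```

Each new networking defect then becomes a question of "which transition is wrong?" rather than "which of three thousand lines do I expand?", which is presumably why the defect reports dropped.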
Refactoring; I once got some piece of code that crashed occasionally. Hugely complicated. Now remember that refactoring means: Change the code with no change in what it does.
After three days of the refactoring, the code was much smaller, very simple, and because refactoring doesn’t change behavior, in the middle of it was a statement if (condition1 && condition2 && condition3) crash();
QA checked the old version and it did indeed crash exactly if these conditions were all met.
What staggers me most is that some PHB managed to get up that morning and get dressed by themselves, and yet they couldn't see the complete idiocy of such an idea as holding people to account on the number of lines submitted in repo commits?
I've been in the biz for a good few decades, met some people who can only be described politely as "Complete f**kwits!" and yet this one has got me! Ha ha!
One guy I know almost got an award for negative defects per lines of code.
He put in a thousand statements like "x= 1;" knowing the compiler would optimise these to one statement.
As a result his defects per lines of code was really small. The bean counters' algorithms could not handle this (due to rounding errors in their code), and sometimes produced negative defects per lines of code.
He was nominated for an award till someone queried the results, and looked at the code.
His argument was he did this to give him time to write the really difficult code without being chased by the "project managers" (and wasting his time).
Many years ago, a certain telecommunications service provider incentivised its sales force to go out and sell a fax service that required an (expensive) 'smart' fax machine. The point was that the customer didn't pay the capital cost of the machine, but paid a premium for each fax sent by the service.
The metric for the sales people was number of contracts.
So corner shops were getting these 'smart' fax machines installed to send, perhaps, single-digits of faxes per week.
The economics of the product were designed around businesses sending thousands of faxes per month.
The losses on that product were legendary.
Many years ago when working for a large computer manufacturer, I saw a presentation given to the operating system development team, who were extremely skilled but also very ivory tower. One slide was in the form of problem/solution.
Problem : The sales team only sell what we pay them to, not what we want them to.
Solution : THIS IS A FEATURE, NOT A BUG.
First day Service Now was put in, like all good boys and girls we found the API specs and made our lives easier, found ways to make it the de facto dumping ground and rarely have to go near the abomination of the web GUI ever again!
Every GUI form in Service Now has "Order Up" or some such shit on it, not sane buttons like "Submit" or "OK", but "Order up" like it's been written for Deliveroo and I'm in desperate need of a Chicken Chow Mein meal for two instead of trying to get some AD group assignments.
In our early development (late 80's) we had an unintentional tester - one Higgins. He could and did break anything. So somewhat cruelly and arbitrarily we made our standard unit of testing the 'Higgins'. So if you could use it for 10 hours without breaking it, it had passed to a standard of 10 Higgins.
Oh, around here Higgins would be fired! The chorus of "he's finding too many defects" would be so deafening that development management would waste no time in showing Higgins to the door! How many times have I been ordered to change sev-high defects to a feature request or lose my job, and told "Stop testing; you're finding too many defects, and we won't be able to release".
We had two wonderful ladies do our testing - Caroline and Cheryl
Brilliant on a night out - absolute PMT bastards when they got into the testing room, and so, brilliant at that too.
Such gems as "This is a payroll system so I need a minus entry on number of children. We have a plus for when the staff progenerate, but what about the case of a divorce and the children go with the spouse?"
Icon as they were two of these ===============>
I was the original developer, then System Architect with an ever-expanding team, of a successful, very early, online insurance system. There were one or two occasions when I may possibly have decided to assist the testing team in finding bugs of sufficient severity that manglement had no alternative but to pull a release that, due to other "lower severity bugs", should not have been being put live in the first place. It wasn't that, by that time, there were any remaining bodies long ago buried by myself, I just knew exactly where to look for freshly-dug graves.
We had one guy in the test team who was great at finding bugs - from the simple to the most complex. We were asked to write up our test strategy for a prestigious magazine. Based on his test philosophy the article was like ...
"You have been asked to test this new car. Go and visit a local farmer. Go at speed down the dirt track, aiming to hit all the potholes. Whoops, you've got a puncture. Raise defects for the following. 1) No instruction book on how to change the tyre. 2) No spanner for loosening the wheel nuts. 3) Cannot see where to put the jack. Once those are all fixed - do the same except drive it in reverse - as fast as is safe. Cross a muddy field - raise a defect because you cannot attach a towing hitch to pull you out. Run the car till it runs out of fuel... then add some (from a can) and check it continues to work ok. Give the car keys to the kids and say they can do anything but not drive it. By now the car is a bit scratched, take it in to the workshop and make sure they can do a paint job - what, no paint of the right colour? Another defect. Park it on a steep hill - not in gear - so it is just held on the handbrake.
Once you have done these basic tests - get your granny to drive it... she is not very tall - and you find she cannot reach the pedals without the steering wheel pressed against her chest."
We submitted it - but it was rejected as being too "simple" and understandable. One reason was there were no graphs or complex equations.
I remember a tale of someone being given a seriously hardened laptop to field-test. He gave it to his young kids with the instructions "Destroy this". ISTR the kids squishing a peanut-butter-and-jelly sandwich by closing the lid on it (PB+J getting ground into the keys) and beating it against a tree, amongst other attempts. It was still functional afterward - test passed!
Many years ago a friend of mine produced a "hardened" piece of cryptographic kit for the military. It was tough (run over by tanks on Salisbury Plain, dropped out of aircraft) and dealt with a wide range of voltages ( AC & DC), reversed polarity, etc. He failed to meet the final requirement - it had to be simple to destroy in an emergency.
You also see that at Drive Through queues as well, cars get priority over people in the store.
And people on the phone.
It was once so bad, I just used my mobile to call the restaurant and gave my order that way. When they asked where to deliver it, I gave the table number. The look on her face was priceless.
I had found that, at a certain fast food place ("Les Arcs d'Or"), if all I want is a coffee, it was much faster to go inside and order it at the counter, rather than waiting in the drive-through queue. Given that that is the only item in my order, it got filled immediately, often by the order-taker. Recently, it seems, they have a separate drink-making specialist, so I may have to wait longer than previously. But it still seems faster than waiting in the single queue behind folks who want 17 Happy Meals or whatever.
"Why do they call it fast food? Because they make you fast before you get your food!" always gets a chuckle from other patrons waiting at the pick-up station.
Yep - it was reported a while back that at one establishment in London people were waiting up to an hour for service while the delivery couriers were waltzing in and out in a couple of minutes.
Unfortunately my usual chippy for a Friday if I've been in the office has now started doing Deliveroo and Uber Eats - not too bad last time I was in but I'll be monitoring it. Trouble is all the other local decent chip shops have gone downhill.
In my favourite curry house, since Covid, the couriers now outnumber the customers actually sitting down for a meal :(
I know the owner quite well and he doesn't like it, but he wants to stay in business. Things have started to pick up again, so hopefully people will continue to come back.
The thing about internet/phone orders is the kitchen has had plenty of time to prepare the food, whereas the people who just turn up and order the food will have to wait, unless there is some already made.
It would be quite inefficient if the delivery driver had to wait while the food was being prepared.
I worked on a system for UK customs and we were instructed to build in a "fudge factor" that each site could set for themselves to adjust their stats so that they were comparable with other sites - e.g. a small detachment at a minor harbour could tweak their figures so that they were "comparable" with a major port such as Dover.
As somebody once said of the Indian government, which is even more stats obsessed than the UK government - they take the figures and come up with all sorts of reports and policies based on the figures, but when you drill down, the base data is entered by a lowly civil servant, sitting in a village in the back of beyond, who makes up the figures just so he can say he has done his job.
Usually attributed to Sir Josiah Stamp in his book "Some Economic Factors in Modern Life":
"The Government are extremely fond of amassing great quantities of statistics. These are raised to the nth degree, the cube roots are extracted, and the results are arranged into elaborate and impressive displays. What must be kept in mind, however, is that in every case, the figures are first put down by a village watchman, and he puts down anything he damn well pleases!"
A colleague told me of attending a conference in which a US Education Dept official announced, to a large number of in-the-trenches State officials, targets that States were to meet on some common educational metrics.
The finely-delineated target ranges were backed up with all kinds of rationales and studies and incentivizing policies. Release of Federal funds depended on hitting the targets.
One State official stood up and said what everyone was thinking. Something along the lines of "We've been doing this for years, and the best any of us in the room have ever done on our stats is plus-or-minus 5%. Why do you think we can measure to two decimal places? Are you in the real world?"
He said the deer-in-the-headlights look from the speaker, and the whole front-of-the-room table, was priceless.
No joke, that actually happens. Not a good idea.
Joined a dev team that was not exactly up to speed on the latest techniques and, after introducing Make to replace the compilation batch files, asked if I could start a Bugzilla server for devs to log into. Quite happy to start it locally on my PC if that helps kick-start the process.
Some years (!) later, we finally get Bugzilla approved and installed (at least it was on a server in IT, so that bit was good) and devs start to use it. Including noting that you can mark an entry as an "Enhancement" - not actually a bug or problem, but a good idea either from a customer or even devs themselves. We trundle happily along, including BZ references in commits and even in release notes. As you spot links between BZ entries you link them together and add notes "turns out, this is really easy if...". This is an old-style program, we do new releases at varied times, to match bug fixes or new functionality for new contracts, rather than forced releases every three months for no good reason.
Time passes and even Project Management learn how to enter bugs. Then PMs start dumping out stats and track how long BZ entries are left open. One day, management gets a Dashboard web page and the PMs stats are proudly offered up.
But they don't bother filtering out Enhancements or any filters at all. "It is called Bugzilla, it can only be used to hold bugs" and "We don't need devs to help with this, we know about Dashboards and our web guy knows how to put raw MySQL data into an IIS page".
One day, my BZ admin screen shows a hundred or more entries have been closed and a huge number of refs between others removed. Our direct manager has pruned out all the items that showed up as "too old, must be closed", including all Enhancements. Someone has read enough of the manual to be dangerous, got admin rights and removed everything but bugs from the input screens.
A new customer, new project, new requirements: all changes get entered as "bugs", we aren't allowed to link them to old entries that discussed how to do any of the items we'd already discussed as Enhancements.
The Dashboard bug graph peaks, blood pressures follow, the older devs grind their teeth to dust...
New contract arrives
I want "send as" from the mailbox and not "sent on behalf of". In case anyone doesn't know, the reason for that is I don't want any replies from my automation.
Open Ticket.
1st Reply - Please send screenshot
Close Ticket
Re-open ticket.
What would you like me to screenshot? Me not being able to send the email?
2nd Reply - I will pass this on.
Close Ticket.
Re-open ticket.
Why was it closed?
3rd Reply - because we are working on it.
Close Ticket.
Re-open ticket.
Stop closing the ticket. Close the ticket when you fix the problem.
3 days later and multiple teams calls to explain what I want to many different people and actually telling them how to do it and who will have access to do it. Problem fixed.
Close Ticket.
I have learned and adapted to start all tickets with "this does not require a screenshot", as it saves me hours of time. I've also started telling them which teams to send my tickets to at the start.
Life shouldn't be this difficult. This is what happens when people focus on stats rather than actually doing the job. I bet my ticket looks like it was closed the same day. High fives all round.
This story reminds me of a 'Letter to the Editor' in the magazine Viz, which went something like this:
"Someone once told me that the most dangerous part of a car is 'the nut behind the steering wheel'. So in the interest of safety, I removed this nut. Later, whilst driving at high speed along a motorway, the steering wheel became detached, resulting in a severe crash. So it goes to show that you cannot believe everything you're told."
I've just been in contact with the customer service for InPost (the parcel collection lockers) because we never, ever, got the confirmation email receipts to say they have our parcels. Not any of the dozens we've sent. Ever. Their response below was irrelevant and useless. I'm pretty sure we've been caught on the quick close of ticket hook.
The response was
I am really sorry to hear that you were not able to generate your receipts via the QR code on the screen. You can always generate it via this link: https://inpost.co.uk/receipts/ by entering your parcel number and e-mail.
Please accept my apologies f........ I do hope this information is useful,........
Which is not an answer. It totally misses the point.
The QR code on the screen of the lockers does generate a link on my phone containing the parcel number. No different to the one in that so called reply. We just don't ever get the receipt. Either their response was generated by an AI or the agent is just closing a ticket with a boiler plate response without bothering to do anything. And they do need to do something. Because their software for generating receipts isn't doing its job either.