"Managers were less than impressed with the effort, and the response to the vendor from up the chain was something to the effect of 'Thanks for stopping by, no sale'."
You mean they were given the boot?
As the weekend approaches, a question for our readers: was your week a success? Asking because in this week’s instalment of On Call – The Register's regular reader-contributed stories of techies being asked to assess the asinine – a failure to recognize success was the crux of the matter. To start this tale, meet a reader we' …
Sounds to me like it was checking the log files, badly.
They turned up the log level for this install, which meant as far as the installer was concerned there was extra unexpected garbage in the file, so something must have gone wrong.
Turn the logging off and I bet it would've declared a successful install.
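If that guess is right, the failing check could have been something as dumb as the sketch below -- a success test written against the quiet log format, so that raising the log level makes every install look broken. This is purely illustrative: the file name, patterns and log format are invented, not the vendor's actual code.

```python
import re

# Lines the hypothetical checker was written to expect at the default log level.
EXPECTED = [
    re.compile(r"^(INFO|STEP) "),
    re.compile(r"^Installation complete"),
]

def install_succeeded(log_path: str) -> bool:
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            line = line.rstrip("\n")
            if not line:
                continue
            # Any unrecognised line -- e.g. DEBUG chatter from a raised log
            # level -- is treated as "unexpected garbage", so the install is
            # reported as a failure even though every step actually worked.
            if not any(p.match(line) for p in EXPECTED):
                return False
    return True
```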
Been there, done that: we had a vendor come to us for testing of their latest and greatest version. The install is on a machine not connected to the rest of the network, and at first everything is fine. We'd only just finished uploading data onto the system when, less than a day in, a problem stops everything from working on this box. The bloke on-site doing the install calls the UK office, who dispatch bodies to assist, but they too draw a blank. They ask if they can connect to the Internet to get assistance from their head office abroad. We rig up a connection just for this machine and various people VPN in from across the globe over the next 24hrs.
Eventually they find a rogue character at the start of the output file that causes the thing to fall over when being read. It took them another day to find what was causing that. It turns out to be a flag at the start of the file to confirm everything was as expected -- except it was the wrong character: the one from the test build, which no one had changed to the one required for the prod version. We were not impressed, and that's a gross understatement.
Anon because I still know people who work there.
Probably. They most likely had a nested set of scripts, each doing one part. All the scripts responsible for doing something worked as expected. The umbrella script for monitoring those scripts had a problem interpreting their longer logs. It's even possible that the umbrella script handled most of the logs correctly, only failing on a log relatively late in the process.
It could easily be that the company's systems had a configuration option which changed the reported message at one stage and the installer's logfile parser couldn't hack it. Or possibly some combination of install options broke the logfile parser's logic. Either way, the software vendor's QA department didn't catch it during testing.
Circa 1980 I was involved with an early computer retailer. An accounting package was offered by a vendor, which supposedly ran on systems we sold. It had been written in BASIC, and used the factory-standard interpreter. (It was a different time, OK?)
Within the first CRT page of code was a line that caused the product to crash. Unconditionally. The as-distributed copy had no way of ever running.
After fixing that, and upon examination by someone with a bookkeeping clue, the columns of numbers didn't add up. Somehow they were printing the detail of credits and debits in the wrong columns, but the totals were in the standard locations. Checking the example reports in their printed marketing materials showed the exact same little oopsie.
Phone call to the software supplier went as expected. "Our product is perfect, you must have a bad computer, you did something wrong, this software is in use daily by hundreds of satisfied customers..." You know, the usual blaming of the victim.
The software was not offered to our customers, nor did we offer a contract to repair it. Oddly enough, it never went on to become a well-known accounting product.
Inadequate testing and probably several problems in the design. It sounds like the success checker was using the logs to establish whether something worked or not (bad idea), it wasn't tested on all the different levels of logging available (inadequate testing), and, for some reason, the logs were not analyzed when the first failure occurred (not optimal operations). There might be more parts to this problem.
I've dealt with this kind of thing on several occasions, but usually on scripts that were written for speed rather than quality, and thus more frequently for internal or open source projects than for companies who need to make a once-or-never sale and therefore have an incentive for it to all look good at the start. One easy way to get this kind of false negative behavior is to rely too much on exit codes from programs. I've debugged a few scripts which assume that the only successful exit code is 0 and programs where the programmer thought that multiple successful exit codes would make use in scripts easier. One of those was going to need to change before the scripts worked, but sometimes, the situation lined up in such a way that the testing environment actually got a 0 out of a program which, in actual usage, would return another successful code. Of course, I've also seen the programs that didn't check any indicator of success or failure and just plowed on under the assumption that nothing would break, so at least the ones that print an incorrect failure message can be tracked down to a specific part that supposedly failed with minimal effort.
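A minimal sketch of that exit-code trap, assuming a hypothetical tool that documents 1 as "success with warnings": the buggy wrapper treats anything non-zero as failure, while the fixed version checks against the codes the program actually promises.

```python
import subprocess
import sys

# What the (hypothetical) tool actually means by "it worked": 0 = clean run,
# 1 = success with warnings.  Anything else is a genuine failure.
SUCCESS_CODES = {0, 1}

def run_step(cmd: list[str]) -> None:
    result = subprocess.run(cmd)
    # The buggy version reads: if result.returncode != 0: fail.
    if result.returncode not in SUCCESS_CODES:
        sys.exit(f"step failed: {cmd!r} returned {result.returncode}")

if __name__ == "__main__":
    run_step(["backup_tool", "--incremental"])  # hypothetical command
```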
I'm presently working with a 10+ year old database product, as required by our organization. (I won't debase myself professionally by mentioning its name).
It offers the ability to code up routines that iterate through database rows to do things like total up columns, find minimum values, and so on. You can even write nested loops where the key field in the current row of one table is used to look up records with a matching key in another table!! Such power was liberally used by my predecessor to write gobs upon gobs of near-unreadable code.
But perhaps I shouldn't be so harsh. The database product segfaults -- every time -- when you run a query in the SQL window.
Many years back I worked for a company selling configuration software (such as choosing different options for a new car). We would send out updates to the program via diskette, using RTPatch.
Our customer support had an inordinate number of support calls dealing with failed updates, for which the solution was to reboot the customer's PC.
The programmer looked into it, and no matter what he did the blasted error would not happen when looking at it in debug.
In the end the solution was to change the error message to instruct the user to reboot and try again. Cut the calls *way* down.
My worst was a memory-corruption bug in an Amiga application I was developing. No memory protection, so although corruption was guaranteed, just what got trashed was sensitively dependent on all that had gone before. There could be any of a number of symptoms, from an immediate crash to screen corruption to nothing at all (which really meant unpredictable behaviour down the line).
Finding an almost-reliable reproducer required rebooting after every attempt, whether I'd learned anything or not, so that the machine would have a (mostly?) repeatable memory layout, so that the same data structure would hopefully get trashed each time. (Just quitting and restarting the app wouldn't do, since things would load into different places due to fragmentation, even if the system was still stable, which it probably wasn't). Then, drilling down in the debugger to actually find the problem required more of the same -- reboot; one stab at it; repeat -- and if, e.g., I screwed up and stepped over a function call I'd meant to step into, reboot and start over. Given how long the boot and app-launch times were, it was a long, boring slog. I remember it as taking weeks of dedicated effort to track down the bug.
(I could have kicked myself when I finally found it. Stupid thinko (analogous to a typo). There was some graphics call that needed to be passed a scratch buffer and its length. I'd passed the length in bytes, but it should have been in units of sizeof(SomeStructOrOther), so the call scribbled off the end of the buffer.)
I guess that sort of thing is all in a day's work for people working with microcontrollers and the like.
The worst bug I ever came across was a memory corruption bug that only occurred if the username had an odd number of characters. The programmer who kept encountering the bug had an odd number of characters in their username; the programmer who was trying to debug it had an even number. That was _days_ of fun!
(This was before valgrind.)
I worked on a problem like that when doing 2nd-tier support in the late 90s for a (then) well-known relational database product. The issue only seemed to occur on MIPS R4000 & R10000 chipsets. The error was different every time - SEGVs, SIGBUS etc. - and different again when run under a debugger, and different again under a different debugger. I recall spending nine months trying to track this down and never did. The most nightmarish problem I ever had to deal with.
I developed a PHP and MySQL based inventory management application a couple of years back. Every time a new item was added, the record was tagged with a Unix timestamp and - because I knew I'd eventually hand this over to someone else - human-readable year, month, day, hour and minute. A few weeks back, another system in our OSS chain broke, and the people running that system came to my old team (I'm still on the group email list), asking for any equipment that had been added after April of last year. An easy enough thing to figure out, only when I dug into the database, the human-readable month field for all devices was "01". The Unix timestamp was OK, though, so I used that to generate a list of all the equipment for them.
Looking into the code later, I found that while extracting the human-readable values from the timestamp using PHP's date() command, I'd accidentally typed "$mnth" instead of "$month". So when the next line was $month = sprintf("%02d", $month + 1); the result obviously always became "01".
Had I written this part of the application in any other month than January, I'd like to think I'd caught it.
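For anyone curious how the list got regenerated, here's a rough Python analogue of the recovery described above (the original was PHP; the field names and cutoff year here are invented for illustration): since the Unix timestamp was stored correctly, the human-readable fields can always be rebuilt from it.

```python
from datetime import datetime, timezone

def readable_fields(unix_ts: int) -> dict[str, str]:
    # Rebuild the human-readable columns from the authoritative timestamp,
    # ignoring the "month" column that was stuck at "01".
    dt = datetime.fromtimestamp(unix_ts, tz=timezone.utc)
    return {
        "year":   f"{dt.year:04d}",
        "month":  f"{dt.month:02d}",
        "day":    f"{dt.day:02d}",
        "hour":   f"{dt.hour:02d}",
        "minute": f"{dt.minute:02d}",
    }

# Filtering "added after April of last year" purely on the timestamp
# (the year here is illustrative, not the one from the story):
cutoff = int(datetime(2023, 4, 1, tzinfo=timezone.utc).timestamp())
```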
'I'd accidentally typed "$mnth" instead of "$month" '
Which is exactly why I abhor the current fad for abhoring variable declarations. They're the programming equivalent of double-entry bookkeeping -- a bit of deliberate redundancy as a simple, mechanical way to catch dumb mistakes (so that you can spend your time hunting down not-dumb mistakes :-/ instead).
Not declaring variables also near guarantees that the application will be as slow as a snail unless there is a very good compiler optimiser at work which will derive the exact type of variable required. If not, a variant type variable will be used instead and these are incredibly slow and inefficient.
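For what it's worth, the same class of typo in a language that refuses to read names that were never assigned fails loudly instead of silently producing "01". The sketch below deliberately reproduces the typo to show that: the read of "month" raises a NameError at runtime, and linters flag the undefined name before the code ever runs.

```python
import time

def human_month(unix_ts: int) -> str:
    # Deliberately reproduces the typo from the story: assigned to "mnth",
    # but the next line reads "month".
    mnth = time.gmtime(unix_ts).tm_mon
    return "%02d" % month   # NameError: name 'month' is not defined
```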
A small lifetime ago some manager thought that he could code so knocked up scripts to do maintenance on systems, largely collecting data for remote transfer. Potentially destructive maintenance too as part of the process involved deleting old data.
No matter how many times I told him, he was of the opinion that handling dates in string variables, and not in variables of the appropriate type, was fine. Even when demonstrated and shown how to fix the problem, he just carried on producing fail. Fail that involved deleting data on systems that didn't have the UK date format, which inevitably happened on German-installed systems of course, let alone the default American install.
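The failure mode is easy to demonstrate. A minimal sketch (the date string and cutoff are invented for illustration): the same string parsed with UK and US conventions lands on opposite sides of an "is this old enough to delete?" test, which is exactly how data goes missing on the non-UK installs.

```python
from datetime import datetime

record_date = "02/01/2024"   # ambiguous on its own: dd/mm or mm/dd?

uk = datetime.strptime(record_date, "%d/%m/%Y")   # 2 January 2024
us = datetime.strptime(record_date, "%m/%d/%Y")   # 1 February 2024

cutoff = datetime(2024, 1, 15)
print(uk < cutoff, us < cutoff)   # True, False -- same string, opposite delete decisions
```

Storing and comparing real date values (or ISO 8601 / epoch timestamps) instead of locale-formatted strings removes the guesswork.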
It reminds me of a similar story that happened to a colleague during the mid-00s.
Dude was the SAN admin for a big central DC of a company with a $9bn turnover.
We were after a good consolidated SAN and storage reporting tool and wanted to evaluate a couple of those.
It turned out IBM had just bought a company to put their product into the (gigantic) Tivoli portfolio. Tivoli Storage Reporting it was called, I think, back then.
So, my colleague asked for and obtained a fresh Windows Server VM with all the requirements and an admin account, of course.
He got the media from the vendor but was never able to install it. I don't remember the details, but it never installed and he couldn't figure out why.
Calls to support etc. were totally unsuccessful for weeks.
We were in Europe, but IBM then decided to send a guy from product engineering, from Texas (yes!), to Europe for a couple of days to sort this out.
For two days we watched the poor, desperate Texan dude, totally puzzled at what was happening, calling, restarting etc. He left with the product still not installed.
We never did find out what the issue was. End of the PoC.
Android Auto is annoying as heck with its constant requests to be removed and re-paired with the car even though there's nothing wrong with it. It has some odd quirks that are potentially the fault of the car's software too, no doubt. There's also the constant pop-ups asking me how I find my experience with Android Auto, which I dismiss, thinking how about you actually test your bloody product or make use of that lovely telemetry you are no doubt collecting (c'mon, it is Google).
I had endless joy with Ford Sync - a load of crap that Microsoft supplied which explains a lot of the issues in it. The Bluetooth connection is entirely hit and hope, with the "connected" and "disconnected" status values being dependent on the current direction the wind is blowing. For example, hit the "connect" button and the Sync Bluetooth system will report that the device is now connected. Go back to the status page and it shows disconnected. This repeats until the Sync system is powered off which requires removing the battery terminal or figuring out which unmarked anonymous fuse controls power to the Sync system. Performing a "reset" (menu option) doesn't do a thing, only powering off.
This hell hole may have been fixed, but as the device is denying all knowledge of the USB connector applying the latest update is impossible.
"Has your tech worked without reporting it's ready to roll?"
Well, yes. In fact, this was perfectly normal for the first several decades of my career. My code still works that way.
Silent success is where it's at, at least for professional software. As Doug McIlroy wrote back in '78, "Don't clutter output with extraneous information" (The Bell System Technical Journal, Volume 57, Number 6, page 1902). No message means a clean install ... you only need to throw a message in the event of an error.
Almost everybody should agree that extraneous information is a bad thing. Unfortunately nobody will agree on what counts as extraneous.
For an installation, having a blizzard of a thousand messages appear on the screen saying things like "Copying eijnefnf.pwq to ..../inst/conf" for 2 milliseconds each is pointless, and a sign of someone who's too in love with the details of their own work to realise that it's a waste of time.
A succinct "installing..." message followed by "installation complete" is harmless and reassuring. And providing no feedback at all for an action, as well as assuming that the user understands that "no news is good news", breaks other good principles of human-computer interaction.
Just don't add that horrific idea Microsoft had of replacing a progress bar with "something that just moves on the screen - and then starts again. And again".
Whoever dreamt that up is a sadistic bastard whose beard and sandals I'll set on fire first when the revolution comes, especially given how long these installs take. I have never seen anything install this slowly on any other platform, and available network speed does not seem to be the controlling factor.
I think it has more to do with how much user impatience ChatGPT picks up via the camera that is now ubiquitous thanks to &^$ Teams.
Ugh. I changed Linux distros over a bug like that - over 95% of the main system message logfile was simply telling me it was switching users/processes/something. With no one logged in or attempting to be, no cron jobs, nothing. And no way to shut that "feature" off that I could find, either. A Fedora version, I think.
This is why you have progress bars that nominally do something.
Microsoft's Windows Installer (MSI) UI used to have a progress bar that sorta kinda did that... except it didn't really work out how many files there were to copy and update the bar accordingly. So the end result was that you saw a progress bar progressing smoothly until the file copy started. Then it sat there without moving while the other UI element showed filenames flying by... And then, when that was done, boom, the bar moved. It's kind of pointless to do it that way.
And as @richardcox13 points out, the verboseness should go into the log (if there is one). That is one thing the Windows Installer did right with the flag '/l*v'. Usually looking at the logs, people like me (who used to work in tech support and had to support products using MSI) could figure out exactly what had gone wrong and where. Those were good days. Even today I can still work it out.
Capita used to have a lovely progress bar on one of their SIMS updates (I'm thinking W95/W98 days here), well before the days of SOLUS and the joy that inflicted upon the world.
It would get up to 67%, then drop back to 33%. It eventually finished at around 127% IIRC. I have no memory of what happened if the install failed, so it was obviously one of their more stable and reliable updates!
A past job of mine involved working with unusual software with potentially very verbose error logs. This taught me a lot about practical file-parsing, stripping out unimportant detail and ignoring duplicates to find the actually important bits.
I had no idea at the time that one day those skills would be how I kept a minecraft server running. By god Java likes to fountain detail at you in the logs.
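The sort of triage involved looks something like this sketch: drop the known chatter, strip timestamps so repeats collapse, and keep the first occurrence of each distinct message. The noise patterns here are placeholders rather than any particular product's output.

```python
import re
import sys

NOISE = [
    re.compile(r"\bINFO\b"),
    re.compile(r"Can't keep up!"),   # the classic Java/Minecraft server chatter
]

def interesting_lines(lines):
    seen = set()
    for line in lines:
        line = line.rstrip("\n")
        if any(p.search(line) for p in NOISE):
            continue
        # Strip a leading [timestamp] so the same error at different times
        # de-duplicates down to one line.
        key = re.sub(r"^\[[^\]]*\]\s*", "", line)
        if key and key not in seen:
            seen.add(key)
            yield line

if __name__ == "__main__":
    for line in interesting_lines(sys.stdin):
        print(line)
```

Run it as a filter over the server log and the fountain of detail collapses to the handful of lines that actually matter.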
"Obviously the developers of Java [and Python] don't believe in this at all either, since every damn failure is a font of useless error reporting."
That's the fault of application developers, not the languages themselves. The stack trace is only produced for an unhandled exception. For every anticipated error path, the exception should be caught and dealt with -- where "dealing with" might well consist of aborting with a user-appropriate message.
Back in the day, "Bus error -- core dumped" was all you needed, but the days of memory dumps and post-mortem debugging are long behind us. If a Java or Python app goes TITSUP*, well, that stack trace may be meaningless to the end user, but it's all the developer has to help them figure out what went wrong.
* Total inability to solve user's problem
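To make the "anticipated error path" point above concrete, a minimal sketch (the file name is illustrative): the expected failure is caught and turned into a short, user-appropriate message, while anything genuinely unexpected still raises and produces the stack trace the developer needs.

```python
import sys

def load_config(path: str) -> str:
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except FileNotFoundError:
        # Anticipated error path: abort with a message the user can act on,
        # no traceback required.
        sys.exit(f"Configuration file '{path}' not found; see the install notes.")
    # Anything else (permissions, disk full, plain bugs) is unanticipated and
    # still produces the stack trace -- which is all the developer has to go on.

if __name__ == "__main__":
    print(load_config("app.conf"))
```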
> the days of memory dumps and post-mortem debugging are long behind us
Once again, it feels like the more time passes, the worse everything gets.[1]
Once upon a time, devs were proud of the ability to debug a User's problems, now they have to see the stack trace for themselves or it is marked "can't reproduce".
[1] and me with this pain in all the diodes down my left side.
The sentence you quoted contradicts your assertion. After all, what was the core dump? It was a lot of data about the user's situation that led to the crash. It contained a lot more data than the stack trace does. Those dumps were useful for debugging programs for exactly the same reason and in exactly the same way that stack traces are, and both only happen when the programmer did not appropriately detect and handle an error condition. In the absence of the dump or stack trace, the programmer has to try to reproduce the problem from what the user describes, which we certainly will do, but it is less effective because the user may have omitted an important factor because they didn't think of it, didn't know it was important, or didn't know about it at all.
This reminds me of a debugging exercise I did a few years back where I was asked to fix a problem my software had when opening a certain file. They gave me a copy, and I ran the software on it with complete success. I did so repeatedly and in a variety of environments. I duplicated their server and ran it on that image. Only after a few days did they explain that the thing they were doing which caused the failure was running two copies of the software on the same file simultaneously, set to write output and logs to the same places. Of course we start with anything the computer can tell us before we ask the user to spend a lot of their time providing information they might not have.
How did I contradict myself?
You yourself said it - the core dump contained *far* more information than a stack trace does - with a core dump you can do a *full* post-mortem debug: pull the stack *and* examine the state of the in-core data structures. And the sentence I quoted was:
> the days of memory dumps and post-mortem debugging are long behind us
So, yes, having *at best* a stack trace is nowhere near as useful as having the full core dump. Examining a core dump and doing full postmortem debugging is a skill that is being lost, IMO, so things are getting worse!
Not sure how your scenario adds to this argument: of course, if you have the data, the precise version of the program and resources to replicate entirely the client's actions then you can recreate the problem (in this case, a simple enough overwriting). But note that you didn't attempt any postmortem debugging and were in a far more favourable situation than many when faced with a User crash.
But it was an interesting story of the programmer not expecting the User to do something that daft.
The part I thought was contradictory was this:
"Once upon a time, devs were proud of the ability to debug a User's problems, now they have to see the stack trace for themselves or it is marked "can't reproduce"."
You act like seeing a stack trace is a crutch, but seeing a core dump, which provides even more information to the debugger, is not. These statements don't seem to align. My story was not exactly the same, but an indication of why we try to use core dumps or stack traces before relying on less reliable user information, which is what I thought you wanted devs to be doing instead of asking for stack traces.
You appear to think that debugging got worse, when in fact the same tactics are being used in the same ways. People still collect logs and information generated by a crash and use those to diagnose problems. If they can't get them, they attempt to reproduce from other information.
Your reply suggests you might have had a different point: "So, yes, having *at best* a stack trace is nowhere near as useful as having the full core dump." I interpreted your previous comment to be expressing dissatisfaction with developers' skills, not their tools. However, I'm not particularly concerned about the loss of dumps; they're still available for programs that make them useful, but many complex programs still prefer to keep some chance of cleaning up resources after an unexpected error condition. A dump is a rather catastrophic failure mode, and it provides little information to the user, not all of whom can call the programmers quickly.
The "informational" and "verbose" channels for this. Modern shells don't have just stdout and stderr (aka known as 1> and 2>), they have five or six channels to separate verbose, warning, error, stdout, debug and informational.
I like using the verbose channel for my programs and scripts since it does not end up in log files, but I can output "still alive, working at point X of Y points" so I can guess how long it will go on, or see that something is weird since it takes way too long.
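Not quite the same as a shell's separate streams, but a rough Python analogue of that arrangement, assuming a made-up handler setup and file name: the chatty "still alive, at X of Y" lines go only to the console, while the file that gets archived sees just warnings and errors.

```python
import logging

log = logging.getLogger("job")
log.setLevel(logging.DEBUG)

console = logging.StreamHandler()          # verbose progress, never written to disk
console.setLevel(logging.DEBUG)

logfile = logging.FileHandler("job.log")   # what actually ends up in the log file
logfile.setLevel(logging.WARNING)

log.addHandler(console)
log.addHandler(logfile)

for i in range(1, 11):
    log.debug("still alive, working at point %d of 10", i)   # console only
log.warning("3 records skipped")                              # console and job.log
```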
"Silent success is where it's at"
It depends. For every-day utilities like "ls" and "cat", I fully agree -- and that, I suspect, was the context of McIlroy's advice.
But I've had enough software fail silently that for anything important, long-running, and/or complex, I want to be explicitly told that it succeeded.
As a consumer, rather than a pro, I'd like accurate information when it comes to installs. When my Mac installs or updates the OS I get a couple of types of screen which are less than useless. There's a series** of progress bars, each one with a massively optimistic "time remaining" estimate that just serves to annoy me when I've been looking at a stationary progress bar stuck at 10% saying "9 minutes remaining" for 35 minutes. Then there's the black screen that can stay black for a long and worrying time; has it bricked? is it stuck? do I need to do something? In the old days the hard drive**** gave some confidence that stuff was going on under the hood, but today you've no idea, and when you're 40 minutes into a promised 6 minute install it's concerning. I don't understand why they can't give an accurate install time, and I don't want to stare at a blank screen with my anxiety levels going up faster than the install. What I'd like to see is a series of short messages indicating that something is happening, updated every couple of minutes just to show me something's happening - or a list of everything that needs to happen that gets struck out as each step completes.
**and why can't I have one progress bar to rule them all?
****or the floppy "Please insert disc 8 of 11" - even more reassuring!
.. but how do you figure that out? If you try it by number of files copied it'll be woefully inaccurate, since copying large files takes a lot longer than copying small files. If you try to do it by amount of data copied you get the opposite problem - copying large numbers of small files is much slower at transferring data than copying one large file. And both are entirely dependent on the speed of the target drive. Then there are all the other operations that need to be done during copying; tidying up old files - of which there can be an unspecified number, updating config files, generating new caches, etc etc. All of which take an amount of time that is largely dependent on the speed (and degree of fragmentation) of the target disk and is impossible to calculate in advance.
This is largely why Microsoft switched to a simple animation instead of a progress bar. Inaccurate timing information is less than useless and creates anger and confusion.
I agree that the black screen on MacOS is bad, but I think at that point it's updating things that mean that screen can't be on. Maybe we need a flashing light on the side?
Personally, "sudo apt upgrade" gives a lovely amount of verbose output and gives me all the confidence I need that something is happening. Provided it's still pumping out lines of data I know it's not dead, and that's the best you can really do.
.. but how do you figure that out?
You ship a pre-installer; this then benchmarks the user's hardware and scans their current install, generating enough stats to create a much more accurate time-based progress bar.
Obviously you call it something ominous like a 'Compatibility Assessment Tool'; have it display marketing slides when running so they know it's doing something.
"... that right, just run that... I'm afraid it can take up to 36 hours to run... if you email us the "Measurement of Users Software Environment" report when it finishes we'll get the install media sent right over..."
"No message means a clean install ... you only need to throw a message in the event of an error."
That works great until your script takes a while to run. If your install script has been printing nothing for three minutes, is it still working successfully or did it get stuck in a loop somewhere? Of course, this is also the kind of script where, should I terminate it because it appears to be doing nothing (whether or not it was actually stuck), the lack of output means I'll have little idea exactly what it has done and what I need to do to clean up the mess. (A periodic heartbeat line, as sketched after this comment, is one way around that.)
Yes, there are some ways to investigate what the script is doing by looking at files it accessed, attaching monitors to the process and its children, and reading its source (of course everything is open source, right). However, the average user doesn't know how to do that and the technical user doesn't want to do that unless it's necessary, which in most cases it is not. This model also requires the programmer to detect, handle, and have a message for every error condition that could occur on any system capable of trying to run it*. Of course, that's ideal and I'd like everyone to do it, but don't pretend everyone does.
* For example, install scripts that can run on any vaguely Unixy system, but weren't intended for every single one. If I run them in a tiny Unix-style shell that runs on an iPhone and has no services stack, most scripts won't be expecting that and will fail if they try to interact with one. It would be great if every install script checks that, but most do not try at all.
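One middle ground for that, sketched below (the interval and message are arbitrary choices, not anyone's actual installer): keep the real work quiet, but have a background heartbeat print a line every so often so whoever is watching knows the script hasn't wedged.

```python
import threading
import time

def heartbeat(stop: threading.Event, interval: float = 30.0) -> None:
    elapsed = 0.0
    while not stop.wait(interval):
        elapsed += interval
        print(f"... still working ({int(elapsed)}s elapsed)", flush=True)

def install() -> None:
    time.sleep(5)   # stand-in for the long, otherwise silent real work

if __name__ == "__main__":
    stop = threading.Event()
    threading.Thread(target=heartbeat, args=(stop,), daemon=True).start()
    install()
    stop.set()
```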
It matters. If you can't get the simple stuff right then I won't assume that you can do the difficult things.
I once rejected a CV with the entry:
Marital Status: British
While that might have been exactly what they meant to write, it made me wonder what their SQL would look like, and I was unwilling to find out in our own codebase.
Obviously they were just making it clear that they were married in the UK/ under British Law ... as opposed to being unmarried, married in France (or some other country), or married according to whatever miscellaneous or informal cultural practice that might have seemed a good idea at the time. Perhaps an unexpected style of answer, I agree, but I'm not sure it's actually wrong...
Does "unmarried" have a jurisdiction?
Hmm, I suppose it might, as there might be cases where a marriage in one jurisdiction might not be counted/valid in another[1]; and even weird ones where someone (might have) married you somewhere without your knowledge, so you might only be sure of being unmarried in the (e.g.) UK.
But I guess it depends on what associations a writer might think come with "status", and I think the applicant's answer is arguably not unreasonable.
After all, it's not like the applicant wrote "Marital Status: giraffe laser seven".
[1] You might have trouble getting a UK same-sex marriage accepted as valid in Saudi Arabia, I would imagine.
Minor nit: I _think_ even Alabama now insists on children being at least 14 before marrying.
A 13yo legally married to an adult in one of the United States is probably legally married in the UK. They just can't have sex in the UK (and the adult is at risk of being prosecuted for having sex in America).
"Minor nit: I _think_ even Alabama now insists on children being at least 14 before marrying."
There is no Federally set age for marriage, and the 50 states have completely different laws on what the age of consent is, or indeed what "consent" means.
Remember, kids under the age of consent can't enter into a contract, and a Marriage is a legally binding contract.
The age for general marriage (meaning "the couple don't need permission") across the United States is 18, with exceptions in Nebraska (19) and Mississippi (21).
With parental consent and/or a court order to issue a marriage license, that age can drop. Many states allow underage marriages at age 16, with a few at 17 (9 States) or 15 (3), and one at 14 (Alaska).
6 states do not allow minors under the age of 18 to marry at all.
9 states have no minimum marriage age codified in Law ... in theory, a newborn could be legally married with both parental consent and a court order. Good luck getting that court order, though, even if the parents are stupid enough to allow it. In these states (which includes California (surprise!)) it is extremely rare for a marriage to be allowed if one of the parties is under 16. Last time I looked it up, 15 year olds were around 4% of the total underage marriages. Under that age, the numbers were well under 1% in total, with 14 year olds being the vast majority. The youngest I am aware of this century are three marriages of 10 year olds in Tennessee (all girls), and one 11 year old boy in the same state. Tennessee has since changed their law, making it illegal for anyone under 17 to marry.
"A 13yo legally married to an adult in one of the United States is probably legally married in the UK. They just can't have sex in the UK (and the adult is at risk of being prosecuted for having sex in America)."
I suspect that this would be so incredibly rare that there are no codified cross-pond laws or agreements governing it. If it has happened at all in this century (which I doubt), they probably wouldn't discuss their private business with the nosy busy-bodies in the foreign land ... and would probably travel under the aegis of Uncle and Niece or the like, just to stop idle tongues wagging.
"I suspect that this would be so incredibly rare that there are no codified cross-pond laws or agreements governing it. If it has happened at all in this century (which I doubt), they probably wouldn't discuss their private business with the nosy busy-bodies in the foreign land ... and would probably travel under the aegis of Uncle and Niece or the like. just to stop idle tongues wagging."
I seem to have a vague memory of some US rock/pop star visiting the UK with his wife, who under UK law was below the age of consent. This would be last century though, not this century. It made the press but I don't remember if there was any actual fallout from it or not.
Yes, JLL, back in 1958. Well over half a century ago. And HE was playing fast and loose with the law, even in that rather permissive time and State. Ruined his Rock & Roll career before Rock & Roll was really a household name ... so he turned from one gullible crowd (teenagers) to another (gospel music aficionados) in order to make money.
Kinda brings new meaning to Otis Blackwell and Jack Hammer's "Great Balls of Fire" ...
"Marital status" is a standard phrase and if the person didn't know it means "are you married or not" then they may not have been suitable.
Actually why would someone include that on a CV? I am sure it would not be allowed to request it in a job application form.
Times change, it used to be common to put things like age and marital status.
I've just checked the CV I wrote when looking for work after finishing my degree, it includes my nationality, birthdate and marital status. Information I'd be quite offended if was insisted upon these days.
I will have based it on an example from someone else, and had lots of 'writing CV' info at that time of life, so I expect it was just a lot more common back then. (20ish years ago).
Just checked one of mine from 2006 - I was "Single, no dependants". I was also the owner of a clean UK driving licence.
British Nationality was included, but the sort of work I was looking for in those days would have required it and a passport to prove it at the job interview.
Like FIA, I'd have copied the format from somewhere else, many years ago, so I assume it was more common to include this stuff.
I once had the opposite problem: to obtain my superannuation/pension entitlements from Switzerland, I had to prove I had never married. Simple in CH, but a problem in UK/AUS.
Fortunately I was sharing an office with a UK Justice of the Peace, so I typed up a document, "I declare that I have never been married", and had him notarise it. I got my payout.
It'll be a legal thing; at that point logic goes out the window and it becomes a directional cyclic state graph.
There's no path from 'Married' to 'Unmarried', only to 'Divorced', 'Widowed' or 'Died', (one of our Kings did add a 'Beheaded' state, but it still only returned to 'Married').
When I married in Taiwan 20y ago, I had to prove that I had never been married previously. This is much harder than you would imagine, and required several contacts with the British embassy, and taking out an advertisement in the local newspaper to meet certain criteria. After almost 3 months of to-and-fro, the resulting "Certificate of no impediment" document was very impressive, with seals and ribbons and the like.
A few years ago a friend had a similar but opposite problem. He wanted to marry, in the UK, his fiancée, who was previously married to a highly abusive polygamist (he managed to be married multiple times). In order for the divorce/annulment to go through in her home country she had to have the approval of her husband. He had disappeared, likely for legal reasons, and they had to go through months of trying to track him down to the point that they could get to something like the legal status of "presumed dead". It took many months and cost a huge amount, both in upfront fees as well as bribes, in order to get the requisite number of ribbons and stamps on the paperwork. Then they had to get this paperwork notarised by an acceptable individual, which caused even more problems...
They did it in the end, but it was incredibly stressful for them both.
Perhaps an unexpected style of answer, I agree, but I'm not sure it's actually wrong...
A/C's response reminded me of a joke a colleague told me a while back. It goes something like this:
Two folks were flying in a helicopter along the west coast (of the US, for our Continental readers). At one point, a fog bank rolled in, and they became disoriented. After flying for a while, the pilot decided to descend to see if he could find some visible landmark he could use to regain his bearings. He descended into what looked like a college campus. On the ground he spotted a couple of folks looking back at them. He got on the 'copter's speaker, and asked, "Where are we?" One of the folks on the ground produced a marker and a large piece of cardboard, on which he scrawled: "You're in a helicopter!"
The pilot put on a wry grin, turned to the passenger and said, "I know exactly where we are. We're in Redmond, Washington and that's the Microsoft campus." The passenger, looking more than a little puzzled, asked, "How can you possibly know where we are from that response?" "Easy," the pilot replied. "I know we're over the Microsoft campus, because that answer is technically correct, and absolutely useless."
I worked at a company that was looking to install new project management software.
One of the tenders came for a product called Project Manager Workbench. Not a bad solution, at the time...
The tender team sent each of the companies a specific set of documents, including a complex test project that was to be used for the demonstration - if it couldn't cope with one of the standard (seemingly complex) projects, there wasn't much point in looking at the product. A couple of the companies gave up and withdrew. The rest came and presented...
The PMW team rocked up and started to demo their product, using their standard test project, not ours. After 5 minutes, the project manager stopped the demo and asked where our test project was. "Oh, we didn't do that, we always use our test project"... End of demo and a quick exit from the site by the PMW team.
"oh, we didn't do that, we always use our test project"
Yah, because it's been carefully crafted to avoid all the known pitfalls, is my guess. That's arguably acceptable for a long-before-release dog-and-pony show, but for trying to make a sale of a "finished" product? Yeah, no.
I bought a Gateway 2000 P90 PC just before Windows 95 came out. It was one of the first machines to come with EDO RAM.
Having spent years testing and benchmarking kit at work, the first thing I did was run the diagnostics software included in the package with the PC. Memory errors all over the place.
I contacted Gateway, they had no ideas, sent out another set of RAM. That came up with the same error as well. So I put both sets of RAM inside and used it (a MASSIVE 32MB!) until the technician arrived with a new motherboard. The PC seemed to work fine in Windows, just the diagnostic software kept saying the memory was faulty.
The technician looked at it, swapped the mainboard and the memory, again. Diagnostics ran, same errors...
Then I had a brainwave. I asked how old the diagnostic software was, and whether it knew how to test EDO RAM. Long pause on the phone as the technician relayed this question.
No. The diagnostic software didn't know what EDO RAM was and, because it was responding differently (but correctly) to normal RAM, the diagnostic software thought there was an error.
An expensive lesson for Gateway, but not for me, luckily.
I got through after about a 5 minute wait, the support was quick and competent - maybe being a professional IT worker and telling them what I had already tested helped. They sent out replacement memory without question and when that didn't work, quickly agreed to send out an engineer...
It was the late 1980s. "Search for this address and change the byte from xx to yy", said the engineer responsible for the SW bits of our prototype comms kit to me, the HW bloke. Then he buggered off to sort a family emergency while I was left on the customer site to complete testing. The xx to yy change was in an eeprom emulator being run on a Compaq portable PC (portable in the sense that a fridge is portable because it's got a handle) connected to the kit via a bunch of wires to the eeprom socket, which was used to set the configuration of the kit.
Naturally, the first time I needed to change the config, it didn't work. There were no mobile phones to get help and I couldn't make it work. I had an actual eeprom for one of the configs and managed to get some of the testing done, but the customer wasn't very happy. When the engineer came back it turned out it was all my fault. I'd searched for the address, switched the xx for yy and hit return instead of just doing nothing. Turns out the Compaq emulator he'd written interpreted everything from the keyboard as bytes to be "written" to the eeprom. So I'd changed xx to yy, then the bytes in the next two addresses to 0D and 0A.
Saw that one coming !!!
I have seen that sort of input before ..... you are not entering an input field, like an on-screen form, but actually changing the memory directly.
If you had looked carefully when entering the xx change then <return> you would have seen the other bytes change.
I have many years ago used a 'simple' in-memory hex editor that worked the same way, every key was a literal hex value unless you exited the editor with a special key sequence.
Fun in a strange logic sort of way !!!???
:)
In the early 2000s, I was supporting a University computer lab. As I'd had a lot of video capture and streaming experience, I was called in to arrange for an important international conference to be live streamed, and made available on DVD. The video was a stream of the computer screen, and the audio was the speakers.
The AV systems weren't equipped for streaming then, so I set up another PC with a capture card and got permission to attach it to the hardware for the AV system (the projector output was a handy source). Thankfully, I booked the lecture theatre for two days before the conference, because when I opened the AV cabinet, all of the interconnecting cables were black or white. It took hours, and a lot of cable tracing, to get it working, and I did get it working. Thankfully putting it all back together was a little easier because I had my own little map I'd built up.
I once had a guy from work ask if I would stop at his house and look at a little electrical problem; I agreed. When I showed up, I saw it was custom lighting with the wiring all being the same gauge and color. I had a sinking feeling and the thought "O' sheeet". The question is: when you saw the wires, was it lesser or greater than the three-'E' sheeet, and did the wire map get saved in your know-it-all book?
I popped down to the local tech recycler and picked up a venerable Linksys EA6900 (v1.1) for $4.20 to repurpose as a client bridge for this stack of servers I’ve got heaped up in the back bedroom, to be confronted with this peculiar installation procedure for dd-wrt firmware:
-obtain oldest possible linksys firmware 161129;
-obtain linksys “tftp” abandonware utility from archive.org;
-ping 192.168.1.1 -t and watch for TTL 100;
-after about 25 pings, hit “upload” on tftp;
-repeat if timeout, retry if boot succeeds;
-login to diagnostics and revert to the older version on the alternate partition;
-repeat all that first step so both partitions are the same;
-now reset to factory defaults;
-now obtain dd-wrt #23194 dating from Dec 22, 2013;
-manual update firmware to that one;
-repeat again for the second partition;
-administration / factory default again;
-now you can pick which iteration to choose, whether the “Kong” or the “brainslayer” edition (I couldn’t decide so I just picked the latest June 12 2023 version);
-to be confronted with the bootup sequence followed by “Linksys” lamp flashing off and no response to telnet, ssh, or WebGui! “Brick,” except it’s responding to the ping -t with TTL 64.
-power cycle 3x with a 10-second pause each time (while crossing fingers & toes) in the hope of triggering the second boot partition, except obviously, no.
-ULTIMATELY, after farting around hard enough, I discover it’s merely ignoring the WebGui request when stated as “https”. Deleting the “s” reveals the damn thing really was running, the whole time.
Software vendor does half arsed install? Sadly, that doesn't surprise me. Part of my job is preparing various Applications for enterprise wide deployment. One application is the Meta Quest/Rift drivers. You'd think, bearing in mind that Faceache/Meta is a massive enterprise, they'd have a decent method of silently installing their software. After all, they potentially have to deploy it themselves to thousands of users. Nope. They have an installer, that has to download the software from Meta every time it's run. This is a terrible idea from an enterprise point of view : you want software to be adequately tested before you roll out a new version, so ideally you need to store an offline installer somewhere and run that. It also uses an undocumented switch to install silently. And yes, I've tried using Admin studio to snapshot the install, and also manually extracting the drivers and other software and installing it manually. Entirely unsuccessfully.
Also, one I'm having trouble with atm, Unreal Engine 5. The instructions Epic give *mostly* work. After all, in theory, all you have to do is copy a folder (and everything in it), but it has a slight problem that for debugging, it needs to open a port. We have the firewall running on each machine. Now, I can use group policies to allow the trace server it runs through the firewall, in theory. In practice, I can't, because it insists on copying the trace server to a random folder in the user profile, and running it from there. FFS, just run it from a central location; it's already in a central location as part of the UE install!