Re: The fault's with Microsoft
"....But let's be clear on this, the OS is Microsoft's linux. Why do we put up with an OS that can be felled by one program with a problem?"
You mean like... systemd?
"dates of birth, social security numbers, and driver's license or other ID document numbers"
to purchase parts for an automobile?
It bothers me to no end that 2.3M people (morons) have given this information to some corporation for this purpose.
Does anyone question anything anymore - especially when being asked for this information?
That company's CEO and CIO should be sued out of existence and banned from ever holding those positions again, and the company should be fined into bankruptcy. Maybe that would give other companies that ask for this information pause.
This is the exact reason that I will never do business with Best Buy - because of the demand for my driver's license so that they can check against their DB for fraudulent returns. FU Best Buy..
I am seeing red right now and I can't respond any further...
The answer to this is very simple, the solution is not.
Semiconductor manufacturing is very capital intensive. It takes a lot of money, a lot of infrastructure and a lot of smarts to coordinate, and the payback (in monetary terms) takes a very long time (many years).
There is absolutely no appetite or desire in the West to invest large amounts of capital in the semiconductor market and wait for a return on investment. The Western population prefers to invest in short-term projects and expects an immediate return - every quarter - whereas Eastern countries (China, Taiwan, Vietnam, etc.) don't.
The US is attempting to put the needed items in place by throwing many billions at Intel. They need to build plants and supply chains (chemicals that mostly come from China), educate high-tech workers (who currently graduate university with 8th-grade math skills), meet environmental needs (lots of water) and deal with the NIMBYs...
I believe that due to the above reasons (and others), the US will ultimately fail at this attempt. If there ever is a war with China (over Taiwan), the entire West's economy is fsck'd. There won't be anything, and I mean anything, that won't be impacted by the cutoff and lack of semiconductors and rare(sic)-earth elements. Within 10 years, those who survive the resulting famine would be back to the stone age. And China won't have needed to fire a single shot.
Because we (the people) are so damn short-sighted, selfish and don't hold our pandering politicians accountable...
Sorry for bringing politics into this, but it's very sad and I am not hopeful for the future.
(I debated the shouty icon vs. this one - did I make the right choice?)
As one who used to work with the GreenHills compiler and their "Spotlight" debugger (via serial port) on pSOS systems running on all kinds of processors, it was a real PIA to set up and get running. But at the time, it was cutting edge to be able to debug on an embedded system (at least in the commercial market). I was writing BSPs (board support packages) for both pSOS and VxWorks at the time.
Up until maybe 2 years ago, I was still getting emails from them. Haven't seen any since then, I guess I must have fallen off their email list.
Now, for deeply embedded microcontrollers it's mostly FreeRTOS. Anything above that I see mostly linux now, but I don't work with safety-critical systems much nowadays.
Have an up-vote.
IBM (real-deal) PC XT (8088 @ 4.77 MHz, 640k RAM, Hercules monochrome video card, dual 5-1/4" full-height 360k floppies); had to remove one floppy to make room for the hard drive. I think it was about $2500 USD circa 1987?
1st upgrade: added hard drive
5-1/4" full height 20 MB Seagate (don't remember the model), using MFM (Modified Frequency Modulation?) controller
2nd upgrade:
replaced the MFM controller with an RLL (Run Length Limited) controller and the same drive was now 30MB! Woohoo. (RLL packed roughly 50% more sectors onto each track, hence 20MB becoming 30MB.)
Now, those weren't the days...
"If you were building a house would you expect to have a standup meeting at 6am every day with your builders to review what's happening that day?"
I don't know if this would be such a bad thing... I think it would depend on how this was executed. A short review of what was done yesterday and what's going to be done today might discover (and prevent) potential minor problems before they turn into a major problem. Whether it's a house being built or software being written. At a minimum it lets each person vent a little (if they had problems the previous day) and gives the supervisor a little insight as to how the overall job is going, as long as they aren't judgemental about it. Would this be a bad thing to know? I hope not.
I will purchase something like this when:
- 17" + 4K display becomes available.
I'm good w/ everything else.
Until that happens, nope.
I've been using 17"/4K displays on laptops for well over 12 years. First was Alienware, then Alienware (via Dell) and currently Dell (Precision) purchased a couple of years ago.
I've noticed 17"/4K option seems to come and go w/ Dell. Every couple of years the option disappears and then comes back for a few years. Dell currently seems to be in the "not available" cycle when I checked recently. So they don't get my money until / if it becomes available again.
As I age, I like the larger displays, don't care about size/weight. Has to be 4K, I dislike seeing the pixels on 1920x1080 displays.
Good luck to them.
What happens when the robot dogs from opposing sides meet?
- Do they sniff each other first?
Seriously though, there will probably be an escalation of anti-canine(?) measures:
- detecting the other dog's signatures:
- motors whirring?
- lidar?
- radar?
- emi?
- rfi?
- jamming (GPS or otherwise)?
Which one will have the fastest target acquisition and then fire?
Which is most agile? Could one move fast enough to dodge a bullet (matrix style?)?
Inquiring minds want to know!
Maybe we need the ability to write code using an individual's cursive handwriting :) Then AI could be used to establish the provenance of the code... I can (and do) read/write in cursive; not so sure many of the younger generations can. Although I am led to believe that this may be changing.
The only effort I am expending on VMWare products is migrating off of them. Which I am actively doing - albeit slowly, me being the turtle vs. the rabbit. It's going to take some time, but now that I've started, I'm not going back. My money won't end up in Broadcom's (or its investors') pockets. I really wish there was a better commercial alternative that was more reasonable, but alas, they would probably be absorbed by some Borg as well.
Why is it that every good product gets purchased by a company that then destroys it in the name of pure profit? This behavior doesn't bode well for the long-term business of human civilization as it presently exists. I think that, in the distant future, humans will turn out to be the actual predecessors of the Ferengi.
Yes, I am feeling a little bit "StarTrekky" this morning...
before moving to Proxmox completely.
In a way, I'm glad their (Broadcom's) intentions were shown early. Just rip off the band-aid and let's move on..
VMWare is now persona non grata in the IT world as far as any of the shops I've been discussing this with are concerned.
Nutanix is getting a lot of air time in these circles as well, but in the past I've had some issues with their hardware/certifications that they provide when I inquired a little bit deeper.
I do feel some empathy for the VMWare partner/consultant participants, I suspect there will be a negative impact on opportunities for them. I'm sure some will do okay, but quite a few deployed ESXi test beds for their professional training/learning - we'll have to see how that works out for them.
Posted by me years ago..
- I was once at my dentist and we were scheduling my next appointment, and I happened to look down at the keyboard the assistant was using and noticed that the space bar had a piece of paper with a printed message on it, held in place with transparent tape. The printed message said "any key". Later, I asked my dentist about it and he said that one of the programs his assistants occasionally run displays a message "press any key to continue" and it was confusing to them, so this was his solution. (This is not made up.)
I use Clonezilla quite often and for all of my cloning needs (Windows & Linux systems); the interface requires some getting used to, at least for me it did.
My only wish/request is that the developers would move to an OS that does not utilize systemd. For this specific (single-use) application, I would think it wouldn't matter, and using one only encourages the (systemd) crap to continue.
I almost stopped using it when Clonezilla updated to an OS that used it. I still use the earlier versions (when I can) that didn't have systemd.
I enjoy reading about a lot of different options & opinions on this topic..I may even try tilde...
For me, at the top of the list is cross-platform availability.. I am willing to (and can) learn any editor; the point being that it's just an editor, a tool, and as such, I only want to spend my effort/time learning one. For me, that's been emacs (butterfly-effect and all) for the past ~25 (?) years or so. For the last couple of years I've been messing around with VS Code (yeah, I know...) just for a change... YMMV
I know vi because it integrates well with sudo to provide capabilities in a restricted environment, not sure how many other editors have similar capabilities..
Does systemd know about this?
Is the windows kernel going to be dependent on systemd -or- is systemd going to be dependent on the windows kernel?
"don't cross the streams... it would be bad..."
https://www.youtube.com/watch?v=wyKQe_i9yyo
https://quotegeek.com/quotes-from-movies/ghostbusters/206/
would have been able to determine the limitations by just looking at the Pi4B's architecture and not wasting money/time on building this monstrosity.
The "GigE" ethernet and the USB3 share the same PCIe bus. That's death for any kind of performance for use as a NAS - As Network requests and drive accesses will throttle each other - reducing throughput. As it is, I believe the Pi4B's GigE really is around 300 Mb/s at best due to the PCIe bandwidth.
A single SSD works and is the most cost effective, highest performance approach with the Pi's (3b & 4b). Done. Simple. Next.... I have several of these in this configuration and they work great, for what they are/cost.
I could never understand why anyone would put Pis in a cluster - perhaps to learn about multiprocessing and distributing workloads, but that's LEARNING, not expecting performance.
I'm hoping Pi 5's will have dedicated PCIe lanes for network and USB3 and an open-source blob. That would be cool... And fix the audio, put an amp on there so that it can drive a speaker directly (my wishlist)
I once worked (as a dev) for a startup in the mid-to-late '90s. It was a pretty small startup (30 people or so) and I worked closely with the sales guy. One time he and I were returning from a sales meeting with a potential customer, and after just going through the airport, I commented to him that he shouldn't be so rough in his treatment of his laptop while going through security. He said, "If I didn't throw it around so much, it wouldn't need to be replaced every year with a newer one," smiling. It was his way of getting a yearly upgrade. He was the top salesman in the company so no one really challenged him on this. After the 2nd round of VC funding saw my stock options diluted 6-to-1 (the first round had diluted them 3-to-1), I left the company. Eventually the company was dissolved and the IP was sold off to another company.
"I really don't understand the totality of Microsoft's game. On one hand, it looks like they are moving away from Windows as a primary product, and on the other, they're making everyone else's life more difficult with these measures. There's something in the picture that I'm not seeing."
My view of what MS is trying to achieve:
- Reduce hardware support to a common set, so that testing resources (costs) are reduced. By (arbitrarily) eliminating certain processors, they accomplish this.
- MS doesn't give a crap about Windows anymore - they view it as a service, a delivery mechanism for mining the end user's data. That's all. Any company or persons that develop Hardware-based products running on the Win OS, be aware, your Days Are Numbered (DAN). MS is slowly weaning the small players out by implementing signing (as a service). Eventually they'll get to the walled garden (actually it will be a prison that can't be entered or exited from).
- Even their web browser is now mining end user data, in the name of security
- Starting with Windows 8 (and worse in Win 10/11) came the complete destruction of any sense of real-time operation. Ever notice how the timestamp of a file doesn't change for minutes after it's been written/closed? The file system on Win 10 is a complete joke. They've sacrificed this for telemetry. Writing software on Windows is progressively getting worse; combine this with MS changing their mind and dropping support for a technology (UWP, WPF, WCF, etc.), and it's time to move away from Windows.
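If you want to see this for yourself, here's a quick probe I'd use (my own sketch, nothing official; the file name is arbitrary): write a file, close it, then watch what the filesystem reports as the modification time. Note it checks what stat() returns, not what Explorer happens to display.

```python
# Quick-and-dirty probe: write a file, close it, then watch the reported
# modification time for a couple of minutes. Purely illustrative; the file
# name is arbitrary and nothing here is Windows-specific.
import os
import time

path = "timestamp_probe.txt"

with open(path, "w") as f:
    f.write("probe\n")
closed_at = time.time()

for _ in range(24):                      # ~2 minutes of polling
    lag = closed_at - os.path.getmtime(path)
    elapsed = time.time() - closed_at
    print(f"{elapsed:6.1f}s after close: reported mtime lags by {lag:6.1f}s")
    time.sleep(5)

os.remove(path)
```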
Where technology is concerned, MS can't plan their way out of a paper bag...
Thank (insert deity here) that there are alternative OS's.
We've been running ESXi 5.x, 6.x for close to 10 years w/ the internal SD card booting. NO issues.
But, on every machine, any tmp/temp files, all log files get redirected to external (Enterprise) storage where possible. This is done to reduce the exact problem they have (finally?) started thinking about: SD card writes.
Rather than making such an idiotic move (eliminating/not supporting SD boot), they should provide some guidance on how to reduce writes to the SD card.
Really VMWARE? Stop being lazy.
An alternative is to copy/replace the SD card every once-in-a-while (every year or two?) during a maintenance cycle.
Although I recently moved to SSD boot for RPi4s for the exact same reason, now that this capability is easier to set up and reversible - and there is also no convenience penalty like there is on an enterprise system.
"right now it's time to... kick out the Chome"
Chrome on Pi is the new IE on Pi.. Except for using Chrome to download Firefox ESR, I have been (happily) using Firefox ESR on Pi for quite a few years now.
This includes installing/running my favorite plugins: Ghostery, uBlock, AdBlock, NoScript.
Yes, yes I do change the user-agent string to x86, otherwise there are just too many websites (i.e. all of them) that, when they detect ARM, almost universally serve up mobile websites - web designers are friggin' idiots... (I'm lookin' at you, Amazon).
"I never disposed of anything ever since"
- I was chuckling to myself before I read the following line: "Oh, yes, I've had all the beardies chuckling"
Anyways:
Re: "512 kilobyte Compact Flash cards" - I have a bunch of 512MB CF cards because I use them in an older (very expensive Canon 1DsMII) camera that doesn't recognize cards > 2GB. At 16MB imager - the camera is still in service and takes awesome images. I just used it last month to take pictures of my niece's junior prom.
Re: Old 80GB (40-120GB) SATA SSD drives - I have just recently started using this pile to add SSD USB 3.0 boot drives to my Pi 4b's. They make a great pair - performance is 10x over any SD card I've tried.
I just inventoried my "card" drawer and the smallest CF card I could find is 4MB and the smallest SD card I found is 16MB, along with numerous CF->PCMCIA adapters - remember those? Cards aren't worth much nowadays, but they're small and don't take up much room, so I keep them...
"I've surrendered on that front'.
I haven't yet. As previous posts have stated, it takes a little bit of tuning. And I've also found that preventing access to twitter, facebook, doubleclick almost never causes any issues with displaying pages for the majority of websites that I visit. These are the top websites on my "no-go" list. noscript is actually fairly flexible for tuning.
The biggest issue that I have is that I generally temporarily disable NoScript while I am purchasing something. E-commerce websites get very gnarly if they can't access something, and re-enabling NoScript in the middle of a transaction can result in double charges... Ask me how I know...
I'm fairly comfortable writing software in C, a little less so in C++ and C#. I've been coding for 30+ years and I don't always feel the need to use the latest whiz-bang feature introduced in the latest iterations of C++ and C# (I only use templates in every other C++ project). But I'm getting a little tired of Microsoft's "3" year development kill lifecycle.
I've been looking to make the move to Rust over the last couple of months. With this latest C#/memory alloc feature likely leading to disaster, I think I will be accelerating that move. At least for now, I like the idea that MS doesn't completely control the fate of Rust. There seems to be momentum with Rust and I would like to contribute to the momentum. I'll likely never stop coding in C/C++, but I think Rust will be the path forward - at least for cross-platform work and C# will get left behind (for me).
I agree re: Moore's Law, but Moore's Law was framed in the transistor physics known at the time. At the simplest level, the transistor is just a switch, 1 or 0. That's it. Yes there are billions of them and they switch very fast, but it is still just a 1 or a 0..
I think (hope) that in the future (hopefully years or decades and not centuries) a new technology will be discovered that can provide a switch that holds a 1 or 0 state. Maybe somewhere along the way, the (fundamental?) building blocks of atoms (quarks?) can be manipulated and their states/spins used. I suspect that at this point Unified Field Theory will be reality or at least better understood.
I don't think "quantum" computers are the answer right now either. Biologic computers - maybe, but I suspect that solving/understanding the Unified Field Theory will come first.
Of course there is always the possibility of trinary (ternary) computers becoming mainstream, along with their theoretical efficiency improvement, but industry manufacturing inertia will likely prevent that.
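For the curious, the "theoretical efficiency improvement" usually refers to radix economy: representing a number N in base r costs roughly r * log_r(N) (states per digit times number of digits), which is minimised at r = e, so base 3 edges out base 2, but only by a few percent. A quick back-of-the-envelope sketch (mine, purely illustrative):

```python
# Radix economy: cost of representing N in base r ~ r * log_r(N).
# The continuous optimum is r = e (~2.718), so among integers base 3 wins,
# but only by roughly 5-6% over base 2 -- hence "theoretical" improvement.
import math

N = 10**6  # arbitrary example value

for r in (2, 3, 4, 10):
    cost = r * math.log(N, r)
    print(f"base {r:2d}: cost ~ {cost:6.1f}")
```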
If I had to do it all over again, I would have entered the material sciences, room temp. superconductors, graphene, carbon nanotubes - cool stuff and much more yet to be "discovered", invented.
"While this remains in preview, it is yet another demonstration of Microsoft's determination to embrace a cross-platform world"
Sure, as long as:
- you purchase a Windows OS license from MS
- you purchase a Visual Studio license from MS
or better yet, purchase a Visual Studio subscription.
EEE (embrace, extend, extinguish)
One of the most important lessons that I learned early in my (software) Engineering career is:
"Just because you can do some thing, doesn't mean you should do that thing".
This applies to so many things in life.
It is so applicable to technology in the world. Unfortunately, I've yet to encounter a graduate of a university where they taught this.
KISS, when possible.
Make it only as complex as needed, and no more.
Needless to say, I'm not a fan of:
https://en.wikipedia.org/wiki/Rube_Goldberg_machine
And realize that just because this early release runs in a VM doesn't mean that the final released OS will be allowed to run in a VM.
I've often wondered how VMWare emulates a TPM for a VM. One would think that this would be a major source of security issues if the VM were running in a production environment, and it would seem to be contrary to the TPM concept, since simply cloning the VM clones the TPM info as well? That doesn't seem consistent with security.
At some point, I think future versions of Windows will not be allowed to run in VMs. Since Windows 8, the OS is VM aware (the info is shown right in the task manager).
Once MS requires every citizen to have an on-line account to be able to log on, all sorts of things become possible (and a real danger) such as:
- Maybe your account has a special flag (that you need to purchase) to allow to log into a VM
- Maybe your account has a special flag (that you need to purchase) to allow to log into an OS that doesn't find a camera
- Maybe your account has a special flag (that you need to purchase) to allow logging into an OS w/o using facial recognition.
- Maybe as a politician, you have special privileges that disable logging of certain user activities
- Need more? I can think of many more..
The seeds are being laid now (they have been since forced updates in Win 10)
These are some of the reasons why Windows will never be on any of my primary machines (for personal or professional use).
"Nothing prevents cameras with hardware disabling features, either to cover them or disconnect them entirely."
Are you sure about that? What happens when the OS finds a mobile CPU/Chipset and doesn't find a camera? Maybe facial recognition is the trojan horse that MS is trying to institutionalize. Maybe the OS won't let you log on/use your machine at all w/o a camera in such circumstances.
But I do agree, I think the camera requirement is more significant than the TPM requirement.
Now, imagine what would happen to the price of a precious metal (gold, silver, palladium, etc.) if 90% of its mining capacity had been removed?
I can guarantee that its price would not be going down!
It's interesting to note the dichotomy between a pure concept vs. something physical. Recognize that neither cryptocurrencies nor gold has any intrinsic value, but being able to take (and presumably own) physical possession might have advantages.. Both are speculative in nature and not an "investment" in the classical sense of the word, since neither produces anything - the expense is in the mining operation and the price is based on that + an artificial "profit", the latter contributing to the volatility.
I don't know a lot about crypto, but it seems like it can be manipulated (if a single entity owns enough of the processing endpoints) just like fiat monies can be.
I wonder how many years (and updates) it will take before all of the security holes that this will introduce are patched.
Hackers are going to have a field day with this.
MS had better have a damn secure way of controlling when this is allowed and when it is not.
MS's track record for security isn't exactly stellar...
I learned a long time ago:
That every position/job within a private/public company where the goal is to be profitable (so, excluding government positions) has a certain value to the company. If the company has any sense of being organized, the value of that position is known.
This is, for the most part, completely independent of where a potential candidate lives - assuming the basic requirements of remote access, ability to perform, and security are met. So to not list a salary range, I call bullshit. It's the oldest game in the world: whoever (you or the employer) gives a range first, loses. This is the same tactic used by car salesmen.
But I don't think it's all bad news for the employers, because what they get is a range of applicants that are presumably qualified for the job and are in (or have made) a situation where they can live on the salary specified. The applicant may reside in a modest location with a modest lifestyle and a modest pedigree, or not; that's up to the individual, and it determines what an individual can accept in terms of compensation. If the employer doesn't get any responses to the range, then perhaps the value of the position is too low, or there isn't anyone available. Either way, the employer needs to re-assess the position's worth, or go without.
I think the spirit of the law has good intentions, we'll see how companies will react.
As other commenters have said, it is a two-way street - each is evaluating the other and both have to accept, there is freedom of choice involved.
It seems like this effort is akin to converting a PLC's (programmable logic controller) ladder logic into another programming language, or vice versa. I don't quite see the utility of this. If they're just trying to convert X number of inputs to Y number of outputs, then it is just a state machine. State machines can be very elegant (but those are usually quite obfuscated) or very inelegant (and usually easier to understand). Computers are very good at predictable behaviour (even without AI). Granted, most correctly written software, excluding AI, can be distilled down to gigantic (predictable) state machines. Lucky for us humans.
To extend this thought, this is what FPGA tools already do. Take Verilog/VHDL and turn it into a set of bits that define a huuuuge state machine that runs in the FPGA logic gates. Again, this has been done.
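To make the "it's just a state machine" point concrete, here's a toy table-driven sketch (entirely my own illustration, with made-up states and events): the whole behaviour is a lookup table mapping (state, input) to (next state, output), which is the kind of predictable thing ladder logic, generated code, or synthesised FPGA logic boils down to.

```python
# Toy table-driven state machine: (state, event) -> (next_state, output).
# States and events are made up for illustration.
TRANSITIONS = {
    ("idle",    "start"): ("running", "motor_on"),
    ("running", "stop"):  ("idle",    "motor_off"),
    ("running", "fault"): ("halted",  "alarm"),
    ("halted",  "reset"): ("idle",    "alarm_clear"),
}

def step(state, event):
    # Unknown (state, event) pairs stay put with no output -- fully predictable.
    return TRANSITIONS.get((state, event), (state, None))

state = "idle"
for event in ["start", "fault", "reset", "start", "stop"]:
    state, output = step(state, event)
    print(f"{event:>6} -> state={state:<8} output={output}")
```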
Taking human-written code as examples of (good) security just seems..... wrong. We (humans) aren't very good at that.
And lastly, why would computer (AI)-generated language (designed by humans) to run on a computer be desirable? Once it's generated, are humans going to review and comment on its correctness, after the AI has already generated it based on learned examples (from potentially billions of input examples - both good and bad)?
We go from:
problem -> human -> (programming) language source -> preprocessed -> compiled machine code of choice
And with AI:
problem -> AI -> (programming) language source -> preprocessed -> compiled machine code of choice
Why not just:
problem -> AI -> compiled machine code?
We exist to serve our AI overlords.
"Python's decision to stab 20 years of Python codebases in the back was an exceedingly poor one, which may take at least another decade to get somewhat over."
^^^^
This
As a 25+ year C/C++ developer on everything from 8-bit micros to Enterprise software, I can't imagine rewriting substantial amounts of code every time the C++ standard is updated. (Although lately MS is trying its best to break shit from VS2019 update to update.) Java seemed to get it (backwards compatibility) right, at least for a while.
I suspect one or more of the following is true of the Python deities that decide such things:
1) They aren't involved in large-scale project development where 100,000+ LOC and 100s of developers are working on a project
2) They aren't involved in projects where the code life is expected to span 5, 10 or even 20 years
3) They are lazy, because it's much more difficult to add/fix functionality w/o breaking backwards compatibility than it is to just change the APIs (see the sketch after this list). I know this because I just finished spending 2 months fixing an issue w/o API changes to a library, when it would have taken me 1 day if I had made a change to an existing API. That would have forced a relink/recompile of the library by the multitude of applications that rely on it (yes, my paying customers appreciated this).
4) They aren't involved with developing software that runs on FDA (medical) equipment, where a single change can force the requalifying of the entire software chain that can take months of testing and cost $$$$
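To illustrate point 3) above (a contrived sketch of my own; the function and its library are made up, not the actual code I was working on): adding behaviour behind a defaulted keyword argument keeps every existing caller working, whereas changing the existing signature forces every caller to be touched.

```python
# Contrived illustration of the backwards-compatibility discipline in point 3).
# The function below is made up; v1 was simply: def parse_record(data): ...

def parse_record(data, *, strict=False):
    # New behaviour is opt-in via a defaulted keyword argument, so existing
    # callers keep working unchanged. Changing the signature instead (e.g.
    # def parse_record(data, strict):) would break every one of them.
    if strict and not data:
        raise ValueError("empty record")
    return data.split(",")

print(parse_record("a,b,c"))               # old-style call still works
print(parse_record("a,b,c", strict=True))  # new behaviour, opt-in only
```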
I actually don't have much respect for any programming language that relies on non-printing (whitespace) characters to structure source code so that it compiles successfully.
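A minimal illustration of why (my own toy example): the only difference between these two functions is the indentation of one line, and they behave differently.

```python
# The only difference between these two functions is where the print line is
# indented -- inside the loop vs. after it -- and that changes the behaviour.
def total_a(values):
    total = 0
    for v in values:
        total += v
        print("running total:", total)   # printed every iteration
    return total

def total_b(values):
    total = 0
    for v in values:
        total += v
    print("running total:", total)       # printed once, after the loop
    return total

total_a([1, 2, 3])
total_b([1, 2, 3])
```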
I usually allow one mulligan (redo) for major version changes; the real question is what Python 4.x will be - will it break compatibility with 3.x? Hopefully the language designers learn from their mistakes.
This idea that all code, everywhere, has to be rewritten every time a spec changes is just lunacy, IMO.
I Know.... ;TLDR
Users using the CD-ROM tray extended to hold their coffee cup. This was in the early days of CD-ROMs - when Starbucks and el Grande sizes didn't exist yet. The CD-ROM tray, when opened, would allow a styrofoam cup of coffee to be suspended.
Also ran into someone that kept having their 5-1/4" floppy disks fail after taking them home and then bringing them back to work. Turns out the spouse was putting them on the fridge door (with a magnet) so that they wouldn't forget to take them back to work the next morning....
I have a Cricut, but it's been packed away since my mother passed about 1-1/2 years ago. It's an older model, based on the fact that it uses cartridges, of which there are about 30 with the printer. She used it a lot and I know it worked without being attached to any computer and without accessing the internet. I know this because I maintained her laptop (and it was/is running Linux Mint the entire time).
I don't know what it's worth (if anything).. I should probably let it go so someone can get some (possible) use out of it.
Re: Cricut's business model. Most of my Mom's friends that did the whole "maker" scene did it to 1) save money and 2) be creative. 100% of them were also on Social Security (fixed) income. I don't know if that thinking would allow for a very lucrative business. Profitable? Yes. Lucratively (obscenely) profitable? Maybe not, but what do I know? I've not made millions in the investment scene. Of course, if the central banks keep printing 240 thousand million USD per month, inflation will demand that everyone will be millionaires because a loaf of bread will cost $5000. Those who ignore history are doomed to repeat it..
"used to put grotty keyboards through a dishwasher"
Yep, same here. In the mid-'90s I worked for a company and personally saw the crew that worked on the computer systems use the dishwasher to clean up Sun 3/110 workstation keyboards (optional and very expensive in their day). Worked like a charm, after ruining the first 2 by not removing them before the "dry" (heat) cycle..
Through experimentation, they settled on the following for the best results:
- wash with keys facing down
- no detergent (just hot water)
- remove before heat cycle
- blow off w/ compressed air
- let air dry for a couple of days
My best support story:
Back in the late '90s, the (East Coast) consulting company I was working for had been engaged by a (West Coast) ASIC manufacturer to create a dev kit for their product. This involved several cycles of them sending us new hardware; we developed the SDK for it and shipped the software back to them to test/integrate with their solution.
After a couple of cycles, I got a desperate call from the mid-level manager (we'll call him "Grendle"), who was completely distraught that their developer couldn't debug the latest release we had sent. I asked repeatedly that I be able to talk with the dev, and Grendle let loose with a stream of expletives and denied my request. He demanded that I immediately fly out there in person and fix the problem. After getting a written request from him and confirmation from my manager, I booked the next flight out the following day.
I landed around 10AM (local), took a cab and arrived at their office. I couldn't find Grendle, so I just went and talked with the developer. He showed me what was happening; I looked down at the hardware, plugged in the 2nd serial port cable (which was lying right next to the kit) and said please try again - and it worked, just as it always had in the past (yes, the 2nd cable had always been required and in use). I asked the dev if there were any other problems; he said no. About that time, Grendle showed up.... I explained what the problem was. He got very quiet and red-faced as I walked over to his manager's office, explained to him why I was there and that I was now leaving. I caught the next flight home - running through the terminal and making it JUST as the flight attendant was closing the door to the walkway...
A couple of months later, Grendle was no longer working for the company.. Twit..