The New Aircraft Hangar-like Building Appearing Suddenly on My Property
... is totally unrelated to this incident.
RS-232 is great, but it does have line length limits (I've seen it work effectively at 9600 baud at the end of a 1200-foot cable). I suspect a tin mine shaft is much deeper.
Did they have the RS-232 repeater/extenders they have now, back in the 1980s?
The computer store I worked at during the 1980s sold a lot of systems with ST-225, ST-238R, ST-251, and ST-251R drives in them, but got few returns. However, we had a rigorous setup/testing procedure:
1. Fetch a new drive from the unheated warehouse into the heated work room.
2. Connect it to the computer.
3. Power up the computer.
4. Run the computer, doing nothing, for 45 minutes so the drive would warm up to its normal operating temperature.
5. Low-level format the drive via the controller card's BIOS, guessing at the best sector interleave and track skew values.
6. Use CORETEST to determine the transfer rate.
7. Re-LLF the drive with our next guess at interleave and skew values.
8. Re-test with CORETEST to see whether the effective transfer speed went up or down, and choose new values of interleave and skew accordingly. Go to step 7 until we determined the best transfer times, and the associated interleave and skew values.
9. If needed, re-LLF the drive using the best interleave and skew values.
10. Go into the disk controller's BIOS and enter the manufacturer's defect list.
11. Test the entire drive for bad sectors using some program whose name I don't recall.
12. Enter the bad sectors found by the testing program into the drive's defect list (via the disk controller's BIOS).
13. High-level format the drive via MS-DOS (checking for bad sectors).
14. Install full MS-DOS and the store's standard software packages.
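Steps 5 through 9 amount to a manual hill-climb over interleave values. A minimal sketch of that loop (the function names are hypothetical stand-ins -- the LLF and CORETEST runs were of course done by hand, not by a script):

```python
# Hypothetical sketch of the manual interleave-tuning loop (steps 5-9).
# llf() stands in for the controller BIOS's low-level format;
# measure_transfer_rate() stands in for a CORETEST run.

def tune_interleave(llf, measure_transfer_rate, candidates=(6, 5, 4, 3, 2)):
    """Try each interleave value and keep the one with the best rate."""
    best_interleave, best_rate = None, 0.0
    for interleave in candidates:
        llf(interleave)                 # low-level format with this value
        rate = measure_transfer_rate()  # e.g. KB/s reported by CORETEST
        if rate > best_rate:
            best_interleave, best_rate = interleave, rate
    return best_interleave, best_rate

# Toy stand-in drive: pretend 3:1 happens to be optimal for this
# drive/controller pair (the rates below are invented for illustration).
state = {"il": None}
fake_rates = {6: 90.0, 5: 105.0, 4: 120.0, 3: 180.0, 2: 95.0}
best, rate = tune_interleave(lambda il: state.update(il=il),
                             lambda: fake_rates[state["il"]])
# best == 3, rate == 180.0
```

Too-low an interleave and the controller misses the next sector and waits a full revolution; too-high and the drive wastes rotation time -- which is why the loop converges on a sweet spot rather than simply 1:1.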
"More terrifying" ... sounds like the Test of Courage on our communally-owned mini-bike, a Hiawatha ("Super"?) Doodle Bug. The original engine, cowlings, and paint job were gone. We pre-teens bought a used Briggs and Stratton 2.75 HP motor, installed it, and connected it to the rear wheel via direct drive (no clutch, centrifugal or manual, just some down-gearing). It took three of us to push-start the machine with the rider aboard.
The Test of Courage was that the only effective way we had to stop the engine was to apply full throttle, and wait until the engine flooded out. It took one-to-two twentieths of a mile for this to happen.
I have a Chromebook I got back in 2013, and I get my-lifetime of free software updates and upgrades for it.
I installed SeaBIOS on my Acer C720, installed OpenBSD on it, and never looked back at ChromeOS. Modern Chromebooks lack all 12 F-keys, a SysReq key, etc., and probably don't have the jumper pins which let you reflash the BIOS EEPROM.
(Icon for "Machine capabilities previously provided by the manufacturer have been removed in newer models for (perceived) profit-related reasons")
Some CEO singing the finger-promise song ("Yubikiri genman, uso tsuitara hari senbon nomasu.") is pointless unless there is an organization both willing and able to enforce that promise (e.g., whoever lies [breaks their finger-promise] will be made to swallow one thousand needles and cut off a finger).
The author of this article seems to be ignoring some history. Let's set aside this particular example (government ERP and huge software systems which have to be seemingly-endlessly customized, and still ending up a poor fit) for a moment.
How many attempts have been made over the decades -- and failed -- to create a high-quality, "universal" set of subroutines, access methods, APIs, objects, frameworks, etc. for even more-limited problem domains?
Many failures of systems analysis -- for even simple, manual systems -- stem from the people running them not even agreeing among themselves how the existing manual system currently works, and the analyst not being able to resolve the conflicts (determine the truth).
A simple example of this: in a workshop/factory, one department pulls parts and makes up kits (based on order/spec sheets), puts them on carts, and wheels the carts down to the department which assembles the kits into products. Sometimes there are "hot" jobs, so marked with a large red "H" on the order/spec sheets. These carts are to be sent to the head of the cart-line in the assembly department. The entire management chain schedules things, prepares reports, and makes promises to customers, based on this simple manual system. Yet one of the assemblers told me, "Ehhh, we [the assemblers] don't worry about which cart is in front of another. We just pull out whichever order we want to work on. They all get done."
If the programmers don't truly know how the system works, how can they possibly create a program which implements it correctly?
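To make the mismatch concrete, here is a toy model of the cart line (entirely my own illustration -- the names, the hot-cart rule, and the random pull are assumptions, not workshop data):

```python
import random
from collections import deque

# Toy model of the cart line: the documented process says "hot" carts go
# to the head of the queue and carts are worked in order; the actual
# process, per the assembler, is "pull whichever order we want".

def documented_next(carts):
    """Documented discipline: FIFO, with hot jobs at the head."""
    return carts.popleft()

def actual_next(carts):
    """Observed discipline: any cart, in any order."""
    i = random.randrange(len(carts))
    carts.rotate(-i)            # bring the chosen cart to the front
    return carts.popleft()

line = deque()
for cart in ["A", "B-hot", "C"]:
    if cart.endswith("-hot"):
        line.appendleft(cart)   # hot job goes to the head of the line
    else:
        line.append(cart)

assert documented_next(deque(line)) == "B-hot"  # the model's promise

picked = actual_next(line)      # reality: could be any of the three
assert picked in {"A", "B-hot", "C"}
```

A program built on `documented_next()` will confidently report hot-job completion dates; a shop floor running `actual_next()` will quietly falsify them.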
From TFA: the resulting material has "already drawn higher levels of engagement from authentic social media users."
The "higher levels of engagement" are probably because that shit's funny. I have a collection of not-true-AI-generated images. I think the best one is of a woman in a bikini sitting on the sand. She has three legs.
Run Length Limited has nothing to do with "angle". It is simply an encoding method to make it take much longer for the magnetic spots to migrate toward each other, and thus increase the usable bit density of the drive. Seagate made the ST-225, which was a 21 MB drive. They also made the ST-238R, which was "certified" for RLL 2,7 use, which gave it a capacity of 32.7 MB.
You could use an RLL controller with an uncertified-for-RLL-use ST-225 drive, or another company's similar drive, but the reliability of such use would be ... variable.
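The capacity figures check out arithmetically: both drives shared the same geometry (commonly quoted as 615 cylinders, 4 heads, 512-byte sectors), and RLL 2,7 fits 26 sectors per track where MFM fits 17. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the ST-225 vs ST-238R capacities.
# Geometry figures are the commonly quoted ones, not from a spec sheet
# in front of me: 615 cylinders, 4 heads, 512-byte sectors.

CYLINDERS, HEADS, SECTOR_BYTES = 615, 4, 512

def capacity_mb(sectors_per_track):
    return CYLINDERS * HEADS * sectors_per_track * SECTOR_BYTES / 1_000_000

mfm_mb = capacity_mb(17)   # ST-225, MFM: 17 sectors/track
rll_mb = capacity_mb(26)   # ST-238R, RLL 2,7: 26 sectors/track
# mfm_mb ≈ 21.4, rll_mb ≈ 32.7 -- matching the quoted 21 MB and 32.7 MB
```

The ~53% gain (26/17) comes from the denser encoding alone; the platters, heads, and spindle speed are unchanged, which is exactly why an uncertified drive could be pushed into RLL service -- and why doing so was a gamble.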
... that anything they do will break something or delete important data: they simply have no understanding of computer systems.
It's all witchcraft and magic to them, and they have no solid basis from which to logically decide whether the next thing they do with the computer will result in "goodness" or will result in "badness."
From the CoC: "... by encouraging participants to recognize and strengthen the relationships between our actions and their effects on our community."
1. Pedantic note: "between our actions" should be, "between their actions".
2. An action will have whatever effect it will have on a community; wishing really hard will not make that action have a stronger effect.
From the CoC: Harassment includes: ... harassing photography or recording ...
How does one distinguish harassing photography or recording from non-harassing photography or recording?
From the CoC: If you feel you have been falsely or unfairly accused of violating this Code of Conduct, you should notify one of the event organizers with a concise description of your grievance. Your grievance will be handled in accordance with our existing governing policies. The "existing governing policies" were not listed or linked-to in the CoC.
These sorts of problems are worsened by cost-driven, bare-functional-minimum-type equipment provisioning, and by too many don't-worry-your-pretty-head-about-how-it-works subsystems. You don't have "discs", you have "vdisks" (or whatever they're called in your SAN vendor's terminology) in a SAN, whose underlying workings you frequently cannot know due to secret-sauce algorithms and software. It's subsystem-turtles all the way down.
You have to have some excess disc space, channel capacity, and CPU capacity to use to shuttle data around when you run into unexpected problems, or else you'll end up spending a long time fixing/recovering from these sorts of problems.
When people start writing (as happened in TFA's referenced URL) or talking like, "XXXX has begun taking its YYYY-related IP seriously ...", it's a sign that they've stepped into one of Simon Bar-Sinister's* Phoney Booths and been converted into a corporate mouthpiece.
*Simon Bar-Sinister was one of the primary villains in the 1960s American cartoon, "Underdog".
including hour-long login times, hour-plus delays from logging in to Outlook actually opening, $100 desktop PCs
Hour-long login times indicate to me that their AD (or NDS) tree structure is badly organized, and/or they have replication issues. I can't speak to Outlook issues. And if all I did with a PC was email, word processing (though not via MS Windows/MS Office/MS Word, gods forfend), small spreadsheets, and [remote] database work, I'd be fine with a $100.00 PC (an RPi, keyboard, mouse, and monitor?).
The DoD suffers the disadvantage of being a very hierarchically-minded, command-and-control-mentality organization. Due to pathological "office" politics, their sub-departments don't play well together.
drugs were not common everywhere or even expensive
1. I don't know where you lived where "drugs were not common" back then.
2. Among other effects, the War on Drugs has served to act as a price support for the drug cartels. ("Ooh, big drug busts, it's hard to smuggle stuff in, we're taking losses on captured shipments, so a dime bag no longer gets you as much product as it used to." Or something like that.)
One of the current major use cases for hardware emulators such as Qemu, VirtualBox, etc. is running older OSes for which one does not have the source code, and which are no longer supported.
Frequently the older OS is being run in order to run older, unsupported application software.
This wonderful new microVM system requires mutual knowledge of, and cooperation between, the microVM hypervisor and the guest.
As the microVM hypervisor is "improved", the features and interfaces will change, which in turn means no-longer-supported OSes will not be modified to run under the new microVM. This will stop you from using the older, static OS with the new-and-improved microVM. Running older versions of the microVM hypervisor may expose one to unacceptable security risks. Thus the need for hardware emulation.
From TFA: The idea is that you don't need to know anything about the infrastructure: your program calls another program, and the management tooling spawns as many instances as needed to run that specific operation, return the result, and then delete the VMs used to run the calculations. You never need to know where it happened or how.
You do need to know, if you are doing things at large scale. Our ancestors thought the forests, fishes, and game animals were limitless. They were wrong. A CPU context switch is one of the most expensive operations, and enough guppies can eat a whale.
This new technology can be useful, but it is no panacea.
No, not white noise. Each file is a cat video con-cat-enated (*snerk*) with your encrypted data file. Users play your "video", see a cat video, and that's it. Hopefully, Google's streaming system doesn't require the cat-video-watcher to download your entire 20GB combined cat-video-plus-data file to watch just the cat video.
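Mechanically the trick is just byte concatenation. A minimal sketch (the contents are placeholders, and this says nothing about whether Google's pipeline would re-encode the "video" and strip the trailing bytes):

```python
import io

video = b"FAKE-MP4-DATA"      # stand-in for a real cat video
payload = b"ENCRYPTED-BLOB"   # stand-in for the encrypted data file

# The "upload": just the video bytes with the ciphertext glued on the end.
combined = io.BytesIO(video + payload)

# A player reads the container from the front and stops when the video
# ends; the trailing bytes are simply ignored by the viewer.

# Recovering the payload later requires knowing the video's length:
combined.seek(len(video))     # skip past the playable part
recovered = combined.read()
assert recovered == payload   # the encrypted file survives intact
```

The catch, as noted, is on the retrieval side: streaming services serve what viewers need to watch, not necessarily the raw uploaded bytes, so the trailing 20GB may never come back down the pipe.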
It's still the cloud, but what are the odds of two major cloud providers being affected at the same time?
There are just three major cloud providers: Amazon, Microsoft, and Google. Why do you presume that one of these little in-the-cloud backup companies would not be hosted on the same infrastructure as the (virtual) web- and/or email-hosting company subscribed to by the victimized companies?
(icon to remind you what can-and-has-happened [smoke-cloud-over-OVH])
I had a DG Nova/4 for a while, and vaguely recall reading somewhere that one could cable two CPUs to share the same disk box (a 6045? It had 5+5 MB: DP0 was the lower, fixed platter, and DP1 was the upper, removable [in-a-cartridge] platter). Did you ever see or hear of such a thing being used?
1. Payment for those "wasted" computer resources is akin to insurance payments: you're paying now to mitigate possible severe bad consequences later. This is especially relevant to retail-related systems which experience transaction peaks around certain holidays and times of the year.
2. You're not paying as much as you are afraid that you are for that usually-unused capacity: CPU cycle, RAM capacity, and hard disc storage capacity costs are going pretty-much continually down.
A major problem here is when management, when shown the cash outlay for two identically-resourced computers, says they can save money on the second server by making it less-capable, and promises the techies they won't be expected to provide identical performance in failover mode -- promises which are immediately forgotten-and-later-denied by said management.
Degrees and certifications are irrelevant to knowing the primary and secondary systems should be identically-resourced.
A job in management, or a job which requires sucking up to management, can far-too-easily remove a person's give-a-shits about possible bad consequences when higher-level management wants to save money (so they can spend it on "more-important" things).
Was that the same "80 year old receptionist who's been there since 1972" who threatened me with a ruler when I came to take away her green-screen IBM 3278 terminal and replace it with a shiny-new IBM PS/2 with all our hospital's standard apps (including a terminal emulator to connect to her beloved mainframe)?
I did the sensible thing -- I backed off and told my supervisor what had happened. After presumably some higher-level intervention, some other unlucky sod got the job of replacing her terminal with a networked personal computer.
if it doesn't work like Word, you're up the proverbial creek without a paddle.
Geh. Just because a thing is common, does not mean that it is necessarily good.
WordPerfect (version 4.2 for DOS). PC-Write (for DOS). QEdit (aka The SemWare Editor). All of these were great programs, and all of them had a different UI than did MS Word. All three of them made heavy use of function keys.
It doesn't matter whether or not his pal(s) still work(s) there. He sold his soul (influence) a long time ago when he accepted an arrangement of, say, a long-term, very low-interest loan on a very nice house there, or perhaps college scholarships for his children. He is owned.
"Some day, and that day may never come, I will call upon you to do a service for me."