Re: A tragic and rather absurd death
I don't. Am I 100% sure I drive at a slow enough crawl that I'd notice a bridge at the bottom of a dip was gone, when the guardrails are still there to suggest one is still present? No, I'm not.
After one of our big earthquakes in California, a couple of CHP officers died this way. They topped a rise in the pre-dawn twilight and could see the road cresting the next hill, but didn't realize that the road in between those two points had ceased to exist.
Technically speaking, you should always drive such that you can stop within the distance you can see; but in real life, almost none of us drive that way on the highway.
Eh, this is an area where Apple really does have the edge. macOS uses CUPS and printing -- even network printing -- mostly just works. If you have a big network or different subnets where you can't use Bonjour it gets slightly more complicated, but only slightly. All of our network printers at work can be added to macOS just by selecting AirPrint as the protocol and putting in the printer's hostname. Everything else happens automatically.
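If you'd rather script it (or skip the GUI), the stock CUPS lpadmin tool can do the same thing from the command line -- a sketch, where the queue name and hostname are placeholders:

```shell
# Add a driverless IPP Everywhere (AirPrint-style) queue by hostname.
# "OfficePrinter" and "printer.example.com" are placeholders.
lpadmin -p OfficePrinter -E \
    -v ipp://printer.example.com/ipp/print \
    -m everywhere
```

The `-m everywhere` driverless model is what lets this work without vendor drivers: CUPS queries the printer over IPP for its capabilities, much as AirPrint does.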
I've become a tablet-with-kickstand plus Bluetooth keyboard person. Or, if I'm really traveling light, phone plus Bluetooth keyboard. I found a laptop made my shoulder bag heavier than I cared to carry around, especially by the time I added other necessities of travel.
If I want more computing power than that I can remote into my desktop. No point in lugging it around with me.
The tricky bit is that there are a lot of concepts in academia that just don't exist in private-sector payroll systems. People in the private sector don't go on sabbaticals, don't have their positions routinely terminated and then reinstated four months later, don't have pay rates that vary with credit load, and don't get paid partly out of grant funds and partly out of general funds. You can make that stuff work (we have), but only at the cost of a lot more busywork for the people managing payroll. You end up with people whose only job is to manually shove changes into PeopleSoft that it can't track on its own. Our hiring process has actually gotten slower and more complicated in order to accommodate the software.
But Oracle always gets the nod because they're seen as the "safe" choice. "No one ever got fired for buying IBM" has become "no one ever got fired for giving a contract to Oracle."
I feel your pain. We only recently got rid of our last mainframe-based systems -- by the end we were literally in a situation where all the data was accessible via the web, but people still needed a 3270 terminal emulator to change their password. I got to remove MochaSoft TN3270 from my software distribution system just last year.
We went through this recently at a university I work for. It worked out in the end, but a lot of damage was done in the process of trying to hammer academic payroll requirements into an Oracle PeopleSoft-shaped hole.
I knew of another uni that moved their payroll off a 1970s IBM mainframe. Just figuring out all the subtle business rules embedded in all that undocumented COBOL code took a long time.
Right, if you set "quota=300G", the dataset can't exceed 300 GB *including snapshots*. Using "refquota=300G" instead means snapshots don't count against the quota. Time Machine gets a little confused if it starts deleting old backups and doesn't see any new space appear. ;)
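For anyone setting this up, the distinction comes down to one property on the dataset -- a sketch with hypothetical pool and dataset names:

```shell
# quota caps live data *plus* snapshot space; refquota caps only the
# data the client can see. Names below are placeholders.
zfs set refquota=300G tank/timemachine/laptop1

# Check what's actually charged against each limit:
zfs get quota,refquota,used,referenced tank/timemachine/laptop1
```

With refquota, `referenced` (live data) is what counts against the 300 GB, so Time Machine's deletions free space the client can actually see.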
Good point on fruit:resource. I haven't added that because this server is strictly for Time Machine backups, so it just ends up with a bunch of sparsebundle files. I'm considering setting up another one to replace our macOS file server, and if I do I'll definitely check out that option.
You can do Time Machine backups to SMB shares -- I have it working for a few dozen computers, backing up to a FreeBSD system with ZFS storage. The settings that worked for me:
vfs objects = acl_xattr catia fruit streams_xattr
fruit:metadata = stream
fruit:model = MacSamba
fruit:posix_rename = yes
fruit:zero_file_id = yes
And then per share:
fruit:time machine = yes
Each share is restricted using "valid users" to a username unique to each machine, so they can't read each other's backups.
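Put together, a per-machine share stanza might look like this (share name, path, and username are placeholders; the fruit settings above go in the global section):

```
[tm-laptop1]
    path = /tank/timemachine/laptop1
    valid users = tm-laptop1
    read only = no
    fruit:time machine = yes
```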
I take timed ZFS snapshots, so that a machine infected by ransomware can't encrypt all of its own backups.
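The snapshots can be as simple as a periodic job on the server -- a sketch, with a hypothetical dataset name:

```shell
# Take a timestamped, read-only snapshot of one machine's backup dataset.
# The client only writes to the live filesystem over SMB; it has no way
# to modify or destroy server-side snapshots, so pre-infection backups
# survive even if the live sparsebundle gets encrypted.
zfs snapshot tank/timemachine/laptop1@$(date +%Y-%m-%d_%H%M)

# Prune old snapshots by hand or by script, e.g.:
# zfs destroy tank/timemachine/laptop1@2024-01-01_0000
```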
You need to use refquotas to keep things under control, since Time Machine will keep adding backups until the disk is full, then delete the oldest ones. (Straight quotas won't work: deleting files doesn't delete the snapshots, so no space ever gets freed up for the client.)
I've successfully done Migration Assistant restores from this setup. It's slow, but within normal bounds for Time Machine.
If your list of items to sort is *very* short (as it seems to have been at first), then bubblesort can actually be faster.
I've also seen situations where a "worse" sorting algorithm ran faster because it fit entirely in processor cache, whereas a more sophisticated sort did not.
I disagree that backups are easy -- at least, not if you do them right. One of the drivers of cloud storage abuse where I work was people trying to set up off-site backups, which is neither cheap nor easy. We have an on-site data center but nothing off-site.
Sure, but if you're just leaving them in a tape library they're not a "backup" by Doctor Syntax's definition, since they could be loaded and written to at any time. To meet their full definition the tapes have to be removed, write protected, and then hauled to another site.
Back in the 1990s, I used to work at a bank that dealt with this by having the sysadmin put the previous week's backup tape set in the trunk of his car every Friday. This did technically create off-site backups, but it always seemed to me that there were security implications. It also didn't do those fragile DDS tapes much good.