Re: I can only warn
that might be a tad extreme. if you could go for 50 unicode chars to handle remote git repos, we could agree.
26 posts • joined 17 Jul 2020
in yon olden times, before covid, i used to really rely on the "just keep going" feature for my commute. 25 minutes on a boat. tap tap tap. close laptop and get on a bus. tap tap tap. 90 minutes later, close laptop and hoof it to work. plug in power, and tap tap tap.
sure, booting doesn't take that long (full-disk encryption notwithstanding), but it's still 3 minutes. finding where you left off takes much longer. even if every application is fully stateful, you'll have web pages that need to reauthenticate, maybe a vpn that needs new creds, and the phone tethering. it just goes on.
ianal, but in general you can't sign a contract that gives away basic constitutional rights, or that allows one to break the law. two people can't sign a contract saying one may kill the other and then it's not murder. so without any direct knowledge of the relevant CA statutes and federal code, it seems highly unlikely that such a search wouldn't be prosecuted as a crime.
not that I'm arguing apple security is reasonable.
rust isn't a panacea.
in my recent professional experience, the code with the most catastrophic bugs (core dumps, deleted databases) has been rust. sure, this is an anecdote, but my point is that taking one small class of bugs rust does solve and blithely generalizing, chanting web scale, er, i mean rust, at any class of bug you can think of, isn't correct.
i think rust brings some great safety that other languages haven't brought before, but saying it's a (memory) "safe" language and implying that all rust code is safe to run in the general sense is linguistic substitution.
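to make the distinction concrete, here's a toy function (hypothetical, not from the thread): it compiles as 100% safe rust, no `unsafe` anywhere, and still aborts at runtime on unexpected input. memory safety says nothing about panics or logic errors.

```rust
// safe rust that can still crash: the borrow checker guarantees memory
// safety, but an unwrap() on a missing value panics and kills the process.
fn nth_word(s: &str, n: usize) -> String {
    // panics if there is no n-th word -- perfectly "safe" rust
    s.split_whitespace().nth(n).unwrap().to_string()
}

fn main() {
    println!("{}", nth_word("hello world", 1)); // fine
    // nth_word("hello world", 5) would panic here
}
```

calling it with an out-of-range index is the kind of "catastrophic bug" rust's type system doesn't catch for you, unless you model it with `Option`/`Result` instead of unwrapping.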
i prefer that you not put words in my mouth so as to preserve room for my foot. :-)
all i observed was the costs in this case mirrored the org structure. the implications are interesting, but above my pay grade. anecdotally using full cd rather than dedicated testing stages has resulted in much less anxiety about making changes, and more rapid fixes. and for the record, i spend a lot of time working with support and i think they are invaluable both for customers, and engineering.
so the problem with failover is that it is not easy to determine when to fail over. who watches the watcher? a common failure mode i saw was two storage units mutually declaring each other dead, with both then operating as master. that was the source of many bad days. assuming you require durability in the database, you're going to need a paxos-like consensus algorithm to solve this problem. i've got a hunch that no traditional sql server provides this.
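the quorum idea at the heart of paxos-style systems can be sketched in a couple of lines (a toy illustration, not a real consensus implementation): a node only acts as master if it holds votes from a strict majority of the cluster, so a symmetric partition can never yield two masters.

```rust
// minimal sketch of majority quorum: with a strict majority required,
// at most one side of any network partition can ever claim mastership.
fn can_lead(votes_for_me: usize, cluster_size: usize) -> bool {
    votes_for_me > cluster_size / 2
}

fn main() {
    // 4-node cluster split 2/2: neither half has a majority,
    // so neither side becomes master -- no split brain.
    println!("{}", can_lead(2, 4)); // false for both partitions
    println!("{}", can_lead(3, 4)); // true: majority side may lead
}
```

the two-mutual-masters story above is exactly what happens when "am i alive and is my peer dead?" is decided without such a quorum: each side of the partition answers yes.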
i don't have time to fact-check the entire article, but using aws' m6g.4xlarge for best network performance is clearly not right, as that's only a 10 Gbps instance, whereas (without looking too hard) m6g.16xlarge is 25 Gbps and c5n.18xlarge is 100 Gbps. are there constraints in this report that one has to dig through the notes to find?
Biting the hand that feeds IT © 1998–2021