Re: Kill the orphans!
I once wrote some code with the comment
/* If Child does not respond kill it */
If kill is replaced by stop, how does that work? SIGKILL vs SIGSTOP becomes SIGSTOP vs SIGSTOP??
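For what it's worth, the difference is easy to demonstrate. A minimal POSIX-only Python sketch (the `sleep` child stands in for an unresponsive process — a stopped process still dies on SIGKILL, which is exactly why the two are not interchangeable):

```python
import os
import signal
import subprocess
import time

# Spawn a child that will "not respond" (here it just sleeps).
child = subprocess.Popen(["sleep", "60"])

# SIGSTOP freezes the child; it could later be resumed with SIGCONT.
os.kill(child.pid, signal.SIGSTOP)
time.sleep(0.1)

# SIGKILL terminates it outright -- even a stopped process dies on SIGKILL,
# so "stop it" and "kill it" are very different outcomes.
os.kill(child.pid, signal.SIGKILL)
child.wait()

print(child.returncode)  # negative signal number: -9 for SIGKILL
```

Neither SIGKILL nor SIGSTOP can be caught or ignored by the child, which is what makes them the signals of last resort.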
"Put it in the Cloud it scales and can be flexed up and down dynamically"
Ha ha ha! How many times do we hear "oh, but that service does not flex... but it will when we fix it", or "we can make those requests much more efficient"? How about writing it properly in the first place? It is all about time to code now; reliability and efficiency are an afterthought.
"The outage, which occurred on 26 March, brought down Google's cloud services in multiple regions, including Dataflow, Big Query, DialogFlow, Kubernetes Engine, Cloud Firestore, App Engine, and Cloud Console."
1 Cloud, 1 set of cache servers, no separation, to be "efficient". It won't all break at once... LOL!!
Your automated deployment could say deploy 16GB VMs for cache server.
Where they get deployed physically could be anywhere on hypervisors of any size with spare resources.
You say increase the memory on each VM to 32GB.
You SHOULD have unused memory in your hypervisor pools to allow for unexpected growth when you operate at the size Google does.
They have hundreds of thousands, if not millions, of hypervisors, so keep x% free to allow for growth; as you use it, add more hypervisors to the pool!
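That headroom rule is simple to sketch. A hypothetical version in Python — the reserve fraction, pool sizes, and function name are all made up for illustration, not Google's actual policy:

```python
import math

# Keep at least `reserve` fraction of the pool's memory free; when demand
# eats into the reserve, work out how many hypervisors to add to restore it.

def hypervisors_to_add(total_hv, mem_per_hv_gb, used_gb, reserve=0.20):
    """Return how many hypervisors must join the pool to restore headroom."""
    needed_total = used_gb / (1 - reserve)  # capacity that keeps reserve free
    shortfall = needed_total - total_hv * mem_per_hv_gb
    return max(0, math.ceil(shortfall / mem_per_hv_gb))

# Pool of 1,000 hypervisors with 512GB each; demand grows to 450,000GB.
print(hypervisors_to_add(1000, 512, 450_000))  # → 99 more needed
print(hypervisors_to_add(1000, 512, 100_000))  # → 0, plenty of headroom
```

The point is that growing a VM from 16GB to 32GB is then just an allocation against the reserve, not a scramble for hardware.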
But... desktops have gone from 10Mb/s to 100Mb/s to 1Gb/s... and stopped.
When are desktops getting 10Gb/s?
1Gb/s ~= 100MB/s; my NVMe SSD is supposed to be much faster than that.
If I want to back up my stuff, software/pics (currently 570GB), to another desktop it still takes hours!
My NUC has Thunderbolt 3
I tried Thunderbolt 3 (a specific card for my motherboard and revision) with a £50 cable (!) and never got it to work.
Waiting for 10Gb/s to the desktop!
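The arithmetic backs this up. A quick back-of-the-envelope calculation for that 570GB backup at common desktop link speeds (ignoring protocol overhead and assuming the disks can keep up):

```python
# Best-case transfer time: size in gigabits divided by link speed in Gb/s.

def transfer_hours(size_gb, link_gbit_per_s):
    size_bits = size_gb * 8  # gigabits to move
    return size_bits / link_gbit_per_s / 3600

for label, gbit in [("100Mb/s", 0.1), ("1Gb/s", 1.0), ("10Gb/s", 10.0)]:
    print(f"{label}: {transfer_hours(570, gbit):.2f} hours")
```

So even at gigabit the theoretical best case is about 1.3 hours, and real-world overhead only makes it worse; 10Gb/s would bring it under ten minutes and finally let the NVMe drives stretch their legs.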
I have come across developers who explained why they do this...
Feature X is only available in Version Y and higher of browser Z.
How do you know when you can start to use that feature? When only a few per cent of your visitors are still using versions older than that... "we do not care about the last few per cent of customers, we can drop them, as our turnover is 10%+ per month anyway"!!
Otherwise everyone would have to stay compatible with IE6, and legacy compatibility code would have to be written for every page!
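That decision rule — adopt a feature once the share of visitors on older versions drops below some threshold — can be sketched in a few lines. The version numbers and shares below are made up for illustration:

```python
# Hypothetical adoption rule: use the feature once visitors on versions
# older than `min_version` fall below `max_legacy_share` of traffic.

def can_use_feature(version_share, min_version, max_legacy_share=0.02):
    """version_share maps browser version -> fraction of visitors."""
    legacy = sum(s for v, s in version_share.items() if v < min_version)
    return legacy <= max_legacy_share

shares = {58: 0.01, 60: 0.04, 66: 0.45, 67: 0.50}
print(can_use_feature(shares, min_version=60))  # 1% legacy -> usable
print(can_use_feature(shares, min_version=66))  # 5% legacy -> not yet
```

Of course, whether 2% of customers is an acceptable sacrifice is a business decision, not a technical one — which was rather the point.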
- RBAC is not configured the same way on all platforms
- RBAC may have different capabilities on each platform/version
- Having to configure a person into different RBAC 'groups' on each platform is inefficient and error-prone, or requires more automation to be built/tested/patched.
- Producing a unified audit trail across all RBAC platforms is probably either not feasible or painful.
- Not sure how well automation tools support RBAC configuration across platforms
I notice Oracle is not on the list.
All affected people (including on Azure) should move over to Oracle Cloud.
When that is added, then over to Rackspace's Cloud, then to... then to... all the way to Dave's Cloud, aka Billy Bob's Cloud... aka... keep changing the name/owner every week!
"Well, shucks... sure we can move them there VM's to Suzie's Cloud for yer, yeeee-haw!"
Not that any of the smaller Cloud providers are cowboys... perish the thought!
The idea is to put your decryption code in the enclave and then send encrypted data and a description of the operation you want to perform to the enclave.
The unencrypted data never leaves the enclave, not even the hypervisor sees the unencrypted data.
E.g. to search encrypted data in SQL Server.
What I do not get is how you get the decryption keys into the enclave securely!
"The client driver sends the column encryption keys required for the operations to the secure enclave (over a secure channel)."
What secure channel which the hypervisor cannot see? Hmmm..
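The dataflow being described can be sketched as a toy: ciphertext plus an operation request go into the "enclave", and only the result comes out. The XOR keystream below is a placeholder for real encryption and all the names are made up for illustration — this is not the actual SQL Server protocol:

```python
import hashlib

# Toy cipher: XOR against a SHA-256-derived keystream. Demo only, NOT secure.
def xor_cipher(key: bytes, data: bytes) -> bytes:
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

class ToyEnclave:
    def __init__(self, key: bytes):
        self._key = key  # delivered over the "secure channel"

    def contains(self, ciphertext: bytes, ct_pattern: bytes) -> bool:
        # LIKE-style predicate: decrypt inside, return only the boolean.
        return xor_cipher(self._key, ct_pattern) in xor_cipher(self._key, ciphertext)

key = b"column-encryption-key"
enclave = ToyEnclave(key)
stored = xor_cipher(key, b"alice smith")  # what the database column holds
print(enclave.contains(stored, xor_cipher(key, b"smith")))  # → True
print(enclave.contains(stored, xor_cipher(key, b"bob")))    # → False
```

The plaintext only ever exists inside the `ToyEnclave` object; the database and hypervisor see ciphertext in, boolean out — which is the whole selling point, provided the key delivery really is secure.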
Sounds like "Always Encrypted with Enclaves": http://smooth1.co.uk/sqlbits2018/sqlbits2018roundup.html#2
1. Is this protected against https://www.theregister.co.uk/2018/03/28/intel_shrugs_off_new_sidechannel_attacks_on_branch_prediction_units_and_sgx/ with "utilization of an appropriate side channel attack-resistant crypto implementation inside the enclave"?
2. Has it been rebuilt per https://www.theregister.co.uk/2018/03/01/us_researchers_apply_spectrestyle_tricks_to_break_intels_sgx/: "Enclave code will need to be rebuilt and redeployed using the updated development kit to be protected from malicious sysadmins."
3. As per my blog entry above, "On first use the client driver and enclave negotiate a shared secret and then setup the secure tunnel". Surely to negotiate a shared secret there is a small initial window where you first have to trust the hypervisor?
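On point 3: a Diffie-Hellman-style exchange means a passively snooping hypervisor never sees the secret, since only the public halves cross the wire — though DH alone does not stop an active man-in-the-middle, which is exactly the trust gap; as I understand it, SGX's answer is remote attestation, where the CPU signs the enclave's half of the exchange. A toy sketch (tiny prime for illustration only; real deployments use 2048-bit+ groups):

```python
import secrets

P = 0xFFFFFFFB  # small prime (2^32 - 5), demo only
G = 5

client_priv = secrets.randbelow(P - 2) + 1   # never leaves the client
enclave_priv = secrets.randbelow(P - 2) + 1  # never leaves the enclave

client_pub = pow(G, client_priv, P)    # sent to the enclave in the clear
enclave_pub = pow(G, enclave_priv, P)  # sent back, also in the clear

# Both sides derive the same key material; an observer sees only the
# public values and cannot feasibly recover the shared secret.
client_secret = pow(enclave_pub, client_priv, P)
enclave_secret = pow(client_pub, enclave_priv, P)
print(client_secret == enclave_secret)  # → True
```

So the open question is less "can the hypervisor read the handshake" and more "how do you know the other end of the handshake really is the enclave" — which is what the attestation signature is supposed to answer.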
"In a production environment, we recommend that you use static IP addresses in conjunction with the virtual IP address of a Failover Cluster Instance. We recommend against using DHCP in a production environment. In the event of down time, if the DHCP IP lease expires, extra time is required to re-register the new DHCP IP address associated with the DNS name."
Just checking Wikipedia https://en.wikipedia.org/wiki/Software_Guard_Extensions#cite_note-14 we see that
a) There was a Prime+Probe attack which used "certain CPU instructions in lieu of a fine-grained timer to exploit cache DRAM side-channels", and a countermeasure was published.
b) The LSDS group at Imperial College London showed a proof of concept that the Spectre speculative execution security vulnerability can be adapted to attack the secure enclave and the code is dated 2 months ago.
I wonder if the "compiler-based tool, DR.SGX", which was a countermeasure for Prime+Probe, could be extended to handle Spectre?
6000 machines... so run 200 machines at a time, 30 times.
What is this obsession with 10, 100, 2000, rest, and doing a massive population in 5 steps?
Even if the first 2110 machines worked fine, how long would it take to fix the last 3890 machines if enough of them broke?
For failures it is not the number of times you have done it before but the size of the failure domain and how long it takes to fix.
It should be possible to roll out automatically in small batches, and even have multiple upgrades rolling out at the same time on an automatic schedule, rippling across the farm!
If it is automated and scheduled who cares how many batches of upgrades are run?
You would catch errors with less impact that way, as the failed batch size would be smaller, and it would be minimal extra work if designed correctly.
This is the next stage in cloud service design - being able to have slower rolling upgrades with smaller batches!
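The small-batch argument can be sketched in a few lines — halt at the first failed batch so the blast radius is one batch, not the remaining thousands. `upgrade` here is a stand-in for the real deployment step:

```python
# Roll through the fleet in fixed-size batches, stopping on the first
# batch that fails; the failure domain is bounded by batch_size.

def rolling_upgrade(machines, batch_size, upgrade):
    upgraded, batches = [], 0
    for i in range(0, len(machines), batch_size):
        batch = machines[i:i + batch_size]
        if not all(upgrade(m) for m in batch):
            break  # stop the ripple: only this batch is suspect
        upgraded.extend(batch)
        batches += 1
    return upgraded, batches

machines = [f"host{n:04d}" for n in range(6000)]
ok, n = rolling_upgrade(machines, 200, upgrade=lambda m: True)
print(len(ok), n)  # → 6000 30
```

With 200-machine batches, a bad upgrade strands at most 200 machines plus whatever already rolled; with the 10/100/2000/rest scheme, the "rest" step alone risks thousands at once.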
This will shut down 1040 servers; please type 1040 to continue.
This will reduce capacity enough to cause a service failure for the following 8 services:
Please type "8 SERVICE FAILURES" to continue.
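A typed-back confirmation like that is trivial to build — the operator must restate the blast radius before the destructive action proceeds. A sketch (the function name and wording are mine; the response is passed in rather than read from stdin so it can be tested):

```python
# Guard a destructive action behind a prompt that makes the operator
# type the blast radius back verbatim.

def confirm_shutdown(server_count, affected_services, answer):
    """Return True only if `answer` restates the impact exactly."""
    if affected_services:
        required = f"{len(affected_services)} SERVICE FAILURES"
    else:
        required = str(server_count)
    return answer == required

print(confirm_shutdown(1040, [], "1040"))                       # → True
print(confirm_shutdown(1040, ["svc"] * 8, "8 SERVICE FAILURES"))  # → True
print(confirm_shutdown(1040, ["svc"] * 8, "yes"))                 # → False
```

The point is that "yes" or a reflexive Enter never suffices — the operator has to have actually read the number.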
(Transactions) or (integrity checks).
Integrity checks, I expect, are unique/primary/check/foreign key constraints. We used to disable them when doing data migrations, until the end.
"You can't disable logging on mssql.".
Pity, you can on Informix!
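The disable-constraints-during-migration trick looks like this — sketched in SQLite via Python's stdlib because that is runnable anywhere; SQL Server uses `ALTER TABLE ... NOCHECK CONSTRAINT` for the same idea:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id));
""")

con.execute("PRAGMA foreign_keys = OFF")          # migration mode
con.execute("INSERT INTO child VALUES (1, 42)")   # orphan row, no error yet
con.execute("INSERT INTO parent VALUES (42)")     # parent arrives later
con.commit()
con.execute("PRAGMA foreign_keys = ON")           # back to enforcing

# Verify integrity at the end of the migration, as we used to do.
violations = con.execute("PRAGMA foreign_key_check").fetchall()
print(violations)  # → [] : every child row now has its parent
```

Loading in any order and checking once at the end is vastly faster than enforcing row-by-row — as long as you remember the check at the end!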
Biting the hand that feeds IT © 1998–2020