whatever happened to 1-node, then 1-dc, then 1-site, then world roll-out strategy? What is this, "it compiled, ship it" level of testing and deployment?
72 posts • joined 14 Nov 2013
Microsoft customers locked out of Teams, Office, Xbox, Dynamics – and Azure Active Directory breakdown blamed
AWS is fed up with tech that wasn’t built for clouds because it has a big 'blast radius' when things go awry
the top-of-rack UPS needs to have enough power for about 30 seconds' worth of load - they might have as much as 2 minutes. Buffers can be flushed and checkpoints written before everything goes black. The gensets fire immediately on loss of mains. But if one were to fail, and the N+1 spare too, then yes, part of the DC goes magically silent.
AWS reveals it broke itself by exceeding OS thread limits, sysadmins weren’t familiar with some workarounds
Re: n^2? Are you #*$&ing kidding me?
The S3 Index tier is a close analog. When we had to mass-bootstrap the tier, the nodes fetched their config from a 'static' source before they fell back to 'chatter/gab' mode to converge. I can't remember how we partitioned node sets, but the respective 'masters' eventually got their immediate peers all registered and sent updates upstream to other cells, and eventually every cell got wind of all the other cells. But we sure as HELL did not maintain N^2 active connections!!
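The seed-then-gossip pattern described here can be simulated in a few lines of Python. This is a toy model of the general technique (static seed, then peer-list swapping until everyone converges), not anything resembling the actual S3 Index protocol; the fanout and node count are arbitrary:

```python
import random

def gossip_converge(n_nodes, fanout=3, seed=42):
    """Simulate seed-then-gossip discovery: every node starts knowing only a
    static seed node (node 0), then repeatedly swaps peer lists with a few
    random known peers until everyone knows everyone - no N^2 mesh needed."""
    rng = random.Random(seed)
    known = {i: {0, i} for i in range(n_nodes)}  # node 0 is the static seed
    connections = 0
    rounds = 0
    while any(len(k) < n_nodes for k in known.values()):
        rounds += 1
        for node in range(n_nodes):
            peers = sorted(known[node] - {node})
            for peer in rng.sample(peers, min(fanout, len(peers))):
                connections += 1
                merged = known[node] | known[peer]
                known[node] = known[peer] = merged  # both sides learn
    return rounds, connections

rounds, conns = gossip_converge(200)
# Converges in a handful of rounds using far fewer than 200*200 connections.
```

The point the simulation makes is the one in the comment: full mutual knowledge does not require full mutual connectivity.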
This was SOLVED 10 years ago by the S3 team (and probably the EBS team). Kinesis apparently didn't bother to avail themselves of the existing codebase.
This is not an uncommon occurrence at AWS - the teams don't talk, and apparently Jassy and his minions haven't beaten the individual service teams with the "REUSE THE GOD DAMN CODE!" hammer enough.
that's who they hire by the bucket-load and most of them are H1B at that. Did you honestly think they had actual CS degrees and wouldn't design something so stupid?
The only way forward is to use key-partitioning (like S3 does) and stop being so damn cheap about refusing to use load balancers, since they have their own in-house design for Pete's sake and don't have to pay Citrix for their NetScalers anymore.
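Key-partitioning of the general kind mentioned is often done with a consistent-hash ring. A minimal sketch, assuming nothing about S3's actual scheme - node names and virtual-node count are invented:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: keys are spread across nodes, and
    adding/removing a node only remaps a fraction of the keyspace."""
    def __init__(self, nodes, vnodes=64):
        # Each physical node gets many virtual points on the ring.
        self.ring = sorted(
            (self._hash(f"{n}#{v}"), n) for n in nodes for v in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash.
        i = bisect.bisect(self.keys, self._hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("bucket/photos/cat.jpg")
```

Same key, same owner, every time - which is exactly what lets you partition without a central lookup service.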
I don't remember how fast the S3 infra converges to a single-system-image, but S3 has 3 distinct tiers for starters and about 350,000 servers globally that need to eventually register and share 'knowledge' about their peers. If Kinesis is not using the correct/latest 'chatter' protocol to discover its swarm, they are fracking idiots.
AWS admits to 'severely impaired' services in US-EAST-1, can't even post updates to Service Health Dashboard
I can do that now and nothing special needed. Find a local-storage node type, put it in a subnet that doesn't have a routing entry for a NAT gateway. Or enforce no traffic via security group rules or Network ACLs.
And lose the PEM key. If you don't have the SSM agent installed you can't get into the 'console' via SSM either. What's so hard about this?
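The lock-down recipe above (no NAT route, no egress rules) can be expressed as a simple check over exported VPC config. A sketch only - the field names here are simplified stand-ins, not the exact EC2 API response shapes:

```python
def is_isolated(route_table, egress_rules):
    """True if a subnet/instance matches the lock-down described above:
    no default route via an internet or NAT gateway, and no security-group
    egress rules at all."""
    has_outbound_route = any(
        r.get("DestinationCidrBlock") == "0.0.0.0/0"
        and r.get("GatewayId", "").startswith(("igw-", "nat-"))
        for r in route_table
    )
    return not has_outbound_route and not egress_rules

# A subnet with only the local VPC route and an empty egress rule set:
locked_down = is_isolated(
    route_table=[{"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"}],
    egress_rules=[],
)
```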
Shots fired! WordPress's Matt claims Jamstack's marketing is 'not intellectually honest' in debate with Netlify's Matt
WordPress has a couple of 'publish as static' plug-ins, which is what 98% of sites should be using. That way WP sits behind the firewall and can run as badly as it wants. People who build WP sites can barely use a browser anyway, so programming is completely out of their skill set.
Funny thing, I remember a few tools back in the early '90s that were 'fancy' WYSIWYG site editors that 'printed' your website in a static manner. Blue-something? The generated HTML was unreadable though.
And no, nobody gives a damn about comment boards on your dumb-ass website. Embed a MAILTO tag in the 'contact us' graphic and the job is done.
use the CLI
It's time to follow the lead of AWS. Go CLI and forget the slow, cumbersome UI. Let pissed-off users or sufficiently motivated consultancies write Python interfaces to CLI calls. You don't need a dashboard anyway; that's Nagios's job. Write wrapper scripts in Perl or Bash. UI and UI portability is a fool's errand.
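The "Python interfaces to CLI calls" idea can be as small as this. A sketch assuming the `aws` binary is on PATH; `build_cmd` and the example subcommands are just illustrative names:

```python
import json
import subprocess

def build_cmd(*args, profile=None):
    """Assemble an aws CLI argv, always asking for JSON output."""
    cmd = ["aws", *args, "--output", "json"]
    if profile:
        cmd += ["--profile", profile]
    return cmd

def aws(*args, **kwargs):
    """Shell out to the aws CLI and parse its JSON - the whole 'wrapper'."""
    out = subprocess.run(
        build_cmd(*args, **kwargs), capture_output=True, text=True, check=True
    )
    return json.loads(out.stdout) if out.stdout.strip() else {}

# e.g.  instances = aws("ec2", "describe-instances")
```

From there a consultancy can layer whatever "interface" they like on top; the CLI remains the single source of truth.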
Or just go back to the old .NET Windows-only client. Nobody cares if they have to keep a Winblows box around to admin via vCenter.
Palantir: Never made a profit, we do something with family-separating ICE, we just lost $580m – please join our IPO
had a job interview there about a year ago. massage consultant and dog walker onsite. And offices in Georgetown. By silly-con valley standards the offices were stark and bare. They were looking for devops help to try to stop doing everything by hand and one-off. I was rather surprised they were so far behind the curve.
What on earth do 2400 employees do exactly? Well same could be said for the ungodly number at FB, Google and Amazon too.
Typical silly-con HR nonsense about how well you work with teams in a nurturing fashion and affirm others' inputs. I told them I don't "do bullshit". If your idea is stupid or you screwed something up, I will tell you to your face. Needless to say, they were aghast.
Bratty Uber throws tantrum, threatens to cut off California unless judge does what it says in driver labor rights row
Simple 'market' solution
First, Fk Uber. But the solution to this isn't employee or contractor per se - that's just a tax grab. Deregulate the entire 'for hire' landscape where it concerns fares. Publish MAX fares, which are what they are today and are posted on every legitimate taxi. Every car must be equipped with an official GPS device with a running meter display - plenty of commercial solutions available. The taxi regulator could publish an app that would do the same. If Uber wants to commit to a lower 'fixed' rate at time of booking, that's their prerogative. And so can the regulated taxi companies.
Every registered driver must be charged a flat-rate 'medallion' fee calculated based on distance traveled while under fare. They must also be charged commercial insurance rates per mile. Uber et al. already collect the VIN of every vehicle and the DL of every participant. This will be reported to the DMV in near real-time - i.e. the previous day's active bookings are reported by VIN. As a driver you can supply proof of commercial liability insurance to the DMV and brand your title as 'for-hire'. You can also bulk-pay your medallion fee to the DMV, say $5000/yr, non-refundable. People who want to make gypsy taxi their source of income will elect to do the needful for some savings in fees. Casual drivers will pay the per-mile rates out of convenience.
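The flat-fee vs per-mile split above is a simple breakeven calculation. The $0.25/mile figure below is an invented placeholder purely for illustration; only the $5000/yr bulk option comes from the proposal:

```python
def cheaper_option(miles_per_year, per_mile_fee=0.25, flat_fee=5000):
    """Back-of-envelope: which medallion-fee option costs a driver less?
    per_mile_fee is a made-up example rate, not a proposed number."""
    per_mile_total = miles_per_year * per_mile_fee
    return "flat" if flat_fee < per_mile_total else "per-mile"

# Breakeven mileage under these assumptions: flat fee / per-mile rate.
breakeven_miles = 5000 / 0.25  # 20,000 miles/yr
```

Full-time drivers clear the breakeven easily and take the bulk fee; casual drivers stay per-mile - which is the sorting effect the proposal is after.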
This makes the playing field completely level and will probably kill 90% of the 'casual' drivers and good riddance.
the family of course. People don't get paid just because they are mouth breathers. They get compensated based on the economic worth of that particular person and expected lifetime earnings which is peanuts compared to even 1st world working poor. There's some extra ladled on top for 'pain and suffering' but at the end of the day, if you're from Ethiopia or Indonesia your life amounts to very little. Sure, there are the odd doctor and oil or mining exec who brings in big coin so their lifetime earnings are not a rounding error.
Life is short and brutish in most places in the world. What's a life worth in China, in Cambodia, Burma, India, most of Africa or pretty much any Muslim state? Damn close to zero. That's just the facts.
The S3 operations team monitors a zillion metrics but, as the article noted, it's almost entirely "inside" activity. No doubt customers started complaining about bucket availability, and there might have been metrics that showed a downward move in request rate that didn't jibe with historical levels.
The answer will be either to piggy-back off specialist anti-DDoS providers, or to stand up arm's-length availability metrics viewed from 'outside' as it relates to geographically distributed name resolution. Or, more likely, write their own anti-DDoS implementation and embed it into the Route 53 infrastructure. The S3 front end itself has request rate-limiting already. But I couldn't say how hardened it might be against a flood of malicious payloads.
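Front-end request rate-limiting of the sort mentioned is commonly some token-bucket variant. A minimal illustrative sketch - this says nothing about how S3 actually implements it, and the rate/burst numbers are arbitrary:

```python
import time

class TokenBucket:
    """Minimal per-client token bucket: allow a burst, then refill at
    a steady rate; requests beyond the budget are rejected."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(rate=10, burst=5)
t0 = tb.last
results = [tb.allow(t0) for _ in range(6)]  # burst of 5 passes, 6th is denied
```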
> Someone wasn't running Cloudwatch/Cloudtrail properly/at all then.
Oh come on. The number of outfits that even know what those are is small, and the number of accounts that have them set up CORRECTLY to detect 'bad things' is vanishingly minuscule. Not to mention the people on the receiving end of the messages (assuming they're sent by email or piped to the federally mandated Splunk) don't know what to do with them. The 'security' staff in most places are incredibly bad at their job. I swear, when you fail as a developer/ops, don't want to be a cat herder, you go into security if middle management is not available.
Now a so-called financial institution in a highly regulated industry should be a cut above the normal cesspool. And yet their failings are as bad if not WORSE than other orgs who don't labor under "compliance" mandates.
Re: Default settings
> This dependence on IAM for everything needs to fucking die
Well, using session creds *is* IAM. One of the insecure-by-default settings in AWS is the allow-all outbound firewall rule that has to be explicitly removed from every security group when created. If you want to beat AWS over the head, start there.
Using session creds in user-space is FAR, FAR more likely to engender pathologically lazy and stupid behavior on the part of developers, let alone sysadmins, and lead to credential theft. Not to mention your app will have to 'refresh' its creds every hour or so. IAM roles are the best, most correct answer actually.
Where IAM roles fail is not the fault of IAM as such - it's the meat-space that can't write a policy worth a damn because the topic is opaque, convoluted, and tedious. STUPID people need not apply. However, the world is primarily populated by stupid people and a lot of them have jobs in IT. So instead of actually identifying the specific S3 operation, S3 bucket and/or key path, they just slap in s3:* and Resource=* and go on their merry way.
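Doing it right is not much more typing. A sketch of building the kind of scoped statement described (standard IAM JSON shape; the bucket name and prefix are placeholders), instead of the `s3:*` / `Resource: *` cop-out:

```python
import json

def s3_read_policy(bucket, prefix):
    """Least-privilege IAM policy: one action (s3:GetObject), one bucket,
    one key-path prefix - rather than wildcarding everything."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/{prefix}/*"],
        }],
    }

policy = s3_read_policy("example-bucket", "reports/2020")
policy_json = json.dumps(policy, indent=2)  # ready to attach to a role
```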
I've found just incredible security gaffes in AWS Professional Services' code and publicly shared solutions and sample code. What's that tell you?
Why do we have the plague of public S3 buckets? First, Amazon had buckets marked public by default way back when (as I recall), but more to the point, people can't figure out what "public" actually means and can't write a bucket policy to save their life. Only recently has Amazon written a system that traverses the ecosystem of all buckets and sends the account owner an email asking "did you really mean to do that?" I got mine a couple days ago, but the buckets have been public for well over a year. How often does the check fire? Within, say, a couple hours of a bucket perm change?
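The "did you really mean to do that" scan boils down to looking for grants to the public grantee groups. A sketch of that check over ACL grants in the shape S3's GetBucketAcl returns (the grantee URIs are AWS's real predefined group URIs; the rest is illustrative):

```python
# AWS's predefined "everyone" grantee groups.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def publicly_readable(acl_grants):
    """Flag the 'oops, that's public' case: any grant to the AllUsers or
    AuthenticatedUsers groups."""
    return any(
        g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES for g in acl_grants
    )

flagged = publicly_readable([
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
])
```

Running something like this across every bucket on every permission change is cheap, which makes the multi-month lag the comment describes all the more puzzling.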
AWS is sufficiently complicated and obtuse even people with good IQ and a rigorous approach are easily tempted to take shortcuts. Disaster follows as expected.
When designing nuclear power plants (Ukrainian test program aside) it's done by very SERIOUS people, who take their time and have their work checked meticulously by other very serious people who are looking for mistakes. Clearly that pattern does not hold for the FAA and Boeing, but that's a separate topic.
Now let's look at the typical 'Dev' pretending to be Ops - hell, look at your typical IT bod, be they helpdesk or sysadmin. They are some combination of incredibly dumb, lazy, and sloppy. How many times has Microsoft f*cked millions of machines because they didn't test their software patches? And they are supposed to be 'smart'.
Security is HARD. AWS does its users no favors by designing a system even experts shy away from. The world would be a vastly worse place if IAM roles were not being used. The trick now is to somehow get people to write policy statements in a responsible fashion.
Jeff Bezos feels a tap on the shoulder. Ahem, Mr Amazon, care to explain how Capital One's AWS S3 buckets got hacked?
Re: Didn't the slurper work for Amazon in some capacity?
certain employees have out-of-band access to objects because they can walk the metadata and hit objects "in the raw" and bypass the ACL mechanism. At some level this criminal had 'privileged' knowledge like which VPC or IP was defined in the bucket policy ACL and had abused/retained sufficient access to scrape the data.
Re: Whose Fault
> Yes - You can have S3 Buckets behind two firewalls using S3 VPC Endpoint within your private subnet without exposing onto the internet.
A VPC service endpoint simply means your request traffic that originates from within the VPC doesn't need a NAT or Internet Gateway to hit the S3 web tier servers that sit in publicly reachable IP space. The traffic stays "internal" to AWS datacenter routing CIDR.
The information disclosure has nothing to do with someone sniffing the request/response traffic. Rather, it's about the security permissions on EACH object stored in the S3 bucket. If you mark an object as public, or have a bucket policy that makes some or all of the paths public, the object WILL be served no matter how you requested it.
I can sit here in my own VPC with an S3 endpoint and grab every public bucket object I want, and it's totally legit. The fault lies exclusively with the bucket/object owner - the idiot who allowed people to enumerate and GET the objects.
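To make the point concrete: an object marked public is just an unauthenticated HTTP GET away, from anywhere. A sketch of building the virtual-hosted-style URL - bucket name and key are placeholders, and the sketch assumes a plain bucket name without dots:

```python
def public_object_url(bucket, key, region="us-east-1"):
    """URL at which a *public* S3 object is served to anyone, no creds,
    no VPC endpoint required. Virtual-hosted-style addressing."""
    host = (f"{bucket}.s3.amazonaws.com" if region == "us-east-1"
            else f"{bucket}.s3.{region}.amazonaws.com")
    return f"https://{host}/{key}"

url = public_object_url("example-bucket", "data/dump.csv")
```

If that URL returns 200 without a signature, the owner made it public; where the request comes from is irrelevant.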
The reasons there are so many cases of public objects which weren't intended to be so are two-fold:
1) early on the defaults made it quite easy to pick 'public' without clear warnings as to what that meant
2) Bucket policies are 'hard' to write, and most people are simply too STUPID or lazy to tackle the topic, and out of frustration just click the 'public' button in order to shut up the developer (who also can't be bothered to use an IAM role) bleating that his delivery is being impacted because he can't see the data.
IT is not for stupid people, but there are many millions who are involved because they are cheap.
Re: This is strange and frightening.
> what the fuck has boeing degenerated into?
I wouldn't be the least bit surprised if all of the new crop of software team members are well under 35 years of age and have next to no avionics development experience.
Personally I have a strong suspicion some of the sensors have unshielded leads and/or traces too close together, so they are getting cross-talk and data corruption. Or, just as bad: rampant failure to do bounds checking, failure to zero reused buffers, or logic written such that data structures are unpacked utterly wrong (transposing altitude for AoA) when certain fields are set "wrong" - perhaps a really shitty CAN bus protocol handler with race conditions?
Re: Google is laughing all the way to the bank
It works for the SJW cause of the day, does it not? Given the distribution of genders and ethnicities in any given department or job title, the concentration is obvious and all kinds of hue and cry result over the non-representative sample. But 'ageism' isn't worthy of the same political hay, I guess.
it's all over silly-con valley
My dad got enthusiastic job interviews over the phone, multiple interviews no less. That he had near 30 years of experience was plainly OBVIOUS on the resume if they bothered to pay attention. Showed up for the final in-person with gray in his hair, "sorry, pass". It's as blatant as it comes. He dyed his hair black and wore a "younger man's" pants and jacket style. Miraculously offers aplenty.
Youth hire fellow youth because they can be brow-beaten into submission. Somebody with decades of experience isn't going to knuckle under your ignorant demands.
The Empire Strikes Back: Trump discovers $10bn JEDI cloud deal may go to nemesis Jeff Bezos, demands probe
It's not that hard!
The 'Intel' cloud was only 600 million and Amazon built a trio of datacenters in the vicinity of Dulles VA. For 3 billion what's so hard about building 3x3 datacenters (eg. Ohio, SW Virginia, Colorado) and stand up a "separate but equal" rollout of the commercial offering? It could be air-gapped to the NIPRNET or under careful access control from the Internet (NSA has been advising DISA on this for years) and that way any dumb-ass cockups don't (trivially) expose the JEDI platform to all and sundry.
That way it can be run by any of the participants. AWS (and Azure?) are the only outfits that have actually done this before. I wouldn't trust Oracle not to fk it up royally. IBM would probably be hopeless if it weren't for their Red Hat subsidiary driving the show.
When they changed their name that's what I started calling our on-site DXC "consultants". Pissed them off to no end - but they deserved the reference.
Others I whipped out back in the day.
Don't eXpect / eXecute (the) Code / Change - they were supposedly helping us develop CI/CD/DevOps, and whatever they wrote was either utterly wrong or useless and needed a total rewrite.
Damn eXpensive Crap
They use the same exact chipset everybody else uses - or one very, very mildly different. Plus, rarely do the big corps actually design the boards. They have SuperMicro or other shops remove (mostly) unwanted parts from the otherwise standard format/layout boards. The FB and other proprietary layouts would rejigger the entire board, obviously.
These guys design SERVER boards, not piddly-ass laptop (with their ridiculous contortions) / home computer boards.
Intel has chipsets without the IME, but most people simply defeat the chip in BIOS, or simply don't place it or wire it up.
Anyway, they really don't give a damn about theoretical attacks or those carried out by national actors. They are in the business of spying on YOUR workloads, after all...
> Managing this variety of hardware is a "constant pain".
Well, quit iterating the stupid hardware then for no good reason. It's just a storage box - no different in purpose really than an S3 or EBS storage node. Doesn't require vast CPU resources because you're not using erasure coding anyway and handling the metadata isn't exactly hard.
The minimum stack used to be 3 machines but from a practical standpoint I don't think I ever saw a pod that wasn't 5 servers.
> 250,000 servers
oooh, so 2/3 (or less) the size of the global S3 farm. Admittedly S3 site locations number in the 'dozens' instead of every tom, dick, harry ISP's Point of Presence.
Vengeful sacked IT bod destroyed ex-employer's AWS cloud accounts. Now he'll spend rest of 2019 in the clink
Re: Voova should take some heat here...
> however if you deliberately erase the bucket, there isn't a backup of it that they can restore for you.
Actually there is; they just don't advertise that fact. Erasure is 'lazy', so assuming you open a support ticket fast enough (same/next day), there's a very good chance you can still get the data before the housekeeping catches up (about 3-5 days) with the "freed but not zero'd" block list and really does annihilate your data.
Dozens of .gov HTTPS certs expire, webpages offline, FBI on ice, IT security slows... Yup, it's day 20 of Trump's govt shutdown
It really is shameless. And especially since nobody gives a good damn what they do on their respective devices. Really, some twit has been clamoring to have application state follow them from phone to desktop to slate? What, one pathetic puke out of a million? How many thousands of man-hours were spent writing all this spyware when even Microsoft doesn't use it in any useful capacity (serious bugs in preview releases being ignored, anyone?) - ignoring for the moment that they aren't remotely entitled to have it in the first place without my express say-so.
Whatever happened to shooting dead the morons, be they marketing, management, or engineer, who thought "using my computer" needed to be mashed into an activity feed, à la farcebook?
Tape vendors feel the cold, clammy hand of AWS on their shoulders. Behind them grins the Glacier Deep Archive
Re: Retrieval time
> (due to the need to actually get the tapes from a remote location, etc)
I hope you're not suggesting Glacier is tape-based. There isn't a tape drive to be found in any of the AWS data centers. Tape drives are unreliable as hell, and the data cartridges are barely any better.
The 'nines' thing is ridiculous, I agree. I very much doubt they are calculating those availability numbers in the event a few of the regional datacenters get nuked or otherwise suffer EMP or other massively disruptive event. Granted who cares at that point if your data still exists...
Typical S3 requires half of the erasure-coded blocks to be available. Legacy Glacier used the same N:M ratio because they wouldn't have to reconstitute the data in order to store it. The new service may well change the N:M ratio, but I rather doubt it. Instead of 3 datacenters they may have bumped it to 5, and/or transparently copied a minimum 'N' to an alternate region. Most likely the cost savings are achieved by using ever larger disks (eg. 12+TB vs 4TB) and ever larger disk enclosures (used to be 96 per tanker, which is probably closer to 240 now and maybe even higher).
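The durability consequence of an any-N-of-M scheme is easy to put numbers on. A toy model assuming independent per-shard loss - the shard counts and loss probability below are illustrative, not Amazon's:

```python
from math import comb

def shard_survival(n, k, p_loss):
    """Probability an object striped into n shards survives, when any k
    shards suffice to reconstruct it and each shard is independently lost
    with probability p_loss. Survives iff at most n-k shards are lost."""
    return sum(
        comb(n, lost) * p_loss**lost * (1 - p_loss)**(n - lost)
        for lost in range(n - k + 1)
    )

# "Half of the blocks" scheme (6-of-12) vs plain 3x whole-copy storage
# needing all 3 copies of one site's data, at a made-up 1% shard-loss rate.
p_6_of_12 = shard_survival(12, 6, 0.01)
p_3_of_3 = shard_survival(3, 3, 0.01)
```

The 6-of-12 layout is overwhelmingly more durable at the same loss rate, which is why the N:M ratio, not the disk size, is the durability knob.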
The tiers of service time are just a job-sorting/priority mechanism. If you buy 'fast' restore, your job gets put at the top of the heap to get scheduled. If you have medium and your window is closing, your job gets bumped up so it can complete within SLA.
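That job-sorting mechanism can be sketched with a priority heap. The tier names, the bump rule, and the SLA window are all invented mechanics for illustration, not Glacier's actual scheduler:

```python
import heapq

TIER_PRIORITY = {"expedited": 0, "standard": 1, "bulk": 2}

class RestoreQueue:
    """Pricier tiers sort first; a job whose SLA window is closing gets
    promoted to top priority so it still completes in time."""
    def __init__(self):
        self._heap, self._seq = [], 0

    def submit(self, job_id, tier, sla_deadline):
        heapq.heappush(self._heap,
                       (TIER_PRIORITY[tier], sla_deadline, self._seq, job_id))
        self._seq += 1

    def bump(self, now, window=60):
        # Any job within `window` of its deadline jumps to priority 0.
        self._heap = [((0, d, s, j) if d - now < window else (t, d, s, j))
                      for t, d, s, j in self._heap]
        heapq.heapify(self._heap)

    def next_job(self):
        return heapq.heappop(self._heap)[3]

q = RestoreQueue()
q.submit("fast-1", "expedited", sla_deadline=500)
q.submit("slow-1", "standard", sla_deadline=120)
q.bump(now=100)          # slow-1's window is closing, so it jumps ahead
first = q.next_job()
```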
> In the end somebody decided that money mattered more than safety -- probably not deliberately
> but this is the kind of sloppy "it'll probably be just fine" thinking which sooner or later kills people.
Indeed. Companies need to learn how to say NO. When someone came up with the idea of putting ever bigger engines on the thing and it's no longer stable, the idea should have been buried, not sustained with "hey, Johnny in Systems can write us up a software solution". NO, god-damn NO! And when, in its soul-less pursuit of sales and profits, the aircraft co. decided to do something stupid, the regulator should have jumped all over them and stopped it dead in its tracks.
From https://www.pprune.org/tech-log/615709-737max-stab-trim-architecture-2.html I really like the bit about " And then we had STS, which trimmed the stabilizer without pilot input. Huh???? ... But rational was to tell the pilot ( I use pilot to assert whoever was in charge of moving controls), he needed to trim for the new speed/AoA."
This is meddling on the part of the software by do-gooders. By interfering in the natural operation of the aircraft, the pilot has now mentally checked out. Any pilot paying attention would feel and recognize right quick that he needed to re-trim without being "helped", just like it had been for decades prior. If not, then by god he's not fit to sit in that seat! Then they layered on yet another nanny function because Boeing in this case made a DELIBERATE choice to say 'yes' to some idiot at the airline or in marketing. The customer, as a rule, does NOT know what they are talking about, and I'll bet airline execs don't have a clue or care about physics; they just want to cram more seats into the same space and have it fly farther and faster on negligible fuel. At some point an adult needs to stand up and say, "No, we're not doing that; this 50-year-old design can not be modified further." The bane of modern technology is that the software programmers always pipe up with "we can write some code to 'fix' that". And as we've found out, they did a typical CRAP job of it and didn't bother to follow the RULES that had long since been established.
Revised engine nacelle and blade design for better thrust and fuel efficiency is again, fine. Decreasing drag with those winglets - brilliant. Upsizing, rotating and shaping the engine so the plane is no longer stable - STOP right there and do not execute! Or go hire yourself out to Lockheed and work on fighter jets.
We do NOT need to fly at the ragged edge of performance. We do NOT need to carry ever more ridiculous numbers of helpless/hapless souls at one time. We do NOT need razor-edge efficiency in lift or engine performance that *require* ever more complex software solutions to try to bash it back into some flyable shape. We do not need more fancy software to make up for ever less skilled and mentally not-engaged pilots to pretend they know what they're doing. Progress does NOT have an infinite endpoint. Every activity has a cost and human beings are LIMITED. Apparently modern man has decided that all costs can be papered over and with ever increasing amounts of software.
Same shit in motorcycles - not to jack the thread. First it was ECU and FI. Ok, reasonable and simple improvements that didn't overwhelm the meat or fundamentally change the relationship between rider and machine. Now we have cornering ABS, corner-by-corner brake and throttle maps, launch control, and gd fly-by-wire etc. All of it completely pointless and unnecessary to the task at hand - riding the damn thing from point A to B. You now have world-class racers, the best in the world who can literally get away with being as clumsy as a 2-bit street hack; pinning it and not getting their ass thrown over the moon. Worse, you have said street hacks with but 5% of the talent and skill riding machines that without electronic nannies would have found themselves quickly in the ER or morgue. "electronics this, electronics that" you hear incessantly in interviews. NO, god damn it! If you can't *directly* control the hydraulics of your brakes and regulate the engine with the throttle (again with no electronic, "here I'm detecting some slip, I'll take over") then the whole thing is a farce. We want to see skilled individuals doing their craft, not who has the best software developer and smartest algorithm and sensors all but riding the bike for him.
Back to planes - "here, hold my heading and altitude for a couple minutes while I root around in my flight bag for the PB&J and a cup of coffee, so long as sensors appear nominal, otherwise warn me and let go" is/was a proper and acceptable degree of improvement. Though properly this should be and has been solved for decades via "yo, co-pilot, you have the stick". When someone else is doing the flying (eg. the computer) the natural tendency is for the human brain to check out.
How many millions of hours were logged by mere teenagers in WW2 and wars since in transport planes, flying on partial panel, in lousy weather, and getting shot at? Yeah, yeah, the big bomber losses were atrocious, but it wasn't because the pilot didn't know how to fly or the damn computer was second-guessing them based on shot-out sensors.
Chasing unreasonable efficiency and lower costs is now taking lives, and the chorus for "more AI because it's better than people" is only going to make the failures bigger and costlier and, more importantly, the pilots increasingly helpless to diagnose and recover within the limited (by physics) window of opportunity. If you're going to have a pilot in command then the plane must fundamentally comport with human limitations, not spew thousands of messages and alerts at him to the point that his ability to cope is overwhelmed - the damn programmers again (I don't mean just the guy writing the code, but the whole food chain). The computer must by definition be no more than an advisor or really dumb help. Otherwise toss the pilot out on his ass and have the computer run the entire show.
If the introduction of computers is making a significant improvement in safety, then the conclusion is some combination of:
1) the damn things are too complicated for humans to fly, which by definition means the trajectory of design is WRONG.
2) the skill level of the pilots is highly uneven and probably insufficient
The answer isn't more computer; it's smaller, properly designed planes - fewer, simpler planes and more expensive seats. That, or just go to drones and be done with it. If a PiC screwup kills only 150 people at a time, that's better than killing 800 because the gd computer was interfering and, worse, could NOT be removed or sufficiently sidelined because the airplane requires the computer to even fly at all, and some programmer decided the software (and its supposedly non-dodgy failure-detection logic) knew better than the supposedly trained people with hands on the yoke.
I meant to add in my little screed that the first rule of being a pilot is to fly the damn plane. There should NOT be any software systems to second-guess the pilot(s) in anything more than an advisory role. There should be NO automatic trim. If you can't be arsed to pay attention to trim and adjust it with deliberate control inputs as speed and elevation change, you are NOT flying. You're barely supervising, and probably inattentively at that.
Heck I used to sit in the jump seat on Tokyo to Anchorage when I was a kid for hours and well remember the clattering of the trim wheels as they spun.
I have no beef with how the Airbus (Air France?) software behaved. It's supposed to yield (I would argue software shouldn't be in control at any time...) on demand and for any reason. That the pilots screwed up is unfortunate but is the cost of flying silly fast at silly altitudes where the coffin corner is much too easy to hit. Shaving safety margins so excessively to save money is where this whole modern society has gone off the rails.
If you can't reliably hand-fly the plane from end to end you're NOT doing your job, your fitness to task is not acceptable, and your route is too damn long and/or you're under-crewed. We don't tolerate autopilot for trucks or chartered buses, so why do we allow 200+ person vehicles to just let the fakakta computer run the show? Flying should be expensive; it should require the absolute best physical specimens with the sharpest minds and attention spans actively engaging the controls.
IMO pilots as a class are incredibly naive about how unreliable computers, sensors, and software are. Part of that, I'm sure, has to do with the manufacturers lying their asses off as well, because millions of hours in fancy computers and control software THEY decided to develop have to be paid back somehow. Like e_is_real_i_isnt said, there are 6 basic controls - none of them should ever be run by a computer. Frankly I'd add FADEC to that as well, although that is one "computer" that would be nearly impossible to remove. Then again, there have been crashes caused by them misbehaving as well.
When you have to have a computer run things because it's too complicated for a human to run things, we've gone way too far.
Hey software, get the fuck out of the way!
The pilots are either competent or they aren't. There is NO reason whatsoever for software written by blithering idiots to override and countermand the pilot. He is GOD. If he is an imperfect, distracted, negligent god, then it sucks to be the passengers, but this was a clear case of software elevating itself above the meat.
Programmers, fuck off! Sensors are fallible - it's fine to chime and red-light, even buzz/alarm, when something sure looks wrong, but do NOT interfere.
Re: 6ms+ w NVMe
> I also think that things like erasure coding is just a terrible idea in general.
> File or record replication is the only sensible solution for modern storage.
Hardly. Triple replication for small objects and EC for large is specifically intended to guarantee integrity and availability of data when things get silently corrupted or nodes become unavailable. The obvious tradeoff for this is "wasted" space for duplicates and CPU time to compute EC both on write and read.
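The "wasted space" tradeoff mentioned here is easy to quantify: raw bytes stored per logical byte. The k-of-n parameters below are an illustrative example, not any vendor's real layout:

```python
def storage_overhead(scheme):
    """Raw-bytes-per-logical-byte for the two approaches discussed:
    whole-copy replication vs a k-of-n erasure code."""
    kind, *params = scheme
    if kind == "replicate":
        (copies,) = params
        return float(copies)          # every byte stored `copies` times
    if kind == "erasure":
        k, n = params                 # any k of n shards reconstruct the data
        return n / k
    raise ValueError(kind)

rep3 = storage_overhead(("replicate", 3))    # 3.0x raw bytes
ec = storage_overhead(("erasure", 10, 14))   # 1.4x raw bytes, example 10-of-14
```

Which is the whole reason large objects get erasure-coded while small ones just get triplicated: at small sizes the CPU and per-shard overhead isn't worth the 1.6x savings.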
Using Ceph to run live VMDK (as opposed to storing the initial bootstrap and subsequent snapshots) is nuts, I agree.
Netapp has fast "write ACK" times because the write is simply committed to battery/flash-backed RAM and can de-stage at its leisure - till, of course, the write load overruns the ability to checkpoint and flush said 1st-level cache, and even the 2nd tier if so equipped.
Object/Ceph store for RDBMS workloads would be appalling. Operating system disks are pretty much write-never, so if there were a way to implement NFS-root on top of Ceph without a lot of work, that might be interesting. Or replace EXTFS with native in-kernel CephFS; that might be something.
What's all this about DE then?
F'k me, all this heartache over desktop environments, icon sets, and changing window dressing? If you're running anything more than TWM and some xterms with screen, you're doing it wrong. File managers (Nautilus, Konqueror, etc), really? You people can't use cp and mv? Good *deity*, what is wrong with *nix users these days.
IBM is a dead man walking. I've worked on several projects with them (GCS and not), and while there are good people here and there, the corporate body politic is a bunch of morons looking for every opportunity to screw over the client with ridiculous "solutions" and vastly overpriced "services".
RedHat only had a value proposition by virtue of "commercial support", and for a while that support was pretty good. It's steadily gotten worse though, and the per-node value for the dollar likewise. I've personally ripped out and replaced hundreds of RHEL installs with CentOS, keeping maybe a dozen or two "critical" systems on RHEL for the so-called support just to keep the auditors or third parties happy. In the cloud space, Amazon Linux or Debian derivatives are the default, and there is basically no reason to consider any of RedHat's offerings.
RedHat's gov't consulting division will fit right in with IBM GCS though - peas in a pod.
I'm guessing it's really just an optimization of what a traditional filesystem does. Instead of the OS requesting an arbitrary series of blocks based on its own housekeeping records of 'path/file' - which maps to a series of inodes and from there to a series of block IDs - the OS just asks for a 'key', and the SSD keeps its own list of extents/block IDs that map to the 'value'. So instead of all that record-keeping at the OS level, all it needs to do is hash(path/file) and send the result to the storage device.
This removes at least one look-up table maintained at the OS level and probably gives the SSD flexibility in moving things around. It's probably very efficient on linear reads/writes but probably sucks on partial writes, akin to the RAID write hole.
I could see AWS/S3 using these in the data tier, since Amazon already implements it like that - a storage node manages its in-chassis storage as key (blockID) -> list of device::extents that only it knows internally.
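A toy model of that indirection (every name here is mine for illustration, not any real KV-SSD interface): the host hashes the path into a key, and the device privately maps key -> extent list, so the OS-side inode/block bookkeeping disappears.

```python
import hashlib

# Toy key-value SSD: the host sends hash(path) as the key; the
# device internally maps key -> list of (start_block, n_blocks)
# extents over its own (faked) media. Names are illustrative only.

def path_to_key(path: str) -> str:
    return hashlib.sha256(path.encode()).hexdigest()

class ToyKVSSD:
    BLKSZ = 4096

    def __init__(self):
        self._extents = {}  # key -> [(start_block, n_blocks), ...]
        self._blocks = {}   # start_block -> bytes (stand-in for media)
        self._next = 0      # next free block, trivially linear here

    def put(self, key: str, value: bytes):
        chunks = [value[i:i + self.BLKSZ]
                  for i in range(0, len(value), self.BLKSZ)] or [b""]
        extents = []
        for chunk in chunks:
            self._blocks[self._next] = chunk
            extents.append((self._next, 1))
            self._next += 1
        self._extents[key] = extents  # device-internal bookkeeping

    def get(self, key: str) -> bytes:
        return b"".join(self._blocks[s] for s, _ in self._extents[key])

dev = ToyKVSSD()
dev.put(path_to_key("/etc/hosts"), b"127.0.0.1 localhost\n")
print(dev.get(path_to_key("/etc/hosts")))
```

The point is where the table lives: the host never sees blocks or inodes, only keys, which is exactly what lets the device shuffle extents around (wear-leveling, compaction) without telling anyone.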
Corp hands me a Windows laptop; therefore I install Cygwin/X, and Windows is just a window manager. WSL is useful when I want to test a real *nix instead of Cygwin's "almost". I don't know, it always seemed to me X11 was downright trivial:
Run 'startx' with '-listen tcp' (in Cygwin or any other Windows X server).
From Windows or Cygwin: '<Xprogram> [--display ...]'
For WSL to access the X server, do either of:
- 'xhost +localhost' in the context of the X server, or
- inside WSL: 'ln -s /mnt/c/Users/<username>/.Xauthority'
Then, inside WSL:
'<Xprogram> --display localhost:#'
Absolutely NOTHING special needed. I'm not opposed to someone wanting to get "paid" for putting in some effort, but frankly this barely merits postcard-ware. Or has everyone really forgotten how to do *basic* X11 commands?
It's not actually policy - it's millennial stupid-fks who can't conceive of doing anything without video. And it's a back-door way to engage in racial and age discrimination. I have no problem telling HR to get bent and not taking jobs offered by imbeciles.
Funny thing: they all back down and continue the interview process over the phone. There's almost always a face-to-face step in the interview process anyway, so VC is just BS. I've only had to report one org to the Equal Employment Opportunity Commission for illegal acts.
Nobody needs or wants to see you. That's what the "disable mic and camera" BIOS settings are for. Otherwise a Sharpie, or a quick rap with an awl, takes care of the cursed thing.
"Interviews are via skype"
"that's nice, I don't do skype"
"well, that's our policy"
"you're talking to me on a perfectly good phone are you not?"
Fkn twits and their sorry-ass video-conference software that has atrocious audio quality, can't even do party lines right, doesn't work worth a damn unless the Internet is perfect, and only works if you install their agent. Bite me. Dial the god-da** phone. It's worked for over a century and is 10x better than your VC setup.
Re: Phone cameras
No cameras are allowed in some areas and at some companies. Forget the mass consumers - the great unwashed are morons chasing "shiny" without rhyme or reason and with zero consideration of the risks they are thoughtlessly exposing themselves to.
You don't need a different case: the camera hole is a punch-out. You don't need a different software stack. You don't even need a different production line. You just tell the pick-and-place machine not to add a camera to the next 100 devices. The camera device simply doesn't show up in udev, and any software that tries to use a camera outputs "no camera device detected" and moves on. Or is it asking too much of imbecile programmers to do any actual error handling in their code?
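The error handling being asked for really is a few lines. A minimal sketch, assuming a Linux-style V4L2 device node (the /dev/video* path and function name are my own illustration):

```python
import glob

# Probe for a camera device node and degrade gracefully instead of
# crashing. /dev/video* is the conventional V4L2 location on Linux;
# adjust for your platform. 'find_camera' is a hypothetical helper.

def find_camera(devices=None):
    """Return the first camera device path, or None if absent."""
    if devices is None:
        devices = sorted(glob.glob("/dev/video*"))
    return devices[0] if devices else None

cam = find_camera()
if cam is None:
    print("no camera device detected")  # log it and carry on
else:
    print("using camera at", cam)
```

That's the whole ask: check, report, continue. Software that demands a camera exists before it will run is a design choice, not a technical necessity.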