
Remember - CLOUD COMPUTING is NOTHING MORE THAN ..
using someone else's computer system .. thinking they care about it as much as you do.
Technical errors with the US-EAST-1 region of Amazon Web Services have caused widespread woes for customers, including difficulty accessing the management console and some other service problems. The issues appear to be centred on the US-EAST-1 region, which is the oldest AWS region and located in Northern Virginia. This can …
I'm not sure if serious, but that's not how AWS got started. A proposal doc was written, it got approved, and Amazon built the first service.
I used to believe the "spare capacity" thing too, until I spoke to someone who knew, and then thought about it and realised it didn't make sense at all - were they just gonna shut down all their customers every Christmas?
"CLOUD COMPUTING is NOTHING MORE THAN ..using someone else's computer system "
It's also someone else's data centres, power supplies, HVAC, physical site security, patching, backups, trained personnel, etc.
Excellent, means I don't have to do all that stuff thats commodity and focus on value add stuff on top.
It's someone else's sewage works...
I've been seeing load performance issues with things relying on googleapis all week, and youtube videos that just sit there not playing for several minutes within the last hour.
More may be broken than people are willing to admit...
(then again this is my anecdotal experience out here on the left coast of the USA, so YMMV)
I can create a bucket in that region without issue. There are no services that span multiple regions. Every service is tied to a specific region - S3 in Ireland is independent of S3 in Frankfurt. Even the global console is pinned to a single region, the affected one; all the other regions' consoles are available.
Big learning: never have just a root account on AWS, even if you're an occasional-user one-man show.
Apparently individual IAM user accounts can log in through region-specific management consoles like eu-west-1.console.aws.amazon.com, eu-central-1.console.aws.amazon.com, etc.
Meanwhile, all root account login requests get redirected to us-east-1 - the only region that handles root logins, and also the region affected by the outage.
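The region-pinned console hostnames mentioned above follow a simple pattern; a minimal sketch of building them, assuming the `{region}.console.aws.amazon.com` scheme from the examples in the comment (not an official AWS API):

```python
# Build region-specific AWS console sign-in URLs.
# The hostname pattern is assumed from the examples above; root logins
# reportedly bypass these and get redirected to us-east-1 regardless.
def regional_console_url(region: str) -> str:
    return f"https://{region}.console.aws.amazon.com"

# A couple of the regions mentioned in the comment (illustrative, not exhaustive):
for region in ["eu-west-1", "eu-central-1"]:
    print(regional_console_url(region))
```

Bookmarking a couple of these regional URLs for an IAM user would, if the comment is right, give you a console login path that doesn't depend on us-east-1 being healthy.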
Can we accept the premise that 'crap happens'? Regardless of whether I have an on-prem or cloud infrastructure, let us assume that 'something will always go wrong at some point'. Nothing can be done about it, a bad day will happen eventually.
With on-prem, I scramble to sort out the problem, go 24x7, buy take-out food for the grunts, and pray I get it solved before the powers-that-be bring me a platter upon which to place my head.
With the cloud, I can sit back and watch someone else own the headache. And when it is all done, the powers-that-be can take a bite out of a cloud vendor's backside.
Once we get past the denial of imperfection and accept that the inevitable outage will eventually happen, then we can see the value of letting someone else eat the crap-sandwich.
This post has been deleted by its author
Hmm. Maybe designing and operating things so crap didn't occur in the first place was a much better way of feeding your family?
Tech evolves. Get used to finding a new way to feed your family every few years. And as somebody who has spent so much time cleaning crap I suspect you might have some clue on how things should be done. So you get more food for your family, and time to sit and eat with them. Win.
I think this is the biggest myth about cloud among people who don't understand it. I have been working in the cloud industry for 5 yrs, and I can say I have not once seen a company that's moved to cloud suddenly be able to get rid of people. Your network admin becomes the cloud network admin, your security team become the cloud security team, your Windows admins become the cloud admins, and so on. The only roles I can think of that are in danger are the security guards and the people who manage the racking of servers.
"and I can say I have not once seen a company that’s moved to cloud suddenly be able to get rid of people"
Absolutely! 'Cos software is built by people ergo it WILL fail, not IF but WHEN it fails the Ops teams are all ready with our IT equivalent of a mop and bucket!
You get a downvote. Your comment is analogous to "racism isn't real because I haven't personally seen it".
Cloud providers have earned a reputation of "fire all of your infrastructure people because DevOps".
Cloud technologies offer a lot of very good things, but the real resentment comes from the fact that a lot of companies have cut loose a lot of their key people on the premise that they won't need them in the cloud. We all know that's not true, but it's exactly what providers like AWS sell to companies.
"And when it is all done, the powers-that-be can take a bite out of a cloud vendor's backside"
hahahahahahahhahahahhahahahhaha
That's not how it works. They still take a bite out of your backside because you were the one who recommended/implemented/in some way touched the cloud system. The only difference is that you have someone to shout at as well. It's the chain of screaming.
Speaking of keeping the lights on... woke up this morning (West Coast USA) to discover Alexa refused to turn on any of my lights (which are 50% on some combo of WEMO, FEIT, Belkin, etc...). Fortunately the phone-based app and/or the Samsung hub all worked just fine.
Funny, mine is a closed-circuit in-house light management system, 'cos I'm tech savvy, not a troglodyte, and know that connecting something vital like heating and lighting control to a server 5,000 miles away is bloody stupid. So my central control unit is 6ft away under the stairs, and worst case I can just override the management system with a wall switch to bathe my rooms in glorious Swan(*)-designed electric light!
(*) Joseph Swan, a Geordie, invented the electric light bulb, not that two-bit, self-promoting yank Edison!
AWS Cloud just gives me access to more and better servers, storage, switches, bandwidth and scripts than we could afford or have the time to develop ourselves. It's still going to go down but at least they give me the option to run on multiple regions . . . oh wait, my public sector customers only want their data in the UK, even encrypted and held on encrypted disks and transmitted encrypted and even double encrypted backups . . . oh, ok, so that's London then, so it's just like my old co-lo space but bigger . . . right :)
Yesterday I dragged out the (artificial) Christmas tree, got it hooked up to a smart plug and set up the "Christmas" skill in the Echo to operate it. On testing it each switch worked OK, then the skill......and then everything stopped working. "Damn!", I thought, looking forward to a fun time troubleshooting this because there's zero diagnostics in those IoT things. After thinking about it for a bit I reckoned that since those smart plugs had disappeared and the central heating thermostats had also disappeared that the problem was probably in the link between AWS and the manufacturers' websites. I had to go out for a couple of hours so I left it and sure enough when I returned everything was working as intended. Obviously someone had unplugged AWS and plugged it in again.
I don't mind using "the cloud" for my Christmas lights, garden lights or my Lava lamp. I can't think why anyone would be insane enough to use this technology for anything mission critical. There are just too many points of potential failure, and there's zero diagnostics - it's a mess. (Incidentally, my Christmas lights are on the Pacific coast, the AWS outage was in the east, we're told, and Heaven only knows where the systems that drive my plugs and thermostats are based.) A ton of technology to substitute for a simple PLC that just sets a control bit... it's insane. Obviously vendor lock-in is more important than functionality, but then I know that, having spent years pushing back against marketing and its so-called "bright ideas". Still, it's given me a new project - hack the protocol and get all this cloud crap out of the loop.