Basecamp details 'obscene' $3.2 million bill that caused it to quit the cloud

Nate Amsden

Re: Cloud Vs On-Prem

You should do the math on how bursty "bursty" really is. At my last company I'd say they'd "burst" to 10-30X normal sales on big events, but at the end of the day the difference between base load and max load was just a few physical servers (maybe 4).
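To make that concrete, here's a minimal back-of-the-envelope sketch in Python. All the inputs are made-up assumptions for illustration (not my old company's actual figures), but the shape of the math is the point: with modern per-server throughput, even a 25x burst can come down to a handful of extra boxes.

```python
# Back-of-the-envelope burst math: how many extra physical servers does a
# big traffic burst actually require? All inputs below are illustrative
# assumptions, not real figures from any company.
import math

def servers_needed(requests_per_sec: float, per_server_rps: float) -> int:
    """Round up to whole servers for a given load."""
    return math.ceil(requests_per_sec / per_server_rps)

base_rps = 500          # assumed steady-state traffic
burst_multiplier = 25   # assumed worst-case burst during a sales event
per_server_rps = 3000   # assumed per-server throughput (modern boxes push a lot)

base = servers_needed(base_rps, per_server_rps)
peak = servers_needed(base_rps * burst_multiplier, per_server_rps)

print(f"base load: {base} server(s)")
print(f"peak load: {peak} server(s)")
print(f"delta:     {peak - base} extra server(s) for a {burst_multiplier}x burst")
```

With these numbers the 25x burst works out to 4 extra servers, which lines up with the "maybe 4" above.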

IMO a lot of pro-cloud folks like to cite burst numbers, but they're probably remembering the days of dual-socket, single-core servers as a point of reference. At one company I was at back in 2004, we literally doubled our physical server capacity after a couple of major software deployments. Same level of traffic; the app just got slower with the new code. I remember ordering direct from HP and having stuff shipped overnight (DL360 G2 and G3 era). Not many systems, at most maybe we ordered 10 new servers or something.

Obviously modern servers can push a whole lot in a small (and still reasonably priced) package.

A lot also like to cite "burst to cloud", but again you have to be careful: I expect most transactional applications to have problems bursting to remote facilities simply due to latency (whether the remote facility is a traditional data center or a cloud provider). You could build your app to be better suited to that, but that would probably add quite a bit of extra cost (and ongoing testing), not likely to be worthwhile for most orgs. Or you could position your data center assets very near your cloud provider to work around the latency issue.
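Here's a rough sketch of where the latency pain comes from; the round-trip times and query count below are assumptions for illustration only. A transactional request that makes sequential database queries pays the facility-to-facility round trip on every one of them, so a few milliseconds of distance multiplies fast.

```python
# Why "burst to cloud" hurts chatty transactional apps: every sequential
# database query pays the round trip between facilities. All figures are
# illustrative assumptions.

def request_latency_ms(sequential_queries: int, rtt_ms: float,
                       app_time_ms: float = 20.0) -> float:
    """Total request time: app work plus one round trip per sequential query."""
    return app_time_ms + sequential_queries * rtt_ms

queries = 25          # assumed sequential DB queries per request (chatty app)
local_rtt_ms = 0.5    # assumed RTT within the same data center
remote_rtt_ms = 30.0  # assumed RTT to a cloud region in another city

print(f"DB in the same facility: {request_latency_ms(queries, local_rtt_ms):.1f} ms")
print(f"DB across facilities:    {request_latency_ms(queries, remote_rtt_ms):.1f} ms")
```

The same request goes from roughly 33 ms to roughly 770 ms without a single line of app code changing, which is why co-locating near the cloud provider (or rewriting the app to batch its queries) is the usual workaround.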

Now if your apps are completely self-contained, or at least fairly isolated subsystems, then it can probably work fine.

One company I was at, their front-end systems were entirely self-contained (no external databases of any kind), so scalability was linear. When I left in 2010 (the company is long since dead now), cloud costs were not worth it vs co-location. Their front-end systems at the time consisted of at most about 3 dozen physical servers (Dell R610s back then; at their peak each server could process 3,000 requests a second in Tomcat) spread across 3-4 geo regions (for reduced latency to customers as well as failover). A standard front-end site deployment was just a single rack at a colo. There was only one backend for data processing, which was about 20 half-populated racks of older gear.
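For what it's worth, the linear-scaling arithmetic on that fleet is easy to run. The server count and per-server throughput below are the figures from the paragraph above; the even per-region split is my assumption.

```python
# Rough tally of the self-contained front-end fleet described above.
# Server count and per-server rate are from the post; the even split
# across regions is an assumption.

servers = 36            # "at most 3 dozen" Dell R610s
per_server_rps = 3000   # peak Tomcat throughput per box
regions = 4             # spread across 3-4 geo regions

total_rps = servers * per_server_rps
print(f"aggregate capacity:      {total_rps:,} requests/sec")
print(f"per region (even split): {total_rps // regions:,} requests/sec")
```

Since the front ends shared nothing, adding a rack in a new region added capacity at roughly that per-server rate.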
