Gone are the days when you could just put files up over FTP and be done with it.
Recently, I was spinning up yet another terribly coded thing for fun because I believe in making my problems everyone else's problems, and realized something that had been nagging at me for a while: working with AWS is relatively painful. This may strike you as ridiculous, because most of the time in established companies it's …
You say that. But for the cheap web hosting I bought for a hobby project, the only options are the painfully slow web interface, or FTP. (And not even SFTP/FTPS. It's unencrypted. And I can't disable the FTP account, either. It's always on, FFS.)
So I fired up ncftp and had to remember how to use it. It was like being hit by a cold shower. But it sounds like I got more hits than Corey.
(Also, the web panel has an API which sounds like it will support git, when I get it sorted.)
And for good reason: those files have to live in an environment, and that environment is a lot more maintainable if it's also specified that way. I've seen plenty of things that take the form of a repo of application code. FTP that up and run it and... it doesn't work, because you need a database. Where's the database? That wasn't in the code. Is there a copy of it somewhere? Can I make a blank one with a script or something? Often, the answers were yes, but you don't know where, and no. Oh, and did I mention that you need a specific config for the HTTP server? It's just 120 lines, nothing complicated. Have fun reverse-engineering it, because it wasn't stored alongside the rest - HTTP server config doesn't live with the application code.
The complexity of cloud systems is indeed as painful as the article describes, but some of it has a real and good reason behind it. A lot of things are nicer when they can be expressed as deterministic code than when they take the form of vague, possibly incorrect or missing documentation or snapshots and images without context*. That is painful too, just not to the person who initially developed it.
* For example, I have an application from a place I've volunteered. I've got a big archive full of code and I have a backup of a database. The organization doesn't want to run it, which is great, because I would have to guess what to do with this stuff if they did. There's no documentation of what this does, how to install it, what's in the database, and for all I know, there may be components written by the original contractors which aren't in my big archive. I have no way of knowing whether this can run, even though theoretically I do have copies of the files concerned.
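For what it's worth, here's the shape of the fix being described: the environment expressed as deterministic code, living in the same repo as the application. A minimal sketch, assuming AWS CDK v2 for Python (pip install aws-cdk-lib constructs; the stack and resource names here are invented):

    from aws_cdk import App, Stack
    from aws_cdk import aws_ec2 as ec2, aws_rds as rds
    from constructs import Construct

    class AppStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # The network the app runs in, declared rather than remembered.
            vpc = ec2.Vpc(self, "AppVpc", max_azs=2)

            # The database that would otherwise be "somewhere", undocumented.
            rds.DatabaseInstance(
                self, "AppDb",
                engine=rds.DatabaseInstanceEngine.postgres(
                    version=rds.PostgresEngineVersion.VER_15
                ),
                vpc=vpc,
                instance_type=ec2.InstanceType.of(
                    ec2.InstanceClass.BURSTABLE3, ec2.InstanceSize.MICRO
                ),
            )

    app = App()
    AppStack(app, "AppStack")
    app.synth()

With something like this checked in, "where's the database?" has a diff-able answer, and cdk synth will tell you exactly what would be created before anything runs.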
Ironically, last night I updated a sports club website's results & league tables pages by uploading the files using FTP.
It does the job for small organisations with basic (information-only) web sites quite happily, where there is no sensitive data* involved (everything on the site is publicly accessible).
*There are a few names / contact emails, but those people have consented to that in terms of visibility / approachability... and all the contact emails go to addresses I control, which run code to weed out spam and other junk before forwarding any valid mail to the relevant people's "proper" email addresses.
Lawl. If your concern is price, Cloud is not for you.
"Pay for only what you use!" -- turn on, never turn off, billed ad infinitum.
At every single company I've ever worked at that has had a "cloud" service, the cloud price has become an existential crisis for the company. Just one more reason why you should be doing things on-prem, in your server room, until you fill your fourth server rack. (Then you'll be paying at least one full-time server admin - cloud can do that too, perhaps cheaper.) When those admins aren't admin'ing servers, they're doing other useful IT and other things for the company. Bonus points: you're not paying three "cloud engineers". There are very few use cases for "cloud services". Your website should probably be hosted on WP Engine for $10/mo (and they probably offer FTP).
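To put a number on "turn on, never turn off": a rough audit sketch (boto3; the 30-day threshold is arbitrary) that lists every EC2 instance that has been running for over a month, i.e. the things being billed ad infinitum:

    import datetime
    import boto3

    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=30)

    ec2 = boto3.client("ec2")
    for region in [r["RegionName"] for r in ec2.describe_regions()["Regions"]]:
        regional = boto3.client("ec2", region_name=region)
        paginator = regional.get_paginator("describe_instances")
        for page in paginator.paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        ):
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    # LaunchTime is timezone-aware, so it compares cleanly.
                    if inst["LaunchTime"] < cutoff:
                        print(region, inst["InstanceId"], inst["LaunchTime"])

A similar loop over unattached EBS volumes and idle load balancers usually turns up more of the same.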
Price always matters.
It's not that an organisation isn't willing to pay for the service - and pay quite a lot.
The issue is when one cannot tell the Executives an actual monthly or annual figure.
When the cost range is "anywhere between £100 and £100,000,000,000 a year", the project is dead. No vaguely sane executive will even look up from their golf swing.
Every fool knows that more steps means better!
More clicking, better! More forms. Better! More lines of code. Better! More menus. Better! More protocol. Better! More abbreviations. Better!
More complexity isn't sus at all, fool!
You should read the tech douche-bro book, "50 Steps to Wipe Your Arse with AI", and get with the times!
Saved over $10M for my last org in the decade that followed, while providing better availability, performance, and peace of mind.
Even my personal stuff sits at a colo. A total of 19 physical and virtual systems in a quarter rack, consuming less than 200W, for 200 bucks a month with unlimited 100-meg bandwidth, and about 5 years since the last power outage (no redundant power in that 30-year-old facility). 15 years and no critical hardware failures. My personal VMware hosts may be EOL, but they've been running for 5 years continuously without the slightest hiccup (not even a reboot). Don't mind EOL for personal stuff. Been hosting my stuff at various places, including my home, since 1996.
Of course my professional gear is all highly available in a top tier facility.
IaaS is broken by design as I wrote about 15 years ago. Still broken today as the design is unchanged.
That's very impressive. Well done you.
Now, the only problem I have is that you will also expect to be paid for your time. Which will cost me a hell of a lot more than $50 per month for a fully managed cloud server.
Then you will charge again for backups, redundancy, etc. etc. This is after I have paid the lawyers for the service contract between us.
Then one day my server is off and you've gone on holiday...
Don't forget the other two people you're also going to have to be paying, so that you can have out-of-hours cover all year round. You can't expect anyone to do more than one week of on call without the rest of their work suffering, and you'll need at least three people to cover holidays and illness. Oh, and you are paying them extra for being on call, right?
"The cloud is just someone else's computer!" Yes. That means it's someone else getting out of bed at 3am to replace a failed disk, not me.
"That means it's someone else getting out of bed at 3am to replace a failed disk, not me."
And once they fail, there's literally not a thing you can do but wait. That's not a win either.
AWS specifically has been failing *a lot* for a cloud which claims to be bulletproof.
I'm a Gen-er and largely with you. I've always hated the complexity in default AWS, and I haven't liked the path Azure has followed much better.
As an actual fan of JS/TS, I've fought against undue complexity for over half my career. I've seen both spaghetti monoliths and microservice jungles.
I've been a fan of what Vercel, Deno, Cloudflare and others are trying to do. I've deployed with Dokku, and usually just use Caddy to proxy docker-compose apps for personal efforts.
It's a total mixed bag. On the flip side, people are using AI to create apps they don't understand... I can only imagine the scale of security breaches this will create...
My biggest issue with pointing out Google as a positive is they're likely to just kill the product and workflow you've built on with the next update. I don't trust them. And good luck finding a real person when they disconnect your account from logging in.
Lol how long did it take to figure out what region it was in?
"I logged in, I went to RDS, but I don't see the database." "What do you mean you don't see the database?" "Well there's a domain error, and a host error, and an IAM error..." "Those don't mean anything. You can't see the database?" "No databases are listed." *ponders for a long while* "It says region 'stockholm'. Switch that to Virginia." "Oh hey, there it is!"
Why can't you search by database identifier or RDS instance name? :-/
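You can't in the console, but the API will happily do it. A sketch of the cross-region search (boto3; "my-database" is a made-up identifier):

    import boto3

    target = "my-database"  # hypothetical identifier to hunt for

    regions = [r["RegionName"]
               for r in boto3.client("ec2").describe_regions()["Regions"]]
    for region in regions:
        rds = boto3.client("rds", region_name=region)
        for db in rds.describe_db_instances()["DBInstances"]:
            if target in db["DBInstanceIdentifier"]:
                print(f"{db['DBInstanceIdentifier']} is in {region}")

Twenty-odd regions, one loop, and "it says region stockholm" stops being a ten-minute mystery.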
I understand your pain point; however, I believe AWS is geared towards enterprise customers. Unfortunately, it isn't the right platform to run a personal blog, hobby projects, etc. AWS has an offering that somewhat caters for this: Lightsail. Similar to other VPS platforms on the market, it comes with simpler pricing and less complexity.
"The difference between the Vercel user experience and the AWS user experience is purely one of the will to have a good experience at the highest levels of the company."
You are spot on. I would say that the partner ecosystem manages to exist only because AWS does not have the ability to execute and the execs do not even care; otherwise, with their access to infrastructure, no partner like Vercel could even compete with them.
This is not really what AWS is intended for; it's aimed at Enterprise. People who are trained up and accomplished at its configuration can make a good few quid on the job market.
Personal projects on AWS are doable, and it's even a good idea if self-training on AWS is an aim. But it's not the obvious choice otherwise.
>” it's aimed at Enterprise.”
To me it's more like working with assembler: all the functionality is there, but you need to know how to put it all together to make anything work.
One of the things that got me was how difficult it was to set up environments and systems so that, on the dashboard, I could see resource usage, and thus spend, against Dev, Test, Prod (client 1), Prod (client 2)…
I'm sure it is doable, as it is "assembler", but…
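For the record, the spend-per-environment view is doable once every resource carries a cost-allocation tag, and that tagging is exactly the "putting it all together" part. A sketch using boto3's Cost Explorer client, assuming an "Environment" tag set to Dev/Test/Prod values (the tag key and dates here are invented):

    import boto3

    ce = boto3.client("ce")  # Cost Explorer
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        # Group spend by the Environment tag across the whole account.
        GroupBy=[{"Type": "TAG", "Key": "Environment"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])

Anything left untagged lands in an empty-key bucket, which is its own little audit.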
"People who are trained up and accomplished at its configuration "
Which costs... money, for the training. In order to then spend yet more money on the implementation, and then the actual rollout.
In other words, you've identified a systemic scam. You can spend money with us after you spend yet more money understanding us, in order to realize what you really want to buy from us, in order to actually pay us.
If you're lucky.
I've complained about the stupid, insipid, ridiculously complex AWS systems before. Why do I now need an IAM profile in order to access AWS reports that my Root user account can't? DO YOU NOT UNDERSTAND THE ABILITIES IMPLIED BY THE TERM "ROOT"??!
And that's just the start of the stupidity. It seems that Amazon's developers revel in the ability to complicate their interfaces, feeling that it (somehow) implies superiority.
" People who are trained up and accomplished at its configuration can make a good few quid on the job market."
Except there's almost no useful training, and experience is how you actually learn things. Manuals are useless and/or hopelessly outdated, if they even exist.
That applies to enterprise users too, which means that a simple migration project to AWS takes 18 months. With competent enterprise-level machine-room operators/developers, not just anyone.
For reference: migration from one rented machine room to another took 6 months. No service breaks during migration either, and planning took one of those six months.
AWS have always made life difficult, and the security people have consistently put day-one startup mentality at a higher priority than customer success, pulling the wool over the eyes of senior management in the name of security. Whenever I see a message like 'unable to find a policy to allow this', all I have ever wanted in the past 10 years is a simple button which says "Fix this", so that AWS can go away and create the IAM records it needs for me to continue. It knows what it needs, so why can't it fix it, for me alone? This is a fundamental oversight that has cost me thousands of wasted hours over the past decade.
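The closest you can get to that "Fix this" button today is building it yourself. Some services return an encoded authorization-failure blob, and STS will decode it to show exactly which action and resource were denied. A sketch (assumes the caller has sts:DecodeAuthorizationMessage, and the decoded-message JSON shape documented by AWS):

    import json
    import boto3

    def missing_statement(encoded_message: str) -> dict:
        """Turn an access-denied blob into the Allow statement it was asking for."""
        sts = boto3.client("sts")
        decoded = json.loads(
            sts.decode_authorization_message(
                EncodedMessage=encoded_message
            )["DecodedMessage"]
        )
        ctx = decoded["context"]  # which call was denied, and on what resource
        return {
            "Effect": "Allow",
            "Action": ctx["action"],
            "Resource": ctx["resource"],
        }

Whether blindly allowing whatever was just denied is good security is another question, which is presumably why the button doesn't exist.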
Who says AI is useless? It can write a poem in no time at all!
S3’s your bucket, don’t let it leak,
EC2 computes while you grab a sneak peek.
Lambda runs code without a server in sight,
RDS databases dance through the night.
VPC walls off your cloud with a grin,
IAM says “Who?” before letting you in.
ELB balances loads like a circus tightrope,
CloudWatch watches all, giving sysadmins hope.
SQS queues your messages, nice and in line,
SNS texts your phone: “Hey, everything’s fine!”
Kinesis streams data in rivers of bytes,
EMR sparks Hadoop jobs overnight.
Glue crawls and catalogs, sticks data in place,
Athena queries S3 at hyperspeed pace.
ECS runs your Docker, containers galore,
EKS adds Kubernetes—now you’re hardcore.
DynamoDB NoSQLs faster than light,
Glacier freezes data—out of mind, out of sight.
Step Functions choreographs chaos with flair,
Macie sniffs PII like a bloodhound on scent.
GuardDuty yells “Intruder!”—one hundred percent,
SageMaker trains models, no PhD spent.
AppSync GraphQLs till your frontend’s inspired,
QuickSight dashboards till the CFO’s retired.
Fargate runs containers—no servers to tend,
AWS acronyms: the alphabet’s end!
Seems like a large part of the nastiness of IAM is that it's been tacked onto and evolved over the last 20 years, all while trying to keep everything as backwards-compatible as possible.
Of course, if they ripped it all out and started over today, AWS would look like a different thing. But one must admit that the fact it's still here, underpinning all the New Shiny on top of it, is an amazing feat.
No, speaking as a currently designated security troglodyte who's spoken with some of the chief IAM architects, I would say legacy drift is not the big problem with IAM. The real problem is twofold.

Firstly, the AWS APIs are all random SOAP-style verbs instead of REST, and control-plane and data actions are in the same service-URL namespaces, so nothing can be built around object or even object-class permissions, only lists of actions that are unpredictably idempotent or mutating or data-exposing or control-plane-exposing.

Secondly, the chief design requirement of IAM is performantly deterministic permission evaluation, not actual security or usability or least privilege. This leads to choices like making permissions a list of low-level API actions (because that's where policies are evaluated), the boolean-logic hell that is the deny sandwich and conditions, and strict character limits on policies, such that you can't explicitly list the actions for a single service like EC2 in a policy and are then forced to use wildcards and guess whether they're restrictive enough and don't open you up to the new surprise actions AWS adds without warning.

The "fixes" they've added are just more layers of the same flawed design and implementation, via permission boundaries and SCPs. So anyone used to sane CRUD permissions has to relearn everything, and then discover that their basic security expectations and requirements are impossible to implement in any auditable way in AWS.
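To make the deny sandwich and the wildcard problem concrete, here's an illustrative policy document written as a Python dict (the Sids, actions, and region are invented for the example): the Allow is a wildcard because explicitly listing every EC2 action would blow the policy character limit, and the explicit Deny always wins over any Allow, so the effective permissions are a boolean intersection you compute in your head.

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "BroadAllowBecauseOfSizeLimits",
                "Effect": "Allow",
                "Action": "ec2:*",   # hope the wildcard isn't broader than you think
                "Resource": "*",
            },
            {
                "Sid": "DenySandwich",
                "Effect": "Deny",    # explicit Deny overrides every Allow
                "Action": "ec2:TerminateInstances",
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {"aws:RequestedRegion": "eu-west-1"}
                },
            },
        ],
    }

There is no per-object grant anywhere in that document, just action strings and condition algebra, which is exactly the complaint.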
No, it's nothing to do with that at all.
Amazon do not care. That's the thing and the whole of the thing.
Amazon have absolutely no incentive to make AWS simpler to use. AWS only exists because they realised they could rent out their own overcapacity, so their only goal is to make it easy for them to administer.
It may even be to their advantage to make it overcomplicated, because that means their victims - sorry, customers - end up paying for stuff they didn't or can't use, and so the capacity can be sold to multiple customers.
The bad consultancies like it as well, because it makes them irreplaceable and gives more scope for nickel-and-diming.
So basically, everyone who isn't already locked in doesn't want to play there.
The problem with the whole premise is within the article [Vercel is built atop AWS].
Amazon doesn't care about usability, because all (or most) of the tools that help you deploy faster are built on top of it, so it's not like they are going to lose much (these tools are probably already paying for more compute than they use anyway).
I have used AWS and Azure in detail, personally and professionally. After you have dealt with the over-engineering and fragmented APIs/components of AWS, along with its cryptic concepts, Azure is like a breath of fresh air. Azure has got it right in terms of capability, simplifying complexity, and just general utility (it works). But don't bother trying to use anything before it's GA. Being a beta tester for Azure components in 'public preview' is not fun. Once it's GA, it's Azure FTW.
This is exactly why we built Defang (https://defang.io/).
Define your app as a Docker Compose project - and millions of devs already use Docker Compose for local deployments/testing.
Then, literally with a single command (defang compose up), deploy that same Docker Compose project to AWS... or GCP, or DO, with more coming!
It takes care of networking, storage, databases, queues, even LLMs - all mapped to the native services on your target cloud - so ELB, ECS/EC2, RDS, Elasticache, Bedrock, etc. in the case of AWS.
Secure, Scalable, Cost-efficient - all done according to the best practices for the cloud you choose.
So, you get the ease of use of a Vercel, but all the power and economies of scale of a hyperscaler.
Check it out and give us your feedback!