Just when you thought it was safe to go ahead with microservices... along comes serverless

We all know, and have probably even coded, monolithic applications – software made of big old chunks of code. Supposedly these are giving way to microservices, smaller elements of functionality. But don't get too comfortable because it's time to shake things up again: now we have serverless. The theory is to create single …

  1. Anonymous Coward
    Anonymous Coward

    Is it just me

    ... or do we have a new generation of script-kiddy coders applying new names to old pre-existing technologies ?

    1. Phil O'Sophical Silver badge

      Re: Is it just me

      Yes, how does "When a request is made to an HTTP endpoint, a very specific function is executed for handling exactly that request. Once complete, the compute resources are surrendered." differ from the multi-threading that we've used since the 1980s?
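      For what it's worth, the model being quoted boils down to something like this - a hypothetical Python sketch, not any vendor's actual API (handle_resize, dispatch and the route table are all invented names for illustration):

```python
# Hypothetical sketch of the "one function per endpoint" model quoted
# above: one handler per route, invoked on demand, holding no state
# afterwards. Nothing here is a real framework API.

def handle_resize(request):
    """Handles exactly one kind of request, then releases everything."""
    width = request.get("width", 100)
    height = request.get("height", 100)
    # ... the actual work would happen here ...
    return {"status": 200, "body": f"resized to {width}x{height}"}

# The platform maps each endpoint to exactly one function.
ROUTES = {"/resize": handle_resize}

def dispatch(path, request):
    """The platform's job: pick the function, run it, throw it away."""
    handler = ROUTES.get(path)
    if handler is None:
        return {"status": 404, "body": "no such endpoint"}
    return handler(request)

result = dispatch("/resize", {"width": 640, "height": 480})
print(result["body"])  # resized to 640x480
```

      Whether that is meaningfully different from a thread-per-request server is, as the comment says, largely a question of who owns the thread pool.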

      1. This post has been deleted by its author

        1. Doctor Syntax Silver badge

          Re: Is it just me

          "Throwing boxes at the problem - OpEx looks like a better deal than CapEx, at least on paper - and Finance loves that."

          Finance knows how much servers cost. So lying through your teeth by redefining "server" and "less" makes them think they're getting a massive cut in costs.

      2. Anonymous Coward
        Anonymous Coward

        Re: Is it just me

        how does....differ from ....

        Now there are overpromoted seminars and courses about it!

        1. Doctor Syntax Silver badge

          Re: Is it just me

          Now there are new overpromoted seminars and courses about it!

      3. whbjr

        Re: Is it just me

        Apparently, they're creating a new database, or table, or something, to back up the request... then trashing it when the request is complete. I don't claim to understand this.

        Nor, for that matter, do I understand this: "That means being able to A/B, or blue/green, a deployment..." What is a blue/green deployment? Did I miss the color-coding class at the recent Bleeding-Edge Briefing?

    2. Herring`

      Re: Is it just me

      I remember when we were all going to write (D)COM or CORBA components and it was all going to be fantastic.

      Things have changed, though: using HTTPS to send text that has to be parsed is even less efficient.

    3. SVV Silver badge

      Re: Is it just me

      "When a request is made to an HTTP endpoint, a very specific function is executed for handling exactly that request. Once complete, the compute resources are surrendered."

      Which is precisely how the Java Servlet API has worked for nearly 20 years. Are you sure that someone didn't get pissed somewhere, get into a conversation about servlets, pronounce it "serrrrvless", and get overheard by someone else who went home thinking they'd said "serverless"?

  2. Wulfhaven

    How is this any different from microservices?

    It's all just the same ideas repeated with different names.

    1. AndrueC Silver badge

      "It's all just the same ideas repeated with different names."

      Which often seems to sum up the entire IT industry and has done for decades. What goes around comes around (but with a different name and - sometimes - a better implementation).

    2. Ken 16 Silver badge

      it's as different from micro services

      as SOA is from OOP

  3. Kane Silver badge

    Dark Souls... for pussies. You want to get your Bloodborne on.

    Git Gud, Scrubs.

  4. Phil O'Sophical Silver badge


    Security?

    You've also got so many connections to think about when writing a function that I bet most don't even consider contract-testing that HTTPS API. How do you know you haven't broken the contract between your new release and everything that consumes the function's API?

    Sounds like a security nightmare. Every one of those APIs could mark a trust boundary where data is passing from untrusted (a user) to trusted (the database), or vice-versa. Testing that to ensure that there are no holes for, say, SQL injection, or authentication spoofing, interception, etc. will be a huge job. Which means no-one will do it.
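    For anyone who does want to contract-test, the minimum version is not hard. A hedged Python sketch, where RESIZE_CONTRACT and check_contract are invented for illustration - real tooling (e.g. Pact, or OpenAPI schema validators) does this far more thoroughly:

```python
# Minimal sketch of contract testing: the "contract" is a hand-written
# description of the response fields consumers depend on. A release
# that drops or retypes one of those fields is a breaking change.

RESIZE_CONTRACT = {
    "status": int,   # consumers branch on this
    "url": str,      # consumers fetch the resized image from here
}

def check_contract(response, contract):
    """Report every field consumers rely on that is missing or changed type."""
    problems = []
    for field, expected_type in contract.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

# A release that renamed 'url' to 'location' breaks the contract:
new_release_response = {"status": 200, "location": "https://example.com/img"}
print(check_contract(new_release_response, RESIZE_CONTRACT))
```

    Run against every consumer-facing function in CI and a silent contract break becomes a loud test failure; it does nothing, of course, for the injection and spoofing worries above.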

    1. Ian Michael Gumby

      Re: Security?

      Who needs security?

      Yeah, toss in security and see what happens.

  5. Denarius Silver badge

    Is it just me?

    or did this article belong with this Friday's BOFH?

    BTW, I thought APIs were called, not consumed. Age showing again?

    1. steelpillow Silver badge

      Re: Is it just me ?

      "So, the https endpoint..."

      "You mean server?" butts in the PFY

      "No, this is serverless."

      "Then what services the https request?" I ask.

      "Nothing does, Simon. The endpoint API consumes it."

      "He means services it" mutters the PFY



      "What about security?" I ask idly.

      "Yes, security will be a big issue for serverless."

      "You mean, folks will need experts in serverless architectures and will pay top notch for infosec work."

      "Er, probably", he wobbles a bit and his eyes flicker.

      "That is where your career progression comes in, isn't it? You're so sick of coding overblown back ends for overblown backsides that you're willing to play the suits at their own linguistic acrobatics."



      Heck, this stuff writes itself. (With apologies to Mr. Travaglia)

    2. chrismeggs

      Re: Is it just me ?

      I am old and not ashamed. To me, an API is the process by which information, or requests for action, are conducted over an interface. Usually the interface is defined by an Interface Control Definition. ICDs are rarely as rigorously defined as they are meant to be, and should exist at all seven ISO layers. They should not dictate the behaviour of the systems at each end; in fact, one of an ICD's virtues is its ability to isolate one system's changes from another. I am old and not ashamed and have forgotten the question.

  6. ExampleOne

    If it's serverless, what does it run on?

    After all, cloud is just a new name for "somebody else's servers".

    1. Anonymous Coward
      Anonymous Coward

      It runs on dreams

      through fields of unicorn poop

    2. dshaw4

      It's serverless because you don't spin up a compute resource, or put a database or image or other application on a compute resource. You just call,for example, the function to resize an image.

      Each function you call launches in its own container, which is secured. When the job is done, the container is killed. For very high security, the overall solution can be segmented in virtual private clouds (VPC) using hub and spoke.

      1. Doctor Syntax Silver badge

        "It's serverless because you don't spin up a compute resource...Each function you call launches in its own container"

        The second sounds extremely like spinning up a compute resource except in different words. Do you really think the assembled el Reg commentards can't see that?

        Do you also think we don't know that with servers you don't need the time to spin up a compute resource/launch a function because the server was spun up at boot time and is already waiting to do the job?

        Given that you seem to be new here maybe you really do think that.

  7. tiggity Silver badge

    Add that to my list

    Of new trendy terminology

    Describing what has been done for ages

    Based on that description, I just take my app which provides lots of "web service" APIs, i.e. lots of distinct endpoints, defined by URI and access method e.g. GET, POST, PUT, DELETE. N.B. with full unit tests, so any change of contract would show up as a breaking change in the tests.

    Host that in the cloud (and backend data in cloud) and job done, serverless services to add to my CV

  8. regbadgerer

    Is it just me

    Or is 'serverless' just microservices but using a vendor-specific framework?

    Seems like a one-way ticket back to the vendor lock-in nightmares of the 90s that we've all been working so hard to get rid of.

    1. xeroks

      Re: Is it just me

      "Or is 'serverless' just microservices but using a vendor-specific framework?"

      Kind of. Microservices I've used have still been billed by the server. You can add scale-out rules - e.g. if your servers are at 80% capacity, then fire up an extra one. So you tend to specify the smallest, cheapest servers you can, automatically add more when you need them, and shut them down when you don't.

      The serverless paradigm means you don't bother with that. You have 100 requests? You get charged for the processing required for those 100 requests.

      I've not looked into what control you have over those tasks - you must be able to throttle them in the case of an exceptional number coming in at the same time.

      As far as vendor-lock in is concerned, they're all at it. Microsoft's service fabric is a powerful framework for microservices, and while you're not totally locked into Azure, you are tied into Microsoft.
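      The billing trade-off above is easy to put rough numbers on. A back-of-envelope Python sketch - all prices here are invented round figures for illustration, not real AWS or Azure rates:

```python
# Rough comparison of the two billing models: an always-on small server
# billed by the hour, versus pay-per-invocation plus pay-per-compute-time.
# Every number below is made up for illustration.

server_cost_per_hour = 0.05      # smallest always-on instance
per_request_cost = 0.0000002     # charged per invocation
per_ms_cost = 0.000000002        # charged per millisecond of compute
avg_request_ms = 50

def monthly_server(hours=730):
    """Always-on server: cost is fixed regardless of traffic."""
    return server_cost_per_hour * hours

def monthly_serverless(requests):
    """Serverless: cost scales directly with traffic."""
    return requests * (per_request_cost + avg_request_ms * per_ms_cost)

# Spiky, low-volume load favours per-request billing; steady heavy
# load favours the fixed-price server.
for requests in (100_000, 1_000_000_000):
    print(requests, round(monthly_server(), 2),
          round(monthly_serverless(requests), 2))
```

      With these made-up rates, 100k requests a month costs pennies against a fixed server bill, while a billion requests costs several times the server - which is the crossover the comment above is gesturing at.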

      1. Doctor Syntax Silver badge

        Re: Is it just me

        "You get charged for the processing required for those 100 requests."

        Given that servicing each of those requests now seems to have the overhead of spinning up the service that allegedly doesn't exist, and closing it down afterwards, it seems likely that the charge ought to be more.

        1. xeroks

          Re: Is it just me

          "Give that servicing each of those requests now seems to have the overhead of spinning up the service that allegedly doesn't exist and closing it down afterwards it seems likely that the charge ought to be more."

          I suspect you're right that the spin up/down will add to the cost. However, those costs are less than the savings made from not having hardware sitting doing nothing.

          It's difficult to tell: while AWS prices are *extremely* compelling at the moment, they are in the game to make lots of money in the long term.

          1. really_adf

            Re: Is it just me

            "I suspect you're right that the spin up/down will add to the cost. However, those costs are less than the savings made from not having hardware sitting doing nothing."

            Servers ideally do exactly nothing except when there is a request to serve. I mean "server" loosely here - could be a VM, container, process or thread.

            "Spinning up" a server is ultimately a case of getting stuff organised in memory so you have something ready to respond to a request.

            So it seems to me that from a hardware perspective, "serverless" is trading CPU and responsiveness for memory. Whether that is a saving depends on many things; I wouldn't say it is a given.

            (Of course, there's more to it than just the hardware perspective.)

          2. Doctor Syntax Silver badge

            Re: Is it just me

            "However, those costs are less than the savings made from not having hardware sitting doing nothing."

            That depends on how much hardware is sitting waiting for stuff to be spun up. There would need to be some slack in there and it's going to be an overhead that gets charged on to the customers somehow. If the service is there but idle it should get swapped out, minimising the hardware it uses; how does the work of swapping it back in compare with spinning up a new instance?

        2. Martin M

          Re: Is it just me

          They don't start up and down for each request - the underlying container instances, language runtimes and deployed function code are cached for a few minutes. Under steady state load it's efficient.

          When you do have a usage spike, there can be issues if the Lambda containers can't be started up by Amazon fast enough to service inbound API gateway calls. Calls can be errored by the gateway for a short period while this happens. This at best impacts latency (if there's retry logic at the client) and at worst is user-visible.

          Base container start-up times are really fast. Lambda start-up tends to be more determined by the language runtime. Java applications can be terrible for this - JVM start time is slow, the big frameworks like Spring can take a while to do the required reflection to get code running, then code runs slowly in interpreted mode until it gets dynamically compiled.

          I'm glad Amazon introduced Go Lambdas recently. As a modern, compiled language it looks like a really good fit.

          Hopefully they will also improve capacity prediction in future, or at least allow customers to configure greater 'headroom' in terms of pre-started containers (presumably for a cost) to smooth out any remaining issues.
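          The caching behaviour described above is also why the usual advice is to do expensive setup at module scope: a warm container pays for it once, not once per request. A simulated Python sketch - a counter stands in for a real database connection, and no actual Lambda runtime is involved:

```python
# Simulation of cold-start vs warm-invocation cost. Anything created at
# module scope survives between invocations of a warm container, so
# expensive setup (connections, loaded config) runs once per container,
# not once per request.

SETUP_RUNS = 0

def expensive_setup():
    """Stands in for opening connections, loading config, JIT warm-up..."""
    global SETUP_RUNS
    SETUP_RUNS += 1
    return {"connection": "ready"}

# Module scope: executed when the container starts, i.e. on the cold start.
_resources = expensive_setup()

def handler(event):
    # Warm invocations reuse _resources instead of rebuilding them.
    return {"status": 200, "conn": _resources["connection"],
            "setup_runs": SETUP_RUNS}

# Three requests into the same warm container: setup ran only once.
for _ in range(3):
    result = handler({})
print(result["setup_runs"])  # 1
```

          A fresh container would of course re-run the module scope, which is exactly the cold-start latency being discussed.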

    2. thames

      Re: Is it just me

      @regbadgerer - I haven't tried it yet, but from what I understand it's all about making the billing more granular for highly variable loads. Instead of getting billed for provisioning a micro-service which may sit about not getting used much, you get billed on actual usage. You do have to structure your application to work effectively with that however.

      Whether that makes financial sense is going to depend a lot on your individual use case. It isn't for everyone, but it may reduce costs for some people who have highly variable loads. If you run high loads on a fairly consistent basis, then it's probably not a good fit for you.

      The main problem with it has I think been the tendency to create new terminology in an attempt to differentiate it from micro-services. The basic concept though is to make your application better fit something that can be billed on a more granular basis. It probably has a place, but as another option rather than the one and only way of doing things.

  9. colinb


    Excellent

    A whole new workstream to fix up the inevitable design disasters to follow.

    "That is, each function should have its own database,"


    Lets skip forward in time to a couple of serverless devs Tom and Jerry

    Dev 1: hey, we have 40 versions of the PlaceOrder function, do we need them all?

    Dev 2: Dunno, whats the usage like?

    Dev 1: Let me see, eh they all seem to have at least one hit in the last 3 months

    Dev 2: Oh, rats, they didn't mention that in the InfoQ video i saw

    Dev 1: maybe ask StackOverflow what we should do?

    Dev 2: nah, there's this new tech called BrainLess looks really good, gonna learn it.

    Dev 1: Cool, <fires up CV, types> about time i moved on the tech here is so old.

    Dev 2: Yeah this PlaceOrder thing is gonna be SEP (Someone else's Problem)


    1. Anonymous Coward Silver badge

      Re: Excellent


      Dev 1: We've got 40 versions of PlaceOrder

      Dev 2: OK, we should combine them all into one super PlaceOrder function to improve maintainability.



      We've got 41 versions of PlaceOrder.

    2. Martin M

      Re: Excellent

      The “each function should have its own database” thing in the article is patently absurd. If that were the case, a function to retrieve state would not be able to access state written by a mutator. Just because functions can be independently deployed and have no shared stateful store dependencies doesn’t mean that they should or will be.

      In practice, even in a microservice architecture, there will typically be clusters of functions around a single stateful per-microservice store. Usually this function cluster (plus any schema changes, if required) will form the most practical deployment unit.

      And yes, design has to be carefully considered, as there will be a temptation for poor developers to either duplicate code or design systems with horrible performance by doing fine-grained functional decomposition over HTTP. But there’s no reason that has to happen as custom libraries can be deployed as part of a Lambda and abstract common logic within the context of a microservice.
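      The mutator/reader point above fits in a few lines. A Python sketch with invented names (place_order, get_order, and a dict standing in for the per-microservice store):

```python
# Why functions cluster around one store: a writer ("mutator") and a
# reader must share state, or the reader can never observe the write.
# A plain dict stands in for the microservice's single database.

order_store = {}  # the one stateful store shared by this service's functions

def place_order(order_id, item):
    """Mutator: writes state to the shared store."""
    order_store[order_id] = {"item": item, "status": "placed"}
    return {"status": 201}

def get_order(order_id):
    """Reader: only works because it shares the mutator's store."""
    order = order_store.get(order_id)
    if order is None:
        return {"status": 404}
    return {"status": 200, "order": order}

place_order("42", "widget")
print(get_order("42"))  # the reader finds the order
```

      Give each function its own private store, as the article suggests, and get_order here would return 404 forever.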

  10. Anonymous Coward
    Anonymous Coward

    Yes, yes, yes

    Just get the old stuff working already instead of chasing new fads all the time.

    I have seen things you fadsters wouldn't believe. Inscrutable JavaScript applications full of bugs pretending to be serious. Servers on the Internet running with SELinux turned off and the database as "root", as delivered by "Internet companies". An utter lack of specified business processes, leading to 5 different applications that need to be kept alive in sync by people not talking to each other. All these moments will be ... lost ... like market valuations in the crash.

  11. Stretch

    Always fun to watch what bullshit the consultants are pushing these days. Surely after "serverless" we'll get "codeless", where you just wish for something to happen and it magically happens.

    1. Anonymous Coward
      Anonymous Coward


      We've had "codeless" for decades. Currently the MBA types call them "Rapid Development Environments". These are cloud-based/hosted "IDEs" that resemble Visio but claim to produce runnable code.

    2. Doctor Syntax Silver badge

      "Surely after 'serverless' we'll get 'codeless' where you just wish for something to happen and it magically happens."

      Then "wishless" where stuff magically happens without even having to wish. It all hits the buffers when it gets to "magicless" and people have to start doing actual work and thinking about things.

  12. json

    Serverless is a misnomer

    ...it's just another fancy term for distributed computing, granted with zero OS admin requirements.

  13. Anonymous Coward
    Anonymous Coward

    Another Shiny Toy

    A couple of months ago, a client of mine invited me to their office to see if we could help them with their existing application. It was more or less a presentation of what they had been doing for the past year: they had managed to develop and host it in a "container" environment. The actual application was quite small, and the CEO basically spent most of the time extolling the virtues of containers... and could we recalibrate our usual services and develop for them? I said no. Seriously, we could have done what they did in much less time. Container tech was just too shiny a new toy for him to ignore.

  14. RLWatkins

    This model is not new...

    ... and it is not universally useful.

    One can decompose *any* "monolithic" application into a collection of functions, assign each of them TCP/IP URIs, send them messages and await the results which they produce. In fact, one of the models of procedural programming consists of sending messages, i.e. activation records, to functions.

    (Entire operating systems are based on the concept of message-passing, e.g. QNX, AmigaOS.)

    Even the problem of global variables can be resolved in this model by replacing them with accessor functions.
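    The accessor-function replacement can be sketched in a few lines of Python (the exchange-rate variable is invented for illustration):

```python
# Replacing a shared global with accessor functions: all reads and
# writes go through functions, which could then sit behind a message
# or network boundary instead of a shared address space.

_exchange_rate = 1.0  # private module state, no longer touched directly

def get_exchange_rate():
    """Accessor: the only sanctioned way to read the value."""
    return _exchange_rate

def set_exchange_rate(value):
    """Mutator: the only sanctioned way to write it, with validation."""
    global _exchange_rate
    if value <= 0:
        raise ValueError("rate must be positive")
    _exchange_rate = value

set_exchange_rate(1.25)
print(get_exchange_rate())  # 1.25
```

    The overhead point that follows applies here too: two function calls where a memory read used to be, and far worse once those calls cross a network.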

    Where the notion of this being a universal panacea breaks down is this: overhead. For top-level procedures in interactive information applications, which are expected to run at UI speed, this sort of thing is fine. On the other hand, if one is running something that is compute- or I/O-bound instead of UI-bound, one suffers a huge loss of performance.

    To make matters worse, much of this interaction now takes place over a network, rather than by passing such messages in memory. And a network is a bus, no matter how many switches are put into place to make it behave as a mesh. All those "serverless" calls can make it quite congested.

    Try calculating a 50K employee payroll this way. On the other hand, don't.

    (One sees the same problem in "hyper-kernel" systems which are scaled up too far.)

    Loosely coupled has its place. Tightly coupled has its place. Like any new paradigm, or in this case a new name for an old paradigm, being shiny and new makes it interesting, but doesn't necessarily make it universally useful.

  15. Anonymous Coward
    Anonymous Coward


    I know a lot of you are joking around, but some are not. The reason for all of these new names and methodologies is the devops culture. Serverless is so called because, when developing code, the developer doesn't need to know or care where or how it's run: they deploy the code itself and the rest is taken care of by the infrastructure and the rules set out by the admins of the system. The scaling etc. is automatic. Contrast this with containers, where the developer has to decide on the design and contents of the container, the security, etc.

    There is a separation between developers and admins for a reason: one does the coding for applications, the other manages the systems, scaling and security. Devops and containers push most of this back onto the developer. To push the infrastructure back onto the admin while still using the same development life cycle, serverless was born. But, as has been said, serverless doesn't work for everything, so changes to the roles of the admin and the developer are still needed, along with a tighter integration between them.

    Most developers don't want to do the infrastructure stuff and don't know how to. Most admins don't know coding well enough to develop apps. There are exceptions, but most want to just do what they did before while still being seen as doing what's new and "cool": doing it devops style, in containers.

  16. Anonymous Coward

    Developerless and Corporateless too please


    My computer was trashed (1999) to disguise the fact that they stole some intellectual property.

    I used a memory browser to view memory while it was happening.

    And there was DCOM: "DCOM Decommissioning Windows95", it said in memory, while my hard drive had the heads staggered across it and its file allocation tables scrambled (strangely, it did not scramble .txt text files anywhere on the drive). DCOM had been loaded by Norton Gold - a freebie on a computer mag disk - and wasn't needed by my system. What you can do with a distributed communications system, eh? Now MS has built it into the OS - can't view drives in Management Console without it ;-)

    Ahhh Thanks, Mr Anonymous above and your "Serverless" comment. Wonderful and to the point.

    "Most developers dont want to do the infrastructure stuff and dont know how to".

    All we need now is "Developerless" software or "Corporateless" operating systems.

