Microservices guru warns devs that trendy architecture shouldn't be the default for every app, but 'a last resort'

Sam Newman, author of Building Microservices and Monolith to Microservices, told attendees at the QCon developer conference in London that "microservices should not be the default choice." Microservices have become today's equivalent of "nobody ever got fired for buying IBM", a common catchphrase back in the '80s, said Newman …

  1. Warm Braw Silver badge

    All-or-nothing deployments

    The point of this devops stuff, as I understand it, is that you've got your deployment automated to the point that you can push a button and it happens. The whole microservices thing depends on your being able quickly to roll out incremental changes.

    There isn't, in principle, any reason you can't do the same for a monolithic application - it's just that historically people have done that less frequently and therefore haven't invested in automating the deployment process.

    The big issue is really whether your monolithic application is susceptible to automatic testing: if it isn't, then sawing off the bits that are could well make life easier. Whether those bits communicate via the stack or the network is an implementation detail.

    1. Richard 63

      Re: All-or-nothing deployments

      You started fairly wide of the mark but ended bang on the money (because mixing metaphors is fun!)

      I don't normally comment, but I thought it worth commenting that just because you don't have the definitions nailed down doesn't mean you can't see the value in doing stuff.

      1. Warm Braw Silver badge

        Re: All-or-nothing deployments

        just because you don't have the definitions nailed down

        That may be because in my advancing years I have seen the same things (at least to a first approximation) redefined many times and it is not always immediately clear what beyond the first approximation is the genuine advance...

    2. Version 1.0 Silver badge

      Re: All-or-nothing deployments

      And those "incremental changes" can remove parts of the application that you use and depend on functioning. We're always told "it's an upgrade", even if it renders the cloud app dysfunctional for the purpose you purchased it for.

    3. AdamWill

      Re: All-or-nothing deployments

      yeah. This bit struck me:

      "Newman sees great value in continuous delivery, where software is automatically built and tested and therefore already ready for release."

      It made me think, wait, he's been dealing with people trying to do microservices *without* CD? Like you I sort of thought that was the whole point...

  2. Ken 16 Bronze badge
    Thumb Up


    "The core issue seems to be that it is hard to do microservices well."

    I've certainly seen them done badly multiple times in multiple places, since that seems easier. Then there are the changes of mind: are the services accessed directly, via an API gateway, or via an ESB? Why not all three? Why not change your mind halfway about which API gateway? Once the microservices are distributed it's almost impossible to reuse them, so let's develop multiple ones doing the same job and flip a coin over which to use. Then let's do a version upgrade...

    If your problem isn't the problem that microservices are intended to solve then why, aside from religious conviction, use them?

    1. Aristotles slow and dimwitted horse Silver badge

      Re: Alleluia!

      RE : "why, aside from religious conviction, use them?"

      It's absolutely not a position I support, but in the past I have seen "Because f**k the customer..." I guess. Bad design and obscure implementation are continually excellent for third-party support and maintenance revenues.

    2. rmullen0

      Re: Alleluia!

      Religious conviction is exactly what the problem is.

  3. Aristotles slow and dimwitted horse Silver badge

    I'd be interested...

    I'd be interested to understand the average age of the attendees at this session. I've just read the article and there was nothing in it that really jumped out at me as anything other than normal and standard considerations for designing enterprise applications. That said, I'm in my late 40s and grew up with procedural, structured, object-oriented and all other kinds of ways of designing and programming applications.

    Perhaps it's just me, but when I read statements like this "people tend to be not good at defining module boundaries or having discipline about how module boundaries are formed" I just wonder what sort of age the audience is, what sort of CS education they have received, and what sort of experience they have in large scale, multi-team, multi stakeholder software projects.

    1. Mike 125

      Re: I'd be interested...

      >and what sort of experience they have in large scale, multi-team, multi stakeholder software projects.



      "but for whatever reason, we have to deploy the entire system together as part of a lockstep release. Often this can occur because we've got our service barriers wrong."

      In the old days, we used to scream "Sheeat - who checked in that modified header file without warning anyone?"

      Kids today.

    2. SVV

      Re: I'd be interested...

      This sums up some of my problems with the current microservices hype. At the moment it is focused on quick development, quick deployment and other flavours of agile-infused quickness. Little or no attention is paid to the two real-life problems of this approach. Number one is the primary importance of interfaces. Designing these carefully, with an emphasis on large-grained objects rather than small-grained individual parameters, has always been one of the key benefits of OO languages (e.g. you pass an Address object rather than individual address fields, so that when you need to add something to an address format you don't need to change all the interfacing code, just the address class). Service-based systems take you back to the days of functional programming, where a functionality change meant an interface change too - and changes to all the calls to that function too.
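
      That Address idea can be sketched roughly like this (a minimal illustration; the class and function names are hypothetical, not from any real codebase):

```python
from dataclasses import dataclass

@dataclass
class Address:
    # Coarse-grained parameter object: a new field can be added here
    # without touching the signature of every caller.
    street: str
    city: str
    postcode: str

def ship_order(order_id: int, address: Address) -> str:
    # Callers pass one Address, not a handful of loose string fields.
    return f"order {order_id} -> {address.city} {address.postcode}"
```

      Adding a country field later means editing only the Address class, while every `ship_order`-style call site stays unchanged.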

      The second is dependency management. Start separating code out into completely isolated services and sooner or later you are going to hit the problem where version x of one service is dependent on version y of another. This needs to be defined strictly if you want to avoid descending into unmanageable chaos; otherwise you're reliant on "run the tests and hope for the best".
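
      One way to make that strict, sketched in Python (the service names and version numbers are made up for illustration):

```python
# Hypothetical manifest: each service declares its deployed version and
# the version range of each dependency it is known to work against.
MANIFEST = {
    "billing":  {"version": (2, 1), "needs": {"accounts": ((1, 4), (1, 9))}},
    "accounts": {"version": (1, 7), "needs": {}},
}

def compatible(service: str, manifest: dict = MANIFEST) -> bool:
    # Strict check: every dependency's deployed version must fall inside
    # the declared range - defined up front, not "run the tests and hope".
    for dep, (low, high) in manifest[service]["needs"].items():
        if not (low <= manifest[dep]["version"] <= high):
            return False
    return True
```

      A deployment tool could refuse to roll out any service for which this check fails, instead of discovering the mismatch in production.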

    3. This post has been deleted by a moderator

    4. eldakka Silver badge

      Re: I'd be interested...

      The problem usually isn't the random developers in the dev teams.

      It's usually the senior, 'bored' developers who get "executive capture". That is, they get in the ear of the executives (CIOs etc.), reinforcing the de rigueur keywords being dropped by vendors and Gartner et al., because they want some new shiny to play with.

      Then you get the CIOs and other senior managers pushing it because they think that's the way to go. No research papers about the organisation's needs, just 'parables' and examples of how well it's worked in totally unrelated industries (anyone else have Agile consultants come in and use Toyota - a completely unrelated industry to mine at least - as a reference/case study?). No rigorous cost-benefit analysis of the effect on the organisation.

      It happened with 'agile' and 'cloud' too. Senior management taking it as gospel that that's the way the world is going.

      And it'll happen again. AI looks like the thing being pushed now, and I expect it'll be quantum computing in a decade or so.

  4. 's water music


    If your underlying work product is shit it doesn't make much difference which methodology you use to deliver its implementation.

    My whole career is pretty much built on clearing up after people who fail to realise this so thank god for idiots.*

    * Don't worry, I more than pay my dues in maintaining the sum of human idiocy.

    1. peter-l

      Re: GIGO

      I've had to adopt this very attitude to survive emotionally in my current role.

      It was born from what a recruiter (who was sponsoring a user group meeting) said informally to me (about counseling his frustrated staff) on the dysfunctional hiring process in local government:

      "it's the only reason a pariah like me can stay in business"

      This was circa 2002 and has stuck with me all this time. Pure gold.

  5. Pascal Monett Silver badge

    So microservices are hard

    When I read "people tend to be not good at defining module boundaries or having discipline about how module boundaries are formed" what I actually see is "people are rubbish at properly analyzing a problem and defining the specifications to solve it".

    If you are not good at defining module boundaries, it doesn't matter what methodology you try to use, your code will be rubbish. And if, as Ken 16 pointed out, you add a layer of management indecision plus unforeseen technical issues, you'll end up with a big layer cake of failure.

    Analysis. It's always the analysis that is at the root of all problems. When you forget about something, you don't take it into account, and it ends up being the developers who try this and that workaround, and the collisions between the workarounds compound the problem.

    If your analysis is complete and takes all possible factors into account, defining what a module should and should not do is much easier and also easy to follow.

  6. Anonymous Coward
    Anonymous Coward

    I am not a coder, I'm a systems guy (infrastructure more than anything), so I am intrigued.

    There are vendors out there that push micro services. The course I went through on this basically said (and I am paraphrasing):

    "Its great for rapid deployment, but the developer has to be a DBA, security engineer, systems engineer, and programmer who understands the flows of *everything* in and out of their environment. The biggest challenge to this is to drop the whole idea of governance."

    Now you can spin this any way you like, but one person capable of ALL of that is going to be very hard to find. I even wonder if this is the reason behind so many stories of insecure data out there for all to see.

    It's almost like saying that any DBA can be a developer, security engineer, systems engineer, and programmer who understands the flows of *everything* in and out of their environment. Or any permutation thereof.

    Call me old fashioned, but those types of jobs are invaluable because people with each of their disciplines *know* what to look for in their respective fields, but not necessarily in others.

    1. PerlyKing

      Jack of all trades

      the developer has to be a DBA, security engineer, systems engineer, and programmer who understands the flows of *everything* in and out of their environment

      Sounds like rubbish to me. Just because a service is, er, micro doesn't mean that it has to be developed in isolation by one person. Even if the function of each service is distinct that doesn't mean that you can't use the same security layer in each one, call on a DBA for support, and so on.

      And surely "the flows of everything" becomes a lot simpler for a microservice?

      As the article says, microservices are not the answer to everything but they should be understood for what they are, the same as any other tool in the box.
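
      A rough sketch of that "same security layer in each one" point (everything here - the token store, the handler names - is hypothetical):

```python
import functools

# Stand-in token store; a real deployment would check a shared auth service.
VALID_TOKENS = {"s3cret"}

def require_token(handler):
    # Shared security layer: written once, imported by every service,
    # rather than re-invented (differently) inside each microservice.
    @functools.wraps(handler)
    def wrapped(request: dict) -> dict:
        if request.get("token") not in VALID_TOKENS:
            return {"status": 401}
        return handler(request)
    return wrapped

@require_token
def get_orders(request: dict) -> dict:
    # The service-specific part stays small; auth lives in the decorator.
    return {"status": 200, "orders": []}
```

      Each team writes only the handler; the security check is one shared, audited piece of code.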

    2. Anonymous Coward
      Anonymous Coward

      "There are vendors out there that push micro services. The course I went through on this basically said (and I am paraphrasing): <snip>"

      It sounds like someone (whoever was giving the course) doesn't actually know what micro-services are ... (if you're interested, have a look on YouTube for videos on the subject by Sam Newman or Jimmy Bogard)

      1. Anonymous Coward
        Anonymous Coward

        or better still, Udi Dahan.

      2. Anonymous Coward
        Anonymous Coward

        Well, not entirely. The reason behind what was said is probably that microservices lead to containers, which lead to devops and CI/CD development, with the mentality that things need to go quickly and use all the latest shit.

        This means the developers want the latest hot thing. When they then go to the people who provided their services before, those people say "we will look at it", and then they test, create policies etc. around the product, which takes time.

        The developer wanted this thing now, not next week or next month, so they go and add it themselves, getting it from Docker Hub. Now, as they didn't follow the normal procedure, they get talked to. Then, as the infrastructure / security / policy teams are seen as being slow, a new policy comes about: you can deploy what you want as long as it's reviewed by a software architect.

        Then multiple DB, Redis, RabbitMQ and authentication servers come up, in configurations that are piss poor.

        Infrastructure and security do not want to touch what the developers have done; they were not involved, and were removed because they were seen as slow, even though they were actually testing and developing correctly configured services.

        The developers now have to manage these poorly configured systems that they set up, and then start to blame the people that they locked out at the start for things not running well.

        New policies come into place saying that the infrastructure and security departments need to be involved in any project. They are - at the end, when nothing can be done - and complaints come in when they're told they have done it wrong again.

        They're not included at the start because, when they are, the developers are told what they need to do, and these things take time. They want to get started on using the new shiny, not the boring security and management stuff.

        1. eldakka Silver badge

          You sound like you work where I work, as that's exactly what's happening.

    3. Missing Semicolon Silver badge

      people like us

      In other words, if all the devs are dev-gods like all my mates, microservices are great!

      1. Someone Else Silver badge

        Re: people like us

        In other words, if all the devs are dev-gods like all my mates claim they are, microservices are great!

        There, FTFY

  7. andy 103

    Worse off with microservices

    I worked at a company which had a monolithic application used by customers globally. It had been built in a pretty old fashioned way but worked well and generated lots of revenue. It was version controlled (git) and the deployment process was simply doing a git pull on to the production server. All testing was done manually with, you know, actual humans doing sanity checks on work they'd done. If something disastrous occurred it was easy enough to rollback to a working commit.

    They then decided to develop applications using a microservices architecture. It was still version controlled (git) but also had unit testing and other tools in the deployment procedure including Jenkins. There were countless dev instances - all of which used deployment tools. Some of the developers got a boner about things like linting their js, but thought that meant doing sanity checks on their own work was less of a thing because "I've run my tests".

    The end result was that in the monolithic approach people deployed stuff less often and were more keen to check it before it went into production. The deployment process was simple, fast and worked. There was little overhead with it.

    In the microservices world there were perceived gains of being able to deploy stuff "faster and more reliably". But these didn't actually translate into benefits for the customers of the software.

    To me, the mindset of microservices and trendy deployment processes isn't about quality for end-users or even getting things done efficiently. It simply creates a load of work and overhead *elsewhere* so that you can press The Magic Button(TM) and claim to be really efficient.

    Just because you can deploy stuff fast doesn't mean it's a good idea. In fact it shifts the mindset to constant deployment and that's when quality tends to suffer and customers are disregarded.

  8. Rich 2

    Quality hip-techy speak

    "...our developer velocity..."

    On a serious point, really, WTF does that mean??? (*)

    (*) That's a rhetorical question - I couldn't give a monkeys really

    1. Ben Bonsall

      Re: Quality hip-techy speak

      It's a measure of how quickly they can release bugs into production.

      1. Someone Else Silver badge

        @ Ben Bonsall -- Re: Quality hip-techy speak

        Cheers, sir!

      2. ScottStevens1024

        Re: Quality hip-techy speak

        I created an account especially just to say that if I could like that comment 1000 times I would!

    2. Mike 137 Silver badge

      "...our developer velocity..."

      Velocity has a direction as well as a speed. In my experience the current quality vector points consistently downward - primarily because getting it released takes precedence over whether it actually works. Witness numerous government IT projects and the vast numbers of patches to fix patches to fix patches.

    3. peter-l

      Re: Quality hip-techy speak

      It's an estimation tool... some managers want to know when they will actually arrive in hell. Anyone for a ride in a handbasket?

  9. SJG

    Speed and Quality

    Many years ago my app maintenance team got a new lead. He made one change - releases switched from frequent and ad hoc to once every 14 days. The number of production issues dropped quickly and dramatically.

  10. trevorde Silver badge

    Tick boxes

    Worked on a project for a large government organisation where we had a very small team of 6. We delivered something based on a monolithic architecture and the customer was happy. We then acquired more people, including a 'Technical Architect'. Said TA decided that a microservices architecture was the only way to do the project because the monolithic architecture 'would not scale'. We then rewrote it as microservices, at great expense, and made sure it scaled to ridiculous levels. There was a maximum of 12 people using the app, each for a few minutes a day.

    1. Anonymous Coward
      Anonymous Coward

      Re: Tick boxes

      God, sounds just like where I am - where were you? We have a production system that has only a few people using it at a time, doesn't need to scale, and works right now. The developers find it hard (they say) to develop for. They want to move it onto Kubernetes, microservices. Asked why they would do that, making it more complex? So it can scale...!!!!! But it doesn't need to scale; it never will.

      So then: it will make it easier to develop for... No it will not, it's now a distributed system.

      The monolith is hard to change, microservices will make it easier.... Making the monolith more modular will fix that. The response to that: it is modular!!!!! So why is it hard to change?

      It will save resources... bet it will not; the number of VMs/servers needed for the Kubernetes cluster will be far more than the 2 needed for it now.

      It is all because it's something new. They find the old reliable way boring; it's the 'new' hot thing (in quotes because it's not new, it's a fucking distributed system - people didn't make everything a distributed system, because it's hard to get right, so they only did it when it was needed).

      Another contract was won, so needed developing. Guess what: microservice the shit out of it. They had never done it before, but it was gonna be easy, plenty of time. Long story short: massive amounts of overtime, missed deadlines, it didn't work right, and they're still fixing it months after the deadline.

      Oh, and the amount of server resources it takes: about 1/4 of everything else that runs the company (it's basically a website).

      1. DPWDC

        Re: Tick boxes

        Andy?! Is that you?!

        See also: this sounds just like EVERY company I've worked for...

  11. This post has been deleted by a moderator

  12. Anonymous Coward
    Anonymous Coward

    My employer has lots of poorly factored monolithic code. Around three years ago, microservices and microsites were chosen as our new design pattern for the future. Cue the following three years spent redeveloping specific parts of the monolithic beast with the goal of eventually replacing the monolith with elegantly composed service and microsite parts.

    To date I estimate that we have converted at most 5% of the monolithic beast to our new service-based design pattern. It was clear from when the first service and microsite were delivered that they were horribly expensive to develop and maintain. During those three years, regular business churn requirements continued to be delivered as expected, and over time the separation of concerns, the positioning of business logic, and the definition of interfaces between the new parts and the monolith degraded to unfathomable levels of incoherence and complexity.

    The plan to continue with that design pattern has now been stopped in favour of doing something new from scratch, and although this new thing will still be a service-pattern crusade, it is going to be developed in a language that virtually no employee has any expertise in. To be fair, I do understand the reasons for wanting to iterate towards a scalable services design; however, the company culture gives me confidence that the new idea is also destined to decline into insanity.

  13. Anonymous Coward
    Anonymous Coward

    A useful tool for the CIO.....

    Why not try out this site?


  14. Mavykins

    Glad I read this

    This is one of the issues I've been trying to get my head round for the last few months.

    Internally we have a system I built about 7 years ago which about 3,000 users use daily. It still works but needs updating. One of the stumbling blocks is whether we go down the route of microservices to make the system scalable, get quick deployment etc., as this seems to be the buzzword at the moment and every developer I speak to loves the idea of new tech (who doesn't?). But after reading this, the bit that got me was "have you tried running 10 copies of it?". And he has a point: the system would scale well that way, it wouldn't change our current development process, and there'd be no major skill changes or trying to manage/maintain multiple microservices. Sometimes the old ways are the best ways :)

    1. Stu J

      Re: Glad I read this

      It's not always a valid approach though.

      Running multiple instances of a monolith (and by this I'm talking about some of the enterprise-sized monoliths I've come across that required upwards of 32GB of RAM per instance just to get them online...) can be very expensive, and unless your monolith has been explicitly designed to scale horizontally, you invariably run into problems with session management, to the extent that it's often impossible to scale dynamically, so you end up over-provisioning to cope with peak load.

      I'm a big fan of the strangler pattern - stick a proxy load-balancer in front of all API calls to your monolith, and once you've identified particular areas that you want to be able to scale dynamically and rapidly, break those out as microservices and redirect the calls from the load-balancer to those services, then remove that functionality from the monolith. There's no real reason why the optimal solution shouldn't be a combination of monolith and microservices.
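
      The routing step of the strangler pattern can be sketched like this (a toy version: a real setup would live in the load-balancer config, and the paths and service names here are made up):

```python
# Toy routing table for the strangler pattern: everything goes through one
# front proxy; extracted paths are sent to new services, and the rest
# stays with the monolith.
ROUTES = [
    ("/reports", "http://reports-service"),  # first piece broken out
    ("/search", "http://search-service"),
]
MONOLITH = "http://monolith"

def upstream(path: str) -> str:
    # First matching prefix wins; unmatched traffic falls through to the
    # monolith until that functionality is extracted too.
    for prefix, target in ROUTES:
        if path.startswith(prefix):
            return target
    return MONOLITH
```

      Each extraction is then just one new entry in the route table plus the removal of the corresponding code from the monolith - callers never notice.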

  15. Anonymous Coward
    Anonymous Coward

    maybe because microservices were never intended to solve a technical problem - but a management problem. if anything they create technical problems.

    there isn't anything you can do, in a technical sense, using microservices that you can't do using a great big stack. if anything, great big stacks have inherent security and structural benefits.

    microservices help you align different systems that have different change and release schedules, and often competing requirements. they're intended to help solve problems with management.

    it's always harder to corral than to control.

  16. W.S.Gosset Silver badge

    Mac OS X example

    > applications deployed as a single process, or monoliths, can be modular in their design

    Mac OS X's core is, at devel time, a micro-kernel+kernel structure; at release time, a single blob.

  17. W.S.Gosset Silver badge

    "nothing new – it's an idea "from the early 1970s" "

    Older than that, methinks.

    In my experience, though, all the various changing memes have been merely current-hard+software-circs snapshots of the larger & unchanging oldiebutgoodie: "Common Sense": Keep moving parts together, separate at changes of "speed" (eg, scope, commonality of function, cost of infrastructure, frequency of user changes, etc).

    Those separation decisions are often based on net results of externalities so can change over time as they do; eg huge rise in network speed:cost ratio, huge drop in cpu cost allowing users to have a mainframe each (in their pocket), drop in cost of various levels of virtualisation, etc etc etc.

    Closest label I've seen to this/commonsense is "Responsibility Driven Programming".


Biting the hand that feeds IT © 1998–2022