Nvidia says microservices will drive a SmartNIC into every server

Nvidia believes another wave of growth lies ahead of it as microservices create demand for faster data centre networks and therefore a need to offload security and networking chores from hard-pressed CPUs. The company’s numbers are already surging: today’s Q3 results announcement revealed record revenue of $4.73 billion, a 57 …

  1. StrangerHereMyself Silver badge

    Haven't you heard?

    Microservices are dead.

    Monoliths are the new black.

    1. Steve Channell
      Meh

      Re: Haven't you heard?

      Microservices done right can be packaged as a monolith or distributed - the pattern here is hybrid kernels that are logically microkernels, but with key services running within the kernel (Windows NT, Darwin). SpringBoot is a reasonable example, but faster Inversion of Control containers are just as good.

      The architectural issue is to use an efficient binding that allows a gRPC service to be called directly, with or without the network stack, plus facade-based security tokens.

      Kinda like "back to the future" with CORBA vs DCOM/ORPC

      1. Anonymous Coward
        Anonymous Coward

        @Steve Channell Re: Haven't you heard?

        Uhm, that's the point of microservices: to be packaged into a larger solution.

        Not a big fan of Spring. There are other ways to do AOP if you know what you're doing.

        And while you're considering... gRPC ... which is interesting... there are other things that make the cards more compelling.

        Again, have to post anon for the obvious reasons.

        1. Steve Channell
          Unhappy

          Re: @Steve Channell Haven't you heard?

          The clue is in the "micro" part of microservice: you assemble an (app) service from multiple (micro) services, but for many people that means a Docker container deployed to K8s, where every inter-process message is passed through the network stack and fabric in a loosely coupled, (hopefully) strongly cohesive way. When a service consists of multiple layers of services, network latency becomes the performance limiting factor, compounded by AuthN/AuthZ and encryption between containers. This can be addressed by [1] network offload OR [2] packaging micro-services together to avoid the network hop.
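          A back-of-the-envelope sketch of that compounding effect, in Java (all figures are assumed, illustrative values, not measurements): each inter-container hop pays network, TLS and AuthN/AuthZ overhead, so a request crossing several layers pays it several times over, while co-packaged services pay roughly nothing per call.

```java
// Illustrative sketch of how per-hop overheads compound across service layers.
// All numbers are assumed, hypothetical values in microseconds.
class LatencySketch {

    // Cost of one inter-container hop: network RTT + TLS + AuthN/AuthZ check.
    static long perHopMicros(long networkRtt, long tls, long authCheck) {
        return networkRtt + tls + authCheck;
    }

    // A request traversing `layers` services pays the hop cost at every layer;
    // co-packaged (monolithic) deployment pays ~0 per in-process call.
    static long totalOverheadMicros(int layers, long perHop) {
        return (long) layers * perHop;
    }

    public static void main(String[] args) {
        long hop = perHopMicros(100, 50, 30); // 180 us per hop (assumed)
        System.out.println("4 layers, networked:   " + totalOverheadMicros(4, hop) + " us");
        System.out.println("4 layers, co-packaged: " + totalOverheadMicros(4, 0) + " us");
    }
}
```

          Network offload (option [1]) shrinks the per-hop figure; co-packaging (option [2]) removes the hops altogether.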

          The point of IoC is that the interface instance can be a proxy, or a direct object reference. Aspect Oriented Programming (AOP) is fundamentally a stupid idea (but a neat way to retrofit functionality to a defective architecture) because it imposes a generic marshalling layer. I share the concern about Spring, mostly because it clings to AOP.

          The point I was making is that IoC decouples the design from the configuration, allowing a choice of small (separately deployed) services OR monolithic deployment. In most non-trivial cases AuthN/AuthZ will be co-located, logging handled by a proxy, and Service-Bus via network. When gRPC is used properly (protocol buffers + flyweight) network transmission is async to serialisation (making it faster than most alternatives).
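          A minimal sketch of that IoC point in Java (names like PricingService and Wiring are hypothetical, not from any real framework): the caller depends only on an interface, and configuration decides whether the binding is a direct object reference (monolithic deployment) or a proxy that would marshal the call over the network.

```java
// Minimal IoC sketch: the consumer code is identical whether the binding is a
// direct in-process object or a remoting proxy. All names are hypothetical.
interface PricingService {
    double priceOf(String sku);
}

// Direct object reference: used when services are co-packaged in one process.
class LocalPricingService implements PricingService {
    public double priceOf(String sku) {
        return 9.99; // stand-in for a real lookup
    }
}

// Remoting proxy: same interface, but in a real deployment it would marshal
// the call (protocol buffers + auth token) over the network. Stubbed here.
class RemotePricingProxy implements PricingService {
    public double priceOf(String sku) {
        return 9.99;
    }
}

// The "container": configuration, not consumer code, picks the implementation.
class Wiring {
    static PricingService bind(boolean monolithic) {
        return monolithic ? new LocalPricingService() : new RemotePricingProxy();
    }
}

class Basket {
    public static void main(String[] args) {
        // Flip the flag and the deployment changes; the calling code does not.
        PricingService pricing = Wiring.bind(true);
        System.out.println("widget costs " + pricing.priceOf("widget-42"));
    }
}
```

          Flipping the wiring flag changes the deployment without touching the consumer - which is the decoupling of design from deployment described above.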

          Whether Nvidia can leverage the DPU to push Mellanox switches is another matter.

          1. W.S.Gosset
            Thumb Up

            Re: @Steve Channell Haven't you heard?

            Both of you have valid points, but have different perspectives, at least in terms of explanation/discussion.

            I would liken the larger point (ie, where do you draw the line for normal discussion on technical details, on theoretical internal structures vs practical emergent System) to an old argument re MacOSX.

            Is it Microkernel or Monolithic Kernel?

            The answer is : Yes.

            MacOSX is a microkernel structure at dev time; a monolithic structure at build/run time.

            That is, for an intra-Apple OS coder, the code structure and hence direct usability is microkernel. For any ex-Apple coder, or non-kernel intra-Apple coder, or any user, it's effectively a Linux-style monolithic kernel.

            So... you're both right, it's just that one of you is standing on his left foot and one of you is standing on his right foot.

            1. Steve Channell
              Thumb Up

              Re: @Steve Channell Haven't you heard?

              The reference to microkernels was an example of how loosely coupled systems evolve to include common core capabilities in every process/service to address performance issues.

              Micro-services architecture is what we used to call service-oriented architecture before J2EE bundled all the services into a container. The key point is that microservices allow independent changes without the need for monolithic quarterly change cycles: there is no reason why the { caching, web, authentication, personalisation, inventory, basket, pricing, recommendation, purchase, warehouse, shipping, review, accounting, analysis } services should need the same infrastructure or change-management - three key considerations:

              1) The JVM/CLR stacks need to change quickly for web-facing services, which would impose a heavy testing cycle on purchase/shipping in a monolithic deployment.

              2) Services touching payment systems must be thoroughly reviewed and tested to avoid crime.

              3) Presentation tiers need to scale out with load, but other services need to scale up with load.

              The architecture driver is not a design pattern, but governance, scalability and performance. My point is that micro-services are not a design pattern but a deployment pattern - IoC allows you to decouple design from deployment, deploying only the parts you need.

              There are monolithic deployment architectures (J2EE, mainframe), pure micro-services deployment architectures (K8s containers), and hybrid deployment architectures where some standardisation facilitates performance optimisation.

      2. StrangerHereMyself Silver badge

        Re: Haven't you heard?

        I believe you are confusing microservices with microkernels.

  2. Anonymous Coward
    Anonymous Coward

    Sorry Nvidia. Not exactly.

    Posting Anon for the obvious reasons...

    For many use cases, this is a nice to have.

    Having the capability to offload some stuff to the nic is a good thing.

    However, it has to integrate into the overall infrastructure picture and it has to also add value that exceeds the cost of the card.

    Enhanced security becomes the first 'low hanging fruit' and there are others I can't say...

    Intel, AMD are also players in this space.

  3. Anonymous Coward
    Anonymous Coward

    They are putting Ampere chips on some of their latest NICs ... no wonder there is a bloody RTX GPU shortage :(
