React team observes that running everything on the client can be costly, aims to fix it with Server Components

Facebook's React team has previewed Server Components, allowing developers to write code that runs on the server, speeding up data access and reducing the amount of code that has to be downloaded by the web browser. Should applications run most of their code on the client, or on the server? It is a never-ending question and …

  1. Anonymous Coward

    So React is moving to an MVC type architecture? I thought it already had that in part with Flux?

    1. Graham Dawson Silver badge

      One of the attractions of React is, or was I suppose, that it wasn't a full stack solution, but a UI framework. A lot of my work has been using React with whatever datastore and data source was most appropriate to the situation at the time (most recently, apollo-graphql, which is rather nifty, but also flux + some random API) rather than being tied to one blessed solution.

      I'm not entirely sure if that's what they're discussing here, though. It feels like some hybrid mockery of server-side rendering and API/datastore, which makes me worry about possible merging of concerns and a loss of flexibility.

      I feel like the real problem they're trying to solve isn't actually architectural. As far as React is concerned, the UI framework is a solved problem, with only incremental tweaking to keep up with standards from here on. Maintenance isn't nearly as glamorous or intellectually stimulating as new feature implementation, so they're inventing new features to creep towards. Server-side is the most obvious, but it's also a solved problem, using existing bundlers and compilers, like Webpack and Babel. This new server component system won't simplify things, because it will still have to rely on either implementation in an existing server-side framework such as Express, or on existing bundler/compiler toolchains, or it will have to become a complete replacement for all of them at some level.

      But then, the javascript ecosystem is never one you could accuse of not re-inventing the wheel every five minutes...

      The issue is ultimately one of "not invented here". They want to control the entire stack for their own purposes, rather than thinking about the community as a whole. React as a UI framework is great. React as a complete stack becomes overly complicated (yes, you can laugh here), unwieldy, prone to greater error, and isolated from the greater JS ecosystem. That kind of sucks.

      1. Anonymous Coward

        All excellent points. I could not agree more.

  2. Greybearded old scrote Silver badge

    Lightning fast javascript?

    I don't know what you're smoking sonny, but it's seriously messing with your sense of time.

    Or maybe you're just too young to remember what native code could be like. (Could be, not necessarily is.)

    1. Warm Braw

      Re: Lightning fast javascript?

      I'm not sure it's JavaScript (per se) that's the problem - it's had enough performance work done on it for the purpose it serves. It's the operations on the DOM that take the time - if you were looking at the most efficient way to update a user interface it wouldn't be by applying a stream of incremental changes using a text-based mark-up language that then interacts with a text-based styling system that likely causes a significant amount of re-rendering that's then partially invalidated by the next incremental change.

      It would be the same problem with Wasm - which is as near to native code as you're going to get - you still have to call back into the DOM to display the results.

      Historically, the focus has always been on getting load off the servers and shifting it to the browser. If you're happy to do the work on the server, then there's an argument for doing all of it and simply sending a stream of GPU operations to the browser to draw the result. I think that's the way we may be headed, even if we're proceeding by a series of apparently random walks.

      1. Phil O'Sophical Silver badge

        Re: Lightning fast javascript?

        If you're happy to do the work on the server, then there's an argument for doing all of it and simply sending a stream of GPU operations to the browser to draw the result.

        Isn't that the "thin client" (X terminal) model again? The pendulum swings...

        1. Anonymous Coward

          Re: Lightning fast javascript?

          Each generation thinking they know best, going through all the steps over and over, ignoring what was already done.

          1. Steve K

            Re: Lightning fast javascript?

            Agreed - surely Facebook's code is done by now, and they don't need to reinvent any more wheels?

            If they are looking to save money then they could move to pure support and maintenance and let the code tick over?

            1. Strahd Ivarius Silver badge

              Re: Lightning fast javascript?

              And then outsource everything to Capita?

              1. TimMaher Silver badge

                Re: Lightning fast javascript?

                And then Crapita can sell their brand new “Facebook User Control Kernel” team to Asos.

        2. AndyMTB

          Re: Lightning fast javascript?

          So the entire page would be sent as X-directives...? Think of the advertising revenues, blockers and pi-holes wouldn't have a chance!

          1. Anonymous Coward

            Re: Lightning fast javascript?

            "So the entire page would be sent as X-directives...?"

            Then a massive number of discrete tiny X directives for a screen could take too long to transmit - so it would become more efficient to invoke X macros stored on the client. Which seems to be the route that eventually led to Javascript.

      2. Anonymous Coward

        Re: Lightning fast javascript?

        Depends on the application, I guess. I've been building websites since the 90s, both professionally and personally, and it seems like there has never really been a focus on anything other than just progress.

        I mean, back in the day, using iframes was king for layouts. But you quickly became that guy that wasn't using tables yet. Then came div layouts and you were a tool for using tables.

        I think what is happening here is that react is trying to stay relevant.

        Bootstrap is dropping a lot of JS in the next major release and moving to almost pure HTML5. So jQuery and other render blocking dependencies will no longer be required unless you need them and in theory designs based on Bootstrap will look more consistent across platforms.

        This signals a shift towards having less JS bloat in your UIs in the future, hence why React may be trying to move some of that bloat behind the scenes.

    2. J27

      Re: Lightning fast javascript?

      Native code is not an option on the client side of a web application. The closest you can get is WASM, and that's running on a VM that in some cases is actually slower than the JavaScript interpreter.

  3. easytoby

    Server side = not in user control?

    The other advantage (to Facebook) is that server side is outside local European legislation protecting the privacy of individual users. This is how they will overcome e.g. the Apple restrictions on device ID tracking.

    1. Gunboat Diplomat

      Re: Server side = not in user control?

      I really don't think server side is a way to avoid EU regulations

    2. itguy

      Re: Server side = not in user control?

      Er no.

      GDPR and other priv regs don't care where the data sits, just who has control of it and what do they do with it.

      Client side or server side is the same.

    3. Anonymous Coward

      Re: Server side = not in user control?

      They can't bypass regulations, but they can bypass client-side tracking protection, that's true.

      Anyway, once again they are going to re-invent the wheel and "discover" again what has been known for decades about distributed systems.

      1. Anonymous Coward

        Re: Server side = not in user control?

        Each generation thinks they know better than the previous, ignoring what was done. Ending up making exactly the same mistakes and 'pioneering' what the previous generation did.

    4. flatline2000

      Re: Server side = not in user control?

      Great more node.js

  4. The Mighty Spang

    am i missing something?

    "he calls the example function FetchAllTheStuffJustInCase() – it is efficient, but an ugly solution, particularly if at a later date the design changes and not all the data is needed. The alternative, he said, is a separate fetch for each component, which impacts performance."

    erm couldn't you do something like

    FetchMeTheStuffIAskFor( ["/api/user","/api/stocklist","/api/basket"], [ params1, params2, params3] ) ?

    and an api on the other end which just calls all the apis requested internally and assembles the results?
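
    That sketch is workable; here is roughly what such a batch endpoint could look like. This is a hedged illustration only: `fetchMeTheStuffIAskFor` and the stubbed internal APIs are the commenter's hypothetical names, not anything React proposes.

    ```javascript
    // Hypothetical internal endpoints, stubbed as async functions.
    const internalApis = {
      "/api/user": async (params) => ({ id: params.id, name: "example" }),
      "/api/stocklist": async () => ["widget", "gadget"],
      "/api/basket": async (params) => ({ items: [], userId: params.id }),
    };

    // One server-side entry point that fans out to the requested APIs
    // in parallel and assembles the results keyed by path.
    async function fetchMeTheStuffIAskFor(paths, paramsList) {
      const results = await Promise.all(
        paths.map((path, i) => internalApis[path](paramsList[i] || {}))
      );
      return Object.fromEntries(paths.map((path, i) => [path, results[i]]));
    }
    ```

    The client makes a single round trip; the fan-out to the individual APIs happens server-side, where latency to the data sources is low.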

    1. Anonymous Coward Silver badge

      Re: am i missing something?

      You could indeed, but wouldn't it be nice if the server knew what stuff you're going to need and included it already when sending the main page? Saves an extra round trip.

      Hell, you could just call this a Hypertext Pre-processor. But to be popular with geeks of a certain vintage you need to make it a recursive acronym: PHP: Hypertext Preprocessor. I reckon I'm onto something here...

      1. Anonymous Coward

        Re: am i missing something?

        Because the HTML is somewhat larger than just the data plus the code to render it client-side, maybe? What's the problem with doing several simultaneous requests for data in any case? It would be interesting to see how each scales.

    2. sabroni Silver badge

      Re: am i missing something?

      That's how GraphQL does it: here's the shape of the data I want back and the top-level key.

      Doesn't really fix the problem of a single call doing lots of disparate jobs though.
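
      For illustration, the core GraphQL idea (the caller describes the shape it wants; the server returns only those fields) can be sketched in a few lines. This is not GraphQL itself, just a toy `pickShape` helper with made-up data:

      ```javascript
      // Not real GraphQL, just the core idea: the caller sends the shape
      // of the data it wants, and the server returns only those fields.
      function pickShape(data, shape) {
        const out = {};
        for (const key of Object.keys(shape)) {
          if (!(key in data)) continue;
          out[key] =
            shape[key] === true ? data[key] : pickShape(data[key], shape[key]);
        }
        return out;
      }

      const user = { id: 1, name: "Ada", email: "a@example.com",
                     basket: { items: ["x"], total: 9.99 } };

      // "Give me just the name and the basket total."
      const result = pickShape(user, { name: true, basket: { total: true } });
      // result: { name: "Ada", basket: { total: 9.99 } }
      ```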

    3. bombastic bob Silver badge

      Re: am i missing something?

      I think the point of mentioning FetchAllTheStuffJustInCase() was as an example of DOING IT WRONG, and yet this kind of *BAD* *PROGRAMMING* appears to be WAY more common than anyone might want to admit: from the horrendous amount of unnecessary JavaScript in web pages, and the use of GINORMOUS (and generally unnecessary) 3rd party JavaScript library downloads from various content servers scattered around the web, to the length of time it takes to populate a "File Open" dialog box with a list of more than a handful of files... [and gnome-based and even mate-based desktops, I'm talking about *YOU*, too].

      NATIVE CODE is nearly ALWAYS better. Do only what's needed, and do it on the server. And it should be EFFICIENT code, and not "grab everything _AND_ the kitchen sink, 'just in case'". You don't need to thumbnail every file before you can select one, as an example, especially when a directory contains HUNDREDS or even THOUSANDS of files... example, do a gnome or mate 'file open' on files in /usr/bin - see what I mean?

      At some point the server operators will STOP stealing CPU from the clients and realize how inefficient their processes have been, when they NECESSARILY move it to the server side and discover the resources that doing things "that way" actually consumes!!!

  5. Buzzword

    Turning UI devs into Full-Stack devs

    One language (or framework) to rule the entire stack. This helps balance out resources across front-end and back-end. How many times has a front-end dev's work been delayed because the back-end devs are behind; or vice-versa?

    Problem is, this doesn't solve the next level down in the stack. Somebody still needs to manage database changes, server configuration changes, auth model changes, hardware; and all the other weird and wonderful things further down the stack.

  6. Howard Sway Silver badge

    Client / server architectures

    are like types of customers. Thin clients, thick clients, fat clients, those who think the server should do everything for them and those who think the client should control everything................ you'll meet them all eventually if you stay in the industry long enough.

    1. Blue Pumpkin

      Re: Client / server architectures

      And if you stay long enough you'll meet them several times round.

      Each time with a "brand new, totally innovative, never before seen" sticker on them ....

      1. flatline2000

        Re: Client / server architectures

        Every time..

  7. Robert Grant

    Server Components are not the same as server-side rendering and there are no ugly screen refreshes as users navigate a page

    Server-side rendering doesn't imply screen refreshes like that. SSR is an initial server-side render of React (or equivalent) code into HTML, with all the events etc attached, but still works like React (or equivalent) once it hits the browser. You're thinking of old-school request/response full page refreshes.
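
    A toy sketch of that distinction, with React's real APIs deliberately left out: here a "component" is just a function returning HTML, and "hydration" is reduced to attaching handlers to markup that already exists.

    ```javascript
    // Toy component: a function from props to an HTML string.
    const Counter = (props) => `<button id="inc">Count: ${props.count}</button>`;

    // "Server side": render to plain HTML once, so the first paint
    // arrives as markup rather than as a script that builds the page.
    function renderToString(component, props) {
      return component(props);
    }

    // "Client side": attach behaviour to the markup the server already
    // sent, instead of rebuilding the DOM (roughly what React calls
    // hydration). `dom` is a plain object standing in for real DOM nodes.
    function hydrate(dom, handlers) {
      for (const [id, fn] of Object.entries(handlers)) {
        dom[id] = fn; // stand-in for addEventListener on a real node
      }
      return dom;
    }

    const html = renderToString(Counter, { count: 0 });
    // html: '<button id="inc">Count: 0</button>'
    ```

    No full-page refresh is involved at any point; the server's render and the client's behaviour share the same markup.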

  8. Anonymous Coward

    Anyone with serious Enterprise development experience knows the expense of transmitting data to a client for processing DWARFS the cost of a complex RPC on the server that can do the IOs close to the databases.

    But the internet kids have to learn old technology all over again, because they're in love with the buzzwords and don't realize it has all been done before under different terms with different languages. I've been coding since the 80s; it is surprising how little has really changed when you get to the nuts and bolts of the design issues and caveats involved.

    1. FILE_ID.DIZ


      they're in love with the buzzwords

      Sorry, I initially misread this as "they're in love with the buzzsaw". It still fits your comment well.

    2. cbars Silver badge

      Right. This is the 5th or 6th comment I've seen like this so apologies for picking on yours but:

      I've been in this game a few years, so am by no means a young 'un, but I've heard this crap for my entire career, and while I agree that this is a futile dance of circles, I do not agree it's youthful arrogance. Want to know why "kids these days" don't know about what went on in the 80s...?

      Your documentation was shit.

      That's it. Yours was, the companies you worked for didn't bother either, and no one has preserved any of what docs there were because they are crap.

      *The* biggest difference, and the best contribution a developer can make is to write solid docs.

      That's it.

      When I join a project and find a good doc, it doesn't really matter how crap the code is, it will live on, because I understand the intention and can have confidence in any changes (and assure them with targeted investigations). If there are no docs, no way does anyone have any other option than to back away slowly and start building something to fit the current state of play elsewhere.

      That's because you're paid to get shit working, not reverse engineer some code that the author was "so proud of" that they couldn't even be bothered to put their name on it.

      *That* attitude scales, and spreads, and is the root of all evil.

      So it's great to hear you have so much enterprise experience and you're also fighting the good fight to ensure all your KDDs get documented, your architecture and quick fixes are in a centralised and indexed change log etc; just don't blame the "kids" for not being able to follow the spaghetti dungeons that idiots built.

      Honestly, I once heard some old boy brag that he built and deployed a business critical application within 3 days, and if that doesn't shock you then there is no hope.

      You know what "real" engineers and architects do for the majority of the time? Yeah, it's documentation. If you're not doing that, you're a brick layer, and I hope you're following someone else's instructions.

      (Once again, not personal! "You" is generic and at the other comments too :))

      Edit: also, no disrespect to brick layers, but you won't find a bricky saying he's an architect

      1. Andrew Commons

        "Working software over comprehensive documentation"

        You reap what you sow.

        I was coding for a living way before this when comprehensive documentation was part of the process. The process was applied on a per project basis and the problem that wasn't addressed very well was centralising the documentation so that it reflected the system as it evolved over several decades.

        The documentation was far from shit, it was the documentation management that sucked. Now you don't even have the documentation to mismanage.

        1. cbars Silver badge

          Yes indeed, managing documentation is a key part of it. By good docs, I don't mean documenting the names of variables, or internal data types, or view layout wireframes etc etc

          I mean appropriate documentation that documents interfaces, specifies assumptions and most critically of all explains the business objectives of the software.

          "What does this do?"

          "Don't know, but if we don't run it then X (downstream) breaks"


          Yes, Jesus, obviously a working system comes first, but the amount of code I see that doesn't even have a README.... how can you tell it's working if you haven't got the requirements documented...?

        2. bombastic bob Silver badge

          The official way of top-down coding back in the day was, basically, write the core of the documentation FIRST, and then implement to it. So at least the CORE docs would be worthwhile and accurate, because the design was BASED on them.

          Obvious implications are obvious.

      2. Martin M

        Not disagreeing on the importance of good documentation, although would rather see a small set that is completely current and addresses the load bearing walls of the architecture than reams of useless outdated rubbish. Similar at code level. The very best code doesn’t need much at detail level because it’s obvious what it’s doing. But good code is quite rare.

        However, this isn't really about specific documentation for a particular system - it's about fundamental distributed systems design knowledge. It's going to be relevant forever, because information can't travel faster than the speed of light and hence latency will always be important for globally distributed systems. It's not like this isn't written down either, see

        which links back to a 2002 book, and it was widespread knowledge well before then, right back to Deutsch’s seven fallacies of distributed computing in 1994. Yet people somehow manage to graduate from Computer Science courses or have long IT careers without knowing about this or e.g. ACID, locking/deadlocks, consensus, cache invalidation, message sequencing, idempotency, CAP etc.. I’m still not quite sure how.

        In 2003, on a project with a global user base, as well as having explicit architectural principles covering interface granularity I insisted on a hardware WAN emulator in front of the local dev/test servers. Turned up to Singapore latency (about 300ms IIRC). People stop designing inefficient interfaces/using them badly quite quickly when they personally feel the pain :-)
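
      The arithmetic behind that pain is simple enough to sketch. This is a deliberately crude model of my own, not anything from the project described above: a chatty interface pays one round trip per call, a batched one pays a single round trip.

      ```javascript
      // Crude model, all times in ms: each call costs one network round
      // trip (rttMs) plus some server-side work (serverMs).
      function sequentialMs(calls, rttMs, serverMs) {
        // Chatty interface: every call waits for its own round trip.
        return calls * (rttMs + serverMs);
      }

      function batchedMs(calls, rttMs, serverMs) {
        // One round trip; the server does all the work close to the data.
        return rttMs + calls * serverMs;
      }

      // At the ~300 ms Singapore RTT mentioned above, 10 dependent calls:
      console.log(sequentialMs(10, 300, 20)); // 3200
      console.log(batchedMs(10, 300, 20));    // 500
      ```

      The gap only grows with latency, which is why a WAN emulator in front of the dev servers makes the lesson stick.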

      3. bombastic bob Silver badge

        You know what "real" engineers and architects do for the majority of the time? Yeah, it's documentation

        Sadly, no. [although I'm doing docs at the moment, seriously]

        I run into 'lack of proper documentation' a LOT. I think most others do as well. If "Stack Overflow" is the best source for information on a programming language or platform, then the official documentation is either poor quality or missing.

  9. sabroni Silver badge

    Back in the day it all ran on the server. The more clients you had, the more resources you needed.

    Nowadays a client brings compute capacity with them. If you can design your system to take advantage of that it will scale better.

    But of course anyone with serious Enterprise development experience knows that....

    1. Ilsa Loving

      That's a big if.

      I can always tell the people that really haven't thought through what they're doing because they don't restrict their queries, preferring to dump the entire table, and then they wonder why performance is terrible.

  10. Ilsa Loving


    So not only have we come full circle, but they come up with a ridiculous hack to make it work. There are so many already excellent server side tools available that would be better suited to that kind of work, but nope, NIHS.

  11. Elledan

    Barking up the wrong tree

    Maybe it's just that many people are on zippy internet connections without data limits, but speaking as someone who still remembers when utilities that checked the approximate load time for a specific website page one was working on were common, I think they're slightly delusional.

    Yes, a lot of people are on 4G LTE links and 100+ Mbps VDSL/cable/fiber, but there are plenty of people (even in questionably developed countries like the US and UK) for whom the main issue with today's websites isn't whether all the processing bloat runs on the server or client side.

    Even minified, JavaScript is massive. JSON and text-encoded image data bloat up what should be a few kB into hundreds of kBs, if not MBs. Similarly, processing demands on the client system have skyrocketed. Case in point, the Celeron 400 system I was using with Windows 2000 around 2000. It had 64 MB of RAM. These days you cannot get a browser to even run in that much RAM, let alone open a single page. Even when just browsing pages that should be just text and images.

    It's easy to say that 'these days we got the bandwidth and computing resources', but that depends on everyone upgrading to new, zippy systems every few years. For many families and adults, all they have is an older PC (think 2010 vintage), or an older (budget) phone. I'm one of those lucky ones who happen to have a tricked out PC (2015-vintage, still with 32 GB RAM), but even with Chrome open on just the Netflix site, the browser is using about 1 GB of RAM already.

    I happen to have a 2010-era PC around as well (quad-core Intel pre-Core series CPU, 8 GB RAM), which can still browse the web, but it's already a painful experience there. Want to even open Facebook on it? Better grab a coffee. Want to actually use Facebook, or Twitter, or anything else 'web-app'? Hope you are on the good relaxation meds because you will need them.

    Maybe I'm just old and grumpy and upset that things are changing, but from where I am standing as someone who did a lot with web development and web accessibility, Facebook isn't even barking up the right tree here when it comes to addressing the actual problem. This is just plain wankery, pretty much. It's just text and images. The dynamic part is nothing fancy, we did that in the late 90s already using hidden frames (later iframes) and the basic JS of the era. And it ran fine on a Pentium 200 with whatever crusty IE or NS was on it.

    I'm not seeing what we are gaining relative to that, except the need to buy new PCs and smartphones more often to keep up, while those who cannot afford to spend money on these things fall behind.

  12. Anonymous Coward

    This sounds awfully like JSF (JavaServer Faces), which turned out to be a great way to cripple web interfaces with too many requests flying back and forth. It assumed multiple requests for smaller bits of data were going to scale, versus working out what a page needed up front and getting it in the initial request. In the Java world most people have gone back to the latter approach, with subsequent requests using AJAX to just fetch or update the bare minimum of data.

  13. ecofeco Silver badge

    Rube Goldberg

    ...would be proud.

  14. RobLang

    Solution to the problem isn't always more code

    I accept that in a microservice world, one team owns the entire stack from DB to browser and having the same tech running all the way through is good for the team if that's the tech they want to choose. Fine with that.

    However, the problem here seems to be the efficiency of multiple calls to the server. In an HTTP 1.x world, that's certainly the case, but with HTTP/2 arriving (eventually), chatty calls to the server have less overhead. Isn't that the problem they should be solving rather than adding more code?

    It feels to me like the browser manufacturers are becoming more opaque, like they were in the late 90s, and not really co-operating with developers. I fear that silos are going to lead to more solutions like this.

  15. trevorde Silver badge

    Just when you thought it couldn't get any worse

    global recession

    new Javascript framework

  16. ozzie252

    Where have I seen this before.....?

    Ah yes.... server-side Blazor

    1. weirdbeardmt

      Re: Where have I seen this before.....?

      This. Whatever reasons they might be giving for doing it, I'll bet a large part of the motivation is that Blazor will likely make their entire framework redundant very soon.

  17. Anonymous Coward

    Something is fundamentally wrong

    Back in 2008, I made an application which showed some graphics. It wasn't even native or using hardware-accelerated graphics; it was .NET and bog-standard WPF. It ran well enough (albeit on Windows only) on 2008 machines, and it took me about a year, working half time.

    Years later, the client wanted to port it to the browser, so it could run anywhere. They brought in a full time developer in addition to myself, so that's three times the resources, and it took us over three years to reach feature parity, so dev time was about 10x.

    In fairness, part of that is because we've had to redo the whole UI once, because the new and shiny framework we were using got dropped in favor of the _new_ new and shiny. Just keep using the original framework? Can't, because it depends on stuff that depends on stuff that depends on stuff that has been found to have critical vulns; if we update it then everything breaks.

    Meanwhile, the WPF version from 2008 is still working, it doesn't require a web server, it actually works offline (no, service workers don't really work), it doesn't depend on shit that this month may or may not be sending your data to China, and it's roughly 10x faster using 10x less RAM. Sure, it only works on Windows, but guess what? If I had spent all that time just making a native version for every single OS that the client's users were interested in, I could have done it myself, I'd have finished earlier, and we'd have a better product. Hell, if we had kept up development of the .NET version, at this point we could probably run it on Linux and Mac with just minor tweaks.

    I know, I know, it's not quite that bad, most of the time. But it sure feels that way a lot of the time. Not even WASM is looking like it'll solve this. There's something fundamentally _wrong_ with web development, and I fear it's not going to get fixed through incremental changes.

    1. chuBb.

      Re: Something is fundamentally wrong

      The problem is cultural, I think. The web team always seems to be the youngest; experience and longevity push you towards the back end and processes. Certainly that's how my career trajectory is headed, that and UX got v boring. Having lived through css -> xhtml -> ajax -> fuckit-justput-<doctype>-at-the-top, I just can't be arsed. Give me a good systems integration challenge instead...

      1. Anonymous Coward

        Re: Something is fundamentally wrong

        Fair, except that this web bullshit is flooding backend and processes too. I still don't get why anyone ever thought Node was a good idea.
