GraphQL a cut above the REST, say query lang's fans: Airbnb, Knotel, others embrace the tech

At the GraphQL Summit in San Francisco on Wednesday, Matt DeBergalis, co-founder and CTO at data plumbing biz Apollo GraphQL, urged companies to appoint a data graph champion to help ease the implementation of GraphQL, a query language for fetching data. It's not yet a given that organizations want to implement GraphQL. But at …

  1. Korev Silver badge

    "When we were looking at designing front-end applications, they only needed a small subset of that data," he said. "Most of them need the address and the lat/long, for example. Sending back all the information we have about every building every time would be very, very onerous. And so for us, it made sense to use GraphQL where the client could select exactly what data they wanted to get back."

    Potential dumb question: this sounds like exactly the kind of thing that any SQL database could handle well; what advantages are there to using GraphQL or similar in this context?

    1. Tom 38 Silver badge

      GraphQL is like an aggregator of databases - many (maybe most? I haven't seen enough personally) are backed by an RDBMS. The benefit of GraphQL is that you describe the data that is available and define how to get it from various data stores on the backend, and then the frontend user has more flexibility in requesting what data they are interested in for that specific page. The aggregator is then responsible for assembling the pieces of information and returning it all to the client in one response. In a typical REST service architecture (say a forum), you might make one request to get info on the user, one for their posts, one for their friends, and so on. With GraphQL, that's one query, even if those things live in different data stores.

      The benefit is that you have one language for making these requests and that you make fewer requests in order to get the information that the frontend needs. Frontend devs only need to know what data is available from GraphQL, and backend devs only need to be able to describe that data to the aggregator.
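The aggregator idea above can be sketched in miniature. This is a hypothetical Python toy, not any real GraphQL library: the forum data, the three "stores", and the `resolve_user` helper are all invented for illustration. One request is resolved against several separate backend stores and assembled into a single response, returning only the fields the client named.

```python
# Toy sketch of the "aggregator" idea (hypothetical forum data; store
# names and the resolve_user helper are invented for illustration).

USERS = {1: {"name": "alice"}}                      # e.g. rows in an RDBMS
POSTS = {1: [{"title": "Hello"}, {"title": "Hi"}]}  # e.g. a document store
FRIENDS = {1: [2, 3]}                               # e.g. a graph store

def resolve_user(user_id, selection):
    """Return only the fields named in the client's selection set,
    fetching each one from whichever store holds it."""
    result = {}
    if "name" in selection:
        result["name"] = USERS[user_id]["name"]
    if "posts" in selection:
        result["posts"] = POSTS[user_id]
    if "friends" in selection:
        result["friends"] = FRIENDS[user_id]
    return result

# Roughly what { user(id: 1) { name posts { title } } } would ask for:
print(resolve_user(1, {"name", "posts"}))
```

A real server would use a GraphQL implementation and a parsed query document rather than a Python set, but the shape of the work - one request in, several stores consulted, one response out - is the same.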

    2. Unep Eurobats

      It lets the client decide

      If client coders just want a small subset of the data, then traditionally they have to ask the developer of the server-side REST API to provide a method that returns just this subset. If there's only a standard GET method that returns everything, then the client has to incur the expense of calling that and then discard most of the response.

      By giving the client the ability to select just what they want from a standard endpoint, GraphQL has the potential to speed up client/server development and/or interaction. However, there's the risk that in practice this approach will just transfer complexity from the API to the client.
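The over-fetch trade-off described above can be shown with a made-up building record, echoing the article's address/lat-long example; all field names and values here are invented for illustration.

```python
# Sketch of the over-fetch problem: a plain REST GET returns the full
# record and the client discards most of it, while a GraphQL-style
# request names just the fields it wants (record contents invented).
import json

BUILDING = {
    "id": 42, "address": "1 Example St", "lat": 51.5, "long": -0.1,
    "floors": 12, "year_built": 1908, "owner": "Example Holdings Ltd",
    "hvac_notes": "serviced quarterly; filters replaced 2019",
}

def rest_get():
    # Plain GET: everything we have about the building, every time.
    return json.dumps(BUILDING)

def graphql_get(fields):
    # The client names just the fields it wants; only those are sent.
    return json.dumps({k: BUILDING[k] for k in fields})

full = rest_get()
slim = graphql_get(["address", "lat", "long"])
print(len(slim), "<", len(full))  # the selected response is much smaller
```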

      1. Korev Silver badge

        Re: It lets the client decide

        Thank you both for the explanation.

        As it's Friday -->

  2. Reg Reader 1


    Thanks for asking that question.

    @Tom 38 and Unep Eurobats

    Thanks for those explanations.

  3. Claptrap314 Silver badge

    SQL over JSON

    @Korev You're not wrong.

    Twelve years ago, I got so frustrated with our SOAP implementation that I created an API that took a string (a Rails object query) and returned a string (JSON of the result) in about an hour. One of those absolutely-only-dev things that you do to blow off steam & speed development.

    I have worked with the GitHub REST & GraphQL APIs, running on the standard libraries, for Ruby and for Go.

    GraphQL permits users to specify in a single query, and to a high degree of detail, exactly what data they need. This is REALLY GOOD for the API providers because 1) it reduces load on the servers and 2) as mentioned, there is no dealing with the cost of fielding, vetting, accepting, designing.... new APIs.

    One thing that was always a problem, however, gets a lot worse with GraphQL. A user can request a LOT of data with a simple request. Strategies to limit the amount of data returned per call, and to continue a prior call, are much more sophisticated under GraphQL.
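The "limit the amount of data returned per call, and continue a prior call" strategies mentioned above are usually done with cursors. A minimal Python sketch, loosely following the Relay-style first/after convention; the function name and page shape are invented for illustration, with no real GraphQL machinery involved.

```python
# Sketch of cursor-style continuation: cap each page at `first` items
# and resume from an opaque cursor on the next call (names illustrative).
ITEMS = [f"item-{i}" for i in range(10)]  # stand-in for server-side data

def page(first, after=None):
    """Return up to `first` items starting just past the `after` cursor."""
    start = 0 if after is None else ITEMS.index(after) + 1
    chunk = ITEMS[start:start + first]
    return {
        "nodes": chunk,
        "end_cursor": chunk[-1] if chunk else None,  # opaque in real APIs
        "has_next_page": start + first < len(ITEMS),
    }

p1 = page(first=4)                          # first call caps the result size
p2 = page(first=4, after=p1["end_cursor"])  # continuation of the prior call
print(p2["nodes"])  # -> ['item-4', 'item-5', 'item-6', 'item-7']
```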

    For consumers, it can be good (especially for not requesting unneeded data), however...

    1) I found the GraphQL specification FAR from easy to grok. I seriously want to see about 50 examples to work out just what you are supposed to do.

    2) Working with GraphQL in Go was worse than working with SOAP in Ruby all those years ago. Each call-query requires its own set of class definitions.

    3) There is a huge dependency on the provider doing things right. This is true in general, but for GitHub, GraphQL lacked critical things REST supported & vice versa. Which means I needed to operate two (very) different sets of APIs. I mean three. Because the website allowed stuff you could not do with either REST or GraphQL.

    Not only that, but the base libraries lacked the continuation wrappers needed to preserve the sanity of people making these calls. The REST libraries lacked these as well, but continuation in REST at least feels a lot easier to manage. Of course, given that in Go you have to define a new set of classes for each call, and Go lacks generics, I understand that this might be tricky--all the more reason to have competent people on the provider's end assemble a library that manages that directly.


    Having written this out, I really think straight-up SQL is the correct solution. Of course, the provider needs to sanitize the calls, and impose limits on the number of items returned as well as on server cost. But SQL is FAR easier to work with than GraphQL, and has its advantages. Furthermore, GraphQL IS getting translated into SQL.
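The "sanitized SQL with imposed limits" idea above can be sketched with parameterised queries plus a server-side row cap. This uses an in-memory SQLite table purely for illustration; the table, column, and function names are all invented.

```python
# Sketch of "straight-up SQL, sanitized and capped": parameterised queries
# (no string interpolation of user input) plus a provider-imposed row limit.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE buildings (id INTEGER, address TEXT, lat REAL)")
conn.executemany("INSERT INTO buildings VALUES (?, ?, ?)",
                 [(i, f"{i} Example St", 50.0 + i) for i in range(100)])

MAX_ROWS = 10  # cap imposed by the provider, whatever the caller asks for

def query_buildings(min_lat, limit):
    capped = min(limit, MAX_ROWS)
    cur = conn.execute(
        "SELECT id, address FROM buildings WHERE lat >= ? LIMIT ?",
        (min_lat, capped),  # placeholders keep user input out of the SQL text
    )
    return cur.fetchall()

rows = query_buildings(min_lat=120.0, limit=1000)  # caller asks for 1000...
print(len(rows))  # ...but gets at most MAX_ROWS
```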

  4. Kevin McMurtrie Silver badge


    I seem to recall that REST came about as a way to describe very specific legal operations that could be performed on data through a public API. GraphQL sounds like a generalized approach that could be handled by a small amount of generalized code - until it's time to define the business logic. It also sounds like it would become increasingly difficult to manage public access as new query features are added.

    I also don't like micro-services that exist only to provide internal applications with database access. It's really hard to home-brew a better API than a native database interface. You're better off giving internal apps database access and using shared libraries for common operations.

    1. Claptrap314 Silver badge

      Re: Hackable?

      I don't really think so. If you consider GitHub, there is a pretty clear chain of SQL statements that get generated for each click on the website. In principle, there is no particular reason it would be difficult to expose the relevant tables and relations, subject to the existing access controls for the users. You just have to make it a policy to always do so.

      The security issues are the same for the API as for the website, if you assume basic competence for the dev team.

      1. Kevin McMurtrie Silver badge

        Re: Hackable?

        I'm thinking more about DoS security - creating a large effort for a small response. These can be hard to track down because the response size looks normal. When enough are done in parallel, everyone's query slows down so all response times look similar too.
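The asymmetry described here, a small request causing a large effort, is why many GraphQL providers score a query's estimated cost before executing it and reject anything over budget. A rough Python sketch of that idea; the nested-dict query shape, the "__first" page-size key, and all the numbers are invented for illustration (real servers do this on the parsed query AST).

```python
# Sketch of cost-limiting against "small request, large effort" queries:
# each field costs 1, and a list field multiplies the cost of everything
# selected beneath it by its requested page size.

def estimate_cost(selection):
    """Score a nested selection set before running it."""
    cost = 0
    for field, sub in selection.items():
        if field == "__first":     # page-size argument, not a field
            continue
        cost += 1
        if isinstance(sub, dict):
            cost += sub.get("__first", 1) * estimate_cost(sub)
    return cost

# "friends of friends, 100 each": a tiny request with a big fan-out.
query = {"user": {"friends": {"__first": 100,
                              "friends": {"__first": 100, "name": None}}}}
BUDGET = 5_000
cost = estimate_cost(query)
print(cost, "rejected" if cost > BUDGET else "allowed")
```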
