How to explain what an API is – and why they matter

Explaining what an API is can be surprisingly difficult. It's striking to remember that they have been around for about as long as we've had programming languages, and that while the "API economy" might be a relatively recent term, APIs have been enabling innovation for decades. But how to best describe them to someone for …

  1. amanfromMars 1 Silver badge

    Take a Bow and the Applause if the Cap Fits ....

    El Reg is a great example of a remote, relatively secure and readily available virtual API trading in quite a vast and oft unusual and eclectic array of trades/skillsets negating the need for one to build anything/everything in-house oneself even if it be sought/wanted exclusively for oneself.

  2. John69

    Is the terminology correct here? An application programming interface is a description of how a service works. An application programming application can implement an API and provide a service from an endpoint.

    1. badflorist

      Right, but this article is for people < 30 years old.

      "One early and famous example of an API... Google Maps API."

      That's "early" if it was released while you were in high school, otherwise....

      Not sure which EXTERNALLY complex STRING-based API came first, but X.500 had an overly complex one, and of course SQL has always had an external API that can be at least as complex as the longest string value.

      As an old enthusiast programmer, "API" to me is nearly all internal, using shared memory - drivers, for example. Externally, well, technically 'ls -al | grep whatever' counts as a complex API.

  3. Warm Braw Silver badge

    Beware the wolf in sheep's clothing.

    APIs are hardly new. Early computer systems had "layered products" - teleprocessing monitors and database systems for example - that allowed "companies to access functionality supplied by others". Everyone who has used an e-mail client has used an API. HTTP is an API itself. Nothing revolutionary in principle to see here.

    Arguably, the least important thing about the Google Maps API was the API. Mapping applications existed before Google Maps: what stopped them being more widespread was the cost of licensing the maps. What stopped the applications being interchangeable between different geographical regions was the subtly different projections used by different mapping organisations. Google Maps only enabled "a whole host of innovations" because Google sucked up the cost of mapping the globe and produced a global mapping resource using the same co-ordinate system which it allowed people to use at no cost. The API is simply noise compared with the economic transformation - except that it's the mechanism by which Google retains control over its investment.

    And that's the key thing about the new API economy - you hand over your data and possibly also a payment and get some service in return. But you lose control of your data in the process. While many of these service providers may be benign, they're going to find out a lot about you and your competitors and that could put them in a position of significant power. And, of course, the more third-party APIs you're using the more vulnerable your business is to a technical or financial failure of any one of them - and you're on the hook for future price hikes unless you're prepared to go through the pain of regular service migrations.

    And as soon as anyone can build a solution by plugging together a few building blocks there's no intrinsic value in it, it's simply a race to the lowest margin.

    As for the car analogy, well, all those standard parts have led to just 14 manufacturers dominating the world's automotive industry. So maybe it's all down to money after all.

    1. bombastic bob Silver badge
      Unhappy

      Re: Beware the wolf in sheep's clothing.

      API as a marketeering buzz-term. Now with MORE TIERS!

      This reminds me when $CEO decided that TIERS were good, and MORE of them were better. It's a *FIVE* TIER SOLUTION! I was supposed to implement that. right...

      So, is that "NEW TIERS, NOW with APIs"? Or, is it "NEW APIs, NOW with TIERS"?

      I think *I* am in tears...

  4. Dr Paul Taylor

    screen scrubbing

    In the 1970s APIs were called modular programming. It was the essence of Unix.

    Problem is, the teenagers who implement a lot of the gizmos on the Web care more about making their sites look pretty than providing information to others.

    Besides this, there is a widespread attitude that "no robots must be allowed in", ie under no circumstances should other sites be allowed to build on "my" information.

    For example, when Jamie Jones and I were implementing istopbrexit.info, I thought "we've got the user's postcode and that of this meeting, let's tell them how to get there". A site called traveline.info seemed to be providing passable public transport information, but I couldn't see how to feed postcodes to its interface.

    When I enquired, they told me I had to pay £500 for a license. This is a pretty stupid attitude, because if they provided a simple API, more people would find out about their site (and see their ads).

    As for Google maps, it's only for Californians who never get out of their cars. It lacks any detail for pedestrians. streetmap.co.uk is far superior in Britain and increasingly openstreetmap.org across the globe.

    (We took istopbrexit.info down when the disaster happened, but also because the major anti-brexit organisations were determined to compete with me instead of cooperating, and for personal reasons. Unfortunately, the domain name was taken over by a porn site.)

    1. A.P. Veening Silver badge

      Re: screen scrubbing

      Unfortunately, the domain name was taken over by a porn site.

      I would qualify that as business as usual. Whenever the registration of a website with a decent amount of traffic lapses or it otherwise becomes available, it will be taken over by a porn site.

  5. amanfromMars 1 Silver badge

    Refused Novel Access can be Universally Problematical ...

    ....... for All Denied Future Sensitive Secure Intelligence

    And that's the key thing about the new API economy - you hand over your data and possibly also a payment and get some service in return. But you lose control of your data in the process.

    Another key thing about the newer and the newest of API economies ...... All your data handed over is surely worthy of payments to both raw and rare sources for services to be rendered in return. And whenever well done, is such a reward extremely generous and gratifying and the minimum absolute default awarded for continuing supplies of excellence in the data you control and master which is lost to/in others/one is decided/inclined/advised to deny from or rescind in others.

  6. Norman Nescio Silver badge

    APIs have been around since the dawn of computing

    They are just a documented way of handling the arguments in a subroutine call.

    What's new-ish is the ability to run subroutine calls across a public network*, where you don't necessarily have ownership or control of the far end, and potentially paying for the privilege. This enables monetisation of (micro-)services, and a network effect of many people using the same API operating on common shared data. A bit like one fax machine being useless, two being of limited utility, but connect fax machines to a public telephone network and you suddenly have a new way of sending information around.

    *Remote procedure calls have been around since the early 80s, and were an academic curiosity before then. New-ish in the history of computing. Possibly just discovered by Marketing?
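
    To make that concrete, here is a rough sketch in Python (my choice of language, nothing more): the same documented "subroutine" offered once as a local call and once as a call across a network to an endpoint you don't own. The service URL and field names are made up for illustration.

      import json
      from urllib.request import urlopen

      # The "API" as a documented local subroutine: inputs and outputs, nothing more.
      def distance_km(origin: str, destination: str) -> float:
          """Return the distance between two named places, in kilometres."""
          lookup = {("London", "Paris"): 344.0}   # stand-in for real logic
          return lookup[(origin, destination)]

      # The same contract offered across a public network you don't control:
      # the subroutine call becomes an HTTP request to someone else's endpoint.
      def distance_km_remote(origin: str, destination: str) -> float:
          url = f"https://api.example.com/v1/distance?from={origin}&to={destination}"
          with urlopen(url) as resp:                  # hypothetical service
              return float(json.load(resp)["km"])

      if __name__ == "__main__":
          print(distance_km("London", "Paris"))       # local: you own both ends
          # distance_km_remote("London", "Paris")     # remote: you own only this end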

  7. docmechanic

    Great description of the emerging market segment catering to "API first" organizations. Whether or not you feel that REST was an inflection point in the availability and wider usage of APIs, this piece nicely describes why that market segment is humming right now.

  8. bazza Silver badge

    The Car Building Analogy is Flawed

    Because, there have been different outcomes.

    The proposition is that if you offer an API, you can sell it to multiple customers, as an alternator manufacturer might sell to more than one company.

    Thing is, if you study the automotive sector, it’s nothing like that.

    Suppliers who enter the Toyota sphere of influence end up supplying only Toyota. This is because doing business with Toyota is really, really good: they're very large, and who'd want to do business with Ford as well?

    Honda have a reputation for doing an awful lot themselves.

    The US manufacturers had a habit of driving suppliers into bankruptcy by not paying bills, which is why when the Japanese arrived in the US a lot of suppliers stopped supplying US companies.

    Bosch do everything in Germany. Similarly in Italy and France. Traditionally everything British used everything Lucas had to offer, but that’s a period best kept in the dark.

    Every single deal is going to be different, successful or pointless, or over or underwhelming. What might start out looking pointless could become a gold mine. Or, set up to supply a big time customer and you might go bust in the process. The point is that to sell an API you have to have the resources to deliver it, and a thing that no one else can reproduce. If you solve the resource problem by relying on, say, scalable cloud providers you’re just a middleman, ripe for being cut out, because as we well know APIs can be freely replicated…

    The real value in business is always control of the end customer base… If you provide only an API, you’re not the most valuable bit of the business.

    1. Norman Nescio Silver badge
      Joke

      Re: The Car Building Analogy is Flawed

      Traditionally everything British used everything Lucas had to offer, but that’s a period best kept in the dark.

      If you were using Lucas for lighting, that was not difficult!

      https://mossmotoring.com/prince-darkness-joseph-lucas/

  9. Anonymous Coward
    Anonymous Coward

    ....and no mention of Oracle vs Google?

    .....so it's not enough to "explain what an API is"...............

    .....it's also required to explain what "the name of the API is"......as well!

    Larry Ellison thinks that he can copyright THE NAME.......

    .....even if the code called by THE NAME happens not to belong to Larry!!

    Wonderful!!

    Welcome to world of APIs......and commercial greed!

    Link: https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_Inc.

  10. martinusher Silver badge

    A monetizing trend

    Ever since the Marketing/Legal people became aware of APIs they've been thinking of a way to monetize them. It's obviously prime territory for an IP land grab, but you do get the impression that what they'd really like is the virtual equivalent of a coin mechanism on each API call -- pop a coin into the slot and you get to use the API, or those particular calls.

    It seems that greed has no bounds.

  11. Anonymous Coward
    Anonymous Coward

    I'm sorry

    To shoot this article down, but it's not seeing the forest for the trees.

    Firstly, as everyone else has already pointed out, APIs have been around since two applications needed to talk to each other to run cooperatively.

    Secondly, someone above had the insight that what's relatively new is being able to do that cooperative work across heterogeneous machines controlled by different actors, with interactions occurring in an ad hoc manner. X.509, rsh, SMTP and finger are examples of APIs, although we call them protocols.

    One thing I never understood was the PhD guy who started calling HTTP "REST". He went like: "Look! If you use a POST verb in your request you can send data to the server and the application at the other end will (should) understand the request to be non-idempotent and…" No shit, Sherlock! That's straight out of the HTTP RFC. :)

    1. Claptrap314 Silver badge

      Re: I'm sorry

      The big deal about REST was that it allowed a standardized way to organize an API. Without REST (as I am #*$(&@@ experiencing RIGHT NOW), the entire API document has to be carefully studied to understand what a particular call does. With REST, you can feel your way through, and scan most of the document.

      Of course, REST led to OpenAPI & Swagger, which means that you often DON'T have to read the API documentation at all!

      What was (and is) obnoxious about REST is that there are classes of interactions that don't fit the REST paradigm, and, so far as I see, there is no clean way to handle those cases.
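
      A quick illustration of the "feel your way through" point, as a Python sketch. The resource name and fields are made up; the verb/path grid in the comments is the generic REST convention being described, so once you know the resource is /widgets you can guess most calls without studying the whole API document.

        from dataclasses import dataclass, field

        @dataclass
        class WidgetStore:
            widgets: dict = field(default_factory=dict)
            next_id: int = 1

            # GET    /widgets        -> list the collection
            def list_all(self):
                return list(self.widgets.values())

            # POST   /widgets        -> create a member (not idempotent: each call adds one)
            def create(self, data):
                wid, self.next_id = self.next_id, self.next_id + 1
                self.widgets[wid] = data
                return wid

            # GET    /widgets/<id>   -> read one member
            def read(self, wid):
                return self.widgets[wid]

            # PUT    /widgets/<id>   -> replace one member (idempotent: repeating changes nothing)
            def replace(self, wid, data):
                self.widgets[wid] = data

            # DELETE /widgets/<id>   -> remove one member
            def delete(self, wid):
                self.widgets.pop(wid, None)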

      1. Anonymous Coward
        Anonymous Coward

        Re: I'm sorry

        > What was (and is) obnoxious about REST is that there are classes of interactions that don't fit the REST paradigm

        That's because the "paradigm" is a particularly bad one. I recall reading excerpts of this guy's thesis back in the day and I didn't think it was PhD material. Undergraduate stuff at best.

        > REST led to OpenAPI & Swagger

        OpenAPI *is* what used to be called Swagger. Good for taking advantage of documentation generators to get nicely formatted pages. Shockingly bad for specifying real-life APIs.

        > which means that you often DON'T have to read the API documentation at all!

        I really wouldn't recommend that approach.
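
        For anyone who hasn't seen one, a minimal OpenAPI 3.0 document looks roughly like this. Real specs are usually authored in YAML or JSON; here it is expressed as a Python dict and dumped to JSON, and the path and schema are purely illustrative.

          import json

          # A minimal, illustrative OpenAPI 3.0 document describing one GET endpoint.
          spec = {
              "openapi": "3.0.3",
              "info": {"title": "Widget service", "version": "1.0.0"},
              "paths": {
                  "/widgets/{id}": {
                      "get": {
                          "summary": "Fetch one widget",
                          "parameters": [{
                              "name": "id", "in": "path", "required": True,
                              "schema": {"type": "integer"},
                          }],
                          "responses": {
                              "200": {
                                  "description": "The widget",
                                  "content": {"application/json": {
                                      "schema": {
                                          "type": "object",
                                          "properties": {"name": {"type": "string"}},
                                      },
                                  }},
                              },
                          },
                      },
                  },
              },
          }

          print(json.dumps(spec, indent=2))   # feed this to a documentation generator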

      2. bombastic bob Silver badge
        Devil

        Re: I'm sorry

        sometimes making up your own API is easier (rather than using someone else's concept).

        curl http://localhost:1234/function/param1/param2 <-- simple to implement and parse

        (this would return back success, errors, query results, whatever like any URL to a web server would, ideally as plain text)

        No need for XML, PUT, POST, JSON parsers, or whatever... unless you WANT to have it return XML or JSON or whatever. Pretty straightforward and flexible and do what you want. Not *entirely* RESTful but much better, In My Bombastic Opinion.

        (such an API, invoked by a set of simple web pages, can do some amazing things: simple code quickly forms request URLs from user input, with the back-end implemented in PHP rather than as JavaScript in the client, using an off-the-shelf web server and PHP - or, if you wanted to, do a JavaScript query and have your API return XML. Pretty fast to do the XML as sprintf() and return it. Oh yeah, I would implement the server part in C. In fact, I *HAVE* !!! SEVERAL times!!!)
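
        A rough sketch of that path-segment style using only Python's standard library (bob says he does his in C; the port, function names and responses here are illustrative only): each GET path is split into /function/param1/param2/... and the result comes back as plain text.

          from http.server import BaseHTTPRequestHandler, HTTPServer

          # Made-up "functions" keyed by the first path segment.
          FUNCTIONS = {
              "add":  lambda a, b: str(int(a) + int(b)),
              "echo": lambda *parts: " ".join(parts),
          }

          class PlainTextAPI(BaseHTTPRequestHandler):
              def do_GET(self):
                  name, *params = self.path.strip("/").split("/")
                  func = FUNCTIONS.get(name)
                  body = func(*params) if func else f"ERROR: no such function {name!r}"
                  data = (body + "\n").encode()
                  self.send_response(200 if func else 404)
                  self.send_header("Content-Type", "text/plain")
                  self.send_header("Content-Length", str(len(data)))
                  self.end_headers()
                  self.wfile.write(data)

          if __name__ == "__main__":
              HTTPServer(("localhost", 1234), PlainTextAPI).serve_forever()
              # e.g.  curl http://localhost:1234/add/2/3   ->  5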

      3. bombastic bob Silver badge
        Devil

        Re: I'm sorry

        if you think of REST as a guideline in which you can be as flexible as you want, it's not so bad. Just do not restrict yourself to XML or JSON and it suddenly becomes VERY easy to implement, both as a server and on the UI end.

        1. Anonymous Coward
          Anonymous Coward

          Re: I'm sorry

          > if you think of REST as a guideline in which you can be as flexible as you want,

          Yeah but then it's called HTTP. That's where the real merit was. Simple, logical and very effective.

          REST is just someone else who comes along and adds a layer of bullshit on top of it, a bit like Scrum master™ types.

          1. LybsterRoy Bronze badge

            Re: I'm sorry

            This resonates with me as I'm trying to understand the OAUTH2 RFC
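
            For what it's worth, the simplest grant in that RFC (client credentials, RFC 6749 section 4.4) boils down to one POST to the token endpoint. A rough Python sketch, with the token URL, client id and secret as placeholders:

              import base64, json
              from urllib.parse import urlencode
              from urllib.request import Request, urlopen

              TOKEN_URL = "https://auth.example.com/oauth2/token"   # hypothetical endpoint
              CLIENT_ID, CLIENT_SECRET = "my-client", "my-secret"   # placeholders

              def fetch_token() -> str:
                  # Client authenticates with HTTP Basic; grant_type goes in the form body.
                  creds = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
                  req = Request(
                      TOKEN_URL,
                      data=urlencode({"grant_type": "client_credentials"}).encode(),
                      headers={"Authorization": f"Basic {creds}",
                               "Content-Type": "application/x-www-form-urlencoded"},
                  )
                  with urlopen(req) as resp:
                      return json.load(resp)["access_token"]

              # The token is then sent on each API call:  Authorization: Bearer <token>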
