Polyfill.io claims reveal new cracks in supply chain, but how deep do they go?

Libraries. Hushed temples to the civilizing power of knowledge, or launchpads of global destruction? Yep, another word tech has borrowed and debased. Code libraries are essential for adding just the right standard tested functionality to a project. They're also a natural home for supply chain attacks that materialize malware in …

  1. Joe W Silver badge

    Actually needed?

    Yeah, it serves code to let older browsers use "modern" (whatever that means) websites.

    Except: most modern websites are kind of... let's not be shy: shite. Bad to navigate, bad for searching for stuff, bad for bookmarking (remembering) stuff, slow to load (hello, 3rd and 5th party content), full of unwanted features (like zooming a picture on mouseover). They are really bad to navigate on a non-mobile, landscape-oriented device, because that actually useful "look at the screen aspect ratio and reflow stuff accordingly" feature goes unused, and websites serve you a massive, screen-filling, full-width, full-height picture of something irrelevant to you (I'm looking at most university websites I have visited, and news websites that show you only a single headline).

    It seems there are no web developers any more: no one caring about actual usability, no concern for performance or load times.

    Yeah, I'm old. And bitter. So?

  2. Dan 55 Silver badge

    The solution is, was, and always will be that website owners serve known good libraries themselves and update them themselves. This is at odds with the tracking industrial complex that the modern web has become and also makes websites more expensive to maintain, so it won't happen.

    1. Pseu Donyme

      Also, 3rd party resources are suspect data-protection-wise: the 3rd party gets the user's IP address and the URL of the referring page (at least).

      There is even a German court decision against this: https://www.theregister.com/2022/01/31/website_fine_google_fonts_gdpr/

      In a nutshell: since a resource can be hosted locally, it is not necessary to hand information about a visiting user to a 3rd party, and so the processing isn't lawful in the sense of GDPR Article 6(1), where subsections (b-f) all begin with 'processing is *necessary* for ...'. The remaining subsection, consent (a), doesn't help either, since consent isn't valid if it's made a requirement for using the site (Article 4(11), Article 7(4)). (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679)
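      For anyone wondering what "hosted locally" looks like in practice, it's roughly this (font file name and path are invented for the example): download the font file once, then serve it from your own box so no request ever reaches Google.

      <!-- Self-hosted web font: the browser fetches it from your server,
           so no visitor IP or referrer is handed to a 3rd party.
           File name and path here are made up for illustration. -->
      <style>
        @font-face {
          font-family: "Roboto";
          src: url("/fonts/roboto-regular.woff2") format("woff2");
          font-weight: 400;
          font-display: swap; /* show fallback text while the font loads */
        }
        body { font-family: "Roboto", sans-serif; }
      </style>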

    2. DS999 Silver badge

      That's a terrible take

      How do website owners tell "known good libraries" from malicious ones? Nobody knew polyfill.io's code was malicious until the compromise was discovered. Right up to that point, it was very much in the "known good libraries" bucket.

      If libraries are installed on individual websites, when a library is found to be malicious EVERY website will need to be updated to fix that. If they're relying on someone else to manage that stuff, like Cloudflare, at least there is a way to fix them all (served by Cloudflare) in one fell swoop.

      The software distribution version of this argument is that application authors should statically link "known good libraries" rather than rely on the shared libraries installed on the host. I think everyone should realize what a terrible idea that would be: rather than one OS patch fixing everyone, you have to rely on every application author to push an update!

      1. Dan 55 Silver badge

        Re: That's a terrible take

        When a website goes rogue, your plan amounts to hoping all the big CDNs are willing and able to react in time, and perhaps also willing to end any commercial relationship they have with the rogue website.

        So yes, each website having its own copy is better: not all websites will be hosting the same version of the library, so not everyone will be exposed to the version carrying the malware payload (it depends on their browsing habits), and safe browsing lists, antiviruses, and ad blockers would be updated in roughly the same time frame that Cloudflare managed.

        OSes don't come with all libraries installed so you can't really compare. And when an OS doesn't come with a particular library installed, yes, the usual thing is each application brings its own copy of the library to avoid DLL hell.

  3. This post has been deleted by its author

  4. Michael

    just host everything yourself

    I try to ensure that all the web products my company develops contain only code we host ourselves. This means I keep a cached copy of npmjs, NuGet, and other packages. It also means that if these services fail, I have a backup and can continue without issue.

    I host fonts, scripts, images, etc. for all sites on our servers. I have scripts set up to measure API performance and verify that resources haven't changed over time, as sketched below.
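    A rough sketch of the idea - placeholders rather than our real values, and Node 18+ so fetch is built in:

    // Drift check sketch: URL and digest below are placeholders.
    const crypto = require("crypto");

    // Known-good digest, recorded when the dependency was reviewed.
    const EXPECTED_SHA256 = "replace-with-known-good-hex-digest";
    const RESOURCE_URL = "https://mirror.internal.example/pkgs/some-lib-1.2.3.tgz";

    fetch(RESOURCE_URL)
      .then((res) => res.arrayBuffer())
      .then((buf) => {
        const actual = crypto.createHash("sha256")
          .update(Buffer.from(buf))
          .digest("hex");
        if (actual !== EXPECTED_SHA256) {
          console.error(`HASH MISMATCH for ${RESOURCE_URL}: got ${actual}`);
          process.exit(1); // fail the build; investigate before shipping
        }
        console.log("OK: resource unchanged");
      });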

    All of these things can be done, and lots of them can be automated. However, they require you to design them into the system and stick with it. It also generally means that your marketing team isn't going to get everything they want.

    Our company website, however, is the antithesis of this approach.

  5. Korev Silver badge
    Joke

    > familiar favorites[sic] could go bad overnight. Nullsoft's WinAmp MP3 player app got sold to AOL, and promptly started installing the AOL desktop software by default.

    There was me thinking that targeting hospitals with ransomware was bad...

  6. Anonymous Coward
    Anonymous Coward

    Not the browser I'm coding for? Just call it a security risk.

    > You've tacitly encouraged the holdouts to carry on traipsing the web with vulnerable, unpatched code on their systems

    Or they are using a fully up to date, fully patched browser - which happens not to be the one you've decided is The One, so it doesn't pander to whatever excess flummery you've decided to put onto your website.

    Which is best for your site's visitors: using whatever featurettes your browser is trying out this month, or sticking to the functions that have passed the test of time, been accepted industry-wide, and been implemented by all the players (or locally stubbed out because they're of no use to a text-only browser, while the useful bits of the page still work)?

    1. heyrick Silver badge

      Re: Not the browser I'm coding for? Just call it a security risk.

      "a text-only browser"

      I think there are FAR too many people being paid to create websites who do not even know that such a thing is possible, never mind exists.

      1. Korev Silver badge

        Re: Not the browser I'm coding for? Just call it a security risk.

        Which could (well, should) get them a slap, as these can be used by disabled people for things like screen readers.

  7. CowHorseFrog Silver badge

    It's amazing how basically everybody just randomly references dependencies without ever investigating them in any way. What's even more amazing is how poor the security managers for such runtimes are: they basically always allow everything.

    1. John Brown (no body) Silver badge

      And probably even fewer are aware that those dependencies are likely pulling in even more dependencies of their own from other repositories.

  8. lostinspace

    If you insist on using a CDN to host libraries you use, please, please at least use the "integrity" attribute on your "script" element to ensure the files are what you expect (example below).

    Otherwise you are basically giving the CDN owner full access to all your users' data.
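    For anyone who hasn't met it, Subresource Integrity looks like this - the URL and hash are illustrative, not a real pin:

    <!-- SRI: the browser refuses to execute the file if its digest
         doesn't match. URL and hash below are illustrative only. -->
    <script
      src="https://cdn.example.com/libs/somelib-1.2.3.min.js"
      integrity="sha384-REPLACE-WITH-REAL-BASE64-DIGEST"
      crossorigin="anonymous"></script>

    Worth noting it only works for static files: polyfill.io generated different code depending on the visitor's user agent, which is exactly why an integrity pin was never an option there.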

  9. Lee D Silver badge

    "If it's not on my computers, it's not under my control".

    It's a simple rule of IT. If you want to host a website, serving content to users, and you're actually directing them to pull that content from elsewhere - it's out of your control. No amount of "reputation" should be encouraging people to do that. I don't care if the jQuery site or even the Google CDN of it is "usually fine", one day it's not going to be - whether that's a buggy version or a security flaw or a compromise or a man-in-the-middle attack or even just a forgotten domain renewal.

    Especially when all you have to do to fix it is download, save to your website folder, and change the path you're including.
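    In script-tag terms, the whole fix is a one-line change (library and paths here are just an example):

    <!-- Before: pulled from someone else's infrastructure -->
    <script src="https://cdn.example.com/jquery/3.7.1/jquery.min.js"></script>

    <!-- After: same file, downloaded once, served from your own host -->
    <script src="/js/vendor/jquery-3.7.1.min.js"></script>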

    If the data you're serving to your customers is not coming from your own systems, you're just asking to compromise your users.

    1. heyrick Silver badge

      "you're just asking to compromise your users."

      And should therefore be considered liable.

      Nothing concentrates the minds of managers like legal threats, or should we say anti-bonuses.

      1. Alan Brown Silver badge

        Unless and until a court rules that's the case, it's hard to sell that risk to manglement

    2. John Brown (no body) Silver badge

      "Especially when all you have to do to fix it is download, save to your website folder, and change the path you're including."

      Depends on the licensing. Some won't allow you to use them that way, and indeed they may not even work if you serve them from your own host. In that case, extricating yourself from the tangle means rewriting your own equivalent libraries and functions.

  10. Lomax
    Boffin

    You had one job...

    > the Sisyphean slog of maintaining special access for those who can't or won't get with the program.

    I don't find complying with standards a "Sisyphean slog" - if anything it makes my job easier. Because I follow simple standards, the sites I build will work in any browser that is standards-compliant and able to render HTML5 (and supports TLS 1.2+). I barely even need to test this - apart from some minor pixel-level rendering quirks, it just works. Sure, the sites will look better in a graphical browser that supports CSS3, and the interface will be more user-friendly if JavaScript is allowed to run, but text-to-speech and Lynx users should have little difficulty perusing the information they serve, nor will anyone unwilling to trust my client-side code be denied access.

    Where interactivity is needed I usually build it using HTML and HTTP first, then use these same endpoints asynchronously with XMLHttpRequest. Often these fragments can be cached on the server. Resources have URLs, these are human (and machine) readable, and they can be bookmarked. Browser history works without requiring any hacks - including ending up at the same scroll position when returning to a previous page.

    To the extent that I rely on external APIs, I prefer to do so on the server side, and I ensure that these fail as gracefully as possible with cached and fallback content. I do not like single points of failure that are outside my control (I'm looking at you, Cloudflare). JavaScript is only used where it offers a clear usability advantage. I find no need for client-side libraries like jQuery or Vue.js, since browser support for standard JS methods is really quite good across the board, but I do use some specialised libraries like Chartkick. Client-side code is combined, minified, and served from my own servers, with a strict CSP.

    The most important metric to me is speed, particularly LCP (ideally <1s), and running lots of code on the client can (and does) negatively impact this. Rendering everything on the client also leads to a ludicrous amount of duplicated work, with more or less exactly the same instructions being executed not only for every visitor but for every content load. That this is an insane approach, and that SPAs were a terrible idea, should have been obvious from the start, yet here we are.
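    To make the "HTML and HTTP first" pattern concrete, a minimal sketch (endpoint and element names invented for the example; fetch used for brevity, though the idea is identical with XMLHttpRequest). The form works as a plain HTTP POST; the script merely skips the full page reload when it can:

    <!-- Works without JavaScript: an ordinary form POST to /comments.
         Endpoint and element names are invented for this sketch. -->
    <form id="comment-form" action="/comments" method="post">
      <textarea name="body" required></textarea>
      <button type="submit">Post</button>
    </form>
    <div id="comment-list"><!-- server-rendered fragment lives here --></div>

    <script>
      // Progressive enhancement: reuse the same endpoint asynchronously.
      // If this script never runs, the form still works the old way.
      const form = document.getElementById("comment-form");
      form.addEventListener("submit", async (event) => {
        event.preventDefault();
        const res = await fetch(form.action, {
          method: "POST",
          body: new FormData(form),
        });
        // The endpoint returns an HTML fragment; swap it in, no reload.
        document.getElementById("comment-list").innerHTML = await res.text();
        form.reset();
      });
    </script>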

    None of this is rocket science, nor can it be considered a "Sisyphean slog" - it's just the job. Shame so many of us seem to have forgotten how to do it.
