GitLab versus The Zombie Repos: An old plot needs a new twist

GitLab is chewing on life's gristle. The problem, we hear, is that deadbeat freeloaders are sucking up its hosting lifeforce. The company's repo hive is clogged with zombie projects, untouched for years but still plugged into life support. It's costing us a million bucks a year, sighed GitLab's spreadsheet wranglers, and for what …

  1. Paul Crawford Silver badge

    I did wonder about the volume of data held on free projects. 5GB is a *lot* of code, so unless folks are stuffing it with backup ZIP files, etc, it is hard to see that being used up.

    There was talk of moving stuff to slower storage; you would think that was already automatic (i.e. only files that are frequently/recently requested stay on even the HDD tier of the back-end storage). And certainly, if they are suffering from the $/GB of SSD use, then why not have tiering, where paid users get the fast/expensive stuff and free users' projects get punted down a layer on to less expensive and slower storage?

    Edit: Just checked, I have 4 projects on GitLab, one public using 2.4MB and 3 private, all totalling 4MB and last updated 2 years ago. How typical am I?

    1. Anonymous Coward
      Anonymous Coward

      I have 25 or so repositories, totalling about 1GB. Much of that hasn't been active recently.

      > the merest brush of a code fairy's gossamer wings will reset the clock.

      I'm not sure GitLab thought this through. It's not hard to write a bit of code that scans all local repositories, updates a small managed file, and pushes the change. I've done it, purely for research purposes of course! So they would get involved in an "arms race" with devious members trying to keep their repositories live and GitLab constantly changing what it considers a "meaningful change" to a repository.
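      Purely as a sketch of what I mean (the ~/repos path and the .keepalive filename are made up for illustration):

      import subprocess
      from datetime import datetime, timezone
      from pathlib import Path

      REPOS = Path.home() / "repos"  # hypothetical: wherever your clones live

      for repo in REPOS.iterdir():
          if not (repo / ".git").is_dir():
              continue  # skip anything that isn't a git clone
          # Touch a small managed file so there is always something to commit
          (repo / ".keepalive").write_text(datetime.now(timezone.utc).isoformat())
          subprocess.run(["git", "-C", str(repo), "add", ".keepalive"], check=True)
          subprocess.run(["git", "-C", str(repo), "commit", "-m", "keep-alive"], check=True)
          subprocess.run(["git", "-C", str(repo), "push"], check=True)

      Stick that in a cron job and every repo looks alive forever, which is exactly the arms race I mean.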

      1. Jim Mitchell
        Boffin

        I hope these repositories are set to not let some random El Reg reader update them on a whim.

      2. brotherelf

        Even raising an issue was supposed to be enough to keep it alive. And even then, it would have culled the very very dead wood that truly nobody cares about, not even enough to set up a cron job, let alone move to (gasp) a $5/mo tier.

      3. veti Silver badge

        I think they'd be fine with that, at least at this point. If you care enough about your repos to do that, then fine, keep them.

        But GitLab suspects, and I suspect, that there are a non-trivial number of users who no longer care about their repos at all. Maybe they've got bored and moved on with their lives. Maybe they're dead. Who could tell?

        1. Adrian 4

          Just because the owner has lost interest or died, doesn't mean the code has lost all value. It could still be widely used.

    2. ThomH

      I’m on GitHub rather than Lab, but to offer an example from further along the spectrum: amongst my set, I have one repository that is multiple gigabytes in size.

      It’s test cases, in volume, spelt out in JSON as insurance against bit rot. That’s even with a decent portion of them being GZipped prior to addition to the repository.

      It’s popular with a decent subset of people, but we’re talking dozens only. Would almost certainly fail an objective cost-benefit analysis.

  2. Mishak Silver badge

    When is a project a Zombie?

    Would there be merit in treating something that hasn't been downloaded or cloned as a better indicator, rather than looking for updates?

    1. This post has been deleted by its author

    2. Anonymous Coward
      Anonymous Coward

      Re: When is a project a Zombie?

      It guarantees Gitlab can delete accounts :-/. On Github, there are so many not-updated-in-years embedded repos that if they were deleted based on last "update"... well Platformio would become unreliable (which happens to show a weakness in Platformio*).

      Also, Gitlab could be referring to updates using their own definition of "update", whatever that might be. It might not be based on mod time but rather a combination of modifications.

      *Platformio isn't strictly dependent upon Github, but there's soooooo many projects that do depend on it, there's even "production" and "enterprise" projects that source from it.

  3. Len
    Meh

    Unfortunate timing

    The timing of this has been disastrous for GitLab. It comes just as groups such as the Software Freedom Conservancy were making inroads with their Give Up GitHub campaign, on the back of GitHub giving away other people’s open source code as part of its Copilot feature.

    That GiveUpGitHub move seems to have caught on, judging from noises around the #GiveUpGitHub hashtag on Twitter and Mastodon, and alternative services such as Hostea and Codeberg are reporting a lot of interest.

    Of course the purest open source advocates would still have been suspicious of GitLab, but if it had played its cards right, GitLab could have been a refuge for the many people leaving GitHub who were hesitant about moving to smaller forges such as Hostea or Codeberg.

    That doesn’t solve GitLab’s Freemium economics problem, of course, though perhaps an influx of paying projects on the back of the GitHub exodus could have changed it for the better.

  4. elsergiovolador Silver badge

    Off the cloud

    Maybe they should move off the cloud? All these "zombie" projects could probably be hosted on one beefy server for like $500 a month.

    1. doublelayer Silver badge

      Re: Off the cloud

      I challenge you to prove that. Use the article's number of 145 PB of theoretical project storage. I'll even let you assume that the average user is only using a tenth of that (balancing out the people with tiny chunks of code and those who store larger assets there). 14.5 PB, with redundancy and availability, for $500 a month. Go.

      Before you try, you can't assume that everyone isn't using the storage, that the data can be compressed, or that the data can be usefully deduplicated. Someone storing as much data as Gitlab is has already investigated compression and is likely using it. You have to find sufficient raw storage.
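      For a sense of scale, a back-of-the-envelope sketch (the $15/TB drive price, 3x replication and 5-year amortisation are my assumptions, not GitLab's figures):

      # Rough raw-storage cost for the 14.5 PB in the challenge above.
      data_tb = 14_500       # 14.5 PB expressed in TB
      replication = 3        # assumed copies for redundancy/availability
      price_per_tb = 15      # USD, assumed commodity HDD price
      months = 5 * 12        # assumed 5-year drive amortisation

      capex = data_tb * replication * price_per_tb
      print(f"drive capex: ${capex:,}")              # drive capex: $652,500
      print(f"per month:   ${capex / months:,.0f}")  # per month:   $10,875

      And that's drives alone - before servers, power, bandwidth and staff - so $500 a month is off by a couple of orders of magnitude even on charitable assumptions.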

      1. Richard 12 Silver badge
        Boffin

        Re: Off the cloud

        Most "zombies" will be forks, and thus mostly duplicates of their origin - because of what Gitlab is.

        A user who mostly does fixes or maintains their own forks of some long-running or large projects may be reported as using over 1GB but is actually responsible for well under 1MB of unique compressed data - or less.

        This is basically part of how Gitlab and Github make money - they can get paid multiple times for storing the exact same data, as multiple commercial customers maintain their own private forks of open source projects.

      2. elsergiovolador Silver badge

        Re: Off the cloud

        So you give a challenge and then move the goalposts? Okay.

        > Before you try, you can't assume that everyone isn't using the storage, that the data can be compressed, or that the data can be usefully deduplicated. Someone storing as much data as Gitlab is has already investigated compression and is likely using it. You have to find sufficient raw storage.

        Why not? It is known in advance how much storage each user is taking, the server is capable of compression and decompression, and you can deduplicate data "usefully".

        1. doublelayer Silver badge

          Re: Off the cloud

          No, I give a challenge and state up front that the predicted way for you to move the goal posts isn't acceptable. I could see that coming a mile off. They need to store a lot of information? Come up with some reason why it's actually a tiny amount of information. They're storing a ton of data over there. I'm sure they've already looked at compression and deduplication. They still end up having to store a lot of data, and you claim they can somehow manage it for a laughably tiny bill. If you actually think that's possible, prove that. Don't tell me "Well it is actually only a hundred gigabytes, so two hard drives does it", because we both know it's orders of magnitude more even if we don't know exactly how efficient their compression is. You can't get anywhere close to their data requirements for your stated bill.

  5. lglethal Silver badge
    Go

    I am not a programmer (so not a user of Gitlab, Github, or any other repo service), but one approach I would have considered reasonable is this: if you are no longer contactable after a year of inactivity, then your account gets shut down. So after a year of no activity, Gitlab sends an email to your associated email address. If you don't respond, then your account goes away. Hell, for safety make it multiple emails over the space of 6 months, but if you're not contactable by then, I don't really see a problem.

    Perhaps for extra safety, after that 6 months, Gitlab removes the repo from being viewed or used or whatever. Keep the data, but with no one able to view it, use it, clone it, whatever. And keep it in that limbo for 3 months. If there are no massive howls of anger from the community, then it goes to the big data graveyard in the sky. If someone suddenly finds themselves needing that repo, then it can be restored, and Gitlab can probably charge for its restoration at that point.

    I can't really see a problem with this idea, beyond that Gitlab will have to put up with zombie projects for another 18 months...
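    For what it's worth, the proposal reduces to a simple timeline; a sketch (the stages and durations are just my reading of the idea above, not anything GitLab has announced):

    # Proposed lifecycle as a month-by-month lookup.
    STAGES = [
        (12, "reminder emails begin"),           # a year of inactivity
        (18, "repo hidden, data kept (limbo)"),  # 6 months of unanswered emails
        (21, "deleted; restorable for a fee"),   # 3 quiet months in limbo
    ]

    def stage_for(months_inactive: int) -> str:
        current = "active"
        for threshold, name in STAGES:
            if months_inactive >= threshold:
                current = name
        return current

    print(stage_for(19))  # -> repo hidden, data kept (limbo)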

    1. Will Godfrey Silver badge

      This could be disastrous where a small project is completely stable and doesn't need any changes, but is in regular use by a large amount of other code. The original author might not be available for a number of reasons.

      1. Hawkeye Pierce

        Define "disastrous".

        If "a large amount of other code" was making regular use of it, one would certainly hope that those responsible for that other code had measures in place to guard against exactly this potential scenario. Or the scenario that the original author takes down their own repository. Or various other scenarios.

        Because if they don't take those measures then all bets are off, and frankly GitLab is only one of a number of problems you now have.

        1. An_Old_Dog Silver badge

          Turtles All the Way Down Scenario?

          Not having had to personally deal with this, I'll ask: how easy/difficult is it for a programmer to know ALL his/her project's dependencies? If a project depends on, say, X11, can the project-depending-on-X11's programmer reasonably discover all of X11's dependencies and sub-dependencies?

          Point being, there may be an inactive project which has many things at higher levels depending on it, but due to the depth of the dependency stack, people running the higher-level projects might not realize how important that inactive project is when GitLab, or whomever, asks, "Is anybody using this? No? Okay, we'll pull the plug."

          1. doublelayer Silver badge

            Re: Turtles All the Way Down Scenario?

            You can know all your dependencies if you put in a little effort. If you use a package management system, it can print out a list of all the packages you have. If you're building everything from source, you know what code you've had to compile. Only if you're using a combination where the user has to install some libraries but you compile in others is it even a little tricky, and you can start from nothing and simply count which packages you have to install to get it working, then identify any dependencies those packages list. Not everyone does this, and it's not an automatic process, but nor does it require an unreasonable effort on their part.

            1. Richard 12 Silver badge

              Re: Turtles All the Way Down Scenario?

              No, I can only know the code and tools I consume.

              It is not possible for me to know which tools my upstream is dependent on.

              For a trivial example, I use clang-format. People using my code do not need clang-format, and clang-format is not in my repository, either as a git submodule or a copy.

              It's part of the IDE I use.

              If the golden clang-format repo suddenly vanished, I would not know until I came to update my IDE and found it was gone.

              I can hope that the IDE project maintainer keeps their own fork. Maybe they do, maybe they don't.

              Should I keep a copy of my IDE's source? And all the plugins I use too?

              1. doublelayer Silver badge

                Re: Turtles All the Way Down Scenario?

                Whether you need to do this depends on what you want to track--if you just track the dependencies for stuff you write, then you're fine. If you're using someone else's package that includes it, then they'll need to do that tracking to make a working package anyway, so unless you're planning to compile your own, in which case yes you do need copies of the source for it, you can ignore it. If you do want to also track the dependencies of everything you run, and some people do, then you follow the same procedure. For example:

                $ apt depends clang-format-10
                clang-format-10
                  Depends: libc6 (>= 2.14)
                  Depends: libclang-cpp10
                  Depends: libgcc-s1 (>= 3.0)
                  Depends: libllvm10 (= 1:10.0.0-4ubuntu1)
                  Depends: libstdc++6 (>= 5.2)
                  Depends: python3

                You probably don't need to do this, but if you want to, the tools are available to you. You can also outsource it. Let your IDE writer check on clang-format. If they drop it, then replace it as you would do with a library you were pulling in. That could mean finding an archive and building it. It could mean switching to an alternative. It could mean no longer using it. You're in charge of your dev environment and your code's dependencies, so take the approach you think best and collect information you'll need to take that action.

                1. An_Old_Dog Silver badge
                  Holmes

                  Re: Turtles All the Way Down Scenario?

                  @ doublelayer: That's good information. Pursuing that, I find X11 (in my case, xorg) has 54 1st-level dependencies. Following just(!) the first dependency chain:

                  xorg -> xserver-xorg -> xserver-xorg-core -> xserver-common -> x11-common -> lsb-base. xorg: 54 1st-level deps. xserver-xorg: 44 1st-level deps. xserver-xorg-core: 44 1st-level deps. xserver-common: four 1st-level deps. x11-common: one 1st-level dep. lsb-base: zero dependencies.

                  That's just the first dependency chain of xorg; there are 53 more of them. That's one metric ass-ton of code, and because some (many?) programmers are now fearing their quiescent projects -- and the quiescent projects upon which their projects depend -- will possibly be deleted, those programmers are going to start "defensively forking". GitLab's storage system probably includes deduplication, but that won't help much, because when project X qualifies as a deletable zombie, its deduped blocks cannot be freed, because they are still referenced by not-yet-zombie-qualifying defensive-fork-A, defensive-fork-B, and defensive-fork-C of project X. End result, use of GitLab's storage increases, rather than decreases.
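                  Rather than following each chain by hand, apt-cache will compute the whole closure for you; a sketch (Debian/Ubuntu only, and xorg is just my example from above):

                  import subprocess

                  def dependency_closure(package: str) -> set:
                      # apt-cache does the recursion; we just collect the package
                      # names, which sit on the unindented lines of the output.
                      out = subprocess.run(
                          ["apt-cache", "depends", "--recurse", "--no-recommends",
                           "--no-suggests", "--no-conflicts", "--no-breaks",
                           "--no-replaces", "--no-enhances", package],
                          capture_output=True, text=True, check=True).stdout
                      return {line for line in out.splitlines()
                              if line and not line[0].isspace()}

                  print(len(dependency_closure("xorg")))  # the whole ass-ton, counted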

                  1. An_Old_Dog Silver badge

                    Re: Turtles All the Way Down Scenario?

                    Correcting myself re: "use of GitLab's storage increases, rather than decreases": "GitLab-storage of zombie projects does not decrease".

                    The defensive forks do not take up space, since their blocks are deduped, but they do delay ultimate release/reuse of blocks consumed by zombie projects.

                  2. doublelayer Silver badge

                    Re: Turtles All the Way Down Scenario?

                    Absolutely, that's an issue. I don't think many developers will fork every dependency, because updating them if they're still using the canonical version is annoying, but they probably do keep copies. I know I do that with the important dependencies even though I'm not expecting their code to be taken away. This is an unfortunate requirement. Storing code that you use is important, and if you're going to use a project that nobody else is archiving, you might have to do the archiving yourself. Having Gitlab keep a useful, public, and free archive up would be excellent, but if it costs them too much for their investors to continue supporting, they might stop.

              2. Michael Wojcik Silver badge

                Re: Turtles All the Way Down Scenario?

                The industry is currently in a tizzy over this very question, thanks to various powerful organizations (such as the US Federal government) moving toward requiring SBOMs (Software Bills of Materials).

                I've been involved in a number of discussions and research spikes into the SBOM process. In the general case, and in many specific instances, it is not easy to solve – even if you already have robust tracking of your first-order dependencies, and even for projects which are not a horrible Lovecraftian agglomeration of third-party components (see basically all "modern" web UIs).

      2. lglethal Silver badge
        Go

        I did suggest that when it gets taken down after 6 months of no replies, if there is no outcry from the community, then it gets deleted. Are you suggesting that people wouldn't notice it being down for 3 months, and would not complain about it disappearing within those 3 months?

        If it's vital and suddenly disappears, people would absolutely complain. Then Gitlab could bring it back, but perhaps by assigning it to someone else, as the original author has not responded to 6 months of messages and so cannot be considered to be engaged with the community.

        OK. For those of you who have downvoted, propose your own solutions...

        1. An_Old_Dog Silver badge

          GitLab Back to Where You Once Belonged

          @lglethal: I like your idea, generally, but I'd suggest the project-owner-response timeout be set to seven years. Laws here state you can't be declared legally dead until you've been missing for that long. And while the seven years is counting down, the project could go into read-only mode.

    2. Mishak Silver badge

      The problem is

      The project owner may have disappeared / died, but the code is still being actively used - a bit like that well-known XKCD.

      1. Michael Wojcik Silver badge

        Re: The problem is

        And some zombie projects may be of future interest even if they aren't used in production software now, such as research projects.

        I have a couple of projects from my graduate-school days on Sourceforge which aren't likely to be used in any running software at the moment, but might be picked up by some future researcher. They're small – but a lot of small can add up.

        Anyone who's done significant archival research can attest that sometimes your project turns out to benefit from something obscure that no one else has looked at in years. In the course of my research (not in CS, but then not everything is CS) I've read books in special collections that hadn't been requested for a century.

        Now, I'm not saying GitLab should be obligated to maintain these archives (though they did rather set themselves up for that). I'm just saying it's a shame to lose even some fairly trivial projects, because you never know when those might be interesting later.

  6. Chris Evans

    I don't understand the suggestion...

    "What if the free tier was contingent on offering 10GB of your local storage to the community, with the resultant aggregated free tier storage managed by GitLab as the hosting system" I don't follow!

    1. An_Old_Dog Silver badge

      Re: I don't understand the suggestion...

      I think the person who wrote about "offering 10 GB of storage" meant, "dedicating 10GB of storage on their home PC, and running a distributed filesystem package on that home PC to make it available to GitLab."

    2. brotherelf

      Re: I don't understand the suggestion...

      I guess it's alluding to a sort of BitTorrent-ish thing, where you would use some Web3.0 distributed decentralized filesystem. Even the up/down ratio thing finds itself again as storage donated vs used.

      1. Michael Wojcik Silver badge

        Re: I don't understand the suggestion...

        No need to invoke the rough and slouching beast of web3 (assuming you're talking about the Gavin Wood version, not the Semantic Web, which TBL has sometimes referred to as "Web 3.0"). There are plenty of ways to distribute storage that don't require that half-assed rubbish. IPFS is one obvious example.

        In this case, you wouldn't need full decentralization, though, because GitLab would serve as a central proxy. You'd just run a server (or more likely a tunnel client, to avoid firewall, NAT, and addressing issues) and dedicate a chunk of storage, and GitLab would know what was stored on your machine.

        In practice, for this to work, GL would want a lot of redundancy, so the effective per-user storage would be significantly less. But a "give resources rather than money" tier isn't an inconceivable variation on the freemium model.

    3. doublelayer Silver badge

      Re: I don't understand the suggestion...

      You must install some software on your computer which allows Gitlab to store some files there and upload them. How this works for people who have limited bandwidth, turn their computer off with some frequency, or simply don't continue to have the software running isn't explained. Could it work theoretically? Yes, it would fix a number of problems. Would it work in practice? Probably not so well unless you had a really large set of users who kept it consistently operational.

  7. CrackedNoggin Bronze badge

    29 million non-active users, $1.3 million a year. (Should that be "accounts"?) That works out to about 4.5 cents a year per user.

    If they charged 1 dollar a year for a 5GB account, they could make a profit, and there would be less chance of Gitlab going belly up. If Gitlab goes belly up, all the code gets lost.

    There is no way to prevent a free 5GB tier being targeted for use as backup storage.

    1. ComputerSays_noAbsolutelyNo Silver badge

      If GitLab goes belly up, only the hosted repo is lost.

      There will still be local clones of the repo with the associated users.

    2. doublelayer Silver badge

      The problem is that, when people who didn't update or maintain anything and who might have forgotten the thing exists don't pay $1, Gitlab would still have to do something about their repos. If they delete those, people will still complain. I suppose the only thing they could do is freeze them so you can only clone or fork and not offer any more free services to cut their losses. I'm not sure if this would earn a favorable review.

  8. iron Silver badge

    > Some of this is entitlement bias

    No Rupert it isn't. And since you're going to be that kind of dick I'm not reading the rest of your article.

    1. Michael Wojcik Silver badge

      Why stop there? Cancel your subscription!

  9. Anonymous Coward
    Anonymous Coward

    TL;DR

    Seems the real problem is using a development platform as an archive - two different jobs. A little like trying to run a publisher inside a library.

    And that is not a new problem, or even a tech one. It's a little worrying no one at Gitlab spotted it.

    I wonder how quickly Google etc would pony up if a vanished archive broke Chrome(OS)?

    1. Michael Wojcik Silver badge

      Re: TL;DR

      > seems the real problem is using a development platform as an archive

      That is precisely the value proposition of services like GitHub and GitLab that centralize git repositories, instead of leaving them distributed.

      Personally I believe that's Using Git Wrong, but it's the direction a majority of the industry seems to have decided on.

  10. Missing Semicolon Silver badge

    Read-only content

    How easy would it be for Gitlab to spend a few $10k for one of their engineers to cook up a read-only version of the repo interface, one that lets a repo be a dependency of other things but does not implement all of the expensive parts? In other words, be as near to a few files behind an nginx service as possible?
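    For the static-files end of that, git's old "dumb" HTTP protocol already does it: run update-server-info in a bare repo and any file server can host read-only clones. A sketch (paths and port made up; nginx would replace the toy server in practice):

    import subprocess
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    ARCHIVE = "/srv/git-archive"  # hypothetical: bare repos live here, e.g. foo.git

    # Refresh the plain-file indexes git's dumb protocol needs
    subprocess.run(["git", "update-server-info"],
                   cwd=f"{ARCHIVE}/foo.git", check=True)

    # Serve the tree statically; `git clone http://localhost:8000/foo.git`
    # now works, read-only, with none of the expensive server-side bits.
    handler = partial(SimpleHTTPRequestHandler, directory=ARCHIVE)
    HTTPServer(("", 8000), handler).serve_forever()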

    1. Anonymous Coward
      Anonymous Coward

      Re: Read-only content

      Not very.

      But I missed the answer to the preceding question of "Why ?"

  11. robinsonb5

    One thing I've noticed on GitHub - I'm guessing it extends to GitLab too - is that people tend to use forks almost like bookmarks, so you'll often find a project has dozens of forks which have never actually diverged from their parent commit.

    If they're not already doing something smart storage-wise to avoid duplication then removing inert forks (and ideally replacing them with some kind of redirect to the upstream project) could save quite a bit.
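    Detecting those would be cheap, too. A sketch of an "is this fork inert?" check against its upstream (the repo path and URL are placeholders; it only inspects HEAD, not every branch):

    import subprocess

    def git(repo, *args):
        return subprocess.run(["git", "-C", repo, *args],
                              capture_output=True, text=True, check=True).stdout

    def is_inert(fork_path: str, upstream_url: str) -> bool:
        git(fork_path, "fetch", upstream_url)  # FETCH_HEAD = upstream tip
        # Count commits on the fork's HEAD that upstream doesn't have;
        # zero means the fork never diverged and stores nothing unique.
        unique = git(fork_path, "rev-list", "--count", "HEAD", "^FETCH_HEAD")
        return int(unique) == 0

    print(is_inert("/tmp/some-fork", "https://gitlab.com/example/upstream.git"))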

  12. Anonymous Coward
    Anonymous Coward

    The article goes on about the responsibilities of GitLab, but I must have missed the responsibilities of the code creators to not just shoot their code into the universe and expect somebody else to host it perpetually for free.

  13. Czrly

    Remember, please, that GitLab the software is Open Source!

    There's one critical thing that's missing from this article: GitLab, the software, is open-source software!

    However much GitLab might try to lean on the fact that GitLab dot com offers some Enterprise Edition features -- not fully open-source -- to free users, the GitLab product stems from an open-source background and the core functionality certainly is still open-source. Many of the supposed freeloaders contributed patches, debugging time, feedback, well-researched issue reports and other input into that product!

    It is quite dishonest for GitLab dot com, the commercial entity, to simply sum up the cost of keeping some hard-drives spinning! They also should perform the impossible calculation of how much of their income from actual paying customers should rightly be attributed to work from the community they're now spurning.

    I don't think anyone on the open-source side of this equation was or is complaining that GitLab dot com brings in income from exploiting the open-source portion of their code base -- it's within the terms of the license. But, to appreciate exactly *why* this feels like a massive rug-pull to many of us, ask this: would anybody have ever contributed to GitLab open-source, had they known they were just free labour for a corporation that chooses to optimise its bottom line at the expense of this very community -- pretty much just like any other capitalist corporation?

    Prolly not, yeah? Capitalism and community don't mix!

    I mean, I'm bitter because I've just had to spend a tonne of my time migrating from self-hosted GitLab to self-hosted Gitea. This, it turns out, was a very good decision but I rather liked GitLab, back in the day, and do somewhat resent the way that they've been treating GitLab CE users as second-class citizens for a while -- pretty much making from-source builds too onerous to bother with, forcing the use of Omnibus or official, bloated Docker images, and pushing U.I. junk that can't be disabled, readily, in CE, that nobody asked for, but does nothing but plug an EE-only feature.

    The writing has rather been on the wall for at least some years!

    1. doublelayer Silver badge

      Re: Remember, please, that GitLab the software is Open Source!

      You pointed this out in your comment: the people who contributed to the source would, whatever happens to Gitlab, still be able to self-host the system. That's what the contributors and everyone else gain by contributing, and it isn't being taken away. Making something open source doesn't guarantee the authors will also run a free service running the code, but it should mean that the code remains available to those people for their use.

      I definitely see the complaints of the open source community about this, but not because they contributed to the code. If Gitlab had not promised a free tier, I wouldn't at all object to their service being commercial. I don't have a solution to this problem, but just because someone contributed doesn't entitle them to a service.

  14. NapTime ForTruth
    Mushroom

    Dinosaurs

    Let the dinosaurs die. Death creates space creates opportunity creates innovation. Ask any mammal.

    Too archaic a reference? Remember how Pan American Airlines once ruled the skies and spanned the globe? They died, mostly at the intersection of their own hubris and incompetence. Everyone who missed their presence missed their presence, but the industry thrived and innovated in their absence.

    Still too archaic a reference? Remember way back when the Global COVID-19 Pandemic©®™ evidenced - painfully - every weakness in our international supply chains? Lesson learning in progress, to a first approximation. Revisit, refine, or reinvent supply chain models and get back to work. The unfit and inflexible get left behind.

    This latest version of bit-rot and dependency-hell is a weakness born at the intersection of hubris, laziness, and convenience, all of which is the natural order of things.

    Bug report marked as invalid; system working as designed.

  15. Anonymous Coward
    Boffin

    GitOld

    Ultimately, FOSS needs something like Project Gutenberg: somewhere to archive the project of XKCD's legendary Nebraska maintainer when he dies (and all the other legendary projects that never need updates, plus run-of-the-mill abandonware).

    All it takes is money. It would require a company with significant FOSS interest and experience and significantly more resources than GitLab.

    Two companies come immediately to mind: Microsoft and Google.

    The mechanics of what goes in the archive could be worked out, and a way of unarchiving a project would have to be determined, but the code would remain available.

    1. doublelayer Silver badge

      Re: GitOld

      To some extent, Microsoft is already doing it. They connected Github to a massive amount of cash, so they're unlikely to start deleting anyone's repositories, and they also made an offline archive in the Arctic in case we have a world-ending catastrophe and want JavaScript libraries in the aftermath. That's not everything, but someone could clone all the old Gitlab repositories onto Github if they want an archive that Microsoft pays for.

      The Internet Archive is also in a position to be useful on this and they discussed archiving the Gitlab data already. They don't have the money that Microsoft does, but they do have the interest.

  16. Howard Sway Silver badge

    $1 million is certainly a lot to be wasting on a fossil collection

    First of all, it's not a fossil collection, it's code that might be very useful to other people (or might not be, but being able to find what you are looking for is a side issue to this debate) in all kinds of different ways.

    Secondly, $1 million a year is fuck all in a world of trillion dollar big tech firms, $1 billion unicorns, tech bro rocketry and sundry other examples of vast amounts of money sloshed about with abandon for questionable returns. If the big money boys can't be persuaded to just donate the small amounts needed to keep this free hosting alive, and don't realise that amongst all that code may be the gems that could make them yet another fortune in the future, they really have gone past the point of being of any more use to technological progress.

    1. doublelayer Silver badge

      Re: $1 million is certainly a lot to be wasting on a fossil collection

      A million may be small change to a big tech company, but it's not their million. It's Gitlab's million, and they aren't a massive tech company. We could try to find someone to give them a million so they wouldn't have to incur that cost, but that's not happening now, and so we get the situation where they're considering whether it's worth their million to keep it up. That's not automatically our problem because they chose to get themselves into this situation, but they could choose to make the preservation of this data not their problem if they end up deciding they want to use their resources differently.

  17. VoiceOfTruth Silver badge

    I disagree with this statement

    -> Open source is built on stable components. Stable means not being fiddled with.

    In which case neither the Linux kernel nor practically any Linux distro is stable. There are dozens of updates to the distros I use every week. Stable? No.

    1. Pirate Dave Silver badge

      Re: I disagree with this statement

      Don't confuse "stable" with "secure". 25 years ago, you could plop an OS on a server and not worry about it for a few years until you needed to replace a card or upgrade the hard disk. That was stable. But then we decided that the Internet thing was the way of the future, and everything, even our TVs and fridges and lightbulbs needed to be interconnected. Hacking a single host at 28.8K was much less enriching than commanding a bot-net of thousands at 100+ Mb/s. That's why "secure" now trumps the old "stable", because you never know which mutha is gonna knife you if he can find a way in. I don't like it myself, but the boss ain't paying me for opinions.

      1. VoiceOfTruth Silver badge

        Re: I disagree with this statement

        I'm not confusing stable with secure. The number and volume of updates on Linux machines is not 100% due to security. If it were, it would mean that Linux is completely insecure, with software put out there full of holes which seem to be discovered week after week. Some (a lot?) of Linux people seem to laugh about Microsoft's Patch Tuesday while ignoring the nearly daily occurrence for Linux.

        1. Anonymous Coward
          Anonymous Coward

          Re: Linux people seem to laugh about Microsoft's patch Tuesday

          In case you haven't noticed.... There's a truly depressing amount of truth in the Windows update meme "Your mouse has moved. Please reboot to continue".

          Linux updates VERY RARELY need a full system restart - never mind restarting system components - and the remaining cases such as kernel updates are already vanishing.

  18. Anonymous Coward
    Anonymous Coward

    Offering local storage

    > What if the free tier was contingent on offering 10GB of your local storage to the community, with the resultant aggregated free tier storage managed by GitLab as the hosting system? Would that even work?

    It did on Wuala, see https://web.archive.org/web/20101207173856/http://www.wuala.com/en/learn/technology
