GitLab.com luckily found lost data on a staging server

GitLab.com, the wannabe GitHub alternative that yesterday went down hard and reported data loss, has confirmed that some data is gone but that its services are now operational again. The incident did not result in Git repos disappearing. Which may be why the company's PR reps characterised the lost data as “peripheral metadata …

  1. Anonymous Coward
    Anonymous Coward

    Of course they went public...

    Maybe I'm too cynical here, and I can't rule that out, but in my opinion GitLab didn't have a choice but to go public, for the simple reason of damage control.

    Think about it: what do you think would have happened if they had covered things up, only to see the details leak at a later time? Then it would have become double trouble: the community would have criticized them not only for their flat-out ridiculous backup "strategy" but also for trying to cover it all up. If they had gone that route and the details did eventually emerge, they could definitely have kissed their company's reputation goodbye, maybe even the entire company.

    So I don't see any goodwill here, only simple damage control. BUT.. I may be a little overcritical.

    Even so... overlooking the fact that 100+GB worth of data gets "archived" into files only a few kilobytes in size has nothing to do with making a simple mistake; that is a flat-out display of stupidity at its finest.

    1. Doctor Syntax Silver badge

      Re: Of course they went public...

      "Maybe I'm too cynical here, I cannot rule this out, but in my opinion Gitlab didn't have a choice but to go public. For the simple reason of damage control."

      Maybe you are. But all too often businesses make a different choice, and their responses come straight from the standard PR playbook: "Only a few...", "your ... is important to us", etc., none of which is believed by anyone with two brain cells to rub together.

      If at the end of this they end up saying that only a few users were affected there'll be plenty of detail to give it credence.

    2. druck Silver badge

      Re: Of course they went public...

      They couldn't mount a TalkTalk defence, bluster on about a tiny number of affected users, and hope some more idiots would join up next week; developers aren't that stupid.

    3. TVU Silver badge

      Re: Of course they went public...

      "Maybe I'm too cynical here, I cannot rule this out, but in my opinion Gitlab didn't have a choice but to go public. For the simple reason of damage control."

      Yes, they did the right thing, whether it was intentional or not. Covering up stuff-ups never works and always makes the company look even worse (here's looking at you, TalkTalk).

      That said, GitLab really got lucky this time, and they must significantly improve, and test, their backup capabilities (even a crude sanity check, along the lines sketched below, would have flagged dumps of a few kilobytes).
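
      A minimal sketch, not GitLab's actual tooling, of the kind of post-backup sanity check the commenters are asking for. The dump path and the size threshold are illustrative assumptions, and a size check alone is not enough; a real pipeline should also restore the dump somewhere disposable.

      import sys
      from pathlib import Path

      # Hypothetical location of the most recent database dump and a minimum
      # plausible size; both values are assumptions for illustration.
      DUMP = Path("/var/backups/postgres/latest.dump")
      MIN_BYTES = 100 * 1024 * 1024  # a 100+ GB database should not dump to a few KB

      if not DUMP.exists():
          sys.exit("backup check FAILED: dump file is missing")
      size = DUMP.stat().st_size
      if size < MIN_BYTES:
          sys.exit(f"backup check FAILED: dump is only {size} bytes")
      print(f"backup check passed: {size} bytes looks plausible; now test a restore, too")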

  2. RIBrsiq
    Holmes

    Dodged a Bullet, they did!

    I guess if you have to find out your backups aren't working in a non-testing environment then this is about as good as one can hope for, really.

    Now, git those backups working, ja...?

  3. Mark Simon

    Just wondering … ?

    I signed up for GitLab just a few days ago, and haven’t started using it actively yet.

    Would that be why I haven’t received any notification about this incident?

    1. stephanh

      Re: Just wondering … ?

      Did you try to "git push" to your repo? Then you would see:

      Permission denied (publickey).

      fatal: Could not read from remote repository.

      Please make sure you have the correct access rights

      and the repository exists.

  4. Stoneshop
    Coat

    Staging server

    downthebackofthesofa.gitlab.com

  5. frank ly

    re. "GitLab's prose account of the incident..."

    I'd like to see a poetry account of the incident. Can anybody help with this?

    1. Stoneshop
      Headmaster

      Re: re. "GitLab's prose account of the incident..."

      No, "push" DOES NOT rhyme with "flush".

    2. Baldrickk

      Re: re. "GitLab's prose account of the incident..."

      You mean this, as posted by someone else yesterday:

      Yesterday,

      All those backups seemed a waste of pay.

      Now my database has gone away.

      Oh I believe in yesterday.

      Suddenly,

      There's not half the files there used to be,

      And there's a milestone hanging over me

      The system crashed so suddenly.

      I pushed something wrong

      What it was I could not say.

      Now all my data's gone

      and I long for yesterday-ay-ay-ay.

      Yesterday,

      The need for back-ups seemed so far away.

      I knew my data was all here to stay

      Now I believe in yesterday.

      1. Call me Trav

        Re: re. "GitLab's prose account of the incident..."

        Bravo!

        Very clever!

      2. Oh Homer
        Childcatcher

        GitLab prose

        Shall I compare thee to a proper backup?

        Thou art more lonely and more disparate.

        The tape rewinds as I lift my coffee cup,

        And GitLab’s data hath all too short a fate.

        Sometime too hot the scripts of admin shines,

        And often is his gold complexion dimmed;

        And every fair from fair sometime declines,

        By chance, or nature’s changing course, untrimmed;

        But thy eternal data shall not fade,

        Nor lose possession of that fair thou ow’st,

        Nor shall cockups brag thou wand’rest in his shade,

        When in eternal lines to Time thou grow’st.

        So long as Git can clone, or eyes can see,

        So long lives this, and this gives life to thee.

        ~ William Websphere

  6. This post has been deleted by its author

  7. Adam 52 Silver badge

    "But the firm's incident log says 707 users have lost data."

    "707 users lost potentially, hard to tell for certain from the Kibana logs"

    How did you derive the first statement from the second?

    1. stephanh

      Any user who "potentially" lost data will have to pessimistically assume that they *did* lose data, will they not? For example, they will have to tell their own users that any bug reports filed during that time window may have been lost and may need to be re-created.

      1. Anonymous Coward Silver badge
        Thumb Up

        Losing users is not the same as losing users' data.

        1. GrapeBunch

          Losing users is not the same as losing users' data.

          Ring.

          "Support."

          "Uh huh..."

          "...Ask not for whom the tel blows, it tolls for the

          luser."

          Click.

  8. CAPS LOCK

    Plus or minus six hours of data loss? I think we might be on to something here...

    I'd certainly like to see minus six hours of data loss at my work....

  9. DaLo

    +-

    "±4979 (so ±5000) comments lost"

    What does that mean? Does it mean they may have either lost or gained 4979 (/5000) comments? It would be interesting to see the comments that were gained - something like "I'm sorry Dave, I can't recover any more data for you as you appear to have no viable backups", or "Sorry Dave, I can't allow you to do that".

    1. Baldrickk

      Re: +-

      I can only assume they meant ~ (weak) or ≈ (strong) approximation symbols

    2. Doctor Syntax Silver badge

      Re: +-

      "What does that mean?"

      More or less.

      1. DaLo
        Facepalm

        Re: +-

        I guessed what they were trying to imply but the ± doesn't mean that when placed before a number. It was more a precursor to set up the Space Odyssey/HAL reference rather than an actual question.

        Sorry the Register font doesn't support the Rhetorical Question Mark.

    3. OrangeDog

      Re: +-

      I assume they think ± means "approximately"

  10. wolfetone Silver badge
    Pint

    At least they were honest, transparent, and open to people looking at what they were doing or going through.

    We've all been stung by outages in the past and never quite known why the service went down, or why it took so long to come back up.

    GitLab should be applauded. I was affected by this yesterday, but I'm happier now they've had this issue and shown what they're like in a crisis.

    A pint for the team. Just the one, I'm not made of money.

  11. Anonymous Coward
    Anonymous Coward

    Now for the good news..

    .. as with all disasters, the upshot is that GitLab will now follow sane processes for a while. Most businesses follow this up-and-down curve: surviving a brush with disaster (or an actual disaster) results in a correction, and it takes a while before the company sinks back into the sort of complacency that makes it drift towards problems again.

    So, in a few weeks, GitHub will be the best place to be :)

    1. Doctor Syntax Silver badge

      Re: Now for the good news..

      "So, in a few weeks, GitHub will be the best place to be :)"

      According to your reasoning, GitLab will be the best place to be.

  12. SeymourHolz

    _

    If you are surprised that ANY kind of error can happen In The Current Year, then hubris may be your real problem.

  13. Anonymous Coward
    Anonymous Coward

    Imagination, or lack thereof

    "The Register imagines many developers may not be entirely happy with those data types being considered peripheral to their efforts."

    The Register (or rather, Simon Sharwood) imagines wrong. It just so happens that both my organisation and I personally have GitLab accounts that we use to coordinate development of a few tools which are, after all, critical to our business. We use free accounts for this and our repositories are private (an option not available with their main and better-funded competitor).

    Their issue tracker is how we demonstrate compliance for certain aspects of our quality management system.

    Are we unhappy? Not at all. We get a lot out of it and pay nothing (yet¹) in return, so we're hardly in a position to complain. Apart from that, we take regular backups of our projects (that is, the issues, wikis, attachments, all the stuff ancillary to the actual git repos, roughly as sketched after this comment), which we can use to recover lost data in a number of ways. If it came to it, we could even self-host a GitLab server and dump our data into it, but we'd rather spend our effort on doing what we're good at.

    In short, shit happens; smart people (and even we) plan for it. Sometimes plans go wrong. Occasionally the holes in the Swiss cheese line up. Too bad. That doesn't mean that anyone is incompetent or an idiot. It's just risk management in action.

    I hope someone more analytically inclined or with a bit more of a technical background will blog about this here in The Register, so that we too can learn lessons from this incident.

    If there is one thing I am "not entirely happy with", however, it's the "more sensationalism, less substance" turn that The Register has taken in the last few years, which just seems to be getting worse and which is why I hardly bother reading it any longer.

    ¹ We intend to pay; we started off with a free account to check them out, then things got really busy and we haven't got around to it yet. Sorry about that.
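
    A minimal sketch of the sort of ancillary-data backup described above, not the poster's actual setup. The assumptions for illustration are: GitLab's v4 REST API, a made-up project path ("mygroup/mytools"), and a personal access token in the GITLAB_TOKEN environment variable. It pulls issues and their comments into local JSON files; wikis and attachments would need their own calls.

    import json
    import os
    import urllib.parse
    import urllib.request

    GITLAB = "https://gitlab.com/api/v4"
    # Hypothetical project path and token; both are assumptions for the example.
    PROJECT = urllib.parse.quote("mygroup/mytools", safe="")
    TOKEN = os.environ["GITLAB_TOKEN"]

    def fetch_all(path):
        # Follow GitLab's page-based pagination until an empty page comes back.
        records, page = [], 1
        while True:
            url = f"{GITLAB}/projects/{PROJECT}/{path}?per_page=100&page={page}"
            req = urllib.request.Request(url, headers={"PRIVATE-TOKEN": TOKEN})
            with urllib.request.urlopen(req) as resp:
                batch = json.load(resp)
            if not batch:
                return records
            records.extend(batch)
            page += 1

    # Dump the issues, then each issue's notes (comments), to local JSON files.
    issues = fetch_all("issues")
    with open("issues-backup.json", "w") as fh:
        json.dump(issues, fh, indent=2)

    for issue in issues:
        notes = fetch_all(f"issues/{issue['iid']}/notes")
        with open(f"issue-{issue['iid']}-notes.json", "w") as fh:
            json.dump(notes, fh, indent=2)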

  14. Zmodem

    It is what happens when you buy Western Digital drives instead of Barracudas.

    Western Digital drives never last more than a few weeks in a desktop PC, while Barracudas can go for 10 years.
