Git
"Git is the worst piece of software ever created."
glad to see mature opinions and cold calculation in software evaluation /s
The announcement this week that Bitbucket is dropping support for Mercurial is another nail in the coffin for the software. Bitbucket, Atlassian's version-controlled software-hosting service, supported only Mercurial repositories when it launched in 2008. By 2012, the biz was featuring guest blog posts about the respective …
I would really like to see it preserve file timestamps, but equally I can see how that could play havoc with reliable incremental builds, unless there were something smart to tell the compiler/make utility what had *changed* on a remote machine, even when that newer timestamp predates the object code built from the old source on your own machine.
Tricky...
I use Mercurial with GitHub using the hg-git plugin (which is on GitHub: https://hg-git.github.io). It lets you use Mercurial more-or-less seamlessly as a git client. Anything you can do with git[*] you can do with hg, using an interface which actually makes sense and doesn't hate me. Git branches map to hg bookmarks, tags just work, merging and branching and multiple heads just work, etc, etc. You don't need to set up your hg repository _or_ your GitHub repository specially, you just tell hg to clone the git path and it just works. It's great.
I don't think I've ever used a native hg repository.
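For anyone tempted, the setup really is minimal. A rough sketch (repo URL and project names are illustrative; check the hg-git docs for the exact clone syntax your version expects):

    # enable the extension in ~/.hgrc
    #   [extensions]
    #   hggit =

    hg clone git+ssh://git@github.com/someuser/someproject.git
    cd someproject
    # ...edit...
    hg commit -m "fix the thing"   # normal hg commands throughout
    hg push                        # lands on GitHub as ordinary git commits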
"A problem not uncommon in FOSS.... And now that Micros~1 has it, it can only git worse."
Microsoft doesn't write git. It's still fully in the control of an open group. Microsoft just bought Github, which is a site that stores and processes git repos. Github might write some code for git, but they don't control it and their code only gets in if the developers of git approve it. Many other sites, like the one the article is about, can also process git repos, and git works identically with each of them. Don't try to blame Microsoft for something they have nothing to do with.
He's fully right. The success of a piece of software sometimes has nothing to do with its technical merits (as Linux punters always say about Windows...)
Git is truly a horrible piece of software, cobbled together. It's just that open source is more and more dominated by a monoculture where a bunch of "influencers" tell everybody else what is fashionable and what is not - and people fear not being fashionable. Then the lemmings follow.
Yet, the bad effects of a monoculture are already evident - for those who aren't blind and brainwashed.
It's also the fact that well-paid work is tricky to find, so anyone with a little nous now gets into software development. And if you've only got a little nous, then all you care about is that the list of things on your CV matches the list of things on the next job spec.
It used to be that the approach was important, understanding patterns and knowing when to apply them. Finding out how much a candidate knows about development is a lot trickier than checking a list though....
I never meant you don't have to learn and use Git - especially as, as you say, it can be required by most jobs - just like you may have to be able to use Windows even if you don't like it.
Still you can use something while not being blind about its design or implementation. I never felt the need to worship what I use to feel part of the cool and anointed ones.
I'm using Git in some projects, and that's formed my opinion about it.
Git is f-ing awesome. And interactive rebase is the best thing since sliced bread.
The basic concepts of history keeping and tree building are simply brilliant.
I've used: PVCS in the mid-1990s - piece of shit. Initially, it didn't even support directories. Could only commit flat snapshots. Later it started to support subdirectories.
Visual SourceSafe - was usable and useful for my personal projects and small-team collaboration. It had its quirks. The concept of a branch as a separate directory was pretty stupid. Files shared between directories were an interesting but unmanageable concept.
Perforce: stupid, ill-conceived concept of workspaces to map the repo directories to a different checkout worktree structure. Branching was so badly conceived (separate directory, again) that I never dared to create a branch. If I had, it would have been global; there was no concept of a local repository clone.
Git: Not easy to learn from scratch, but once you get it, it's the best. Hack the code, then slice and dice it into commits as desired, then after debugging apply some surgery to those commits, and push them to the review system.
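For anyone who hasn't tried that flow, a minimal sketch of what I mean by slicing and dicing (branch names invented; the last line uses Gerrit's push convention as one example of a review system):

    git add -p                        # stage hunks selectively into focused commits
    git commit -m "parser: split tokeniser out"
    # ...repeat add -p / commit until the work is carved up...
    git rebase -i origin/master       # reorder, squash, reword before publishing
    git push origin HEAD:refs/for/master   # e.g. Gerrit-style push for review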
Not sure why this was downvoted as it makes a perfectly valid point. The original versions of Beta only held one hour, which wasn't enough for (e.g.) a film, many sports games, etc. That negates many of the things people would have wanted video recorders for.
They got round that later, but only (in part) by introducing a half-speed LP mode which also reduced the quality, compromising what was Betamax's much-vaunted advantage in the first place.
If- at the time of Beta's release- there had been a video cassette format that was better-than-broadcast-quality and the size of an audio cassette but it only held five minutes of video, would the Beta fans have bought that?
If, as I'm almost certain, the answer is "no", then we've established that even *they* agree with the principle there's always a tradeoff between quality, size and practicality. None of us liked the bulkiness of VHS, and better quality would have been nice, but in real terms there were legitimate reasons to prefer it.
While it's true that the bigger issue in the PAL world was cost rather than length, by the time Betamax hit Europe the format wars had probably already been decided by VHS's dominance in America; the inertia was likely insurmountable, even if it took a while to fully shake out.
Most v2000 systems also used relative hub rotation speeds to determine where in the reel you were. The ability to make a tape read-only then change it back to read-write was nice as well for those of us who primarily used it for recording. It also had space for a data track and in theory better 'inter-player' compatibility. I vaguely recall rumours of an auto reverse function in the design stage although gawd knows how that would actually have worked. A video read/write head doesn't seem like it could be seamlessly turned on its head.
We had a couple of v2000 machines back in the 80s.
Sadly cost and being late to the party killed it off.
To think that we're going to be forever lumbered with this sad example of "eh, good enough" design saddens me when there are so many better systems that already exist.
Forever? Not if somebody thinks of something better. Remember, Torvalds developed git as a 1-month side project when he couldn't find an available version control system that worked as he liked, without unbearable licensing terms. The entire open-source universe has proven its meritocracy-based roots. Torvalds himself would not hesitate to throw out git if something provably better comes along.
Miss the point much? Better things have come along but Git has the momentum and market share so it doesn't matter how good other systems are. They'd need a feature that Git doesn't and couldn't have before people would take the overhead of shifting.
"Torvalds himself"? Are you allowed to use his name in vain like that? (Sorry, but your post does have an air of zealotry....)
I have used Git, SVN, Mercurial and, yes, Visual SourceSafe because as a consultant I have to use what I'm given a lot of the time. I have used Mercurial for some of my own projects. I'm sorry to see Mercurial's simplicity, usability and sophistication undervalued like this.
Git's blazing performance is hard to argue with. Not having to wait for stuff to happen is a big draw for me as for many other developers. I think that's a good part of its appeal. That and the fact that it is very capable in the right hands. But Git's performance is a confidence trick, and its time may yet come.
I have always dreaded Git merge requests on big projects with many contributors. I know projects that take days resolving merge conflicts between feature branches and trunk. This is because large feature branches are a Bad Idea, one that many organisations still haven't recognised as such. But it is also because Git is based on poor mathematics.
There is a better way than both Git and Mercurial.
I'd like to see Pijul succeed. https://pijul.org/
Pijul is based on Darcs (https://en.wikipedia.org/wiki/Darcs). Darcs and Pijul are based on a mathematically sound 'patch theory' that significantly improves on Git and Mercurial performance during complex merge scenarios.
Pijul incorporates an improved algorithm over Darcs that promises to fix Darcs' occasional slow responses. But it is still not of production quality.
Anyone wanting a meaty Rust project to work on, and to improve the status quo in DVCS, please head on over :)
(Disclosure: participant and Rust learner)
I once had to submit a business case for why we should use SVN rather than VSS. Because Microsoft and all that.
It was the NHS, which tells you all you need to know about the quality of the technical decision-making process that existed at the time. Luckily the case was accepted.
I left the job soon after. You're welcome, whoever it was that replaced me.
As someone unfortunate enough to have been cursed to operate in its presence, I can truly say (from experience, *shudder*) that "no VCS" was actually better.
Or in my case, my handcrafted, timestamped, zip snapshots were MUCH better.
My favorite VSS tricks included: 'forgetting commits', 'forever locks of large portions of the repo', and my all-time favorite - 'complete corruption of the repository on sync'. That last one required a complete restore from whatever backups you had.
CVS, *eh*
SVN, better but had issues, especially with branching and large repos
Hg, nice, but bloated and can be slow with large repos
Git, small, fast, easy branching, handles large repos better than the above (even more so with recent patches).
Git was great if you didn't mind getting your hands dirty on a command line and were using Linux. Recent advances in tooling have made it friendlier to the GUI crowd and a first-class citizen on Windows and macOS. So I would rather take a simple, low-overhead VCS that works (and is being continuously improved upon) than ones that don't. And I say this as someone who's used CVS, VSS, SVN, Hg, and Git.
It's not like anyone is stopping anyone from using another VCS (well with the exception of VSS, thankfully).
1) "Forgetting commits": never happened to me
2) "forever locks": bad developers, anyway VSS supported optimistic locking too - but it wasn't the default
3) "complete corruption of the repository on sync": it is true a VSS repository was fragile and required constant maintenance to avoid corruption issues. It also needed a good network being shared folders based. Network issues could propagate to the database.
Sloppy management was a real risk. I used it for years without issues, even when one developer, after a policy required not storing .mp3 (and other) files on the file server (no tens or hundreds of TBs available back then), had the "brilliant" idea of storing them in VSS... caught when the nightly database backup time suddenly increased a lot, prompting an inspection.
"Or in my case, my handcrafted, timestamped, zip snapshots were MUCH better."
To share changes among developers in a team as well? How many developers were on your team?
Didn't really matter how large my team was when the VCS was VSS.
When working by myself: zip files were more reliable, with results pushed to VSS on a regular basis. If, no, *when*, it broke, I could quickly get back to where I was from my zips.
When working with others, the same, only it affected even more people.
It does matter - because how you manage code determines what integration issues you hit or avoid - and how developers learn to work well in a team, or not.
Hope you also had external backups of your zips, if your disk crashed....
Also I wonder if people working with VSS had all the needed updates installed, or, if working with "unknown sources" copies, never installed a patch... or never ran Analyze to check for database issues, because RTFM is hard... or ran it off FAT partitions instead of NTFS, with all the risks of a crash crippling the repository.
VSS was bad, but with some effort it worked, in a time when there wasn't much choice, and many developers had no clue what a given version of a file even was.
Wait... you are defending a storage system that was "fragile" and "required constant maintenance"?
And the phrase "network issues could propagate into the database" sounds like the techno-babble from some clueless TV show, not a real-life experience.
Here is a quick rule: A version control system is the same as a file system. Don't use it for anything that matters until it has been around for a few years with no screw-ups and no major changes. Normal events, like a power failure or slow network, should only result in a tightly-bounded loss of recent changes.
I wasn't defending it as a great tool. I just said it was better than no VCS at all.
It had issues that could be kept under control with some work by the administrator. Just like some older shared-file-based db engines that could likewise corrupt data more often than they should. That didn't stop people from using dBase and its ilk, in an era when RDBMSes were beyond the reach of many.
About my "technobabble", evidently you don't know VSS was built on SMB and thereby had all the issues file-sharing access can have over a network for atomic operations.
CVS could corrupt RCS files too, especially if you attempted to run it on shares as well.
In the second half of the 1990s, most systems and networks were much more brittle and less reliable. While NTFS already had journaling, FAT had not, and even Linux had to wait for ext3.
It's your real-life experience that looks quite limited.
We were lucky to have already a fully switched network, reliable redundant hardware, and systems were carefully maintained, so it probably minimized our issues. Your mileage may vary.
I once spent a couple of months working onsite with a partner in Beijing which used VSS.
They had the repository mounted as a network drive. They also sent out periodic reminders to people to make sure their antivirus software was up to date, because some virus had infected a checked in .exe on the network drive and VSS had promptly committed the change and propagated it to all the other clients.
I mean, there's all kinds of wrong there, but this is the kind of environment that VSS thrives in. Like athlete's foot.
(This company produced a mobile phone OS which we were porting our stuff to. An incremental build took about 90 minutes --- 45 minutes to do the build, because their build system was cobbled together with make and perl and shell scripts and DOS batch files and didn't understand what 'incremental build' meant, and another 45 minutes to flash the phone. The platform was unbelievably flaky. For example: when you exited a thread, the last thing the thread would do before unhooking itself from the thread queue was to free() its own stack... while running off it. The OS had one synchronisation primitive, a semaphore. Except the semaphore had a fixed-size buffer for the queue of blocked threads which was smaller than the number of threads on the system, which means if you had too many threads blocked on it, the old ones would start to wake up at random... excuse me, I need to go breathe into a paper bag now.)
No, enumerated folders and file names were better than VSS:
classOLD-BobEdit123-old-OLD-working-zanzibar(1) (2) (1) include in build.vb > Visual SourceSafe
At least that made it harder to lose everything to an Access db shitting itself, which some dullard had left with everything checked out and locked before they went on holiday with the laptop that did the checking out....
VSS didn't use Access, at least until version 6, the last I used.
As an admin, you could "undo" the checkouts of other users if you used the "Lock-Modify-Unlock" model instead of the "Copy-Modify-Merge" one. Then you would have needed to properly train your users. In such situations it was also useful to block repository access for the moron, to avoid him losing his changes later. The Lock/Unlock model required a quite high degree of coordination, although it was simpler for developers not used to version control.
For that matter, during an audit in those years we caught another group where all the code releases were zip files on the manager's *laptop*. Their configuration management was quite chaotic - they didn't really know what got into a build.
It looks to me that many VSS "issues" were down to users actually not having ever learnt to use the tool beyond initial setup and basic UI operations.
My stint with VSS was 1997-2002, in a group of about 30 developers; with proper maintenance it worked, and ensured good-enough configuration management for the time. I was happy later to switch to better tools, but sometimes you have to work with what you have available.
I asked to buy Perforce, but never got the budget. CVS didn't run on Windows (CVSNT did, but was quite new), Linux was in its early years too, and we were an AS/400 and Windows shop anyway. PVCS didn't allow for concurrent edits.
I had more problems later, when a Linux admin took responsibility for the CVS repository, blindly created the repo in the root partition, later found there was no longer space in the partition, blindly deleted the CVS repo to make space, and, when developers started to howl loudly, found his backups weren't working... you can have better tools available, but if you don't know what you're doing, havoc is just around the corner.
"I can only imagine that none of the contributors to this thread has ever used Visual Sourcesafe."
Even if they had access to Sourcesafe, that was no guarantee that it could be used: one consultancy (I will spare that bunch of charlatans the embarrassment) that chose it spent more time trying to recover 'lost' code than ever using it in a productive way.
That assumes that the distribution of users to repo type is the same.
If we take your numbers as accurate [they are taken from a Stack Overflow Survey showing adoption by Stack Overflow Devs, not Bitbucket Users] this could mean (exaggerated for effect):
3% of users could all be using the same Hg repo: so yes, 300,000 users but only 1 repo.
3% of users using some rather small number of Hg repos << 300,000 repos.
Most Hg users have small one-user private repos, so not a lot of code for the effort.
There's a *huge* overlap between the users that use _both_ Hg and Git.
I think the folks at Atlassian have looked at the data store, the user base, and which way the winds are blowing, and have determined that their effort is better spent supporting the growing Git base than the dwindling Hg one.
OK, Atlassian figure that financially it makes sense to lose all their Mercurial users. I don't like that, but they're a business.
But how are Atlassian going to differentiate themselves from Github in the future?
3% is a small number, but we're still pissed off and since we have to convert everything anyway we may as well change hosts while we're doing so. Was anyone else, particularly in the corporate world, holding back from putting their Git repository on BitBucket *because* they didn't like the support for Mercurial as well?
I predict Bitbucket becoming the "Mini Me" to Microsoft's GitHub
Nah, I don't flatter myself I could have beaten back the tide of the network effect, if that's what it was. The best product can still win, if it's a free choice. I think in the case of Git vs. Mercurial, it was ecosystem factors like Github that pushed it over the edge.
Anyway, Mercurial may have been easier to learn, but having used it since, I think Git has proved the more versatile DVCS.
I like the way you avoided the word "usable" there.
Versatility is great in some things, it's not top of my list of requirements for version control systems.
(I wasn't really suggesting that you could have made Mercurial beat Git, merely that your decision repeated millions of times is what killed it!)
Versatility's not top of mine either, but if you want me to learn your product with a high barrier to entry (common to DVCSes, I think), it's got to be a useful daily tool at a variety of scales/platforms/applications.
Every day, perfect niche tools die a lonely death.
I moved my company over to Git in 2014 (from CVS) and despite some rough edges it worked out very nicely. Biggest issues were that the EGit plugin for Eclipse was a bit behind the curve and UI-centric developers moaned about having to use the command line. And Git for Windows had some issues with case sensitivity and other platform peculiarities. These days though it's very stable and fast.
I've dabbled with Mercurial and it's fine and all but it's solving a similar problem to Git in a similar fashion. At the end of the day Git enjoyed the groundswell of support and it's unsurprising that businesses and tools would begin to gravitate around it.
There are complex operations which are handled better in a GUI - because it's simpler to see a lot of information at once and reduce mistakes. Especially when you have to work on complex software without a large team and dedicated resources for configuration management.
For simpler commands which don't require a lot of information displayed, the CLI can be faster.
In fairness, they were using CVS and Eclipse to sync source from 20 repos. From the UI it was bad enough, but it would have been masochism from the command line. Eclipse and its Team Sync view made it relatively easy to pull in remote changes. It still took 30 minutes to sync and up to 90 minutes to merge branches (switch branch, tag, merge other branch, tag, check in). Hence my motivation to switch to Git. At the time the Eclipse EGit plugin was okay for the basics, but it still required them to learn to do certain things from the command line, and I had to document workflows for things like rebasing.
Personally I still happily use Git through SmartGit. I'll drop to the command line for complex actions, but when I'm trying to visualize the difference between two branches, nothing beats SmartGit.
I was a wee bit earlier to that decision, but chose Mercurial as the reviews (and mine) showed it was easier to use than Git, something that was highly critical in a legacy environment (where people thought Subversion fixed ALL the issues that CVS had). At that point it was maybe 50-50, maybe Git a bit ahead due to its "made by St Linus" predicate.
Now, we don't host our source code externally, so I am much more concerned about continued support for Mercurial in Atlassian's SourceTree.
Our company was super paranoid about cloud hosting. When I moved to Git I used Gitolite but they later moved to an open source version of GitLab running on their own server.
About the only downside of Git - though this applies to Mercurial too - is that it's great for source code but no damned good for binaries or documents. You can't merge those things very easily, and a repo containing a lot of binaries soon bloats out of control. For docs we use Subversion, with binaries stored in Nexus.
If you can transform document binaries into something containing text - for example, modern document formats are just a zipped bunch of XML files - you can use Git's clean/smudge attributes to transform the files from the binary format in the worktree to a text format in the repository, diffable, etc.
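A bare sketch of the mechanism, assuming you write a pair of scripts (here called zipdoc-clean and zipdoc-smudge, both hypothetical) that convert losslessly in each direction:

    # .gitattributes
    *.docx filter=zipdoc

    # register the filter driver; clean runs on the way into the repo,
    # smudge on the way back out, so the two must be exact inverses
    git config filter.zipdoc.clean  "zipdoc-clean"
    git config filter.zipdoc.smudge "zipdoc-smudge"

If you only care about readable diffs rather than text storage, a diff textconv driver is the lighter-weight option.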
Well, it's git and the git toolchain, which seduces users and leads to a self-fulfilling prophecy.
I've always disliked the "common cold" argument: if everyone else has the flu/syphilis/ebola, should I get it too?
I'd very much like to stick with Mercurial. Does anyone know of anyone still prepared to offer a choice?
One bad thing is that it's all too easy to mess up! Merges never work well for me, so I always have to nuke and restart a couple of times.
I tried GitFlow, which was supposed to make local branching easier, but it was yet another set of commands to learn and I ended up having to drop back into rawer git to fix up when it went wonky.
And we now have repo sitting on top of git, which is another set of commands to learn and our admin has not set it up correctly, so it's yet more fun with random local branch confusion.
"Merges never work well for me"
Just as a friendly piece of advice, I'd look for either code organization or division of labour or both. If you need to merge often (and especially if merges fail) that means that your abstractions are flawed and that different people/teams make changes to the same files/structures/functions. You would probably have merge issues with any revision control system.
It looks like you use a very simple development model. In any model that is even a little complex - having different branches and promoting code across them, needing to backport fixes to previous releases, etc. - merging happens more often than you might think.
Sure, if you only work on a single branch and deploy only the latest code, everything is far easier....
Hell, I'm working on a single, fairly small app and I still feel the need to have at least a development branch and a release branch. There's a lot of cherry-picking from one to the other at times. Mercurial actually handles this pretty well.
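For reference, that cherry-picking flow in Mercurial is basically `graft` (a minimal sketch; revision numbers made up):

    hg update release     # put the working copy on the release branch
    hg graft -r 1234      # copy a development changeset onto it
    hg graft 1235 1240    # graft accepts several revisions at once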
I usually end up having to merge because I made a change without remembering to update my local copy (because I am an idiot).
So far I've never managed to successfully merge a commit, so instead I just make a copy and then `git reset --hard`. I assume that this is the intended workflow, so this is my life now.
I've used JIRA for many years and the one thing I've learned from those years is don't fuck about with it. Install it or sign up for cloud, create a project, set it to use the Agile Simplified Workflow, enforce a resolution status on close and then stop. No more customisation, no faffing around with workflows, nothing. Just create issues and work with them.
If you go down the route of customising workflows and such nonsense, then on your head be it. At that point it becomes a cottage industry, with people whining "can I have this custom field" and "my workflow doesn't match my process", and it all spirals out of control.
Well, kinda. You have to have design and architecture and requirements gathering and user research and UI design as well as tickets. And you have to organise the tickets into projects, epics, sprints, stories and tasks. But ultimately it's all tickets.
I'm working on a large government programme with hundreds of developers and dozens of teams, all run on Azure, git, Jira and Confluence. And it's working really well.
Git can be a pain, and hg is probably better but anyone who thinks it's the worst software ever created hasn't used ClearCase or Lotus Notes (spot the connection)
I get on quite well with Jira... BUT... that's probably because I'm running an old self-hosted (and now well isolated) copy from before they decided to split out into Jira Software etc.
I have had the misfortune to use Jira cloud with the workflows "improved" by others.
So, I'm inclined to think you're right - Jira used to be a good tool and has grown into an overgrown and overbloated mess that no-one sane would willingly choose to use.
I had to use Mercurial for two years, professionally, and so I'll be very happy to see the back of it and know that that will never be repeated. If this is the final nail in its coffin, good riddance!
Git isn't perfect. There are a few things that I would change and fix, were I omnipotent, but the undeniable truth is that git is simply unavoidable. Sooner or later, one ends up using it -- for a course or to work on some open-source thing, KSP mod, passion project or wotnot -- so why bother learning another system that doesn't distinguish itself in any useful way?
The complete lack of conclusive argument showing Mercurial to be clearly superior (read: it makes my daily work-flow simpler, faster or better in some material and MEASURABLE way) is enough reason for it to simply cease to exist -- it's not like git is closed source, patented or making some evil bastard rich so there's really no reason for competition for competition's sake.
Much better would be for all these repo silos (Microsoft, Atlassian and GitLab, primarily) to collaborate on making a git vNext and to fix the stuff that sucks. That really wouldn't take much.
Creating a proper concept of a user (with a fucking id) so that public names and email addresses aren't treated as versioned (and hashed) metadata would already solve one of my major complaints. Adding real file copy and rename metadata would solve the other. (Sometimes, I want others to know that a file was moved or copied even if the destination is nothing alike -- it helps others to know "where that went" or "where this came from", sometimes, or "with what that was replaced") (I was a Subversion proponent. Can't you tell?)
Oh. And make `push --force-with-lease` the fucking default (and, indeed, the only) form of `--force` for God's sake. It took me years to learn that this existed because I simply couldn't bring myself to believe that it was NOT the default behaviour of `push --force`. What sort of monumentally screwed-up thinking led to vanilla `--force` existing? Has Linus never heard of a transaction?
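For anyone who hasn't met it, the difference in a nutshell (remote and branch names are the usual defaults):

    # overwrites origin/topic unconditionally, even if a colleague
    # pushed new commits there since you last fetched: they're gone
    git push --force origin topic

    # refuses to overwrite if origin/topic has moved since your last
    # fetch, i.e. it fails instead of silently discarding others' work
    git push --force-with-lease origin topic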
As far as education is concerned, I have no worries. I'll get even the most incompetent team using git with nothing more than a whiteboard and some scribbles -- they'll hate me for it, perhaps, but I've done this enough times to know how to bring the concepts across.
>What sort of monumentally screwed-up thinking led to vanilla `--force` existing?
Personal repositories. I use it all the time to push to my personal build and test machines.
Instead of pushing directly to the common repository, use a code review system. You'll never need to know about --force-with-lease and such crap.
The project I work on was already on git (some 6 years ago), so I never really got to see any alternatives. There's an element of not missing what I don't know, of course, but in the main git seems to be OK. However, I have fallen foul of messed-up merges, and 6 months ago ended up having to do a complete tear-down and rebuild (three merges from different contributors).
Prolly using a sledgehammer to crack a nut, but now if something looks like being at all contentious I make a backup of the complete repository, and the plan is that if it goes Total Inability To Supply Uncorrupted Project I just drag the backup over the mess.
P.S. not actually had to do that... yet!
To be honest, they're largely equivalent with some differences in the detail, which is, of course, where the devil resides. I think we've all trashed repositories as we got to know the various systems, but in the end, having any kind of repository is better than having none.
I still find Git positively arcane in comparison to Mercurial.
I am assuming that BitBucket has the responsibility of migrating all the commit logs and history from Hg to Git?
No? Oh they are just going to delete them...
Good to know what will happen to Git repos stored there in around 10 years once the world moves onto the new trendy "BitMasturbate" or whatever the new RCS system is called.
I think the new Windows Terminal and WSL2 are quite sexy
https://www.youtube.com/watch?v=8gw0rXPMMPE
If your workplace is all Windows, you can install WSL2 and do all your stuff in Linux, and still run those pesky Windows applications you are forced to use.
The old WSL was dead slow and had poor filesystem support (everything rwx), no dbus, no services, but it was still better than Cygwin. WSL2 is a Hyper-V virtualized Ubuntu 18.04 with a nice Windows terminal.
The brilliant thing about Git is when something gets screwed up, it somehow convinces you it was your fault, and then you can Google pages that will agree and make you feel like an idiot.
My theory is Git is entirely intuitive if you think exactly like Linus Torvalds. I apparently don't. But I'm told it's easy if you can just keep simple graphs like https://snipcademy.com/code/img/tutorials/git/fundamentals/three-stages-01.svg in your head at all times.
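For what it's worth, that graph boils down to three places and the commands that shuffle changes between them (a bare sketch):

    git add file.c               # working tree  -> index (staging area)
    git commit -m "message"      # index         -> repository (HEAD)
    git checkout HEAD -- file.c  # repository    -> index + working tree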
That's just it... I found git rather complicated (although there are just 3 or 4 commands total for typical use), and it makes it possible to do rather stupid things to a repo too.
But git began, historically, because Linus found the existing system they used hard to scale to a codebase with as many checkins etc. as the Linux kernel. After finding only one possibly suitable "better" version control system at the time (which was not free or open source), they wrote one from scratch.
So git is designed to support complex merges and such, and to make it possible to handle a project that size. That's it. Then it just snowballed, with more and more projects using git; honestly, in my opinion the scalability (dealing with large projects) is excellent, but the user interface seems unnecessarily complex. I'm a fan of a nice CLI, but git's CLI is not that nice: the man pages document the many available commands (which is nice) but offer no quick reference or examples showing how to get started and get anything done (not nice).
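To be fair, the missing quick reference would fit on a postcard. A minimal sketch (URL and branch names invented):

    git clone https://example.com/project.git
    cd project
    git checkout -b myfix            # new local branch
    # ...edit files...
    git add -A
    git commit -m "fix the thing"
    git checkout master
    git merge myfix                  # or publish it: git push origin myfix
    git log --oneline --graph        # see what happened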
Linus wrote git because the Mercurial of the day, which was written in pure Python, was too slow, and BitKeeper, the product he wanted to use, had a jerk founder/owner who changed the licensing terms mid-stream. That's pretty much it. He also took the opportunity to make git work for exactly his workflow, which it does. Of course, Linus's workflow isn't like most workflows, so git sort of forces you to work in one specific way that Linus likes rather than in a way that is natural for most people, but so it goes. People used it anyhow because Linus is cool, so here we are.
Moving from SVN to a DVCS, I chose Mercurial over Git for a few reasons:
1) The command line of Mercurial is very similar to SVN.
2) The workflow is also very similar.
3) History is inviolate. What is committed is there, warts and all, in Mercurial just like SVN (and CVS before it).
The thing that I have never liked about git, apart from the complexity, is that history is a bit wibbly-wobbly and malleable. Now, perhaps I am a bit old school (I have been in professional software development for 25+ years) and am a little set in my ways (or, rather, like a lot of developers, am lazy and prefer the comfort of familiarity), but Mercurial just seems to fit the way I like to work rather better than git does. Sure, I can learn to do things the git way, but I'd rather continue as I am.
It's a shame, because I'm happy on Bitbucket. But now I either have to port all my mercurial repos to git, or move them elsewhere. And as others have pointed out, if you're having to port to git then there is no reason not to port *and* move elsewhere. I think Bitbucket have shot themselves in the foot here and are losing their USP. If they are going to be just another git host then why use them over another git host?
The great thing about Git is that you can edit your local history by amending and doing other surgery to commits. Interactive rebase is the greatest thing since sliced bread. Once it gets to the master repo, though, there should not be any history editing going on. This can be enforced by use of code review systems.
True, but it doesn't mean there aren't poor tools too. If I hadn't used git from the command line before Atlassian's SourceTree, I would have thought git was thoroughly useless. SourceTree is OK for very simple use, but you inevitably need to drop to the command line for anything remotely complex.
The same thing probably goes for Mercurial, which I've only used via TortoiseHg. That, and now that my company is only using it for legacy products, the lack of familiarity and the differences from git's workflow are the things that stand out. My main grievances with TortoiseHg are that it doesn't automatically refresh, doesn't get the latest version on copying a repository, and it's not always clear which is the latest release commit to update to.
Makes for some rather amusing reading, with the benefit of hindsight, though even a contemporary reader would have thought the author was clutching at straws with many of his arguments. He also never once goes into the technical details of the two systems, which is what I was hoping to read about.
(I used Hg for a few months years ago, concurrently with Git. I cannot say I strongly preferred one over the other, though I always appreciated Git's logic, flexibility and comprehensive docs)
We've been hosting Mercurial repositories with Bitbucket since 2010. We went with Mercurial because its feature set best fit the previous commercial product we were using, making it easy for me to export the repositories from the previous product and import them into Mercurial without losing the change history. Git's feature set simply doesn't match what we got with Mercurial.
That said, Bitbucket was the last major Mercurial hosting company out there, so either we self-host or we move to Git. So we'll likely move to Git. But at that point, why bother with Bitbucket? They're the #3 hosting company for repositories, behind Github and Gitlab. So we'll discontinue our commercial account with Atlassian for Bitbucket, and instead open one at Github. Because why would we go with #3 when #1 is out there?