In fairness, the git command-line interface is... not exactly easy to learn. Of course, this is a very important project, and needs to be done properly, but I feel their pain.
https://xkcd.com/1597/
Linus Torvalds will pull Paragon Software's NTFS driver into the 5.15 kernel source – but he complained about the use of a GitHub merge in the submission, saying that GitHub "creates absolutely useless garbage merges." Early last month Torvalds gave Paragon Software a nudge that it really should submit a pull request – an …
This post has been deleted by its author
>Rules?
The basic rules themselves are so simple as to hardly need stating.
But a player will want to familiarise themselves with the minutiae of the game
<looks over top of glasses>
Hmm. I see you’re familiar with the game. Well played. Well played indeed.
Of course, Mornington Crescent is only one stop away on the Northern Line. But if we’re on the Charing Cross branch and there’s no step-free access I’m not sure that’s a legal move. I may have to defer to the TMO on this one.
the git command-line interface is... not exactly easy to learn
If only we could identify who was responsible...
However, there's quite a serious point here. Given the shortage of Linux maintainers, it would presumably help if otherwise competent developers weren't being avoidably tripped up.
It's unfortunate that the git "porcelain" commands we use on a daily basis are essentially scripts on top of the "plumbing" commands. Git's needless complexity, with multiple ways to accomplish the same task and its esoteric command names (git symbolic-ref? A sculpture of a football referee?), does it no favors.
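To make the porcelain/plumbing split concrete, a rough sketch (not an exact equivalence; checkout/switch also update the index and working tree, and git switch assumes a reasonably recent git):

git branch topic && git switch topic     # porcelain: create a branch and move onto it
git update-ref refs/heads/topic HEAD     # plumbing: point a new ref at the current commit
git symbolic-ref HEAD refs/heads/topic   # plumbing: make HEAD point at that ref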
I'd say the GIT CLI is no harder to learn than any other CLI; what makes git tricky to understand is its choice of nomenclature, which actively fights the intent of the command.
There may well be historical reasons for "pull request", but let's face it, the vast, vast majority of git users use a hosted repository, so names that preserve the ambiguity of git's distributed nature are pointless these days. It would be much better named "Merge Request" or "Merge Review" in my opinion, as that is effectively how it's always been used in any project I've been involved in.
Rebase is another crap name used by GIT. Despite having used git for the best part of 20 years, I would struggle to explain its function in detail without some form of reference. A "magic reset update merge keep working copy changes" command might be a better name, as at least it conveys some of what rebase does, even if it would be a bugger to type on the CLI (but that's what the tab key is for in your terminal of choice).
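For anyone else who would struggle: the common case is roughly the following (a minimal sketch, assuming a feature branch that has fallen behind a trunk called main):

git switch feature
git rebase main      # take the commits that are on feature but not on main,
                     # replay them one by one on top of main's current tip,
                     # and leave feature pointing at the rewritten copies
git rebase --abort   # bail out and restore the branch if it all goes wrong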
A bit like the XKCD, somebody has to link to rebase considered harmful.
I did a rebase once. It was stressful. I think I triple-checked every command. Had no choice. I was working on an automated e-mailer shell script [sends analysis files from an SQL database within a zip file attached to an automated e-mail to several people via a crontab entry using shell and Perl] and forgot to sanitize the script first. Ooops. The script, I must say, is brilliant. My mistake unfortunately ruined my moment of triumph.
As many of us probably already know, ALL git history is available to anyone with access to the repo. So if you accidentally commit something you should not, a key, a token, a pass phrase, people's e-mail addresses, whatever it might be, if you do NOT rebase to a point before that commit, it will be visible FOREVER, even if you do a subsequent edit to remove it.
(So at least the option to do a 'rebase' was there to fix it)
As many of us probably already know, ALL git history is available to anyone with access to the repo. So if you accidentally commit something you should not, a key, a token, a pass phrase, people's e-mail addresses, whatever it might be, if you do NOT rebase to a point before that commit, it will be visible FOREVER, even if you do a subsequent edit to remove it.
It’s possible to rewrite git history. Like most of git it’s some magical shell incantation, but you can do a regex search/replace on the entire history. I’ve done this before to remove public email addresses from repos before pushing to GitHub.
You do need to ensure everyone gets the modified repo though so it may not be a trivial task.
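For the record, the usual incantation these days is the third-party git-filter-repo tool rather than a raw rebase (the tool choice and file name here are my assumptions, not the poster's):

# replacements.txt, one rule per line: literal text by default, regex with a prefix
#   hunter2==>REMOVED
#   regex:[A-Za-z0-9._%+-]+@example\.com==>redacted@example.invalid
git filter-repo --replace-text replacements.txt
# every commit hash changes, so everyone else has to re-clone or hard-reset afterwards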
what makes git tricky to understand is its choice of nomenclature
true. I had to re-train my thinking a bit to familiarize myself with what the @#$% they actually MEANT by "that" (whatever 'that' is for the current context). It did not take long before I "got it".
I've used P4, cvs, svn, and git. I prefer svn for my own things, but git's ok for public and work-related projects. I've also (*cough*) experienced "Source Safe" back in the day and I *ahem* "perceive myself to have been involuntarily violated and become dirty" because of it...
Re-trunk would imply there is a special trunk branch, but there isn't. In git all branches are equal. (Processes for humans might define some to be more special, but that is not part of git itself.)
I learnt git by just using git from the command line for a project I was starting, and just took the learning time. Having rather different terminology overall helped, because it reminded me the underlying conceptual model is different.
Also, spending time understanding that model really helped. A D-VCS is very different to a centralised one; trying to use git as if it were a centralised VCS is doomed to repeatedly being caught out by the differences.
PS. Barons Court (applying Mont's bypass, and you're in Nidd).
I'd say the GIT CLI is no harder to learn than any other CLI; what makes git tricky to understand is its choice of nomenclature, which actively fights the intent of the command.
There may well be historical reasons for "pull request", but let's face it, the vast, vast majority of git users use a hosted repository, so names that preserve the ambiguity of git's distributed nature are pointless these days. It would be much better named "Merge Request" or "Merge Review" in my opinion, as that is effectively how it's always been used in any project I've been involved in.
To give you just a little history, then: the "Pull Request" to which you refer is an invention of one of those hosted repositories, GitHub, and not part of core Git. GitLab calls them Merge Requests.
I don't blame git for not being friendly enough for hosted services (as you mentioned, the better hosted providers have taken steps to help their users who just want version control; SVN/Perforce would fit them better, but GIT won). I do blame GIT for being needlessly obtuse at times, though.
Frankly, it's one of the few big FOSS apps that clings to the old 90s "RTFM / man git / why haven't you read and memorised thousands of emails in the mailing list" response to basic questions. The real answer is that the docs suck for most humans, and you need to spend a lot more time on the technical authorship and supporting documentation, because it IS a feature of the software, rather than responding with a passive-aggressive "I'm smarter than you, and I won't help"....
Could not have said it better. The CLI for git is a fantastic tool, but just like any advanced piece of software, you have to make friends with it first. You wouldn't sit down to write something in C having only a passing knowledge of the language, nor would you write a BIND config or zone file without significant effort to learn the lingo of the Berkeley DNS named server.
The git nomenclature does suck! Not that it is bad, but that it is completely non-intuitive for someone who may otherwise be very skilled and fluent in a number of other projects that are now hosted on Github. The git command line is its own little language.
To the git project's credit, there are very good tutorials to help you make friends with it, but the downside is they are 30-page tutorials that are not geared to answer "the top 10 things you normally need to know to use git". (there are some of those floating around though)
So there is learning involved (the choice of nomenclature just makes things that much more uncomfortable). However, the choice of nomenclature does begin to make more sense the more you become steeped in git. (but that doesn't take the sting out of it the first time you sit down to use it...)
Thank you folks. All of you. I'd always assumed that my inability to grok git was due to my stupidity, and perhaps a bit to a loss of mental acuity as I slog through my eighth decade on this planet. But you make it appear that perhaps part of the problem is that git really is as obtuse and confusing as it seems to me. Thanks ever so much.
As far as the confusing and obtuse parts of git, I wonder if that is related to how git has some verbs where the subjects don't quite follow normal UK/USA speech patterns. It is like a Finn or a Swede who asks "I'm a bit low on cash, can you borrow me $20?" It is hard to parse because the direction of the word "borrow" seems backwards to most people. I've heard that enough times I should figure out the proper way to ask for my money back.
the git command-line interface is... not exactly easy to learn.
I dispute the accuracy of your opinion. Example follows...
From the article: you should never ever use the github interfaces to merge anything
When I read that part, I knew EXACTLY why Linus would say this. For a raw pull request (i.e. submitting your dev branch's differences from the official repo) the github interface works very well. For actual editing and stuff, no.
Therefore you should just do all of your merges and any additional edits using a decent local merge tool (maybe 'meld'?) between your local working copy and the dev branch on github. Do a 'git pull' on your local copy of the dev branch first, so that what you're comparing against for your merge is at least CLOSE to what you'll be doing the pull request for (assuming that no major edits in the official branch affect the pull request).
Then use 'git commit' followed by 'git push' to your dev branch, and then do the pull request.
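Spelled out, that workflow is roughly this (a sketch only; the branch, remote and tool names are placeholders):

git switch my-feature
git pull origin dev              # make sure you're comparing against something close to upstream
git mergetool --tool=meld        # resolve any conflicts locally rather than in the web UI
git commit
git push origin my-feature
# then raise the pull request against dev on github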
You really only need to memorize and master a few git commands:
git status
git pull
git push
git commit
git add
git rm
The rest of the commands can be looked up in the docs "whenever", as needed.
NOT that hard. seriously NOT. But yeah you need a token. And I solved that.
If you search for it on github, I wrote a simple program that lets you use a pass phrase to decrypt an encrypted file and put its contents in the clipboard (for X11 systems, though). In short, you encrypt the github-issued 'password' token in a file, run the utility from a command shell (say 'bash') and enter your pass phrase, and the github token ends up on the clipboard. Then, when you do 'git push' and git prompts for user/pass, you type in the user and paste the token when it prompts for the password. This is due to github's recent policy change regarding pass phrases and the git command line, and the method I use can be put on computers you do not own (like customer machines) as needed, to access github repos without revealing your token.
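I haven't seen that program, but something roughly equivalent with stock tools would be (gpg and xclip are assumptions about what's installed, and token.txt / token.gpg are placeholder names):

gpg --symmetric --output token.gpg token.txt                        # one-off: encrypt the token under a pass phrase
gpg --decrypt token.gpg | tr -d '\n' | xclip -selection clipboard   # when needed: decrypt straight onto the X11 clipboard

git's built-in credential helpers (git config credential.helper store / cache) are the more usual route, but they keep the token on disk or in memory rather than behind a pass phrase, which may be exactly what you don't want on someone else's machine.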
That would probably be enough commands for my personal projects, when it's only me making changes, although I'd probably add `git diff` as well. For work, when my colleagues might inconsiderately change files that I'm working on too, or I need to backport fixes to a release branch, then `git rebase` and `git cherry-pick` are essential too.
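The backport case is short enough to show in full; a sketch, with the branch name and commit hash as placeholders:

git switch release-1.2
git cherry-pick abc1234      # copy the fix commit from the development branch onto the release branch
git push origin release-1.2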
If it's an important project and needs to be done properly, use subversion (and it's not as if I like that very much either). If it isn't, go ahead and let git play the shell game with your changes.
To this day I have no idea why git is supposedly so fantastic if it gets absolutely everything round its neck when committing and reverting. After messing about with it too much, I finally categorised it under "pointless stuff I have no time for".
Git is a DISTRIBUTED version control system. The distributed part makes it fundamentally hard! The arcane commands permit, for example, someone who has spent months developing a feature on their own local branch to painstakingly adapt it so that it appears to be a sensible sequence of changes to the latest code in the main repository. That's exactly what Linus requires, not a commit history of "Got it working", "Grab latest kernel" (dozens of changes that have already been made in the main repository), "Whew, got it working again", repeated.
Two developers I respect told me the way to really understand git is to read its source code. Otherwise, as commenters here have said, you rely on lists of commands that mostly work, until you develop a feature on a long-lived branch.
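The "painstakingly adapt it" step is usually an interactive rebase; a minimal sketch, assuming the main repository's branch is origin/master:

git fetch origin
git rebase -i origin/master
# in the editor that pops up: reorder commits, squash the "whew, got it working again"
# ones into their parents, reword messages and drop the noise, leaving a tidy linear
# series on top of the current upstream tip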
I know it's distributed, but even when it's not really (i.e. I know only I'm using the branch and I'm up to date), and I want to do something like delete a few files, rename a few others, and add new files in the same revision, it does things like deciding some deleted files have been renamed to a new file because their content is similar. Apparently the most reliable way is committing the deleted files in one revision, then committing the renamed files in another revision, then committing new files in a third revision. So much for Linus' nice clean commit history.
Even reverting one single file becomes an odyssey in itself which could take other files with it.
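For what it's worth, the rename "detection" is a display-time heuristic rather than anything recorded in the commit itself, so it can be turned off instead of splitting the work across three commits (the config names assume a reasonably recent git):

git diff --no-renames            # per invocation
git config diff.renames false    # for diff and log output
git config status.renames false  # for git status output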
If you really have to read git's source code to understand how it works, then that's a failure. It should have a handful of understandable commands with predictable outcomes.
And as for git's supposed raison d'etre, didn't Linus just say GitHub merges were useless garbage anyway? What task does GitHub call to perform the merge?
Of course! One is requesting a pull and the other is pulling a request : the second one is not the request of a pull, nor is the first the pull of a request, which should be obvious to anyone. However, if you disagree you can always submit a pull request to request an update to request-pull.
......but my problem is MUCH simpler.........
*
Has the next kernel update from RedHat actually been tested by anyone? I'm currently FED UP with having to worry about whether 5.13.13 might be a problem, since my (multiple) machines are all up to date on 5.13.12. It wouldn't be so bad......except for the treadmill of having to deal with this EVERY WEEK!!!
*
From 1982 to 1985 (i.e. more than three years), CP/M-80 was stable at version 2.2.
*
From 1986 to 1992 (i.e. more than five years), MS-DOS was stable at version 3.3.
*
From 2003 to 2005 (i.e. more than two years), RedHat version 9 (retail) was stable.
*
Today.......EVERY WEEK THERE'S AN UPDATE.................................................
*
Am I missing anything here?
*
Signed: NotAnArorak (just an ordinary person)
If you don't want constant updates the solution is simple : don't follow a dev track which is there specifically to allow frequent, poorly tested updates.
From your examples, I would suggest you want the releases marked 'LTS', i.e. long-term support.
Try Debian. They're exceptionally careful about what goes in, and you won't suffer much harm if you miss out every other release for even longer life.
So yeah, things were a lot more stable when the only way to hack a computer was to sit in front of it.
* - yeah, I know, the Internet existed in 1992, but the number of people who actually had access to it was pathetic and the hacking culture was yet to be invented on the scale it has become today
5.13.13? And you have 5.13.12?
Oi vey... s'pah! You're out of date, you need 5.13.14 if you want your bluetooth earbuds to work.
You will need 5.14.1 or later if you also expect your Realtek USB Bluetooth dongle to work without having to faff about with renames and soft links in /lib/firmware/rtl_bt/
And that's only the first half of this week!
May I humbly submit to the overlord of the One True OS Kernel that it might, just maybe, be an idea to think about doing so, given all the grief that is currently being caused by state-level groups who are actively trying to subvert entire supply chains ?
Not that I would be so bold as to tell The Great One how to manage his pet project that powers the Internet, worldwide communications, space probes and practically everything that is not a desktop PC or a laptop.
But it might be worth considering . . .
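If "doing so" means GPG-signing contributions, as the reply below suggests, the mechanics are already there in git; a minimal sketch, assuming a GPG key is set up locally and using placeholder key and tag names:

git config user.signingkey ABCD1234          # which key to sign with
git config commit.gpgsign true               # sign every commit from now on
git tag -s v1.0 -m "signed release tag"      # or sign individual tags
git verify-tag v1.0
git log --show-signature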
Because anybody can create a GPG key, and a lot of changes are made by people who don't have a company behind them. A GPG key linked in some verifiable way to Paragon could be trackable, but a GPG key linked to an individual could lead to a real one as well as to a fake one. Keys can also be stolen, so even if you know that someone is definitely legitimate (rather than someone who has submitted legitimate code thus far because they'll be inserting the dodgy stuff later), you don't know for a certainty that it's them submitting that code.
If you know everybody involved, having a system where you verify who the contributor is as your primary trust system can work. If you let everyone on the planet submit code, then that approach has reduced utility. It doesn't make it useless, but don't rely on it to save you from any particular situation.
... feels like the kind of thing that'll have an 'accidental' *uid = 0 buried somewhere in it.
Finally. 8^/
I've been thinking that this is what would eventually happen (sooner rather than later) ever since it was announced.
I really see no need for it, none whatsoever.
Not in or within the Linux ecosystem.
I've been using Linux for well over 10 years now and I can't recall the last time I had to look into an NTFS-formatted drive.
Go over it with a fine-tooth comb, Linus ...
Indeed ...
Better yet: how about rolling back all this BS and leaving things as they were?
It's disaster just waiting to happen.
O.
"I've been using Linux for over well 10 years now and I can't recall the last time I had to look into a NTFS format drive."
So because you don't need it, nobody else does either? If I unleashed that theory on the Linux kernel, I'd shear off a ton of things I've never used. Compatibility with a number of processor types which I haven't used and don't plan to? Drivers for hardware that I don't have and that isn't modern enough that I'm likely to get? Lots of stuff in there I don't need, but that's a terrible argument for leaving it out.
As for uses, I have repeatedly mounted NTFS disks on Linux recently. Sometimes it's just reading (Windows laptop died; quick, can I get the hard drive out and recover some documents?). There are two situations which occur with regularity where writing is important. First, when repairing a Windows drive in some way, including disinfecting something that was strong enough to make it hard to delete in Windows. Guess what. In order to mount the disk read-write, Windows has to shut down cleanly and clear the journal, which Windows never does by default anymore. I have to boot the drive into Windows, hoping the malware I'm trying to remove won't take any new steps on startup, just to restart it. I would value an NTFS driver that didn't make me do that.
Second, NTFS is sometimes used as the default filesystem for large external storage devices. The users plug it into their Windows machines and it shows up fine so they don't change it to something else. So to share a hard drive that I don't have permission to reformat, NTFS support is useful. If the NTFS driver has vulnerabilities, by all means hold it back until they're patched. But don't exclude it just because you don't use NTFS.
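For the "Windows didn't shut down cleanly" case specifically, the existing FUSE driver can at least be told to throw away the hibernation data; a sketch, assuming ntfs-3g is installed and with the device and mount point as placeholders (this destroys the hibernated Windows session, so only for drives you were going to repair anyway):

sudo mount -t ntfs-3g -o remove_hiberfile /dev/sdb1 /mnt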
The word "git" has always been defined as "mildly derogatory, meaning a foolish or contemptible person" - that was seen as a joke originally when Git first appeared. A foolish or contemptible person is not an idiot and can be very good at some things (like writing software) even if you don't like them ... so the naming was a warning, not a joke - and it has turned out to be simply a fact of life.
I reckon merging Paragon's NTFS driver is an excellent idea. Leaving aside the obvious Linux/NTFS debate/dichotomy, interoperability between Linux and Windows has been and is still a problem, and making it easier will help the average user.
Hopefully, Google will also incorporate this driver into Android; if so, it would make OTG connections to external media much easier (for now only some Android implementations have proprietary NTFS drivers; Huawei, for instance). Also, merging Paragon NTFS into Android would make interoperability with PCs and USB sticks something that users didn't have to worry about. Until now, I've found the lack of compatibility a real pain.
I think all your use cases would be adequately covered by a GPL'd NTFS driver built on top of Fuse though. It's not like someone is ever going to want to boot Linux off an NTFS drive, is it? (Or are they??)
After using it for well over a decade and *really* hating it when first introduced to it, I quite like Git now and even run the odd training session for newbies from time to time. I don't think I could stomach going back to svn even for a personal project.
Everything anyone needs to know about git for day to day use is covered by the first two or three chapters of the Git book. https://git-scm.com/book It's a dead easy read too.
> covered by a GPL'd NTFS driver built on top of Fuse though.
Which of course has existed for years ("NTFS-3G"), and most distros bundle it. I have not had any trouble accessing removable NTFS-formatted drives with Linux during the past 15 years or so.
Hopefully the kernel driver works at least as well. In theory it could have better performance, but I'm not sure there are use cases where it matters. For me as a Linux user, copying files on and off removable disks has been the only reason to mount NTFS volumes on Linux, and performance is not much of an issue for such use.
You're missing the point, it's not the interface per se that is the issue but some of the underlying logic and (especially) the naming conventions for various concepts, as has been illustrated in many posts on this thread.
Note that the story kicked off as a result of problems originating in a lovely cuddly web gui. There's more to computing than UIs !
I have never liked projects that seem comfortable with compiler warnings. They are there for a reason.
If you don't understand the warning, you shouldn't be writing code.
If you do understand the warning, then you damn well know how to write it out.
If a project that has always compiled with 0 warnings suddenly throws a warning you look damn hard.
If a project that has "always thrown 100 warnings" throws one more then it's just more noise.
I've used compilers where you turn on all warnings and watch carefully for every warning given. And I've used compilers where almost all warnings had to be turned off, because 95 percent plus of the "warnings" were either down to some obscure, ambiguous junk stuck into the language "standard" or the compiler was just simply wrong. Yes, I'm looking at you, GCC.
So turn off all warnings. Get code working. Turn on warning for a pass just to see if any of them are actually legit. Fix the very few that are. Turn off warnings again.
Then you have compilers that merrily compiled incorrect code (no errors / warnings), so you had to run third-party code through real compilers to catch serious bugs. Yes, I am looking at you again, GCC.
Then you have the "grammar police" compiler warnings: about perfectly legal and safe code that does not fit some guy's opinion of what "good coding practice" is. The first to be turned off. Total bollocks.
I love compilers where you can turn on all warnings by default. They save so much time. Not many of those about. In fact I have not used one on a shipping project in more than 15 years. And that one cost $3K per seat. And it was worth every penny. But that was for an embedded project. So real engineers. But for everything else, cheap and nasty.
Sure, one can attach a vice-grip to the steering wheel stem to steer it. One can memorize the colors of the wires so one will know which ones to short out to start the car or turn on the windshield wipers. Without pedals one can operate the throttle by pulling a string. (More difficult to operate the brakes that way, but that's why dashless, pedalless cars have hand brakes.)
Sure, one can do all these things.
When we're all old we can tell our grandchildren, "When I was young, we didn't just have to walk to and from school uphill in the snow both ways while fighting off rabid bears, we also had to use git for source control, and we liked it." Bah!
According to the author, it is because he is egotistical and likes to name software after himself; first "Linux", then "git".
Also, his mother tongue is Swedish (he is part of the substantial Swedish-speaking minority in Finland), so he is unlikely to speak English with an East Tennessee accent.
Actually, The Man himself wrote (in "Just for Fun" p 88):
And Ari Lemke, who insured that it made its way to the ftp site, hated the name Freax. He preferred the other working name I was using -- Linux -- and named my posting: pub/OS/Linux. I admit that I didn't put up much of a fight. But it was his doing. So I can honestly say I wasn't egotistical, or half-honestly say I wasn't egotistical. But I thought, okay, that's a good name, and I can always blame somebody else for it, which I'm doing now.