Can you hear that sound?
It's Windroids salivating and Fisher-Price Programmers rushing to comment.
Will they get here before the exploit is patched...
Probably not.
A bug discovered in the widely used Bash command interpreter poses a critical security risk to Unix and Linux systems – and, thanks to their ubiquity, the internet at large. It lands countless websites, servers, PCs, OS X Macs, various home routers, and more, in danger of hijacking by hackers. The vulnerability is present in …
Maybe they won't get there before it's patched, but who gives a shit? It'll take forever for every single thing out there that's got bash in it to actually get upgraded. Sure, the major distros will be quick off the mark, but what about all the printers, network switches, VM hosts, etc etc & etc? Sounds like there's a lot of kit out there that is wide open as of this point in time.
Bash is a rather fat shell. Embedded systems which run Linux often use BusyBox instead, or another lightweight shell. Does anyone know if all POSIX compliant shells which support shell functions (zsh, pdksh, etc.) are vulnerable, or is it just bash? And what about the t/csh heretics?
I misread that and then thought it was a typo, but I see what it means now.
Anyway, for those who aren't full time linux sysadmins (and I count myself in that group - although I'm learning fast) if you're running a boggo install of Debian Wheezy or any supported Ubuntu variant, give this a shot:
sudo apt-get install --only-upgrade bash
(use sudo as and if required, of course)
Wheezy main and Ubuntu have patches in the repo already - bash_4.2+dfsg-0.1+deb7u1 on debian vulnerability matrix thingy.
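Once that's done, it's worth sanity-checking with the one-liner that's been doing the rounds (harmless to run; 'x' is just an arbitrary variable name):

```shell
# On an unpatched bash this prints "vulnerable" before the test line;
# a patched bash treats x as a plain string variable and only echoes
# the test (early patches may also print an import warning to stderr).
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```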
As a well regarded security consultant just IM'd to me, and I'll admit I'm paraphrasing here, PATCH FUCKING EVERYTHING.
Steven R
Of a more clueful than average but non-expert's mind (mine. Well, I call it a mind..) whirring:
I've just checked my fully-updated (so my system tells me) Linux Mint install, and the test script gives a 'busted' result, so my PC is insecure. I note that I can't sensibly remove bash even though there are other shells installed (because I'm not sufficiently expert to work out how to replace bash and keep things functioning) and I am having trouble finding a .deb for a fixed version of bash.
Hang on - I've been mildly panicking and missing the flippin' obvious. Deep breath. Go to Debian site. Go to packages for the unstable build, download relevant .deb file for my kit. Got it. Now use GDebi to install it. Eh? The install's hung, let's have a look at the terminal window. It's asking me a question.. hey, I didn't know the terminal window in that thing was interactive before! Thought it was just a log of what's going on. Learn something every day.. OK, click in window, type Y, I do want to install the maintainer's version. Hmmnn.. same thing again for this other bit associated with it. GDebi's done. OK, test the terminals I can see available from the menus - both now NOT giving 'busted'.
So I now appear to be as safe from the bash bug as anybody's likely to be. phew. I can't see yer average PC user managing that lot even if they are using Linux (and I have introduced several people to the joys of using Linux, over the years) though.
So why was a fully-updated (because I always install updates right away) Linux Mint install using a vulnerable version of bash if fixed versions are already available? I'm not being critical, I'm presuming there must be a good reason and would like to know.
If you're running CGI scripts with BASH, then this is the slapping you wanted.
Anyone who gets to an internet router which shells out for ping/traceroute already has control.
Who puts bash rather than busybox on an embedded device?
It isn't good, but it is more like worrying about a thief getting through the non-locked internal doors in your house.
"but what about all the printers, network switches, VM hosts, etc etc & etc?"
What sort of printer or network switch would be running bash? Those which I can think of all run proprietary OSs which have their own basic toolset.
May be a bit off the mark here, but that remark sounds a bit like the numerous claims in the media during 1999 that microwave ovens would start exploding as their clocks rolled over to the year 2000
(despite such clocks not having any concept of date...)
cue downvotes.
It's worth worrying about modern commodity home routers though, because a number of brands have started running cut-down Linux distros in recent years - albeit with largely custom toolsets ... as someone commented already, bash is a fairly porky dynamically linked binary - the sort of firmware in such devices tends to prefer simpler and often statically linked binaries (avoiding the need for large numbers of shared libraries on a device which has a relatively small toolset).
I'll have my server patched in a minute… anything I have the source code to, no problem. It's all the commercialised crap that's a problem.
-----
make[1]: Leaving directory `/tmp/portage/app-shells/bash-4.2_p48/work/bash-4.2/po'
>>> Completed installing bash-4.2_p48 into /tmp/portage/app-shells/bash-4.2_p48/image/
strip: x86_64-pc-linux-gnu-strip --strip-unneeded -R .comment -R .GCC.command.line -R .note.gnu.gold-version
bin/bash
ecompressdir: bzip2 -9 /usr/share/man
ecompressdir: bzip2 -9 /usr/share/info
ecompressdir: bzip2 -9 /usr/share/doc
>>> Done.
>>> Installing (1 of 3) app-shells/bash-4.2_p48
>>> Setting SELinux security labels
-----
Told you so. :-)
"What about your router?"
Pretty sure Drayteks run Busybox (I don't allow remote access to my management page, only VPN so I can't check at the moment for reasons I'm too lazy to explain) but a colleague tells me that BT Home Hubs run Bash.
So, that might be causing some BT people some headaches this morning, if true.
Steven R
"It's Windroids salivating and Fisher-Price Programmers rushing to comment."
Why not? You aren't a proper Reg reader unless you gloat when an OS other than the one you use has problems. Still, bravo for being defensive even more quickly - never seen that before :)
I guess this means that Linux based systems are likely going to continue to be the most risky OS for remote exploits on the internet for the foreseeable future.
I have recently migrated most of our webservers to Windows Server because of the much higher number of on-going security issues I found when using an OSS stack.
"I guess this means that Linux based systems are likely going to continue to be the most risky OS for remote exploits on the internet for the foreseeable future."
Are the only keys that work on your keyboard CTRL + C & CRTL + V?
Just as a (mainly) Windows user, I think you're a 13-year-old moron who spends his nights on too many forums salivating over <insert female movie star>, whilst having pointless arguments with Eadon.
GROW UP!
Sorry, I know, feeding trolls and all that.
>>"You can only do things within the privileges of the web server"
I don't think that word "only" means what you think it means. Give me arbitrary execution on a server with Apache's privileges and I can do quite a lot with that. At the very least I can connect to whatever database powers your site and scarf all your data, read all the code of your site (looking for other vulnerabilities) and we haven't even got to subverting your site to serve malware, yet.
Fortunately I don't think so many webservers these days run CGI, do they? But still, there is a reason experts have classed this as a '10 out of 10' for seriousness and it's not because they know less about it than you do.
Also note that you're only talking about the HTTP vector. There are others, though that is the most likely.
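To make the HTTP vector concrete, here's a rough simulation of what a vulnerable CGI setup does - no real server needed, and the header value and echo text here are purely illustrative:

```shell
# A CGI web server copies request headers into environment variables
# (User-Agent becomes HTTP_USER_AGENT) before running the handler.
# If the handler is bash, or spawns bash, a crafted header gets executed.
export HTTP_USER_AGENT='() { :;}; echo INJECTED - attacker code ran'
bash -c 'echo serving the page as normal'
# Patched bash: only "serving the page as normal" appears.
# Vulnerable bash: the INJECTED line prints first.
```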
Give me arbitrary execution on a server with Apache's privileges and I can do quite a lot with that. At the very least I can connect to whatever database powers your site ...
using whose login credentials? You don't think that read-only access to limited data is the same as full access to everything, and to get that password anyway you need access to the scripts that employ them, and Apache does not necessarily have access to the scripts.
>>"using whose login credentials? You don't think that read-only access to limited data is the same as full access to everything and to get that password anyway"
No password, no login credentials. The post I replied to talked about "only" having privileges as the webserver. I'm quite right to point out that webserver privileges allow a huge amount of dangerous activity. If I can execute arbitrary scripts as the apache process I can do all of the things I described and more.
>>"and apache does not have access to the scripts necessarily."
You're creating your own scripts with this exploit over CGI.
You might be able to connect but you can't scarf any data from it. Any sensible web admin will have configured it not to allow random SQL to be run over the connection between the web server and the DB.
Or at least that's how I've been doing it, and encouraging everyone else to do it, since I discovered you could do that well over a decade ago.
>>"You might be able to connect but you can't scarf any data from it. Any sensible web admin will have configured it not to allow random SQL to be run over the connection between the web server and the DB."
The number of web applications that assemble their own SQL: very high. Even if they don't assemble it dynamically they read it from a file of SQL statements on...guess where: the web server.
Number of web-applications that retrieve data ONLY by pre-created stored statements in the database: far, far fewer.
Number of web-applications where even pre-created stored statements couldn't be abused to extract tonnes of confidential data: vanishingly small.
In short, being able to execute arbitrary code with the privileges of the web server is a massive security flaw and don't pretend otherwise.
I'm not pretending otherwise. If people can't be arsed to put decent security on their web site, don't try to blame it on something else. It's very easy to write procedures to do everything you want and prevent abuse, and as a web manager it's very easy to prevent coders from writing (or at least running) SQL on a world-facing server. You should do that anyway - not blame your fuckup on someone else finding a way to exploit it.
The phrase "given enough eyeballs, all bugs are shallow" is mentioned a lot, but there is rarely a mention, and never praise, for "fumbling fingers". Given a large population of fumbling fingers, all bugs are hit multiple times. Then you only need one poor (but intelligent) sot to say "why did that happen?" and then go rooting around in the code. But remember, likely this bug was found because one person got clumsy with an editor in a script and wondered "where did that crap come from? oh, that looks like..."
Dyspraxics fall all over themselves to be helpful!
In "Inviting More Heartbleed" (paywalled here ... what do you think you are doing, IEEE?), Alan Geer says:
At this point, we should ask ourselves a core question: Does looking at code actually work as a quality assurance mechanism? DES got more study than any other crypto algorithm ever will and serves as an existence proof that eyeballs can work. Evidently the eyes on it were pretty good, better than the open literature knew at the time. But the DES algorithm, even in optimized implementations, seldom runs longer than 2,000 lines of source code, whereas OpenSSL is more than 2,000 files with north of 600,000 lines of content. Does that mean OpenSSL needs 300 times as many eyeball-years to get it as good as DES? Perhaps the count of available eyes should serve as a limit on the size of a code base.
Bruce Schneier has asked whether security bugs are rare or plentiful. We don’t know. Theo de Raadt’s contention that all bugs are security bugs seems a bit too strong but better that than too weak. Either way, will a determined effort to find bugs yield security value? Yes, if bugs are rare enough that by removing what we find, we materially lower the count of bugs still in operation. If, by contrast, bugs are so plentiful that we can’t make a dent in the overall supply, then finding more is a waste of time as the ensuing work factor doesn’t change the equation one iota.
Given that it’s harder to find bugs in complex operating environments than in simple ones, is there something about how we do things today that has caused us to pass a threshold of complexity, a threshold beyond which quality assurance, no matter how we attempt it, will be infeasible at the level of effort we can or will put to the problem? Again, is the eyeball supply in a continuing shortage such that we should manage it? Have we reached “peak eyeballs” the way some say that we’ve reached “peak oil?”
I don't know who Alan Geer is, and with the following quote from his article I can't be bothered to find out:
At this point, we should ask ourselves a core question: Does looking at code actually work as a quality assurance mechanism?
Good software testing doesn't rely on a human looking at the code. Pair programming and code reviews should be used to ensure code is readable, easy to understand and matches business requirements. Actual testing should be automated with unit tests, integrated tests, load testing and tools such as fuzzers.
"I don't know who Alan Geer is, and with the following quote from his article I can't be bothered to find out"
Frankly, you should.
You should also stop jumping at words like a neurotic. In my opinion, anything short of using a theorem prover to show that your code does exactly what it says on the tin is "looking at code". And then you need to look at the tin...
DAM good analysis there.
I never had much faith in eyeballs at code reviews that I chaired.
Compiler listings(1) and robotic pretty prints(2) I found were useful. I dreamed of analysers that would create flowcharts, but never worked with any. Lint was good for C, but so much today is in various scripting languages.
(1) for misspelled variables and simple general correctness.
(2) for code engulfed in previous comments not properly terminated, or vice-versa.
>>"If the former it's scary to think just how many holes there must be out there"
Article says it's been present since 4.3. IF that is correct then that means since around February this year. Obviously distributions will vary according to precisely when they became vulnerable, but we're looking at that sort of time span of vulnerability where it wasn't known. Patching everything may take some time.
>>"No, the article says The vulnerability is present in Bash through version 4.3, which is somewhat ambiguous, but means basically up to 4.3. The article also says the bug is 22 years old."
Oh shit. Thank you for correcting me. That means we've had a major vulnerability for a really long time. I find it really unlikely that nobody out there has known about this.
Note, the vulnerability notice I've read says the problem exists since 3.0, which would mean at least since 2005. I'm not sure where the 22 years comes from. But I'm not saying you're wrong.
Well, I've just finished hosing down the house, but the stink is going to last for ages...
Thanks bash... even zsh, with its penchant for scarfing down brite shiny objects, didn't think that environmental code-smuggling feature was a good idea.
Any CGI (including any process it spawns, and any process *that* spawns etc), that *might* spawn a shell, for *any* reason (like globbing filenames, or piping to a mailer), will be more than happy to fall for this.
It clearly is broken.
And if you find yourself wondering for more than 15 minutes about what bash substitution will do to the variable-holding text that you have just written and are passing to another command or even an eval ... you know there is a nagging problem of reliability and trust that you will never be able to shake.
>>"My impression is that it inherits the bad qualities of standard *NIX shells and adds a bunch of its own"
Your impression is very wrong. It's fundamentally different to Bash. They're very dissimilar. For example, Powershell is entirely object orientated. This isn't really the place for it, however. I'll link back to an old discussion on it if you're interested. Link. It started off just asking what was the best Powershell terminal but then some people turned up and started ranting about how inferior Powershell was to Bash and it became a very informative discussion (albeit some people got pretty upset). If the above is your genuine impression - that Powershell is Bash with worse bits grafted on, then seriously - read the above and see what you think.
You cannot graft anything to bash without ending up with an eldritch horror that will haunt your nights. The man page insinuates as much.
But ksh and csh are not the way to go.
Just take a proper script language with minimal syntax, preferably functional (hint lots of parentheses hint), that has some syntactically nice ways to start processes and network/control them like a good process juggler, with workflow features and ETL gimmicks directly included.
As for Powershell ... yeah, I have the book by Manning, but, ... I still have to make time for it.
ksh and posix sh are very good, very lightweight and, so far, very solid. Most people miss the more interesting but still straightforward features such as arrays. Combined with awk(1) or, better, nawk(1) the world is your oyster. Of course Python or similar is better for longer term, more maintainable and larger tasks, short of a compiled language. In fact, a good Python implementation can be faster and better (I believe Sun had problems for a long time because Python was faster and more secure than Java during the early iterations).
A big advantage of Bourne/posix/korn shells is that even the oldest scripts should work and all "system" shell scripts should restrict themselves to Bourne (I remember a curious login bug for a user caused by a Sun system utility written in csh - convert to sh, bug gone).
I'm sure powershell is wonderful. I even got a version that works on OS X. But really, even on Windows there are good versions of python, perl and ksh available. Why choose a scripting language with negligible portability, or where the chance of it being installed on non-Windows (or even most Windows) systems is slight? The only good reason is that, like visual basic I suppose, good interaction with other Windows programmes is built in. That, of course, is a very good reason for system administrators.
"... Scan your network for things like Telnet, FTP, and old versions of Apache ..." and old versions of anything else - FTFY.
I'm not aware of many web servers that run BASH any more. Also BASH normally sits behind other stuff like sshd so IS protected by authentication.
Yawn - can't be arsed to get excited by this: IT IS NOT A HEARTBLEED SCALE SNAG. It's just a bug. Schools and Unis will probably want to patch this quickly though - for obvious reasons 8)
Cheers
Jon
This is a quite serious problem I'm afraid. ..... John Sanders
Quite a game-changer would be another way to sum up the exploit and vulnerability vector, John. And something quite serious for the NSA's newly created Chief Risk Officer, Ms Anne Neuberger, to fluff and not ignore and realise is an opportunity to change a series of catastrophic intelligence disasters into something else quite different and increasingly more successful and engaging.
It is in more worlds and spheres of collateral influence than just IT and Media that Competent Cyber Warriors Reign Immaculately and Rule Imperiously. The secret though is to realise that in there/out there one be not alone, and there can be many who are considerably better skilled in the Right Dodgy Royal and Ancient Future Builder Arts. Such a wisdom keeps one sufficiently alert and far enough ahead of the games being played to be almost thought of as leading, and that may be thought of and treated by some into the Madness and Mayhem of FUD and Continuity of the Status Quo, as a Live Existentialist Threat, and that is surely a Monumental Mistake that Intelligence Services and Servers make in Orders in order to comprehensibly fail spectacularly.
If you're running Ubuntu etc then the bash problem is already fixed.
The biggest problem with heartbleed is that there are a lot of embedded infrastructure devices (routers etc) that were vulnerable to heartbleed, but are very difficult - if not practically impossible - to fix.
Now in theory the same could apply here: the routers etc could, in theory, be running bash. However, almost none of those embedded systems use bash for scripting. They almost all use busybox etc. That means they do not have this problem.
bash is on servers, and some desktops. Those should all be on good patch streams, either due to being auto-updated or through BOFHery.
So all this "Bigger than Heartbleed" crap is just a media beat-up.
(I initially suggested "bashdoor" as a code-name for this vulnerability)
Debian and Ubuntu *are* affected (though to a lesser extent than distros that have not switched away from bash for their /bin/sh yet), even though their /bin/sh is no longer bash *by default* - it can be changed (and often is) with a "dpkg-reconfigure dash". Also bash is the default shell for new non-system accounts, and that has an impact on the openssh vector (where the login shell of the target user is launched to interpret the remote command).
Not to mention the fact that Debian or Ubuntu ship with a number of bash (not sh) scripts which may end up being invoked in a context where an attacker has injected environment variables with content starting with "() {".
So Debian and its derivatives *do need updating asap* as well.
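A quick way to check those vectors for yourself on a Debian-family box (standard paths assumed):

```shell
# What is /bin/sh really? (dash by default on Debian/Ubuntu)
readlink -f /bin/sh
# What login shell does your account get? (the sshd vector)
getent passwd "$(id -un)" | cut -d: -f7
# Which bash build is installed?
bash --version | head -n1
```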
That vulnerability has been in bash since version 1.13 so over 22 years.
I'll also take the opportunity to thank Florian Weimer of the Debian and RedHat security teams (not bash's author as some have said) for the very professional handling of that security vulnerability disclosure.
All my servers
And the servers in the company I work for that are internet facing. (Tomorrow I will patch the 100+ internal ones)
This is not good, but at least it is a painless straight forward patch, even considering that I had to resort to a manual compile for a couple of servers.
Interesting links:
https://security-tracker.debian.org/tracker/CVE-2014-6271
https://access.redhat.com/solutions/1207723
https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/
https://marc.info/?l=oss-security&m=141157106132018&w=2
bash is large, and some would say bloated.
It has its use as an interactive shell, but it shouldn't be used as a programming shell - especially for potentially vulnerable network-facing tasks unless there is a VERY GOOD REASON to do so.
Even worse, are those Linux morons that #! scripts to /bin/bash even when /bin/sh will do.....
The same sort of insular people who will happily type a long bunch of privileged commands prefixing every line with 'sudo'.
The same idiots whose solution to a bugged API implementation is not to fix their implementation, but instead write a new 'standard' from scratch.
The sort of people who used to moan about Microsoft software being unportable, chanting 'software should be open' when they really meant 'software should run on Linux'.
The same people causing the same unnecessary portability problems to the community that they used to criticise MS for.
If someone produced scripted code for me that was dependent on bash (or zsh/tcsh/mksh/ etc.) for no good reason, I'd seriously question their ability.
This isn't a rant against Linux itself, or indeed a lot of the users, but if you are the sort of Linux user mentioned above, feel free to downvote!
even when /bin/sh will do.....
When exactly will /bin/sh do, and why would it have helped in any real-world situation (leaving aside 20/20 hindsight)?
If someone produced scripted code for me that was dependent on bash (or zsh/tcsh/mksh/ etc.) for no good reason, I'd seriously question their ability.
The only thing in question is whether you are the pointy-haired boss of Cave Jclson, the RPG programmer moaning about the kids and their modern structured programming.
/bin/sh, as long as it is not linked to bash or zsh or ... and really is Bourne/posix shell, is the only shell script to use for system scripts, e.g. stopping/starting programmes, user utilities provided by the system (as opposed to written by the user).
This is such ancient advice and knowledge that I am surprised someone has the temerity to question it. Few things are more frustrating than getting some apparently simple script and finding that the writer knew only Linux and bash and used the non-portable aspects of both for a script that he claimed to be portable.
Oh, and for those who think Linux is UNIX: many true UNIX systems still have not got bash installed and it rarely comes as a UNIX default installation. Now I know, from these discussions, that bash's support is a one-man show and that the Linux world's idea of review is for someone, who knows who, to skim through the source code by eye, probably with no test suite and no Lint or GCC equivalent switch, I am even more appalled at the apparent equating of "shell" with "bash".
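A trivial illustration, for anyone wondering what a "bashism" looks like - the first (commented) form dies on a real Bourne/POSIX sh, the second runs anywhere:

```shell
#!/bin/sh
answer="Yes please"
# bash-only (fails under dash, posh and classic Bourne sh):
#   if [[ $answer == [Yy]* ]]; then echo affirmative; fi
# Portable POSIX equivalent:
case "$answer" in
  [Yy]*) echo "affirmative" ;;
  *)     echo "negative" ;;
esac
```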
Mageia is consistently rated as one of the top 5 most used distros of Linux.
I've never heard of it, and I've been running Linux on work machines for 18 years. More importantly, it's not so much the servers that are the problem, since they can be readily patched as long as they're running maintained versions of Linux (and they really should be). The problem is all those devices that might have a vendor tweaked version of Linux installed, edge devices such as routers for example. These often have appallingly coded control panels that rely on CGI scripts and services open that they shouldn't. Hopefully they'll be running an alternative shell like ash or dash, but they may not be, and the vendors are usually piss poor at releasing updates to the firmware.
The problem is all those devices that might have a vendor tweaked version of Linux installed, edge devices such as routers for example.
Most of those are likely to be running Busybox rather than Bash. It also depends on whether they invoke shell scripts with user-defined variables, which may limit the attack surface.
And it's still not fixed anywhere, because the "cobble together a quick fix for the specific exploit" approach so common in the FOSS world has yet again failed to patch the actual underlying flaw properly. Which given it stems from a fundamentally poor design in bash is hardly surprising. I suspect an actual fix will be notable for actually breaking genuine applications because it'll likely need to change the way bash works.
The end result of all this is an even worse security nightmare, because now the bug not only exists but every black hat knows about it too. So the attempt at "patch then announce" has utterly failed. This is why vendors like Microsoft take their time to patch issues, rather than the freetard perception that they just don't care.
I've tweaked the code to test /bin/sh and Bash explicitly. Bear in mind, on OS X, /bin/sh is Bash and not a symlink and is thus vulnerable.
Might be worth changing "which" to "type -fp". The former is a C shell thing that's not guaranteed to work on all systems as it relies on a shell function being defined or something being installed in the path that emulates the C shell "which" builtin. A "type -fp" should get you an installed bash shell that's on the path.
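So the article's test, using `type -fp` instead of `which`, might look something like this (it has to be run from bash itself, since `type` is a builtin):

```shell
# Resolve bash from PATH via the builtin rather than an external
# "which", then run the article's payload against that binary.
env X='() { :;} ; echo busted' "$(type -fp bash)" -c "echo completed"
# A fixed bash prints only "completed"; a vulnerable one prints
# "busted" first.
```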
My Ubuntu system uses bash version 4.3.11(1)-release (says "bash --version"). My executable dates from April 23 (says "ls -l `which bash`").
Yet the test in the article shows my bash (from April) isn't vulnerable to Shell Shock.
The advisory says bash through 4.3 is vulnerable. I'm not entirely clear what "through" means, but evidently some time after 4.3.0, there was a fix released such that 4.3.11 is not vulnerable.
The advisory makes it clear that the recent bug discovery was really made only recently, so I'm very puzzled as to why 4.3.11(1)-release isn't vulnerable.
Was the Shell Shock bug fixed accidentally, somehow, before April 23? Or did someone spot the exposure and quietly patch it over? Who made the fix? Someone at Bash Central, or Debian, or Canonical? Which versions, exactly, after 4.3.0 are not vulnerable?
see my point above, the test code was wrong, as it assumed /bin/sh and bash are the same thing. Try the updated test in the article:
env X="() { :;} ; echo busted" `which bash` -c "echo completed"
and you may be in for a bit of a bash shock (sorry, bad pun :-)
I think "through", in this context, is American for "to" or "up to". No idea how Americans say "through" when they really mean it as on the way to something else, nor if it means to the end of or the start of .... Remember, American is English spoken by foreigners. So do not be too hard on them.
The real problem: this is not an OS bug. This is a shell (bash) implementation bug and would appear on any platform that can run it.
As for Linux being modern: Win8 is modern. Linux is now getting, in computing terms, elderly: heavily patched and extended elderly. As a UNIX bigot, I prefer a real UNIX, preferably a BSD variant. Of course, UNIX is significantly older than Linux. But UNIX is immortal.
However Linux is all right provided users and developers understand that Linux-compatible does not mean portable.
There isn't a completely solidly centralized process of testing with Linux but testing of it is done. You have things like the Linux Test Project and there's Autotest. There are a variety of tools for testing.
The problem with something like this is that it's a design error. The reason you can pass function definitions into environment variables is so that when a Bash process creates a child shell, it can inherit the parent's defined functions. So a child shell is created, environment variables are inherited and when that happens the child shell notices something has a () in it and executes it thinking it's a function definition.
It's almost stereotypically old school UNIX. Someone needed something to happen, saw a quick and simple way to achieve it, implemented it. It's what I call the 'Stallman Approach'. Need something - build something.
With a more modern design, more OO-based, this probably would never have happened. But UNIX/Linux is very big on passing around text (the basis of all its pipelining and things such as this). We didn't have all this new-fangled OO stuff back then. Seriously - I remember when OO was the new thing. We had Bash and Vi and we MADE THEM WORK!
Anyway, I've gone on a slight side-track. The point is, that yes there are automated test tools and some automated testing is done (though lots of room for improvement tbh). But something like this is really hard to pick up with automated testing. There's NOTHING wrong with the implementation of the code. No off-by-one errors, no buffer overflows, it doesn't access memory it's not supposed to. It's just excellently written to do something it shouldn't.
TL;DR: Design problems are tricky.
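For what it's worth, the legitimate feature being abused here is easy to demonstrate. Patched bashes changed how exported functions are encoded in the environment, but the round trip itself still works:

```shell
# Run this from bash; "export -f" is a bashism.
# Define a function in the parent shell and export it to children.
greet() { echo "hello from an exported function"; }
export -f greet
# The child bash rebuilds the function from its environment and runs it.
bash -c greet
```

The bug was that, on import, a vulnerable bash didn't stop parsing at the end of the function body - trailing commands in the same variable got executed too.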
"The problem with something like this is that it's a design error. The reason you can pass function definitions into environment variables is so that when a Bash process creates a child shell, it can inherit the parent's defined functions. So a child shell is created, environment variables are inherited and when that happens the child shell notices something has a () in it and executes it thinking it's a function definition."
Interesting you should say that. This suggests you are looking at design patterns rather than coding errors.
Searching for such patterns was a key part of why the Shuttle software design programme had such a low error rate.
In the 30+ years since that programme started developing software techniques to find such patterns have improved quite a bit.
Now, what happens after each case of that pattern can be a difficult decision, but it seems hard to believe this can't be done on a large enough scale, or fast enough, to bring about a substantial improvement in delivered software.
>>"Interesting you should say that. This suggests you are looking at design patterns rather than coding errors."
Actually, though I've never put it in those terms, the older I get the more that is my first approach to reviewing code. Reviewing other people's work is one of the things I do professionally these days (I consider myself very lucky, btw) and I do start with this if I'm beginning at a high enough level and it's not just "is this okay to push live?"
I would be very, very interested in any automated tools or approach to testing design patterns. I suspect, like many problems, that once we can formally define it, automating it will be straightforward. And that would be a very big deal. (I got modded down furiously the other day, btw, for suggesting that within my lifetime computers would one day be better programmers than humans.)
>>We had Bash and Vi and we MADE THEM WORK!
Well, perhaps not so far "back then". It was sh or csh or, if lucky, tcsh, and vi (I mean vi, not vim). But we could use make, awk etc. fluently too. awk is still a wonderful tool for everyday, off-the-cuff use. What's more, we did not have to "make them work". They worked, and still do. Our efforts went into the job in hand, not into making the tools work.
re the test: do not use env(1). Run it without env to see what your default shell does or run it with an explicit path or change shells.
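Following that advice, a sketch of testing each installed shell explicitly rather than letting env(1) and /bin/sh decide which interpreter actually runs:

```shell
# Invoke each shell by name; a vulnerable one prints "busted" before
# "completed", a safe one prints only "completed".
for shell in bash dash ksh zsh; do
  command -v "$shell" >/dev/null 2>&1 || continue
  printf '%s: ' "$shell"
  X='() { :;}; echo busted' "$shell" -c 'echo completed'
done
```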
We can turn this on its head and ask "what source code scanning tools exist for the code produced by Microsoft?" and then add... "Why don't they use them?".
The simple answer is that complex programs can contain many "bugs", and there may be many pathways through a program, together with interactions with other programs. At least with open source the problem, once identified, is fixed very rapidly, whereas with closed source it may be many months before a "fix" is released.
From what I've heard, the "bug" in bash has been around for 20 years or more and has only just been found.
On a separate note... if Unix and derivatives are as poor as suggested elsewhere in this thread why do the majority of the servers that form the internet use it as the operating system of choice?
Reading some worrying stuff in the comments here.
1. "Not a big deal for embedded stuff as that should have ACLs or be on separate networks." - Yes it's good practice to separate user networks from back-end but that doesn't make this problem go away.
2. "It's already fixed/patched." - Oh lordy. Just because some distros have patched doesn't mean all your vulnerable hosts are insta-healed.
Yeah, I think you can spot the difference here between the professionals and those with a football team mentality. Someone further up was dismissive of the problem because it "only let you execute with the privileges of the web server". That's actually only talking about the HTTP vector, but anyway, who thinks that's not a security disaster?
But six people have modded that post up so far, presumably because it sounds wise or supports their belief in an OS's invulnerability. It's a quite alarming degree of smugness.
I ran yum upgrade bash on my internet-facing servers and the update's already there, fixing the issue. So it seems it'll be a quick fix for CentOS and RHEL boxes.
Fedora 17, however, doesn't have a fix. Looks like that's one case where you'll have to svn checkout & compile by yourself...
Well, it's not exactly an involved fix. Someone has basically just added a patch which scans environment variables for the beginning and end of a function definition, and then spits out the text "error importing function definition for `Foo'" if there's anything still trailing after the definition. The patch is probably about six lines long. ;)
It's not even what you could call a good long-term solution (though it will probably end up the long term solution due to lack of other easy options), it's just an "OMG!PATCHTHISNOW" bit of coding.
The real joy is tracking down and fixing all the vulnerable systems and worrying about whether you've been compromised by this exploit, not that it's been patched today.
Red Hat has become aware that the patch for CVE-2014-6271 is incomplete.
"Update: 2014-09-25 03:10 UTC
Red Hat has become aware that the patch for CVE-2014-6271 is incomplete. An attacker can provide specially-crafted environment variables containing arbitrary commands that will be executed on vulnerable systems under certain conditions. The new issue has been assigned CVE-2014-7169. Red Hat is working on patches in conjunction with the upstream developers as a critical priority. For details on a workaround, please see the FAQ below."
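The test string that did the rounds for the follow-up flaw looked like this (run it from a scratch directory; on a bash carrying only the first patch, the mangled parser state turns `echo date` into a redirection, leaving a file called `echo` behind):

```shell
# CVE-2014-7169 check, run from a throwaway directory.
cd "$(mktemp -d)"
env X='() { (a)=>\' bash -c 'echo date'
# First-patch-only bash: creates a file "echo" containing the date.
# Fully patched bash: simply prints the word "date" and creates nothing.
cat echo 2>/dev/null || echo 'no stray file - looks patched'
```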
And the patches are out:
25 Sep 2014; Lars Wendler <polynomial-c@gentoo.org> +bash-3.1_p18-r1.ebuild,
+bash-3.2_p52-r1.ebuild, +bash-4.0_p39-r1.ebuild, +bash-4.1_p12-r1.ebuild,
+bash-4.2_p48-r1.ebuild, +bash-4.3_p25-r1.ebuild,
+files/bash-eol-pushback.patch:
Another security bump for CVE-2014-7169 (bug #523592).
At least in Gentoo, and yes, I've just re-patched, again. Still, amusing to see these "exploits" showing up in web server logs and having no effect.
Yes, they often do. Once again security researchers shout very loudly "biggest hole since whenever"... while reality is a bit more nuanced.
Seems to me that crying wolf all the time is hardly a worthwhile strategy to pursue (though of course it is commercially almost imperative, given the competition between the various security outfits).
Given all that, there are probably still routers etc. that do run bash... but definitely not all of them.
"So is this a problem just for CGI?"
No. Some DHCP clients, OpenSSH, etc.: anything that passes user input into an environment variable for Bash to read (and accidentally execute). And even if your program is clean, if it spawns a child that launches another script that invokes Bash, you're still at risk, because the env var is still there lingering in the background.
It won't affect everything, but just enough to be a PITA.
C.
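To make the HTTP vector from the post above concrete: a CGI-speaking web server copies request headers into `HTTP_*` environment variables before running the handler, so a hostile User-Agent header lands in bash's environment verbatim. That can be simulated locally, no web server needed:

```shell
# Simulates what a CGI web server does with the User-Agent request header
# before invoking a bash-based handler. A vulnerable bash prints "owned"
# before "serving page"; a patched one prints only the latter.
env 'HTTP_USER_AGENT=() { :;}; echo owned' bash -c 'echo serving page'
```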
Well, unless your Windows computer connects to a webserver running GNU/Linux which has been compromised using this exploit and serves you malware / steals your credit card info / exposes your real identity / etc. A security problem like this is a problem for everyone regardless of favoured OS. One reason I hate all the football team mentality - it's such an attitude of "I'm alright so that's all that matters". No islands on the Internet.
Unless your router is a full-fledged Linux box with GNU Bash installed (unlikely), you should be safe. Most don't have the storage for a full-blown Linux distribution, and thus rely on the more compact BusyBox shell:
RC=0 stuartl@rikishi ~ $ env X="() { :;} ; echo busted" busybox sh -c "echo completed"
completed
Not vulnerable, at least my version isn't.
As the formerly proud owner of a Macbook Air,
I just downloaded the bash source, downloaded the patch,
applied the patch, compiled, checked the resulting binary
and replaced /bin/bash and /bin/sh.
It Just Works. Kinda.
In the mean time, my Ubuntu machine got
the fixed version through the automatic update.
Did a 'yum update bash' on CentOS 6.2 and was surprised that the version it installed was less than 4.3 (Red Hat backports security fixes without bumping the upstream version number). However, after the update the exploit no longer worked.
Is there a test for routers? I don't have a internet facing remote management port, but is there some other way to exploit this, if indeed it is a problem on my router?
Bash has always worked this way; people calling it a 20-year bug are incorrect. The problem is with shell accounts that are given access to run unmonitored scripts just because they are background daemons. I'll be honest: on privileged accounts, executing any code that you would otherwise need password authentication for will execute without any password needed. Try it in Java: use the Runtime class and run a `sudo echo "hello world"`. There is no security that stops the execution of this code; it runs and there is nothing you can do about it, except update your sudoers to exclude that user. Wow, wait: updating the sudo configuration and not redeveloping Java... that can't be right. This stuff is ridiculous. It's not bash, it's the freaking super accounts that are set up to run the daemons.
try running this command from you shell account:
env X="() { :;} ; sudo echo busted" `which bash` -c "echo completed"
or this one
env X="() { :;} ; apachectl restart" `which bash` -c "echo completed"
Yes, that's correct: you get prompted for a password, and if you don't enter the correct password the script fails with a segmentation fault.
or
Yes, you get an error message saying you need root access to run the command.
With that said, then, let's see where the problem is. Oh, that's right: it's the user configurations that run the daemons that would execute the code via injection.
So then how do we fix that? Well, simple: take the root access away from the daemon... but won't that cause issues? It doesn't on my machine, but of course I actually write good configuration files and hide powerful commands. Yes, that's right: hide commands by changing the PATH variable for that user and denying that user read and write access to my /usr/bin, /usr/sbin, /usr/local/bin and so forth; then I create symbolic links from those directories to a "sandbox".
And what's my result? You guessed it: protection from something like this. You've got to stop blaming the code and instead start to understand it... Remember, the daemon accounts that run the services don't need access to 7 different C compilers, and httpd doesn't need access to chown or chmod or make or exec... httpd just needs to know where its config files are and the location of its web content.
It cracks me up; this is the reason why people started to make sandboxes and so forth: so we could have a safe environment to execute new or vulnerable code. Funny thing is that these sandboxes can actually run whatever you want them to, just as long as you configure them properly.
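For what it's worth, the PATH-clamping idea described above can be sketched in a few lines (the directory names here are made up for illustration):

```shell
# Build a minimal "sandbox" PATH containing only whitelisted tools,
# then run the daemon's commands with PATH pointing at it alone.
sandbox=$(mktemp -d)               # stand-in for e.g. /opt/daemon-sandbox
mkdir -p "$sandbox/bin"
ln -s /bin/cat "$sandbox/bin/cat"  # expose cat; compilers, chmod etc. stay hidden

# With PATH clamped, anything outside the whitelist is simply not found:
PATH="$sandbox/bin" /bin/sh -c 'command -v cc || echo "cc: not available"'
```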
Anyway, I don't see a bug; I just see poor configuration and poor user account restrictions, because if you use Google and search for bash commands that execute multiple commands on one line, you will see a lot of results showing you exactly how to write them, and it won't be something that was only published yesterday.
So in conclusion I'll leave you with another "bug".
If you string together multiple env vars that run commands and/or programs with a ";", you will get the same freaking result. ";" does not mean end of execution; it just means end of the current command. Have you ever written a shell script with more than one ";", and have you ever thought that shell scripts ended with a ";"? Well, if you have, you have never actually written a shell script. ";" just denotes the end of the current command, so you can do exactly what this "bug" does and run another command.
executing
echo "hello" echo "hello"
prints out
"hello echo hello"
executing
echo "hello" ; echo "hello"
prints out
hello
hello
Most of the internet has been running on Linux for decades, and oddly it has stayed up pretty well. This, like Heartbleed, is full of "if this" and "if that", of what could happen; but yet again, the internet is still working, so we can assume not much harm has been done. Heartbleed should have leaked thousands of passwords and thousands of websites should have been taken over, but none of that happened; there was a ton of hype about what could have happened. Remember, NIST backdoors software; trusting them is laughable. We have Linux because Windows got owned regularly; that's why Linux is running most of the net. On page 3 of the comments here someone explains how setting things up properly makes this thing a joke.
The media does seem to have amplified the issue somewhat, but then again, that's what they're there for, to amplify the news that others raise. And bad news sells! This centuries-old fact is not news.
What I observe with HeartBleed and ShellShock was the idea of "branding" a bug, which seems to have resonated with the media outlets.
Even more so than the bug where Debian's patching caused OpenSSL to generate weak keys. That bug was particularly nasty, but generated a lot less press than these two have.
With this one, I've noted a lot of misinformation out there: claims of all kinds of embedded devices/Android being vulnerable (see my tests with BusyBox above), and claims that it's a Linux- or Unix-only problem (Windows can run bash, e.g. using Cygwin or Interix).
So in the open-source world we've now had a few high-profile security holes pop up. As you point out, some of them have been around a long time. HeartBleed was nasty as it revealed bits of RAM accessible to the web server which amongst other things would include the SSL private key.
ShellShock doesn't give you that (as the private key should be owned by root and unreadable by anyone else) but it does allow you to execute arbitrary commands, which is nasty in its own right, as it only takes a privilege escalation bug to gain access to such information.
The good news with ShellShock is that it's only a limited set of environment variables that get passed to CGI scripts, and so it's not that difficult to mitigate against if you have a CGI script that executes some command line application (e.g. gitweb executing the "git" command). Not difficult to do a few checks of %ENV, pluck out the bits you want then set the offensive ones to `undef` before shelling out.
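The post describes doing this in the CGI script's own language via Perl's `%ENV`; the equivalent scrubbing expressed in plain shell terms (the variable list here is illustrative, not exhaustive) looks like:

```shell
# Drop the header-derived variables an attacker controls before the
# wrapper shells out to anything; the child process never sees them.
for v in HTTP_USER_AGENT HTTP_COOKIE HTTP_REFERER HTTP_ACCEPT QUERY_STRING; do
  unset "$v"
done
/bin/sh -c 'echo "HTTP_USER_AGENT is ${HTTP_USER_AGENT-unset}"'
```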
The other factor is that bash is never linked to applications, it is a stand-alone binary executable, replacing it will not cause ABI breakage like replacing OpenSSL can, and it typically does not come bundled with applications either as a dynamic library or statically linked. That makes containment and clean-up a lot easier.
Sorry, I said earlier that the bug was introduced in 1.13 and that's how 1.13 ended up in the article (I can't remember what the article had before that edit). That was wrong. It was indeed introduced over 25 years ago in bash 1.03. There is irrefutable evidence of that.
See http://www.dwheeler.com/essays/shellshock.html#timeline and http://thread.gmane.org/gmane.comp.shells.bash.bugs/22418 for details.