
I'll bet...
The Wintards will be all over this.
Oh, wait, all they know is point and click. Never mind...
A huge amount of Linux software can be hijacked by hackers from the other side of the internet, thanks to a serious vulnerability in the GNU C Library (glibc). Simply clicking on a link or connecting to a server can lead to remote code execution, allowing scumbags to steal passwords, spy on users, attempt to seize control of …
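For anyone who wants the gist without wading through the advisory: this is the classic buffer-bookkeeping class of bug. Below is a minimal sketch of the general pattern reportedly involved (a fixed stack buffer outgrown by an attacker-controlled DNS reply, with the pointer/size accounting going wrong on one path). The names are made up; this is not glibc's actual code.

    #include <stdlib.h>
    #include <string.h>

    #define STACK_BUF_SIZE 2048

    /* Hypothetical resolver reply handler illustrating the bug class:
     * the code switches to a bigger heap buffer when the DNS reply
     * outgrows the stack buffer, but one path keeps the OLD pointer
     * while adopting the NEW (larger) size. */
    void handle_reply(const unsigned char *reply, size_t reply_len)
    {
        unsigned char stack_buf[STACK_BUF_SIZE];
        unsigned char *buf = stack_buf;
        size_t buf_size = sizeof(stack_buf);

        if (reply_len > buf_size) {
            unsigned char *heap_buf = malloc(reply_len);
            if (heap_buf == NULL)
                return;
            buf_size = reply_len;  /* size updated...            */
            (void)heap_buf;        /* BUG: forgot buf = heap_buf */
        }

        /* reply_len is attacker-controlled, so when the bookkeeping
         * above goes wrong this smashes the stack. */
        memcpy(buf, reply, buf_size);
    }

The fix for that shape of bug is one line; the hard part is spotting it among the retry paths for the parallel A/AAAA queries that getaddrinfo performs.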
You bet, because the arrogance and ignorance shown toward professional companies has been rather relentless. While MSFT and, to a lesser extent, Apple have woken up over the years and actually done something about their code etc., Lintards hid behind 'we are better than anyone'. Well, guess what: when you have amateurs hacking at code, this is what you get. Reality finally strikes for the Linux arrogance that it is free of the risks faced by more popular sw. The BS wagon must finally have overflowed on the 'FOSS lets us inspect the code, therefore it is better' nonsense.
The good news that might be jumped on by the Linux apologists is that the install base is finally large enough to be worth attacking. Since the mental shields are down, you all make a great target :-)
Yes, but I would say there's still a fair bit of crow for the free *nix crowd to eat on this one.
- The bug has been out in the open for more than a year.
- It seems they DID opt for obscurity while fixing it because it was too sensitive to do in public.
While I regard the second item as prudent, it's pretty much been an article of faith for the Penguinistas that work needs to be done publicly and ALL vulnerabilities disclosed publicly as soon as known. Hell, they've even criticized Google for giving a 90 day grace period on vulnerabilities.
All in all I still think the *nixes are more secure than the commercial offerings. But the Wintards aren't the only fanatics in the flame wars.
Even though it is obvious you are a troll and cannot read, I will bite:
"While MSFT and to a lesser extent Apple have woken up over the years and actually done something about their code " - they are still shit what are you talking about?
"you have amateurs hacking at code" - Google and Redhat are amateurs and BTW Microsoft have their own Linux distro too.
"Reality finally strikes for Linux arrogance that it is free of the risks faced by more popular sw" - good lol - Linux is on more computers in the world than Windows or MAC, it just is not on as many desktop PC's.
Microsoft have lots of bugs which they admit they will never fix and this issue is easily mitigated by a Linux administrator. Further, glibc will be patched to fix this, so yes FOSS wins again.
Even though it is obvious you are a troll and cannot read, I will bite:
""While MSFT and to a lesser extent Apple have woken up over the years and actually done something about their code " - they are still shit what are you talking about?"
Which will be why a lot of hackers have turned their attention to apps, applications and plug-ins such as Adobe Reader, Flash Player and Oracle's Java.
""Reality finally strikes for Linux arrogance that it is free of the risks faced by more popular sw" - good lol - Linux is on more computers in the world than Windows or MAC, it just is not on as many desktop PC's.
Microsoft have lots of bugs which they admit they will never fix and this issue is easily mitigated by a Linux administrator. Further, glibc will be patched to fix this, so yes FOSS wins again."
If Linux is on more computers than Windows or OSX, that makes any Linux security hole potentially a FAR more serious concern than any Windows or OSX flaw. It's also likely that a lot of those computers are in places where a Linux administrator is not available, thus there will be no one on hand to mitigate the flaw or install patches.
The fact is that the Linux advocates missed a security flaw for the best part of a decade while sitting there and criticising Microsoft, Apple and other big businesses for their security.
Don't get me wrong. I am not a fan of a particular OS and will happily use whatever I need to do something (be it Linux, Windows, Unix or OSX), and I don't believe any OS is 100% secure.
"The fact is that the Linux advocates missed a security flaw for the best part of a decade while sitting there and criticising (sic) Microsoft, Apple and other big businesses for their security."
Not to mention that most administrators have missed the security flaw that Windows is for the last 2+ decades...
I'll bite... I know I'm stupid for doing so, but I'll bite.
Look at the description of the bug. This is something which should never be able to happen in a proper code review environment. So far as I know, there's no company or operating system which has a large number of highly skilled developers actively watching their repositories for this kind of stupid.
Linux, FreeBSD, Windows, Mac OS X all suffer differing levels of stupid. This particular flavor of stupid is actually, as the troll suggests, a special kind of Linux stupid. Let me explain.
While the Linux kernel developers, and to some extent the glibc developers, have embraced (within some constraints) the use of shared data structures, their means of embracing them has always been weird and highly inconvenient.
See, where object-oriented languages make implementing data structures a breeze and can therefore centralize major fixes of code to where the failure exists, structured languages like C tend to make use of some interesting creative tricks to accomplish the same. The GNOME community, for example, implemented GObject, which is the most obscenely inconvenient mechanism ever devised to reproduce the entire C++ language in C... well, next to Microsoft's COM. They go so far as to manually implement vtables, which in a single-inheritance environment doesn't cause much harm, but in multiple inheritance can be a disaster. On top of that, they implemented some of the weirdest RTTI methods I've ever seen.
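For anyone who hasn't had the pleasure, "manually implement vtables" means roughly the following. This is a toy sketch of the idiom, nowhere near as noisy as the real GObject macros:

    #include <stdio.h>

    /* Hand-rolled single-inheritance "class" in C, GObject/COM style. */
    struct animal_vtable {
        void (*speak)(void *self);
    };

    struct animal {
        const struct animal_vtable *vt;  /* every object carries its vtable */
    };

    struct dog {
        struct animal base;              /* "inherits" by embedding */
        int volume;
    };

    static void dog_speak(void *self)
    {
        struct dog *d = self;
        printf("Woof x%d\n", d->volume);
    }

    static const struct animal_vtable dog_vt = { dog_speak };

    int main(void)
    {
        struct dog d = { { &dog_vt }, 3 };
        struct animal *a = (struct animal *)&d;  /* upcast is just a cast */
        a->vt->speak(a);                         /* manual virtual dispatch */
        return 0;
    }

Harmless enough with single inheritance, as above; start faking multiple inheritance with embedded bases at non-zero offsets and the casts stop being free.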
glibc doesn't use GObject. Instead it tends to borrow from the Linux kernel kind of stupid, which makes weird use of over-inflated monster structures which are REALLY REALLY REALLY efficient, but whose complexity is bonkers kind of stupid. I've seen so many poor uses of rbtree.h and rbtree.c that I shake in my boots whenever a header file includes rbtree.h. I also know that all it would take is one bad line of code in rbtree.c to completely destroy the entire Linux kernel for security... and it has barely any unit tests at all.
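For context, the kernel style being described is the intrusive data structure: you embed the node inside your own struct and recover the container with pointer arithmetic. A simplified sketch of the idiom (not the kernel's actual rbtree.h, which is considerably hairier):

    #include <stddef.h>

    /* Intrusive node: your struct embeds the node, not the other way
     * round, so the tree needs no allocations of its own. */
    struct rb_node {
        struct rb_node *left, *right, *parent;
    };

    struct session {
        int id;
        struct rb_node node;  /* embedded tree linkage */
    };

    /* Recover the containing struct from the embedded node. Get the
     * type or member name wrong and the compiler rarely saves you. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    static struct session *session_from_node(struct rb_node *n)
    {
        return container_of(n, struct session, node);
    }

Every lookup and rebalance trusts that macro invocation, which is exactly why one bad line in rbtree.c has the blast radius described above.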
Well... at least if the glibc guys had used a Linux-style data structure, this wouldn't be a problem... but they didn't... instead they decided it was too much work to use one of the simulated classes. Instead, they reinvented the wheel... with 4 sides on it: they made an array and chose to manage it themselves. This means any security holes or bugs found in that code stay localized there instead of being fixed centrally. So, while this class of bug has been fixed in 5634543 different places in the kernel and glibc already, here it was probably too much work to fix, so they just left it. Funny thing is, I probably saw it a long time ago (1999) when I was writing a DNS resolver and peeking at glibc to see how it's done.
Let's be honest though... all operating systems have these problems. Only Lintards and Wintards and so forth are stupid enough to think it's unique to the other guy. If you were actually smarter than an amoeba, you'd realize that all code is insecure, and that Windows and Linux are both pretty decent for what they do but should never be trusted for security. That said, neither should any other code.
I regularly teach how to hack through Check Point, Cisco, Palo Alto, etc. firewalls. I show that finding a nifty problem in a kernel driver, or better yet in the syscall interface of the kernel, can give you a golden ticket without the firewall software ever seeing the malicious code. I've got a few in my toolbox at the moment for Linux if I need them. Darwin is a goldmine of them. Windows is a little trickier, since you have to actually dig a bit because it's closed. But pretty much all operating systems are written like shit.
If you want a personal opinion on which I think is cleanest at the moment, I actually have to give Microsoft the crown. Ever since the introduction of the Windows 8 kernel, it's been such a massive improvement that I like them best. They have some of the best coding practices at the moment and they seem to be taking process really seriously. There were a few shortcomings in retaining legacy driver support in Windows 10 which bit them, but at this time, they're quite good. Mac is pretty close to the bottom. Apple releases more half-finished code than even GNU does these days. Their unit testing is pathetic and I expect there are massive amounts of "fixed it... broke it again... fixed it... broke it again" in the Darwin kernel.
LLVM is maybe the most important project in open source, but its quality has been decreasing far too rapidly. The errors and warnings generated by the compiler are generally terrible for identifying root cause or even general error location. As such, the quality of the Mac kernel is only as high as it is because of duct tape and crazy glue... possibly some bubble gum as well.
Oh... ummm I forgot...
RedHat generates absolutely massive amounts of "it kinda works, it must be done" code.
Google does pretty well when they're focused. I'm actually often amazed at how much good code comes from them. That said, there's a good bit of slop as well. But would you seriously believe you can employ that many programmers and have nothing but good code?
If RedHat were out of the game, there would be far less new bad code in Linux.... that said... there would be far fewer bug fixes as well. So I'm not sure if it would be a good or a bad thing.
I'm hoping there will be a new small and simple OS which could make a run for being the new "Let's try it" platform.
Aww, the poor Lintard babies had to downvote your post. A little bit too much reality for them. When the revolution comes, you little Lintards will be the first against the wall. Actually, a firing squad is too good for you. We'll just stick you in an elevator whose microcontroller has had its software written to exacting GLoonix Open Sores standards (i.e. code written by twisted sycophantic knob-polishers running around like headless chickens avoiding the retarded vituperation of that quadruple-chinned Finnish bloatwagon named Loonis).
Well - except someone DID inspect the code - Red Hat and Google - and flagged the bug *before* it was exploited (as far as anyone knows).
So, whilst there might be a notion that FOSS is perfect because Granny checks the apache source while Gentoo is installing, the fact that normal people rarely read source does not mean no-one reads the source.
Sure, Google - hehe - bypassed its "90 days, after which you're dead" policy in this case - it was afraid that if it was made public, a lot of its own infrastructure would have been at risk... As usual, different standards for you and for your competitors, right?
And how do you know nobody exploited this bug? It has been sitting there unseen for eight years... and this is not the first time I've seen DNS resolving code fail on non-"common", longer yet fully compliant answers (usually because they carry more valid data than most DNS servers return). I have a router whose longer DNS answers made many BusyBox-based devices fail.
"Lintards hid behind 'we are better than anyone'. Well, guess what: when you have amateurs hacking at code, this is what you get. Reality finally strikes for the Linux arrogance that it is free of the risks faced by more popular sw. The BS wagon must finally have overflowed on the 'FOSS lets us inspect the code, therefore it is better' nonsense."
Can't tell if you're serious, but assuming you are (posting as AC suggests you might be), you're not nearly as smart as you think you are, for an extremely long list of reasons - not least, you'll note, that it's Google who dug this one out. Just throwing that out there.
You think you've made some kind of snarky point with your remark. All you've done is highlight open source working as intended. The problem was identified and is being worked on. No code is perfect, and this highlights the importance of open source.
Now can you tell me how many vulnerabilities are in your closed-source OS, I wonder, from a company that fired nearly all its QA? Oh yeah - you can't.
I am so not looking forward to recreating my image files (virtual HDs as well as ISOs). It doesn't matter which ecosystem gets hit, since I do them all. And I'll be seeing trickle-down from each upstream package.
I couldn't care less about comparing security track records. More eyes, fewer evangelists, please.
"The Wintards will be all over this."
To which you need only remind them of the recent bug that left many Windows anti-virus packages with serious holes in them, amongst other things.
I'll say it again. There is no such thing as a completely safe operating system. If you want to avoid being hacked, stay offline!
I get Stratus VOS and VMS confused, but I certainly did zero-downtime patching on one or both of them. The OS let you replace a running executable and the runtime migrated all the threads as they terminated; you could even migrate threads between nodes on a cluster, thereby enabling zero-downtime firmware upgrades.
That's more likely VOS, but I'm no expert with Stratus kit.
VMS still needed reboots for certain library updates (yes, I'm looking at you, C RTL - you were usually the worst offender), and if you had to AUTOGEN the system to update certain system parameters. Clusters might achieve uptime measured in years (if you could reboot individual nodes to apply updates) but standalone boxes, not so much.
Autogen mostly (maybe not entirely) went away when VMS systems with sensible amounts of memory arrived. Much of Autogen was about tuning the allocation of limited real physical resources in the most appropriate way for a given system's workload, in a way which widely used OSes don't bother with. When the system has multiple GB of memory, that's not always a big issue, and that now includes VMS too. Autogen's still there if you want it.
VMS itself is still with us, the port to x86-64 is announced and timetabled, and VMS development and support is now being done by people with clue outside HP (with HP's agreement). Many of those people are well known from previous roles when VMS was a DEC product.
http://www.vmssoftware.com/
(no connection except as an observer)
"You'd have to jump through a lot of hoops to build one these days if for some odd reason you wanted to."
What's so odd about it?
For example, FreeBSD has had the /rescue directory for quite some time now; it's basically a directory packed with statically linked binaries (from bzip2 to mount, sed and tar and a whole lot more), and the reasoning behind it is quite simple: if for some reason your libraries become unavailable (for example because the /usr filesystem crashed, some installation went wrong, or a human error removed the wrong file(s)), you can always fall back to these tools.
I've never needed it myself so far, but I still think that there's nothing odd about the underlying philosophy.
If interested, the rescue(8) manual page has more information on this.
No one will own up to that. So much for the 'lots of eyes on the code' BS. Since there is no payback on actually reviewing code, it doesn't get done. Commercial companies, OTOH, have a vested interest in improving their products, hence the focus from MS and Apple, and even a bit from Google, on proactively finding holes and fixing them.
It seems there's some history starting in 2000 with a vuln which was apparently fixed in 2013 for version 2.18. I just checked a freshly updated Debian system and it is running a much older version. I guess there is some good reason to keep using the older versions if that's what Debian has been doing. Can someone here explain this?
Debian has issued patches that remediate this and other vulnerabilities for all presently maintained versions. Mine (version 8/jessie) were patched automatically this morning.
The notice sent (conveying detailed information for version 7/wheezy) recommended a reboot to ensure that no references to the old version were overlooked, but indicated that what was really needed was to cycle all the services that referred to the old and vulnerable library, which I assume would be nearly all of them.
@/dev/null and others
http://www.linuxquestions.org/questions/slackware-14/glibc-security-patch-cve-2015-7547-a-4175572402/
It appears that there was some kind of patch released back then by OpenSUSE. The patch for at least part of this vuln is still being applied to Slackware versions, which consequently don't respond to the proof-of-concept exploit. Please read the whole thread.
As PV put it...
"I've had two requests in email to remove the patch since glibc had supposedly fixed the issue that prompted it, but left it in place anyway. Maybe luck, maybe slack."
Tramp icon: every device with an Internet connection is only a few hundred milliseconds away from all the other devices. We are all in this kayak together, so perhaps paddling in the same direction would be a good idea...
Sonar is not great.

    namespace Foo { int bar() { return 0; } }  // is fine by Sonar, but
    int Foo::bar() { return 0; }               // causes false errors

We evaluated it on a large code base and I recommended we discontinue its use in favor of Clang/Valgrind.
Coverity is worth the money, but Sonar is a false economy and is verboten in these parts.
The commentary in here is hilarious. The fix has to bake for 30 days? WTF. The Linux crowd needs to get on the ball; this defect was known a year ago. Had MS or Google been sitting on something like this for that long, there would be a hue and cry from the Lintards.
The bigger issue is: what about all the routers and other crapware that have this hole, the ones that won't ever be upgraded? You know, the ones sitting between you and internet armageddon.
"The bigger issue is: what about all the routers and other crapware that have this hole, the ones that won't ever be upgraded? You know, the ones sitting between you and internet armageddon."
Perhaps you missed the bit about the problem residing in "glibc's DNS resolver". Nothing about the core routing of the Internet relies on DNS to continue routing.
Now, if this had been a flaw in BGP, then shiver me timbers. But this is really more an issue of devices that require DNS (or more importantly, perform DNS for others, a la BIND).
Most home routers run uclibc, which doesn't appear to be affected by this bug.
Anyone with a more sophisticated router/firewall setup might need to check their systems.
    ldd --version

on the command line should tell you whether you have glibc installed, and if so what version. It's not guaranteed to work on all systems, but it's worth a try.
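If ldd lets you down, glibc will also report its version programmatically. A quick check along these lines should compile on any glibc system (and will fail to compile against uclibc or musl, which is itself an answer):

    #include <stdio.h>
    #include <gnu/libc-version.h>  /* glibc-specific header */

    int main(void)
    {
        /* Prints e.g. "2.19"; compare against your distro's patched
         * build for CVE-2015-7547. */
        printf("glibc %s\n", gnu_get_libc_version());
        return 0;
    }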
I doubt much consumer routerage is running glibc; they tend to favour leaner libcs such as uclibc or (very recently) musl.
As for core-infrastructure DNS servers (not routers, as mentioned above), if they are pointed at upstream DNS servers that can be compromised, then we've got bigger problems.
This is because they still use the C language.
C is a contributory factor, not the core reason.
The real reason is that too many protocols like DNS, SSL, etc. have been amateurishly specified. They're often binary, poorly defined and poorly documented. This makes life difficult for implementers who, even if they want to check every field for validity, haven't got the documentation to do so.
What is crazy is that there are tools and libraries out there that make protocol specification easy, and the consequent implementation (with built-in validity checking) is completely automatic, even in languages like C. It's called ASN.1. This is a serialisation standard with value and size constraints built in. If you specify in the protocol schema that an array is a fixed size, the auto-generated deserialiser will not read any more than that and will return an error. Provided the tools and library are themselves OK (and they generally are), you won't then have a buffer overrun. It's really neat for things like that.
Take a look at the example on the Wikipedia page.
Maybe I'm being unfair to DNS - it's older than ASN.1. However, if a DNS response were an ASN.1 PDU with a well-written schema behind it, bugs like this would have been very unlikely to ever happen. The same goes for every other buffer overrun bug you have ever heard of.
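To make the "auto-generated deserialiser" point concrete: for a schema field declared as, say, OCTET STRING (SIZE(0..255)), the emitted C looks roughly like the sketch below. The names are hypothetical, but real tools generate very similar shapes:

    #include <string.h>

    #define MAX_HOSTNAME 255  /* from the schema: SIZE(0..255) */

    struct hostname {
        size_t len;
        unsigned char data[MAX_HOSTNAME];
    };

    /* Generated-style decoder: the schema's SIZE constraint is checked
     * before any copy, so an oversized wire value is rejected rather
     * than overrunning the buffer. */
    int decode_hostname(struct hostname *out,
                        const unsigned char *wire, size_t wire_len)
    {
        if (wire_len > MAX_HOSTNAME)
            return -1;  /* constraint violation, hard error */
        memcpy(out->data, wire, wire_len);
        out->len = wire_len;
        return 0;
    }

The point being that the check exists because the schema says it must, not because a programmer remembered to write it that morning.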
ASN.1 has been around for nearly 30 years now, but people are still ignorantly defining absurd binary protocols in poor ways that make it very difficult for programmers to avoid mistakes. Worse still, ignorant people are creating things like Google Protocol Buffers that are a poor imitation of ASN.1 and touting them around as the best thing since sliced bread. GPBs are simply woefully bad in comparison.
I blame universities for not teaching students properly.
The "A" in ASN.1 stands for "Abstract" - it isn't in itself a binary (or any other kind of) wire format, it's a description of the permissible components of a data structure.
The data can be encoded for interchange in many forms (BER, CER, DER, XER, ...), some of which are binary and some of which are text. The most common binary representations make robust handling unnecessarily difficult (particularly in respect of the size of each data element, which may be encoded in a number of different ways or be indefinite). This has led to precisely the same issues as occurred here being found in software using ASN.1.
There's a lot to be said for using a common representation format (it makes analysis easier and bugs can be fixed once rather than multiple times in different software), but there's a lot of bloat in ASN.1-based implementations that exists only to deal with rarely used features - and having large chunks of code that are rarely exercised is not an ideal basis for reliability either.
"There's a lot to be said for using a common representation format (it makes analysis easier and bugs can be fixed once rather than multiple times in different software), but there's a lot of bloat in ASN.1-based implementations that exists only to deal with rarely used features - and having large chunks of code that are rarely exercised is not an ideal basis for reliability either."
ASN.1 is still the only thing we have like this with a binary wire format that does constraint checking. It is the closest thing we have to a common representation format that doesn't leave out constraint specification and checking. It also does type and extent tagging.
If Google added constraints, message-type and extent tagging to GPBs, they would be useful. They would be a clone of ASN.1. As it stands, you cannot stream-read GPBs' wire format: you have to have a priori knowledge of what message is being sent, and you're reliant on devs writing extra code to check constraints.
The commercial ASN.1 tool sets I've used have been pretty good. If only someone like Google would do a decent open source implementation.
I think you'll find the "A" stands for "Awful", although after writing our own ASN.1 toolset I believe they missed a trick and it should really be "AAASN.1". A classic example of design by committee.
"the only binary wire format with constraint checking" is a bit rich - you could gzip an XML file and that would meet that description, and the constraints specification and checking you describe goes rapidly out the window in the real world, where there are dozens of ill-conceived OIDs that partially overlap. If you want your software to work, you'd better accept all of them.
It has legacy uses and is pretty firmly entrenched in public-key-based stuff, but I would never adopt it for a new design.
> The hell of Java is far too high a price to pay to avoid buffer overruns.
Java is full of security holes anyway. However, using C for networking code is just asking for it. Buffer overruns just like this have been breaking the internet for 30 years. And a lot of the bugs being found now have existed for 20-30 years, so you have to ask: how many wide-open holes exist but haven't been found yet?
Nuke all this shit now, thank you God in advance.
Clown number one - "Lintards!"
Clown number two - "Wintards!"
The Bard - "Rebellious subjects, enemies to peace,
Profaners of this neighbour-stained steel,
Will they not hear? What, ho! you men, you beasts,
That quench the fire of your pernicious rage
With purple fountains issuing from your veins!
On pain of torture, from those bloody hands
Throw your mistemper'd weapons to the ground..."
"Just remember that to exploit the vulnerability requires a compromised upstream DNS server, or MTM vector.
If that's happened, m'lud, I'd submit that this vuln is the least of your problems"
Because we can all control which DNS server the website we want to access decides to store its domain info on, can't we? Oh, wait...
"It's 2016 and clicking a link can pwn your Linux pc"? Worried about how much it would upset the freetards??
True. I mean, this story only really came out yesterday, and this morning Linux machines around the world are being updated as we speak.
I can't remember the last time Windows could do that. It usually takes them a few months to even acknowledge the problem, doesn't it, let alone fix it?
"So it's when the story breaks that's important to you, not when the vulnerability is introduced?"
Yes, because more often than not opportunists won't know about a problem like this until it goes public. These types of people would be scrambling this morning to find out how to exploit it, by which point the machines would be fixed.
It's not ideal to have these issues in the first place, but the people writing this software are only human. So more often than not, the fact a problem is introduced isn't in itself the problem. It's how it's handled that matters, and Linux developers have proven yet again that these things can be fixed in an appropriate time frame.
"Usually takes them a few months to even acknowledge the problem doesn't it, let alone fix it?"
Average time to fix security holes is lower on Windows (there is less time from an exploit becoming known to a fix being provided - so less time at risk). This is because most Windows vulnerabilities are not published like this before being fixed.
"This is because most Windows vulnerabilities are not published like this before being fixed."
Well, as this only appeared to be 'widely known' (1, 2) yesterday, and where I'm sitting now my Debian, Mint and OpenSUSE 42.1 boxes have all already been patched, I don't think we need any lessons in Redmond's ways.
1) https://googleonlinesecurity.blogspot.co.uk/2016/02/cve-2015-7547-glibc-getaddrinfo-stack.html
2) https://isc.sans.edu/forums/diary/CVE20157547+Critical+Vulnerability+in+glibc+getaddrinfo/20737/
"But most Microsoft holes are not published at all until a patch is available."
Yesterday was the 16th, and the date on the new libc is the 16th. My machines only picked up the patch when roused from sleep today. Has MS found a magic way of installing to machines that are off, or VMs that aren't actually being used?
"It is possible for a malicious DNS server to return too much information to a lookup request, and exploit the glibc flaw to flood the program's memory with code."
If you're using your ISP's or Google's or some other authoritative DNS service... then how can you be infected just clicking on a link?
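Because those aren't authoritative servers; they're recursive resolvers, and they relay whatever the authoritative server for the attacker's own domain hands back (a man-in-the-middle reportedly works too). And the vulnerable entry point is ordinary name resolution, which practically everything performs when you click a link. A minimal sketch:

    #include <stdio.h>
    #include <netdb.h>
    #include <sys/socket.h>

    /* Nothing exotic: any program resolving a hostname from a clicked
     * link goes through getaddrinfo. AF_UNSPEC requests both IPv4 and
     * IPv6, the parallel A/AAAA lookup path implicated in the bug. */
    int main(void)
    {
        struct addrinfo hints = { 0 }, *res;
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("example.com", "80", &hints, &res) == 0) {
            puts("resolved");
            freeaddrinfo(res);
        }
        return 0;
    }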
"... was labeled "P2 normal," suggesting it was not being treated as a super high priority. ..........
It appears Weimer and O’Donell – both glibc maintainers – were investigating the flaw in private, away from the public bug trackers, due to the sensitivity of the issue."
I wonder if there's a 'silent scream' process whereby something like this gets labelled "P2 normal" and the special ops team leaps into action. It would make sense when your bug trackers are public. Then again, they were both glibc maintainers, so it may have been truly independent action by just the two of them, initially.
Thanks for the update el'Reg. I'll get on with protecting my servers.
I just have to ask though:
"bug in glibc's DNS resolver – which is used to translate human-readable domain names, such as theregister.co.uk, into a network IP address."
Was this really necessary? I would posit that anyone reading this site, or at least 99.99% of readers, already knows what DNS is. Even most semi-technical users know what DNS is for. This line may be needed on a general news site, but on The Register, read mostly by techies?
This is bad - very bad. And probably very embarrassing for the glibc maintainers. Trying to apply a little thinking, though (rather than arbitrary religious affiliation to Some OS), it seems that ALL operating systems have had their share of major security issues, and there's no sign of that stopping. I do have some sympathy for the view that many Linux fanbois have a "we're better than you" attitude, but that's true of any of the groupies. There's a lot of conflation of glibc and Linux going on, though! High overlap between the two sets, but they are still distinct.
With respect to the actual exposure, I'm finding it rather hard to judge. Most embedded devices aren't affected [due to uclibc]; other systems will be using internal DNS servers which [assuming proper configuration] will mitigate the exposure. My immediate thought is that the truly vulnerable equipment is limited to anything which is (1) "big enough" to run glibc and (2) on the network perimeter, so not using internal DNS servers. And, of course, the DNS servers themselves!! Any thoughts on these lines welcomed.
We also need to remember that this is glibc... not Linux. So my Windows desktop has at least some exposure, because we rely on Cygwin for little things like git.
Seconded!
All operating systems I have worked on (including RSX-11, CDC NOS, CP/M in various flavours, VMS, Xenix, IRIX, AIX, MS-DOS, Windows in various flavours, Linux, Mac OS X) have had their share of SERIOUS errors, growing more hazardous over the years as they become more complex and machines become more interconnected. An OS for me is a tool, and I will pick such tools as work best for me given the application. I also realize all of them have their hidden flaws, because they were made by a bunch of ape-descended life forms who are so amazingly primitive they still think digital watches are a pretty neat idea (just like me ;-)). Our eyes evolved to pick out juicy fruit and crunchy beetles from foliage, not to find bugs in code.
Thus I do not worship an OS, just as I do not worship a hammer. I will say "OUCH" when any OS I use goes wrong in this manner, just as I say "OUCH" when I hit my thumb with my hammer.
"So my Windows desktop has at least some exposure because we really on cygwin for little things like git."
Install EMET. It should block this.
See http://blogs.technet.com/b/srd/archive/2016/02/02/enhanced-mitigation-experience-toolkit-emet-version-5-5-is-now-available.aspx
I'm willing to bet there are similar issues in every libc, and in the runtime environments for any "safe" language like Java, .NET or Python.
It would be nice to see a proper discussion of the issue and what people should be doing to avoid getting caught out by it, instead of retarded finger-pointing and sneering.
Question:
Can vulnerable systems which query a patched system be exploited remotely?
https://www.debian.org/security/2016/dsa-3481
"While it is only necessary to ensure that all processes are not using the old glibc anymore, it is recommended to reboot the machines after applying the security upgrade."
I don't want to reboot one system just yet.
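You may not have to. Processes still holding the old library show the mapping as "(deleted)" in /proc/PID/maps, so you can find and restart just the affected services. A rough sketch, assuming standard Linux /proc behaviour (run as root to see everything):

    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    /* List processes still mapping a deleted libc, i.e. ones that
     * need restarting after the upgrade. */
    int main(void)
    {
        DIR *proc = opendir("/proc");
        struct dirent *de;
        char path[300], line[512];

        if (proc == NULL)
            return 1;
        while ((de = readdir(proc)) != NULL) {
            if (!isdigit((unsigned char)de->d_name[0]))
                continue;  /* only numeric entries are PIDs */
            snprintf(path, sizeof(path), "/proc/%s/maps", de->d_name);
            FILE *f = fopen(path, "r");
            if (f == NULL)
                continue;
            while (fgets(line, sizeof(line), f)) {
                if (strstr(line, "libc") && strstr(line, "(deleted)")) {
                    printf("pid %s still uses the old libc\n", de->d_name);
                    break;
                }
            }
            fclose(f);
        }
        closedir(proc);
        return 0;
    }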