So SBG4 was undamaged where my server is
But the estimate for reconnecting servers has slipped from today (the 15th) to "a gradual restart" on the 22nd, so two weeks of downtime for a building that "wasn't damaged".
Does it really? I mean really? Care to link to one of those proofs of concept?
Spectre, as I understand it, can only be exploited by code with root privileges on one virtual machine attempting to grab info from another.
The biggest weakness in any organisation is always going to be the human element. The "password on a post-it note" syndrome.
Why bother going to the immense time and trouble of developing speculative Spectre exploits to harvest random data when you could just honey-trap a senior executive who has all the access you need?
Most of my browsing is done on an old crappy laptop that sits next to my sofa and only has integrated graphics rather than a GPU.
For years I ran Opera on it, until they threw themselves over a cliff by moving to WebKit and dropping most of the customisation stuff that I loved. That's when I moved to Firefox, as I discovered extensions that allowed me to set it up just the way I wanted.
It seems plenty fast enough for me (especially with Adblock dropping all the crap that really slows most pages down). So no I won't be upgrading this machine any time soon.
I might give it a go on my gaming / work machine, which does have a nice fast GPU. But then again, Firefox already zips along on that machine anyway, so I'm not sure there's a point.
There was an article a while back (I think it might have been one of the on-call ones, or maybe during the big BA snafu recently) about the perils of testing your disaster recovery systems in live environments. Who wants to be the one who admits to crippling a datacentre because their failover testing failed?
I had my own mini problem with this outage, since my recovery plan relied on OVH not losing ALL connectivity throughout Europe...
The CEO has just tweeted his explanation of the incident
Claims it was two unconnected major incidents, one power failure and one optical fibre control f*ckup. Though how he expects anyone to believe that the first didn't lead to the second I've no idea.
And as for their 'failover' system, I tried to use it to move an IP address from the datacentre with the power failure to an unaffected one, but the move task just got stuck in their API. The move task is still there, with no way I can see to delete it, so now I may have problems at some unspecified time in the future if the move does eventually happen when I don't want it to.
What I wouldn't have given for a 1200-baud modem!
I had to do overnight support for vital banking systems using a portable teletype machine, which, if it worked at all, could only manage 300 baud, spitting out text a letter at a time onto thermal paper with the consistency of that shiny bog roll cheap hotels used to use.
I worked there starting in 1995 shortly after the Telegraph had launched (in November 1994) the UK's (World's?) first daily news website as Electronic Telegraph.
At the time, we did nightly updates taking copy from the print edition to put online. Each edition was produced by just three people to start with. On the nights I was on shift, one of my tasks was to check through the whole update for problems before putting it live.
I never, ever let any problems slip through...
and never ever had to race back to Canary Wharf in the middle of the night to fix things....
Oh and coincidentally, the original deskspace for the site on the 11th floor of One Canada Square was right next to obituaries!
"I can't say I agree with Corbyn on lots of subjects but I do respect the fact that he seems to believe in certain principles."
This, so much this. Blair and then Cameron were all about spin and message. May is an outright liar, so it's refreshing to have someone who has principles and will say the same thing next week as he said last week. He's also someone who believes in talking rather than dropping bombs on people. I hope we get more politicians of conviction on the back of this.
For PCs during that period, in pure tech terms, Acorn's ARM machines running RISC OS were way ahead of offerings from anyone else, and prior to that the BBC Micro (built by Acorn).
It's just such a shame that Acorn lacked any international marketing savvy then.
IPv6 is badly designed and poorly thought out. It could have been made backwards compatible with IPv4, which would have ensured a smooth and orderly adoption, but the 'designers' thought they could do 'better', with the result that it has had to be dragged kicking and screaming into the world; twenty years on it's still ignored by many.
See https://cr.yp.to/djbdns/ipv6mess.html for a detailed analysis of how the IPv6 designers got it so horribly wrong.
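One of the points in that analysis is that the transition mechanisms that do exist don't actually bridge the two protocols. As a quick illustration (a minimal sketch using Python's standard `ipaddress` module): an IPv4-mapped IPv6 address can carry an IPv4 address inside IPv6 software, but a native IPv6 address has no IPv4 equivalent at all, which is why the two networks can't interoperate transparently.

```python
import ipaddress

# IPv4-mapped IPv6 addresses (::ffff:a.b.c.d) let dual-stack software
# represent an IPv4 peer inside an IPv6 address...
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)   # prints 192.0.2.1

# ...but a native IPv6 address has no IPv4 form, so an IPv4-only host
# simply cannot reach it -- the incompatibility the article describes.
native = ipaddress.IPv6Address("2001:db8::1")
print(native.ipv4_mapped)   # prints None
```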
I'll just point out that the various plugins to disable the API only do so for unauthenticated users, so if you install one you need to log out of the admin panel to see it in action; otherwise the API will still return any info you request.
I really, really wish they'd just kept all this shit as a plugin though, which is where it belongs.
Thanks, I have now found https://wordpress.org/plugins/disable-json-api/ which has been updated to disable the whole REST API for unauthenticated users.
But I can't get my head around why the WordPress developers haven't made this the default state. If individual users have a use for the API then fine, they could switch it on. But then again I don't see the argument for moving the API into core in the first place, rather than leaving it as an addon (where it started life). To me it smacks of a "look at us, aren't we clever for doing this" type of thing, rather than something that is genuinely useful to most people.
There are all sorts of things you could build on top of the API, but I'd suggest that for 99% of them you'd be better off doing it a different way.
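If you want to check whether a blocking plugin is actually working (after logging out, as noted above), a probe like this is one way. This is a rough sketch, not anything official: it assumes the standard `/wp-json/wp/v2/users` route, and `rest_api_exposed` / `probe` are hypothetical helper names of my own.

```python
import json
import urllib.error
import urllib.request

def rest_api_exposed(status: int, body: str) -> bool:
    """Interpret a response from /wp-json/wp/v2/users: a 200 carrying a
    JSON list means the API is handing user data to anonymous visitors;
    a 401/403/404 (or non-list body) means something is blocking it."""
    if status != 200:
        return False
    try:
        return isinstance(json.loads(body), list)
    except ValueError:
        return False

def probe(site: str) -> bool:
    """Fetch the user-enumeration endpoint of the given site URL."""
    url = site.rstrip("/") + "/wp-json/wp/v2/users"
    try:
        with urllib.request.urlopen(url) as resp:
            return rest_api_exposed(resp.status, resp.read().decode())
    except urllib.error.HTTPError as err:
        return rest_api_exposed(err.code, "")
```

Run `probe("https://example.com")` from outside your network; `True` means anonymous visitors can enumerate your users.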
I just had a look at the details of the bug. It was found in the new REST API that WordPress enabled by default for the first time in 4.7.0.
When I read the patch notes for 4.7.0 I sighed inwardly at having a new API, which I currently had no interest in using, enabled by default, and I looked for a way to turn it off. It seemed that there was no easy way to disable it, and the documentation I found cautioned against doing so anyway, as the API is apparently used by unspecified core routines.
Here's a quote from someone on StackOverflow:
"The REST API is not really a security issue, but I suppose some could surface in the future. It's much more important to look at Hardening WordPress - WordPress Codex and Brute Force Attacks - WordPress Codex
As of WordPress 4.7, the filter provided in core for disabling the REST API (via functions.php) was removed because the API is in core now. There is no official option to disable the API as some core functionality depends on it. So if you disable the API, you may see breakage, because by default the API is in core and is available for use by themes and plugins and other sites."
(I bet the author of that reply feels pretty stupid about that first sentence now!)
The whole thing is just an accident waiting to happen. I shall look again at ways to turn off this unwanted API.
"Unfortunately, it is by now impossible to avoid this abomination if you have to stick with a major distribution".
I hate the philosophy of systemd too, but it's still fairly straightforward to run the current Debian release using sysvinit instead.
I switched all my servers back to sysvinit when I discovered that during a standard reboot systemd was shutting down logging to syslog BEFORE all applications had been cleanly shut down, thus important messages were lost. For instance, if you just went by syslog it would appear as though MySQL had crashed and not been shut down cleanly.
Anyway, a guide to switching back to sysvinit is here; it's very simple:
When all the hype about Docker started I had a look at it, and the difficulty of getting timely security updates was one thing that put me off. That, and the layer upon layer of filesystem structure, with seemingly no easy way to merge redundant layers, was frankly a little psychotic (it may be better now, I haven't checked).
I have a house in rural France and around here nobody seems to have heard of, or care about, H&S rules.
It's common to see people working on steeply pitched roofs without any safety equipment whatsoever.
There's one old boy who works on his own with a van and a long ladder repairing roof tiles. He was at a house across from me last year and it made me feel quite queasy to see him going up on the roof all on his own, even climbing the ladder one-handed as he held on to a stack of new tiles on his shoulder with the other.
The problem with a blockchain is that there is no concept of archiving. So to properly verify the current entries you need the whole chain, which just keeps growing and growing.
Unless that is, you have some sort of central authority to sign and publish checkpoints in the chain periodically.
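To make the checkpoint idea concrete, here's a toy sketch (entirely hypothetical names, a bare hash chain standing in for a real blockchain): once some authority has published and signed the hash at block N, a verifier only needs the blocks after N, rather than replaying the whole chain from genesis.

```python
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    """Chain a block's payload to its predecessor's hash."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    """Build a toy chain: a list of (payload, cumulative_hash) pairs."""
    chain, h = [], "genesis"
    for p in payloads:
        h = block_hash(h, p)
        chain.append((p, h))
    return chain

def verify_from_checkpoint(chain, checkpoint_index, trusted_hash):
    """Verify only the blocks after a published checkpoint hash,
    instead of rehashing the entire ever-growing chain."""
    h = trusted_hash
    for payload, recorded in chain[checkpoint_index + 1:]:
        h = block_hash(h, payload)
        if h != recorded:
            return False
    return True
```

The catch, as the post says, is that `trusted_hash` has to come from somewhere you trust, which reintroduces exactly the central authority the blockchain was supposed to remove.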
Yep, Apple need to get off their high-horse. All they've effectively done is create a super-super user. It doesn't make root problems magically go away, it just moves the target.
Meanwhile, slightly offtopic, but try checking the details of an HTTPS certificate in mobile Safari... and you can't.
My understanding is that TLS was a 'rebranding' of SSL when it got to v3.1 (i.e. TLS v1.0 = SSL v3.1). However, reports often seem to mix the terms, as we have in this story ("An attacker can exploit support for the obsolete SSLv2 protocol – which modern clients have phased out but is still supported by many servers – to decrypt TLS connections.")
So in simple terms is my TLSv1.2 connection vulnerable simply because the server still supports SSLv2 (even if I'm not using it) or only if my connection is actually SSLv2?
And if I'm confused (as an experienced IT person) what hope does the average user have?
Not that I'm complacent, I patch the Linux servers I manage at least every week.
However, security consultants like to make the latest bug sound like the end of the world, when really it isn't, or anywhere near it. Well-managed servers will get patched in a timely fashion, some badly managed servers will get deservedly bitten and need to be rebuilt, and in the process we may get to learn who the IT-incompetent companies are (I'm looking at you, TalkTalk).
The world will keep turning and a few more cowboys will go to the wall.
"Oh yeah, and what about the man page on "ln" which eschews the usual unix idiom and waffles so effectively that no-one can figure out which comes first: the file name or the link name. man pages are a cowpat in the field of technical documentation."
I don't know what man page you were looking at, but on Debian 8.3 man ln starts:
ln - make links between files
ln [OPTION]... [-T] TARGET LINK_NAME (1st form)
Then it goes on to list the variations and what each option does. Pretty clear to me: target first, link name second.