Re: Huge markup
Cloudflare doesn't bother to charge for bandwidth - that's how cheap bandwidth has gotten these days.
I serve around 35TB through Cloudflare each month. AWS would charge thousands for that; our dedicated server provider charges just hundreds.
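Rough numbers, assuming the commonly quoted ~$0.09/GB AWS internet egress list price (tiers and regions vary, so treat this as an estimate only):

    monthly_tb = 35
    monthly_gb = monthly_tb * 1000
    aws_egress_per_gb = 0.09                 # assumed first-tier AWS list price, USD/GB
    print(monthly_gb * aws_egress_per_gb)    # ~3150 USD/month, vs effectively $0 through Cloudflare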
Trying to find a phone today that can't do NFC would be quite difficult. Not impossible but very difficult.
Remember this is to make captchas easier, not be the only option. As I already have a yubikey I look forward to using it instead of clicking on traffic lights.
And no, bots can't fake it, as per the original article. Cloudflare relies on the fact that the original device manufacturer signs the keys in batches of 100,000, and Cloudflare has a whitelist of vendors. A bot could emulate a security key in general, but it won't be signed by a reputable manufacturer of security keys and will therefore be rejected.
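Roughly what that vendor check looks like, as a sketch using the Python cryptography library and a made-up whitelist (a real verifier also checks the signature chain, and this is not Cloudflare's actual code):

    from cryptography import x509
    from cryptography.x509.oid import NameOID

    # Hypothetical whitelist of attestation roots from reputable key manufacturers
    TRUSTED_ATTESTATION_ISSUERS = {"Yubico U2F Root CA Serial 457200631"}

    def vendor_looks_trusted(attestation_cert_der: bytes) -> bool:
        # The attestation cert is shared across a whole manufacturing batch,
        # so it identifies the vendor, not the individual user or key.
        cert = x509.load_der_x509_certificate(attestation_cert_der)
        names = cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)
        return any(n.value in TRUSTED_ATTESTATION_ISSUERS for n in names)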
I'm slightly disturbed by this article and comment section - do people seriously not know that hardware security tokens exist and how they work?
I've been using a yubikey for years for security reasons. It's fantastically convenient and virtually unbeatable security-wise. Way better than SMS or 6-digit numbers for multi-factor authentication.
Sure if you don't have a yubikey already you aren't going to rush out to buy one just to beat some captchas, but I would have assumed a lot of this audience would already have them. Or they should be seriously thinking about getting one at least.
Is it really Facebook re-using content, when the news organisations happily and freely post it themselves?
They ask for a platform to share their news on, then demand cash from that platform for carrying it.
And now they are moaning about the platform they don't pay for being taken away from them.
Patching on Linux is at least simple - no reboots.
It's amazing that Microsoft hasn't figured out how to avoid regular reboots by now. Any Windows admin boasting of high uptime is admitting his servers are insecure. My production Linux servers generally have over a year of uptime (the last reboot was datacentre-maintenance related).
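On Ubuntu and friends you can even check whether the rare update that does want a restart (typically a new kernel or libc) has landed - a quick sketch, nothing clever:

    import os

    # Debian/Ubuntu package hooks touch this file only when an installed
    # update actually needs a restart; everything else just needs the
    # affected service bounced.
    if os.path.exists("/var/run/reboot-required"):
        print("reboot needed")
    else:
        print("patched, no reboot required")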
Good write up - first news site I've seen that didn't say it was a Cloudflare outage.
Cloudflare got blamed by everyone else since a lot of their error pages were visible to end users. The error pages only appeared because the origin servers only had Level 3 for transit, of course, so alternate routes weren't available.
People saw the Cloudflare logo and instantly assumed they were the source of the problem.
To be fair, the paid version which I've used for a couple of years now does not require participants to have a login and offers phone dial in mechanisms as well.
It's one of those actually quite well implemented products which no one really knows about.
Zero software to install, unlike Microsoft Teams, which repeatedly asks if you wouldn't much prefer its app, or Zoom, which forces you to use one.
I came here to mention this too.
The reason why BGP is involved is likely Cloudflare removing their contributing servers from the F root entirely.
This probably took time because they were hoping to just fix the code instead of disabling all their F root servers, but they couldn't do it fast enough so they pulled the plug.
Without Cloudflare's F root servers in the pool, all the other F servers, which never had any issues, would pick up the slack.
We've got a decent-sized VMware cluster for our production workloads: 3 nodes, 96 CPUs, 576GB of RAM. Currently looking to expand this significantly, actually.
A lot of our stuff is FOSS, and VMware is running around 30 Ubuntu VMs. I have to pick and choose where we spend time tinkering, however - I can tinker with our outbound mail server or a specific database, but the entire platform the company runs on? I'm not prepared (and don't have the time) to tinker there. Easier just to pay for it since it's mission critical (and we have a provider who supports it as needed too).
Incidentally it still comes out way cheaper than AWS, even with the VMware licence fees.
$1b sounds a lot more FRAND than $0.
Just because they got caught paying $0 doesn't mean they get FRAND rates for all their past infringement. Otherwise no one would bother paying at all until they got caught and sued.
No, but tracking the movements of fellow workers and automating checks on their calendars is creepy as all hell, and certainly not "do no evil". They can disagree with some of those projects without actively spying on the individual people working on them.
Sounds like they got too cocky thinking they were untouchable and that no one would notice what they were doing.
Compared to other cloud outages, this one is very minor. Not only was it detected and acknowledged quickly, it was also resolved extremely quickly, and the postmortem lets you know exactly what went wrong in great detail.
Outages happen. If only they were all this pleasant to experience.
Yes, but no. It's progressive JPEG, but for multiple progressive JPEGs at once.
Having 10 progressive jpegs on your site isn't much use if the first one has to load fully before the next one starts.
Cloudflare"s technique allows all 10 to progressively load at the same time.
And Telstra in Australia decided to route a good chunk of the domestic Internet to Melbourne and two very confused routers that sat there bouncing packets back and forth until their TTL ran out.
Halved our servers' traffic for an hour, and Telstra doesn't handle any transit or peering for us at all!
Another happy Pebble user here too. Pebble Time, a little scuffed and the battery isn't quite a week anymore, but it's fantastic.
This is the first watch that makes me think about replacing it. Nothing short of a week of battery will satisfy me, and sleep tracking is occasionally useful no matter how much the Apple Watch users say it's not.
All that assumes that the underpaid staff at the stores with essentially root access follow that elaborate secure procedure.
How staff in stores can override a procedure like that I'll never know. It should be automated for them and if the user can't verify themselves then it should be escalated to a special department with tighter controls.
The argument about government CAs isn't a good one.
You can always verify who issued a particular certificate, so if you went to Google.com and you noticed their SSL certificate was issued by a Chinese CA it would be blatantly obvious.
For most potential targets, various monitoring would pick it up, so manually verifying each certificate's CA isn't needed; it'll be noticed by others.
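Checking takes seconds anyway - for example, straight from Python's standard library:

    import socket, ssl
    from pprint import pprint

    ctx = ssl.create_default_context()
    with socket.create_connection(("google.com", 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname="google.com") as ssock:
            # The issuer fields show exactly which CA signed the served cert;
            # an unexpected CA here would stand out immediately.
            pprint(ssock.getpeercert()["issuer"])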
Partial failures like that typically mean the connection can no longer reliably carry traffic, but the equipment still thinks the link is online, so it never enacts the failover procedure.
So no prior failure is required, just the monitoring being told that something is up when it's actually down.
These extremely rare failures actually happen all the time. Earlier this year, servers I manage were also knocked offline by a partial failure which prevented automatic failover.
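The fix is end-to-end checks rather than trusting the link state; something along these lines (a rough sketch, assuming a plain HTTP health endpoint):

    import urllib.request

    def path_is_healthy(url, timeout=5):
        # Probe the actual service over the path in question: a link can
        # report "up" while silently dropping traffic, so interface state
        # alone is not enough to decide whether to fail over.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False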
FTTP can (and does) still have congestion at many different points.
Firstly it's using GPON, with a fibre running at 2.488Gbps shared between up to 32 houses. If those 32 had 100Mbit plans and decided to use them all at the same time, then you have a (small) problem.
Then you have POI congestion, where the ISP doesn't buy enough bandwidth. This happens all the time and affects FTTP and FTTN equally.
And then you have ISP congestion from the cheap ISPs with garbage internal networks.
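Quick sanity check on the GPON numbers:

    # 32 premises sharing one 2.488Gbps GPON downstream, each on a 100Mbit plan
    demand   = 32 * 100        # 3200 Mbit/s if everyone maxes out at once
    capacity = 2488            # Mbit/s available on the shared fibre
    print(demand / capacity)   # ~1.29x oversubscribed - hence the (small) problem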
Fixed wireless, however, has a fairly fixed maximum total speed per tower, and it's shared with a lot more people, so it's the most susceptible after satellite.
Of course you automate it. You'd have to be crazy not to!
Every certificate I deal with (thousands) is fully automated these days, except for specialty types like wildcards, and even those are partially automated.
Anyone manually mucking around with certificates in this day and age either doesn't have many, has some very pedantic requirements or doesn't know any better.
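For what it's worth, the issuance side is just an ACME client (certbot, acme.sh or similar) on a timer; the watching side can be a dumb little script along these lines (thresholds and hosts are assumptions, not anyone's production setup):

    import socket, ssl, time

    def days_until_expiry(host, port=443):
        # Pull the certificate actually being served and see how long it has
        # left; anything under ~30 days should trip the automated renewal.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as ssock:
                cert = ssock.getpeercert()
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((expires - time.time()) // 86400)

    print(days_until_expiry("example.com"))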
I think the point is that, typically, everyone's computer would put it in the exact same location, making attacks against multiple computers trivial.
With ASLR, each computer's layout differs, so a buffer overrun or similar attack first has to find its target addresses, which makes it a lot harder.
It's not about having code jumping around constantly on a single PC.
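You can see it for yourself on Linux (a quick illustration, not an exploit): print the address printf is loaded at, and it moves on every run.

    import ctypes

    # Resolve printf in the already-loaded C library and print its address;
    # with ASLR on, separate runs land the library at different addresses.
    libc = ctypes.CDLL(None)
    print(hex(ctypes.cast(libc.printf, ctypes.c_void_p).value))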
But it doesn't have to go there. There are multiple routes via multiple providers.
Prior to Google's announcement, a Japanese ISP already had one or more routes to each destination. The new 'shorter' Google route got added in addition to the already existing ones.
With some sort of monitoring you could detect that routes via the new announcement are failing, then revert back to the longer pre-existing routes.
4GB tablets will very much benefit from 64 bits - 32 bits can never use all of 4GB of memory.
Try it. Approximately 0.5GB will magically vanish when you load a 32-bit OS on a computer with 4GB of RAM.
Remember, the bits are used for address space, and RAM isn't the only thing in the address space.
Your graphics card's RAM is carved out of that 4GB of addresses, plus various IO devices take their share as well.
A 32-bit address space doesn't equal 4GB of RAM.
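The sums are straightforward:

    address_space = 2**32            # a 32-bit machine can address 4GB in total
    io_reserved   = 512 * 2**20      # ~0.5GB carved out for graphics aperture and other IO (varies by machine)
    usable_ram    = address_space - io_reserved
    print(usable_ram / 2**30)        # ~3.5GB of the installed 4GB is actually reachable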
And if you go for some nice deep-cycle car or truck batteries instead of the much smaller UPS batteries, you get absolutely phenomenal capacity.
A lead acid battery rated at 300 amps (not continuous) equates to 3.6kW for a single battery. And at lower power levels you'd get incredible duration.
I've got a small car battery dedicated to a 1.5kW inverter for emergencies. A friend had a blackout a couple of weeks ago and it kept their TV and Playstation going for about a day before they got power back. The voltage afterwards was still 12.5V, which is roughly 50% charge.
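Rough figures, with the capacity below purely an assumed number for illustration:

    volts         = 12
    cranking_amps = 300                     # the "300 amps (not continuous)" rating above
    print(volts * cranking_amps)            # 3600W, i.e. 3.6kW peak from a single battery

    capacity_ah = 100                       # assumed deep-cycle capacity, Ah
    load_w      = 100                       # e.g. a TV plus a console
    print(capacity_ah * volts / load_w)     # ~12 hours, before inverter losses and depth-of-discharge limits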