Glad that they don't use their own infrastructure for their status domain...
They got that part correct... *cough* AWS.
Global internet glue Cloudflare experienced a brief network outage on Friday that broke multiple apps and websites, including your humble Register. On its status page, as of Jul 17, 21:37 UTC, the DNS-and-everything-else provider said it was "investigating issues with Cloudflare Resolver and our edge network in certain …
This post has been deleted by its author
Well... DNS is decentralized. You may recall that before DNS, name resolution depended on a flat file (HOSTS.TXT), and you had to fetch that whole file again to pick up any updates or changes.
You're also conflating two (or three) sides of DNS: the client side and the authoritative side (the client's recursive resolver sits somewhere in between).
Your client can have as many resolvers listed as you like, and it'll resolve whatever it can actually reach (within reason... there's a whole host of rules the OS applies before giving up on the primary DNS server and moving to the secondary/tertiary). On the other end, well... that's only as good as the DNS servers chosen by that domain's operator.
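The fallback behaviour described above can be sketched roughly like this. It's a toy illustration, not how any particular OS stub resolver actually works: the server addresses are made up, and the wire-level query function is injected so the retry logic can be shown without a network.

```python
import socket

def resolve_with_fallback(hostname, resolvers, query, timeout=2.0):
    """Try each configured resolver in order, moving on only when
    the current one errors or times out. `query` is whatever
    function actually speaks DNS on the wire; real stub resolvers
    layer extra rules (retries, rotation, ndots) on top of this."""
    last_error = None
    for server in resolvers:
        try:
            return query(server, hostname, timeout)
        except (socket.timeout, OSError) as exc:
            last_error = exc  # this server failed; fall through to the next
    raise RuntimeError(f"all resolvers failed for {hostname}") from last_error

# Simulated outage: the primary is unreachable, the secondary answers.
def fake_query(server, hostname, timeout):
    if server == "192.0.2.1":
        raise OSError("no route to primary")
    return "203.0.113.7"

print(resolve_with_fallback("example.com",
                            ["192.0.2.1", "192.0.2.2"],
                            fake_query))
```

The point of injecting `query` is exactly the one made above: the client-side list only helps for failures the client can see (timeouts, unreachable servers); it does nothing if the authoritative side itself is broken.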
You mention AWS... it is interesting that AWS takes a very peculiar approach to this issue. To avoid downtime from overload and/or some weird outage of, say, the .com or .net or .org TLD servers, they actually spread their nameservers across four different TLDs: .com, .net, .org and .co.uk.
The issue isn't that DNS isn't decentralized... it's that Cloudflare is becoming "too big to fail".
My question is why they weren't following the golden rule of "read-only Friday". Routine is routine, until something shits the bed.
"The outage occurred because, while working on an unrelated issue with a segment of the backbone from Newark to Chicago, our network engineering team updated the configuration on a router in Atlanta to alleviate congestion", says https://blog.cloudflare.com/cloudflare-outage-on-july-17-2020/. The word "routine" doesn't appear anywhere in the post.
I saw it happen (working from home), I went and got into the pool with a beer and when I got out everything was up and running again. Problem Solved!
The internet is designed to be reliable, but I am never surprised when things like this happen, because "designing" it to be reliable doesn't mean that everyone sets their access up to be 100% reliable. Generally, though, any issue like this is fixed quickly.
Ah! So that explains it. I was surfing before bedtime and found there were a number of sites I couldn't get to any longer. I called my ISP (who answered because even after 10pm IDNet are awesome) but they were unaware of any problem with their servers.
It's a little bit disturbing that a router in Atlanta can prevent me (a Briton) accessing The Register and Thinkbroadband. Not in a political sense, but why are so many of my DNS queries to/from UK sites going across the Atlantic?
I noticed around the end of the day (eastern USA) that I wasn't able to get to a bunch of Internet sites. I just assumed that systemd-resolved had b0rked itself and I'd fix it later, but when I came back to the computer in the evening, everything was fine. I use the 1.1.1.1 DNS resolver, which is operated of course by CloudFlare. I guess I'll just have to set up my own DNS resolver that talks directly to the root servers.
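Talking directly to the roots means speaking raw RFC 1035 yourself. A minimal sketch of just the query-building side, using only the standard library (response parsing and the referral-chasing loop are left out; the hostname is just an example):

```python
import secrets
import struct

def build_query(hostname, qtype=1):
    """Build a minimal RFC 1035 DNS query packet. QTYPE 1 = A,
    QCLASS 1 = IN. The recursion-desired flag is left clear,
    as an iterative resolver querying the roots would send it."""
    txid = secrets.randbits(16)
    # Header: id, flags=0, QDCOUNT=1, ANCOUNT=NSCOUNT=ARCOUNT=0
    header = struct.pack("!HHHHHH", txid, 0x0000, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)
    return txid, header + question

txid, packet = build_query("example.com")
```

Sending that packet over UDP port 53 to a root server such as 198.41.0.4 (a.root-servers.net) gets you a referral to the TLD servers; chasing referrals down to the authoritative answer, and caching the results, is the rest of the job — which is most of it.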