Re: One-in-two
You mean like Windows, Mac, and Linux? Do you manually patch all your systems? Really?
Wait... sequential processing considered harmful? That's what the article is saying: everything must be parallel. OK, but given two parallel machines each running 100 streams, the one whose sequential processing speed is twice that of the other will run all 100 streams at twice the speed. I fail to see the dichotomy between serial and parallel performance. Processors today have multiple cores, and rarely would a single sequential process use all the hardware available. Running multiple co-operating sequential tasks (à la Hoare CSP) is a model that works. The 100 serial tasks could be written in any language.
Getting the sequential logic right in Python, then looking at C if performance becomes an issue, is a reasonably useful heuristic for developing individual processes. An application should almost never be a single monolithic sequential process, but a composition of many. The lack of widely adopted practices for elegant IPC is hardly unique to C; Python is the same. Composition is an interesting problem, but aside from heavy computation, it is not at all clear that language constructs are more helpful (or closer to the actual hardware) than just writing explicit CSP.
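Something like this minimal Python sketch is the kind of composition I have in mind (my own illustration, not anything from the article; the producer/worker/consumer stages and the sum-of-squares payload are made up): plain sequential processes connected only by explicit channels, CSP-style.

```python
# Three sequential processes composed only through explicit channels (queues),
# with a None sentinel to signal end-of-stream.
from multiprocessing import Process, Queue

def producer(out_q, n):
    # Sequential task: emit n work items, then the sentinel.
    for i in range(n):
        out_q.put(i)
    out_q.put(None)

def worker(in_q, out_q):
    # Sequential task: square each item until the sentinel arrives.
    while True:
        item = in_q.get()
        if item is None:
            out_q.put(None)
            break
        out_q.put(item * item)

def consumer(in_q):
    # Sequential task: fold the results.
    total = 0
    while True:
        item = in_q.get()
        if item is None:
            break
        total += item
    print("sum of squares:", total)

if __name__ == "__main__":
    q1, q2 = Queue(), Queue()
    procs = [
        Process(target=producer, args=(q1, 100)),
        Process(target=worker, args=(q1, q2)),
        Process(target=consumer, args=(q2,)),
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Each stage is ordinary sequential code; any one of them could just as easily be a C program reading and writing a pipe, which is rather the point.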
Patching is hard... really monstrously hard. It is the single most important *security* activity, and yet it has zero visibility in most organizations. What *patching* means is being able to update your operational systems weekly, and having pre-operational systems that you can deploy to with confidence that they really will see whatever happens in ops. If half the energy spent on security baubles and consultants' checklists were spent on process and equipment to enable patching, the world would be much better off.
The terms of service refer to Mapmakers Group Limited. Google that, and you get a Moscow address and phone number... and LinkedIn has a director for the company, there for 26 years... I don't claim high confidence in the information, but what there was was dead easy to find.
The issue of trust is not just at a personal level, but also at a state level. Today, everyone must trust that the US has not had Intel include something objectionable in its processors. These days, such a requirement is really hard for nation states to accept. So while it is unreasonable for one person not to trust anyone, it is equally unreasonable to expect everyone to trust any particular someone.
That means nation states setting up systems they can trust, by assigning resources to assuring themselves that the hardware is trustworthy. That's not unreasonable.
How do you know the 3D printer isn't infected? You build the first printer from scratch, and it has very limited functionality, just enough to build the next printer, and you iterate, so that at the end you have a trustable device. This isn't reasonable for a person to do, but for a state actor, maybe...
This objection is specious... your router has two uplinks. It uses router advertisements to tell everyone on the LAN "this is your network segment", and all the hosts on the net pick their addresses using SLAAC, which takes a few seconds. When your primary internet goes down, the router advertisement daemon running on the router just has to notice and start advertising the backup network; everyone will autoconfigure to the new addresses, and in a few seconds everybody is up again. The IPv4 version used NAT, meaning your public address changed when you changed uplinks; just swapping router advertisements means all the clients get new addresses, which looks exactly the same from the outside.
The fact that no consumer-level router does this is a supply-and-demand problem, not a technical one.
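As a rough, hypothetical sketch of what such a daemon could look like on a Linux router running radvd (the interface name, prefixes, probe address, file path and the assumption that radvd rereads its config on SIGHUP are all mine, not anything a real consumer router ships): probe the primary uplink, and when it fails, rewrite the RA config so the backup prefix is preferred and the primary prefix deprecated.

```python
# Hypothetical failover watchdog for a dual-uplink IPv6 router (a sketch, not a
# product): ping a probe address via the primary uplink; on a state change,
# rewrite radvd.conf and signal radvd so hosts re-run SLAAC within seconds.
import subprocess
import time

LAN_IF = "eth0"                      # LAN-facing interface (assumption)
PRIMARY_PREFIX = "2001:db8:1::/64"   # example prefix from the primary ISP
BACKUP_PREFIX = "2001:db8:2::/64"    # example prefix from the backup ISP
PROBE_ADDR = "2001:db8:1::1"         # something reachable only via the primary uplink
RADVD_CONF = "/etc/radvd.conf"

TEMPLATE = """interface {lan} {{
    AdvSendAdvert on;
    prefix {active} {{
        AdvOnLink on;
        AdvAutonomous on;
    }};
    prefix {deprecated} {{
        AdvOnLink on;
        AdvAutonomous on;
        AdvPreferredLifetime 0;   # deprecated: hosts stop using it for new traffic
    }};
}};
"""

def primary_up():
    # One ping with a short timeout; success means the primary uplink is alive.
    # ("ping -6" on recent iputils; older systems ship a separate ping6.)
    return subprocess.run(
        ["ping", "-6", "-c", "1", "-W", "1", PROBE_ADDR],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

def advertise(active, deprecated):
    # Rewrite the RA config and ask radvd to reload it.
    with open(RADVD_CONF, "w") as f:
        f.write(TEMPLATE.format(lan=LAN_IF, active=active, deprecated=deprecated))
    subprocess.run(["pkill", "-HUP", "radvd"])

if __name__ == "__main__":
    on_primary = True
    advertise(PRIMARY_PREFIX, BACKUP_PREFIX)
    while True:
        up = primary_up()
        if up != on_primary:             # uplink state changed: swap advertisements
            on_primary = up
            if on_primary:
                advertise(PRIMARY_PREFIX, BACKUP_PREFIX)
            else:
                advertise(BACKUP_PREFIX, PRIMARY_PREFIX)
        time.sleep(5)
```

Deprecating the old prefix (preferred lifetime zero) rather than simply dropping it lets existing connections drain while new connections pick up addresses from the backup prefix.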
So you have a 175 Mbps link, and you are worried that it is so full that a voice link consuming (worst case) 64 kbps, or 64 / (175 × 1024) ≈ 0.036% of the available bandwidth, will be impacted? Really? DNS traffic is essentially real-time, as is gaming traffic... and video conferencing like Skype and G+... there are lots of use cases where low latency is important for internet traffic, and voice is no longer anything special. I find your QoS concerns very 20th century.
I think you are assuming the ISPs will just block it. What the ISP would actually do is apply some sort of packet preferencing. So the rest of the internet will work fine, but Netflix will have poor quality, dropouts, and pauses. The consumer will see that the internet works but Netflix doesn't, and will contact Netflix. Netflix sees impact to their brand and higher support costs, and it is not clear that the consumer will actually blame the ISP.