Re: If the only purpose is work checks...
It should have changed in England and Wales a decade ago, except the SNP decided it merited an exception from their usual policy of not voting on legislation that doesn’t cover Scotland.
They actually do get obsolete hardware-wise. Take the ASA 5555-X: it has a first-generation Intel Core processor from 2006. As firewalls tend to need to be flexible in what they’re doing, quite a bit ends up on the CPU rather than on a dedicated accelerator. So overall firewall throughput is 4Gbit, but turn on any of the advanced inspection features and that drops to 1.2Gbit. If you want to do VPN with it, throughput is only 700Mbit. I used to run a small cluster of these to provide enough capacity. Whereas a modern 3100 series can do 10s of Gbit from one box.
Bandwidth is ever increasing and malware writers come up with more sophisticated ways to evade detection which in turn requires more smarts on the inspection devices. So eventually that old firewall won’t keep up.
People always seem to leave off the rest of Gove's comment about experts.
"I think the people in this country have had enough of experts from organisations with acronyms saying that they know what is best and getting it consistently wrong."
How often do you hear that economic forecasts have been revised? Many of them make astrologers look good and accurate, which was his point.
The HTTP challenge is not the only method of proving control to Let’s Encrypt. If you control DNS you can publish a token in a TXT record (the DNS-01 challenge) that does the same job. You’ll want to be able to automate your DNS, as the token changes each time.
But webservers are generally easy, you can install things like certbot to do this for you. The pain is with ‘appliances’ where the only access is through a GUI.
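For the curious, the TXT record value for DNS-01 is derived from the challenge token plus your ACME account key’s thumbprint (RFC 8555 §8.4). A minimal sketch in Python; the token and thumbprint strings here are made-up placeholders, in practice your ACME client supplies them:

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    # RFC 8555 section 8.4: the record value is
    # base64url(SHA-256(token "." key-thumbprint)), with padding stripped.
    key_authorization = f"{token}.{account_thumbprint}".encode("ascii")
    digest = hashlib.sha256(key_authorization).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Publish the value at _acme-challenge.<your-domain>, e.g.:
# _acme-challenge.example.com. 300 IN TXT "<value>"
value = dns01_txt_value("made-up-token", "made-up-thumbprint")
print(value)
```

Tools like certbot compute and publish this for you via DNS provider plugins; the point is that it’s pure DNS, so no webserver access is needed.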
The sockets API that everyone decided to use came from Berkeley and is known as BSD sockets.
But the implementations are not derived from BSD code. Infamously around 2000 Windows was using a BSD-derived TCP/IP stack, but that was rewritten for Vista. There’s probably the odd utility they haven’t bothered to rewrite, but the core of Windows or Linux networking is not BSD.
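You can see the Berkeley heritage in how closely modern APIs mirror the original C calls. Python’s socket module, for instance, is a thin wrapper over whatever stack the OS provides, yet the names are the BSD ones:

```python
import socket

# socket(), bind(), listen(), accept(), connect(), send(), recv():
# the method names map one-to-one onto the BSD C functions.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, _addr = server.accept()

client.sendall(b"hello")
data = conn.recv(5)
print(data)  # b'hello'

for s in (conn, client, server):
    s.close()
```

The same program runs unchanged on Windows, Linux, and the BSDs precisely because everyone kept the Berkeley interface, even where the implementation underneath is their own.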
Yes, Dinorwig can output 1.8GW at full power which would cover something like Hinkley Point C going offline suddenly. But its capacity is ‘only’ 9GWh, so after 5 hours at full power it’s empty and you need to wait for it to be refilled.
Nicking from withouthotair.com, the situation you need to prepare for is a major lull in wind across the UK for several days. The number he uses is 10GW of wind power being unavailable for 5 days = 1200GWh. (And as the amount of wind capacity we build goes up that number only gets bigger).
So you’re looking at hundreds of Dinorwigs. Are there that many suitable sites in the country? Are people going to get upset when you start flooding valleys for reservoirs?
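The arithmetic behind that, using the figures above (9GWh per Dinorwig, a 10GW shortfall lasting 5 days):

```python
dinorwig_gwh = 9              # Dinorwig's storage capacity in GWh
shortfall_gwh = 10 * 24 * 5   # 10GW missing for 5 days = 1200GWh
stations = shortfall_gwh / dinorwig_gwh

print(shortfall_gwh)    # 1200
print(round(stations))  # ~133 Dinorwig-sized stations
```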
Depends what you’re doing.
If you’re updating a failover pair and there’s a bug in the new version, then having updated the primary first you’ll see the bug and can fail over to the secondary that’s still running the old code.
Whereas if you updated the secondary first you now have two broken boxes.
The later lawsuit by Symantec (who bought Veritas) was over their Volume Manager. That would tend to sit underneath the file systems rather than being part of them.
I suspect a more likely source of encumbrance would be IBM depending on how the OS/2 and NT divorce was sorted out.
The first properly released 64-bit Windows was XP. But that doesn’t mean MS didn’t have a version of 2000 internally which they decided not to release. You can find things like https://web.archive.org/web/20000301193850/http://www.microsoft.com/WINDOWS2000/guide/platform/strategic/64bit.asp which talk about what MS had.
In the same vein there are beta versions of Windows 2000 for DEC Alpha floating around. But it never got to the full release as Compaq killed that processor.
I work in academia which is very much a mixed Mac/PC environment.
The common use case for a Mac VM is support/development. New bit of software comes out, or even just an updated version, you want to install it on a test machine first. With a PC it’s easy enough to have VMs running the standard image for testing, you can roll back and all the other good stuff.
But Apple insists on Mac hardware to run their OS, even in a VM. So do you have to give all your support people Macs so they can test things as well as a PC? Having a nice big host lets you consolidate into one box, which is one of the advantages of virtualisation.
Not just a Cisco issue; this is going to cause havoc for any vendor of Network Access Control that's keying the device off the MAC address.
At least Apple backed away from their original proposal to periodically change the MAC address within an SSID. That would make managing wireless utterly horrific.
You don't need a specific allocation for the IPv6 network. The IPv6-only host doesn't need to know about IPv4 at all.
NAT64 is usually combined with DNS64. An IPv6-only host will do a DNS request, and the DNS resolver will check whether the response only includes IPv4 addresses. If so, it synthesises an IPv6 address that points at the gateway and encodes the IPv4 address. The gateway then has the smarts to spot that this IPv6 packet is actually for an IPv4 host and translates it.
NAT works in both directions, you publish your service on a public IPv4 address and translate it to the IPv6 server sitting behind the gateway.
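The address synthesis is mechanical (RFC 6052): the IPv4 address is embedded in the low 32 bits of a /96 prefix, conventionally the well-known prefix 64:ff9b::/96. A sketch using Python’s stdlib, with 192.0.2.1 as a documentation-range example address:

```python
import ipaddress

PREFIX = ipaddress.IPv6Network("64:ff9b::/96")  # NAT64 well-known prefix

def synthesise(v4: str) -> ipaddress.IPv6Address:
    # DNS64 direction: put the 32-bit IPv4 address in the low bits.
    return ipaddress.IPv6Address(
        int(PREFIX.network_address) | int(ipaddress.IPv4Address(v4))
    )

def extract(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    # NAT64 gateway direction: recover the real IPv4 destination.
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

aaaa = synthesise("192.0.2.1")
print(aaaa)           # 64:ff9b::c000:201
print(extract(aaaa))  # 192.0.2.1
```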
Article 95 lists the 50 metres rule if you don't have a licence that exempts you.
It exists in theory; it's called NAT46.
There are two bits you need. First, your gateway has to generate IPv4->IPv6 mappings on the fly, so that your IPv4-only hosts have an address they can communicate with.
Next, you need your DNS server to rewrite responses containing IPv6 addresses into these dynamic IPv4 addresses.
Because they're matching it against the hospital records they already have. So you can do things like spot everyone prescribed a certain treatment later turned up in hospital suffering a particular illness.
The identifiers are then kept separately and the data released only has the things you mention.
Power consumption anyone?
I upgraded last year from an Athlon64 X2 4200 to a Core i5 3550. The old system used to idle at 100W (measured at the wall). The new one is less than 50W.
Newer machines won't be quite that dramatic a difference, but there are still some savings that can be made by replacing older desktops.
Isn't the process migration something the OS should be doing? AMD and Intel already shut down many bits of their CPUs (including whole cores) when they're not in use.
AMD are already doing that with their latest generation graphics cards, they're calling it 'ZeroCore'.
The problem with lots of small servers is the overhead. Even your dinky little ARM servers are going to be using some power all the time. Whereas in a virtual environment you'll have fewer servers so the base load of them sitting idle will be less. You can even shut down servers completely if the load isn't sufficient for them all to be running.
That was presumably the E5-2600 version of the DL360 Gen8 (which appear to have gained a p suffix), whereas these are DL360e Gen8 (note the different suffix).
Nice way to confuse people, HP. At least Dell are giving their systems different numbers (their equivalents are the R420 (E5-2400) and R620 (E5-2600)).
I don't think Intel are missing the point, they just see gaps in their line-up that need filling. (i.e. between single socket and a full-on dual socket, and between dual socket and their big E7 quad setup).
Intel cores are currently faster, clock for clock. For example, let's take AnandTech's review of the E5-2600 ( http://www.anandtech.com/show/5553/the-xeon-e52600-dual-sandybridge-for-servers/ ). In that they take a simulated E5-2630 (6 cores, 2.3GHz) and reckon it beats an Opteron 6276. That's a $616 CPU beating a $788 CPU (list prices for both).
But still, the Opteron reaches further down than the 2600 series does, so Intel knocked off a few of the extras to produce the 2400 series to compete with those. Likewise, the E7 is overkill for many who just want 4 sockets' worth of RAM and CPU, so bump up the E5 to fit there.
I may be wrong, but I was under the impression that these things are essentially still using IE6, but hiding it round the back. What happens when XP support goes? You're still stuck using components that MS aren't supporting, so you'll be stuffed in just the same way as soon as a bug comes up.
There's a paper http://www.thoughtcrime.org/papers/ocsp-attack.pdf on how the protocol can be MITM attacked.
Rather than bandwidth usage, resource usage of the server running the OCSP service might be an issue if you get millions of requests. If it takes a while to respond, people will complain their browsing is slow (you're waiting for OCSP to respond before trusting the site). If it fails to respond at all, do you block access to the site? (That'll be disabled if people want to get to their stuff). Or allow access, risking people going to a site that shouldn't be trusted?