Seriously, how many different spellings do you need for one man's name in this article?
I've always been more concerned about the risks of roaming: when your phone roams onto another network, your provider must supply the roaming provider with authentication material derived from your SIM's 'shared secret' (Ki) so that their network can authenticate your handset.
This establishes a level of trust between providers in their roaming agreement, but what's to say a foreign government can't gain access to that key material, which could then be used to decrypt your comms globally.
It wouldn't be terribly surprising if the UK and US governments are using this method to collect SIM keys of foreign nationals - it's certainly a lot easier than sending messages which are likely logged and traceable and require not-insignificant computational time to crack.
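Strictly speaking, in standard GSM what crosses to the visited network isn't Ki itself but a batch of authentication triplets (RAND, SRES, Kc) derived from it; the Kc in each triplet is still enough to decrypt that session's traffic. A toy sketch of the flow, using HMAC-SHA256 as a stand-in for the real A3/A8 algorithms (e.g. COMP128):

```python
import hmac, hashlib, os

def gen_triplet(ki: bytes, rand: bytes):
    """Derive an authentication triplet from the SIM's shared secret Ki.
    Real SIMs use A3/A8 (e.g. COMP128); HMAC-SHA256 here is a stand-in."""
    digest = hmac.new(ki, rand, hashlib.sha256).digest()
    sres = digest[:4]    # 32-bit signed response (A3 output)
    kc = digest[4:12]    # 64-bit session cipher key (A8 output)
    return rand, sres, kc

# Home network: holds Ki and pre-computes triplets for the visited network.
ki = os.urandom(16)
triplet = gen_triplet(ki, os.urandom(16))

# Visited network: challenges the handset with RAND and compares SRES.
# It never needs Ki itself, only the triplet (which includes Kc).
rand_out, expected_sres, kc = triplet
handset_sres = gen_triplet(ki, rand_out)[1]   # computed on the SIM
assert handset_sres == expected_sres
```

The point stands either way: whoever holds the triplets holds the session keys, and whoever holds Ki holds everything.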
It's funny, I was making this point in a post here back in Feb 2010:
The scary part of the US Patriot Act is its provision for gagging orders. You can expect the really important providers to "legally" mislead their customers in the coming weeks and months, because the Patriot Act forbids them from disclosing that they've been served.
It's therefore actually very easy for them to come out and say that no such monitoring is in place; doing otherwise would put them in breach of the USPA.
What's worse is that this is flagrantly in violation of the US-EU Safe Harbour regulations and the implication is that the EU was lied to about the extent of this as well.
Umm this has been going on for as long as I've been visiting data centres.
In literally every multi-tenanted data centre that I've been in in the UK you'll see quite a few boxes hosting SIM cards with a bunch of GSM antennas on top or routed outside. Some of these purpose-built boxes hold hundreds of SIMs and can switch between them as the networks bar them individually.
Most of the guys doing this (and I've spoken to a few of them) are doing it because of the crazy-high termination rates on UK mobiles. There's a lot of money in it, and it's only slightly in breach of the GSM operators' T&Cs, which frankly probably wouldn't be enforceable in court anyway.
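To see why the money is attractive, here's some back-of-envelope arithmetic; every rate and volume below is a made-up assumption for illustration, not a real tariff:

```python
# Illustrative only -- all figures are assumptions, not real tariffs.
minutes_per_day = 5000        # minutes routed through the SIM box per day
termination_rate = 0.05       # GBP/min an operator would charge to terminate
simbox_cost_per_min = 0.01    # GBP/min effective cost via bundled SIM minutes

daily_margin = minutes_per_day * (termination_rate - simbox_cost_per_min)
print(f"Daily margin: GBP {daily_margin:.2f}")   # GBP 200.00 per box, per day
```

Scale that across a few hundred SIMs per box and the incentive is obvious.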
"An SSL certificate certifies that a given domain name maps to an IP address" ... err, no it doesn't.
An SSL certificate merely informs you that *at the time of issue* the holder was validated to be authoritative over the Common Name (CN) that it identifies (e.g. my.onlineshop.com). Unfortunately it would appear that certificates have been issued without sufficient checks on whether the applicants were indeed authoritative over the [sub]domain/CN, so this trust is effectively broken for those CAs (hence removal of the CA roots from various browsers and OSes).
DNS poisoning is entirely separate and very useful, but it's ineffective against sites secured with SSL unless you've already got your hands on either the original signing key from the site you're subverting or an alternative certificate from a lazy CA who didn't check your credentials properly.
A more scary prospect is that a lot of CAs will issue certs purely based on sending an email to addresses like firstname.lastname@example.org. If you've managed to take control of the DNS for a short period, you could probably get a cert issued using this method ready for a later attack.
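To illustrate what a cert actually asserts, here's a toy version of the name check a browser performs against the CN; real validation (per RFC 6125) also covers subjectAltName entries and more wildcard corner cases:

```python
def cn_matches(common_name: str, hostname: str) -> bool:
    """Simplified hostname check against a certificate Common Name.
    Handles only exact matches and a single left-most wildcard label."""
    cn = common_name.lower().split(".")
    host = hostname.lower().split(".")
    if len(cn) != len(host):
        return False
    # A wildcard may only stand in for the left-most label.
    if cn[0] == "*":
        return cn[1:] == host[1:]
    return cn == host

assert cn_matches("my.onlineshop.com", "my.onlineshop.com")
assert cn_matches("*.onlineshop.com", "shop.onlineshop.com")
assert not cn_matches("*.onlineshop.com", "a.b.onlineshop.com")
```

Note that nothing in this check involves an IP address, which is exactly why the article's "maps to an IP address" claim is wrong.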
Men's 50m freestyle WR stands at 20.91s, an average speed of 2.39m/s.
If the pool had been 2 inches shorter (0.0508m), the record would be 20.89s.
The world record has frequently been broken by 2/100ths of a second, so I guess it's easy to understand why it's a big deal.
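The arithmetic above is easy to check:

```python
record = 20.91              # s, men's 50m freestyle WR
distance = 50.0             # m
speed = distance / record   # average speed over the length

shortfall = 2 * 0.0254      # two inches in metres = 0.0508 m
time_saved = shortfall / speed

print(f"Average speed: {speed:.2f} m/s")                   # 2.39 m/s
print(f"Record in a 2\"-short pool: {record - time_saved:.2f} s")  # 20.89 s
```

So a two-inch surveying error is worth the same two hundredths of a second that records routinely fall by.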
Having said that, I'm sure I read somewhere that designers of competition sports venues deliberately try to 'tweak' the design within the rules to maximise the chance of records being broken at that venue, since this attracts more media attention and revenue.
I don't agree with the steps that Sony are taking - it's likely to reflect poorly on them whilst ultimately only acting as a temporary fix.
That said, I imagine that Sony are under significant pressure from the developers who target their platform. Ultimately the profitability (and survival) of their console is dependent on having games publishers who will develop for their platform. If someone else's platform appears to be less prone to illegal copying, the games publishers might go elsewhere.
So maybe the whole industry is being a bit greedy at this point (just like the music industry has been for years), which is driving what might seem like very backwards behaviour, but ultimately I suppose it's just the economics of the situation: a whole chain of companies acting in their best interests due to external pressures.
I would have thought that a highly accurate inertial navigation system (based on accelerometers and gyroscopes), perhaps coupled with wearable "boot sensors" to track distance more accurately and a digital compass, would provide much more accurate, dependable position fixes in three axes, essentially via dead reckoning.
Certainly distance could be tracked very easily: unlike submarines, ships and aircraft doing dead reckoning, you're not travelling through a medium (water, air etc.) that is itself moving with the current, so there's far less drift to account for.
Perhaps a camera sensor that works like a giant optical mouse could track distance travelled like an electronic trundle wheel...
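As a toy illustration of the dead-reckoning idea, integrating (heading, step-length) pairs from a hypothetical compass and boot sensor gives a 2-D position fix; the sensor names and step model are my assumptions:

```python
import math

def dead_reckon(start, steps):
    """Integrate (heading_deg, step_length_m) pairs from a compass and
    boot sensor into an x/y position: a toy 2-D dead-reckoning sketch."""
    x, y = start
    for heading_deg, step_m in steps:
        rad = math.radians(heading_deg)
        x += step_m * math.sin(rad)   # heading 0 = north (+y), 90 = east (+x)
        y += step_m * math.cos(rad)
    return x, y

# Walk 10 m north, then 10 m east: ends up at roughly (10, 10).
pos = dead_reckon((0.0, 0.0), [(0, 1.0)] * 10 + [(90, 1.0)] * 10)
```

In practice you'd also need to handle gyro drift and altitude (the third axis), but the core integration really is this simple.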
I think the point here is being missed. The reason that the US was able to "snoop" on SWIFT data since 2001 was due to the Patriot Act.
The point is that SWIFT has been mirroring all their data between its site in Belgium and the one in the US. Once those data are on US soil, they are subject to the Patriot Act, regardless of the nature of the underlying financial transactions they represent.
The same applies to Gmail, Google Apps, Salesforce etc. Once your data is on US soil, its privacy is questionable, since the Patriot Act [probably] supersedes even the Safe Harbour protocols.
It's surprising that there's been almost no report on the topology of the affected network, and how that ultimately contributed to the large-scale effects of this incident.
As with most universities, Exeter University makes wide use of public IP addresses from its primary /16 network allocation for connected devices (a subnet this huge means there's no technical requirement for any address translation, since they're a long way from exhaustion).
However, the University carves this into /21 (255.255.248.0) subnets internally, with insufficient segregation of logical segments [with VLANs]. In addition, many network segments are wired as long spurs, which means that isolating one segment may necessarily require isolating the cascaded segments beyond it; it needn't have been architected that way.
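For a sense of the numbers involved, Python's ipaddress module shows how a /16 carves into /21 segments (using a private range here as a stand-in for the university's actual allocation):

```python
import ipaddress

# A /16 carved into /21 segments -- 172.16.0.0/16 is a private-range
# stand-in, not the university's real allocation.
campus = ipaddress.ip_network("172.16.0.0/16")
segments = list(campus.subnets(new_prefix=21))

print(len(segments))                    # 32 /21 segments per /16
print(segments[0])                      # 172.16.0.0/21
print(segments[0].num_addresses - 2)    # 2046 usable hosts per segment
print(campus.num_addresses - 2)         # 65534 hosts, all mutually
                                        # reachable without internal firewalls
```

Thirty-two segments of two thousand hosts each is perfectly manageable to firewall; one flat sea of 65k is not.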
Finally, and arguably most importantly, the university does no internal firewalling of its subnets in the central routing platform (think: zoned firewalls). There is firewalled access for traffic originating outside the primary /16 network, but that still leaves 65k+ addresses, all of which can connect directly to each other. To my knowledge there is little or no IDP or traffic monitoring across segments, though that only helps if you actually segregate your networks at Layer 2 and Layer 3.
Certainly, there's no excuse for an attack which (ostensibly) only affects Windows workstations and servers to take down VoIP as well; on any properly built corporate network it should be possible to keep VoIP up and running, even on shared infrastructure.
And you have to wonder how the majority of network-connected Windows machines went unpatched.
Just my $0.02, but didn't seem that anyone else was saying it.
...most of the recent Sun and Dell x64 hardware that I've come across doesn't let you regulate the onboard [very high] rpm fans, so I suspect that if the average server room adopted this policy, they'd see server fan speeds increasing as the machines try to compensate for the higher ambient temperature.
Haven't tested this in a year or so, but I remember seeing an increase of 0.5A on some x64 iron with the fans running at the full 15,000rpm.
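For scale, that 0.5A jump works out to a fair chunk of power once you multiply it across a rack; the mains voltage and rack density below are assumptions, not measurements:

```python
# Back-of-envelope only -- voltage and rack density are assumptions.
extra_current = 0.5     # A, measured increase with fans at full speed
mains_voltage = 230     # V, UK single phase
servers_per_rack = 20   # assumed density

extra_w_per_server = extra_current * mains_voltage            # 115 W
extra_kw_per_rack = extra_w_per_server * servers_per_rack / 1000
print(f"{extra_w_per_server:.0f} W/server, {extra_kw_per_rack:.1f} kW/rack")
```

So the savings from running the room warmer could easily be eaten by the fans, at least on kit that won't let you cap fan speed.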
Obviously the likes of Google can get away with this, since the hardware is sufficiently customised. The article does hint at the fact that this requires integration of server cooling and 'CRAC' systems.
As an aside, some of the smaller equipment/server rooms that we've seen (around 4 racks) are deliberately run cooler than is really necessary, simply because it buys more response time in the event of HVAC failure (it takes longer for the temperature to start triggering auto-shutdowns etc.). This is important for organisations that are running their own small server rooms on the cheap, and don't necessarily have N+1 on those units (no matter how much they need it!).
Phase-change cooling is nothing new; the latent heat of vaporisation and condensation make for an especially effective method of heat transfer (see http://en.wikipedia.org/wiki/Latent_heat and http://en.wikipedia.org/wiki/Computer_cooling#Phase-change_cooling).
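For a sense of why latent heat matters so much, compare the energy 1kg of water absorbs warming by 10K against the energy the same kilogram absorbs vaporising (standard textbook constants):

```python
# Sensible heating vs phase change, for 1 kg of water.
# Constants are standard textbook values.
c_water = 4186       # J/(kg*K), specific heat of liquid water
L_vap = 2_257_000    # J/kg, latent heat of vaporisation at 100 C

sensible = c_water * 10   # J to warm 1 kg by 10 K
latent = L_vap * 1        # J to vaporise the same kilogram
ratio = latent / sensible
print(f"Vaporisation moves ~{ratio:.0f}x the heat")   # ~54x
```

Real coolers use refrigerants rather than water, but the principle is the same: a phase change soaks up far more energy per kilogram than any plausible temperature rise.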
It's always nicer when you can do it without a compressor, but as far as I can tell the only major innovation here is the "wick" technology. The only time I've read about vaporisation coolers on production (as opposed to experimental or demonstration) systems is on supercomputers, where each CPU has a closed module attached inside which the vaporisation and condensation occur. That only really works when you know the likely orientation of the module (the gas tends to bubble 'up' through the denser liquid), which I gather is the main reason you don't see it on PC CPU and GPU coolers, since they may be mounted in a variety of orientations.
It looks as though the Vapor-X retains all of the liquid within the wick, which means that it would work [almost?] as effectively upside-down, so thumbs up for the engineering!
Biting the hand that feeds IT © 1998–2021