* Posts by NBNnigel

17 publicly visible posts • joined 5 Dec 2015

'Password rules are bullsh*t!' Stackoverflow Jeff's rage overflows

NBNnigel

Legacy of LanMan?

My long-held theory is that the typical 8-character 'as-complicated-as-you-can-make-it' password policy is a holdover from the days of LAN Manager support in Windows. The problem (there were many) was that LM split the password into two 7-byte halves before hashing them (maximum 14-character password), meaning a 13-character password could actually be cracked as two separate passwords (of 7 and 6 characters respectively).

And the input string was null-padded out to the 14-character maximum, meaning an attacker could instantly tell if a password was less than 8 characters (because the second half would be entirely null padding, so its hash is a well-known constant). Hence, 8-character passwords. Considering how old LM is, the standard complexity requirements no longer make sense (and never made sense in any remotely 'greenfield' case).
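
To illustrate the split, here's a rough Python sketch of the padding/splitting step only (the DES-against-a-fixed-string hashing of each half is deliberately left out, and the OEM code page conversion is glossed over):

    # Sketch of the LM preprocessing step, not a real LM hash implementation.
    def lm_halves(password):
        # LM uppercases the password and null-pads it to exactly 14 bytes.
        padded = password.upper().encode("ascii", "replace").ljust(14, b"\x00")[:14]
        # Each 7-byte half is then hashed independently (as a DES key against a
        # fixed string), so a cracker attacks two short passwords, not one long one.
        return padded[:7], padded[7:]

    print(lm_halves("Passw0rd12345"))  # a 7-char problem and a 6-char problem
    print(lm_halves("Short1")[1])      # all nulls -> hashes to a well-known constant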

Either there are lots of ultra-legacy Windows shops still running, or (more likely) there are lots of cargo-cult sysadmins out there. Neither prospect sounds appealing from a security standpoint.

The trouble with business executives…

NBNnigel

I realise this won't be popular...

... but in some cases, IT are partially to blame. I work on the 'business' side of the organisation (in a government department), but I also understand IT more than your average civilian (particularly on the security side of things). In my experience, sometimes the reason business units do IT procurement "behind IT's back" or run off to cloud-based services is that IT are an unreasonable and inexplicable impediment. This might be unique to government, but I've come across a few IT areas now that manage risk by simply saying 'no' to every request. Unless, of course, it comes from someone in senior management. Then they seem to jump right to it, regardless of how insane it is.

Also, some functions that really should be managed by competent IT folks are instead managed by (as far as I can tell) non-IT folks. Usually 'content management' teams that, for whatever reason, are situated in the IT organisational group. Consequently, regular civilians end up assuming that when IT says 'no' they're just being unreasonable (even in cases when there's a good reason) or that IT simply doesn't know what the f*** it's doing (thanks to previous experiences with 'IT, but not really' content management teams). Examples:

1. I was working on a small project (a one-man army) where I needed to write some stuff in Python (mostly web scraping and writing custom parsers for weirdly structured HTML). It took me a month to convince IT to install a Python interpreter on my machine. Tried to install a library: nope, read-only access to the directory. Over the next couple of weeks I tried to convince IT that it was kinda pointless to have a Python interpreter installed without being able to install libraries, but they wouldn't have a bar of it. Yes, technically I could have used urllib and regexps. That is, if I wanted the project to take 10 times as long. Eventually I had to give up and work from home (partially in my own time).

2. I use a *gasp* cloud-based service for making flow charts (Lucidchart). Or I used to. When I tried to access the site from my current workplace I was confronted by the all-too-familiar 'blocked page', courtesy of our wonderful content blocking system (which seems to run on a whitelist basis, and MITMs everything using a custom CA cert that's dangerously insecure thanks to its outdated fingerprint hashing algo, but that's another story; there's a quick way to check for that sort of thing sketched after this list). I put in a request for it to be unblocked: nope, cloud-based services are not allowed here. When I asked why, I was told it's an information security risk, which, in fairness, is a legit issue in government. But everyone who works in my department has a security clearance and has gone through countless security training courses, so why can't they be trusted not to upload sensitive data to some random server? What was the point of the training in the first place? Hasn't that horse already bolted, given we can send emails to external addresses? Actually, isn't that a risk we've always taken, given we allow people to walk out of the building without having to endure a bag check and body cavity search? And weirdly, why are Azure and AWS unblocked if this is such an issue? Yes, technically I can use your crappy flow charting software on the "officially approved software" list. But I'd rather use something that doesn't massively hinder my productivity. More working from home for me...

3. My department 'uses' Sharepoint. Yes it's awful, but I have no control over that. And because IT's access control policies don't allow 'business users' to be given any site-level admin permissions, we have to go begging to one of the 'content management teams' if we want to create a new document library. Or a new calendar. Or a new anything... These very simple requests can take 2-3 days to be fulfilled. I'm guessing because the 'sharepoint content managers' don't have a clue what they're doing and go bother 'real IT' whenever one of these requests comes in. Might there be a more sane approach we could take? Apparently not, is the official answer from IT. Now I understand why Sharepoint is only used as a glorified network share drive... Don't even get me started about the time I needed to write some crappy little CLIENT-SIDE Sharepoint app...
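
Re the intercepting proxy's CA cert in point 2: if you want to check that kind of thing yourself, something like the following Python sketch will show which hash algorithm signed the certificate you're actually being served. The hostname is a placeholder, and it assumes a reasonably recent 'cryptography' library plus a connection that really does pass through the intercepting box:

    import ssl
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    # Placeholder host: any HTTPS site reached from inside the filtered network.
    pem = ssl.get_server_certificate(("example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    # If traffic is being re-signed by the proxy CA, this shows the hash it used.
    print("Signature hash:", cert.signature_hash_algorithm.name)  # 'sha1' = not great
    print("SHA-256 fingerprint:", cert.fingerprint(hashes.SHA256()).hex())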

And the list goes on. The only good experiences I've had when dealing with IT were when I've been able to figure out where the knowledgeable folks are hiding (tip: they're usually in the 'IT infrastructure' teams, where you actually need to know what you're doing or the lights go out). Maybe none of this applies to any of you highly skilled IT folks. And I don't doubt that 90 percent of the time the problem is the 'business side' not having a clue. But a little introspection probably wouldn't hurt either.

Privacy watchdog to probe Oz gov's right to release personal info 'to correct the record'

NBNnigel

It's worse than the 'privacy expert' thinks...

Actually reading section 202 of the SS Act, there's a nice little note right at the bottom: "In addition to the requirements of this section, information disclosed under this section must be dealt with in accordance with the Australian Privacy Principles." The APPs are part of the Privacy Act. So in this case, it looks like the SS Act is subordinate to the Privacy Act (not the other way around as suggested by the quoted privacy expert).

2 years jail, strict liability offence.

nbn™ puts the acid on Australia's ISPs to speed up its NBN

NBNnigel

Re: The Need For Speed

Yes, only a small factor... Oh wait, that's utter horse-shite: http://www.abc.net.au/technology/images/general/blogs/chirgwin/nbn/grafikVDSL2eng.png

Turns out that VDSL2 (i.e. FTTN) speeds decrease exponentially as you get further away from the 'node'. I'm truly astounded at your ability to be so consistently wrong.

Competition and wholesale costs, not lack of fibre, crimp broadband in Australia

NBNnigel

NBN: A one man, one act play

Has it occurred to you, Richard, that maybe, just maybe, FTTP is the only thing pushing up Australia's average speeds, because it's the only thing worth a crap compared to ADSL? And hence it's kinda insane to be rolling out a patchwork of different technologies, which top out at 100Mbit (at best) in real-world conditions? (I'll believe it when I see it, hub-architecture HFC and vectored VDSL lab tests.) People aren't interested in marginal upgrades that are already close to obsolete.

As for market structure, the biggest issue will always be that wholesale fixed-line comms is a natural monopoly due to its cost structure. It's especially relevant since our genius politicians continue to insist on selling off our wholesale fixed-line comms to private monopolists, which is the official policy of both major parties regarding the NBN, btw. The other big factor driving RSP consolidation was the ACCC's boneheaded POI decision at the beginning of this whole fiasco. The only public figure who seemed to understand the implications at the time was Simon Hackett, as evidenced by him getting the f*** outta Dodge soon after.

There. Some actual substantive issues to discuss. Although please do go on with this one-act psychodrama that brings us audience members along to experience the dizzying highs, and crushing lows, of Richard's buyer's-remorse-fuelled angst.

Axe net neutrality? Keep the set-top box lock-in? Easy as Pai: New FCC boss backs Big Cable

NBNnigel

Re: Congress is supposed to make laws, not bureaucrats

Yeah, that sounds like a good idea. I'm sure those geniuses in Congress are totally across the technicalities of cable TV set-top boxes. While we're at it, we should also close down health and safety regulators, building regulators, etc., and just get Congress to legislate what temperature your local fish and chips shop should run its deep fryer at, and what nail gauge should be used to secure building fixtures to foundations.

Oz gummint's 'open government' strategy arrives at last

NBNnigel

High value indeed

Luckily for the Government, the UK has already done the work for them (again). The 2013 Shakespeare Review identified a number of 'core' government datasets, such as the UK company register (rec 4). So I guess Australia will be opening up its compa..... oh... wait... we're privatising that.....

Never mind.

M.2 SSD drive format is under-rated. So why no enterprise arrays?

NBNnigel

Re: Actually the reason we don't use 'em is...

Thanks for the detailed reply. I definitely feel like I achieved my daily learning quota today :)

NBNnigel

Re: Actually the reason we don't use 'em is...

"Host systems don't like their PCIe reads to go missing"

Is this something inherent to the PCIe spec or just because of how manufacturers currently implement said spec?

Software-defined traditional arrays could be left stranded by HCI

NBNnigel

Re: It's not about HCI per se

@dikrek

I think your characterisation (in terms of OPEX and CAPEX) is a very useful way to think about this issue.

Perhaps one other thing to be wary of: the notion that CAPEX is a 'once-off, upfront' expenditure. In many cases it's periodic (i.e. capacity scaling). And the best case, IMHO, is when CAPEX becomes continuous (i.e. OPEX) and time-bound (I guess what people mean by the term 'elastic'?). Best case for buyers anyway, as the conversion of CAPEX to time-bound OPEX implies the resources have become commodities (highly competitive supply market), incremental (smaller units allow better capacity matching), and temporal (capacity needs vary across time, sometimes fluctuating on a 'time of day' basis).

NBNnigel

sounds sensible

To me, that sounds more sensible. Although I have to admit it sounds suspiciously similar to having a bunch of commodity servers configured as a cluster via some sort of orchestration/scheduling software (like Maxta?). If this falls under the definition of HCI, then I wonder if HCI is just another marketing buzz-phrase floating around in enterprise-tech vendor world ("You should buy our HCI appliance... er I mean 'solution'. You can't fight synergy... er I mean HCI... it's bigger than all of us").

Frankly, I can't help but think that some of the hype around HCI is just the latest attempt by 'hardware integration' vendors (i.e. appliance makers) to stave off the commoditisation of their market. Or, in other words, just another way to vendor-lock customers. I can see how SMEs might benefit from purchasing an HCI appliance when the savings from low administrative overhead are bigger than the efficiency cost of capacity mismatch and scaling 'lumpiness'. But surely we're only talking about 20-30 percent of the total market, at most?

It's also interesting to think about all of this in the context of two longer-term trends in server hardware: (1) the eventual convergence of storage and random-access memory/processing cache (i.e. NVMe being the latest step along that path) and (2) the increasing viability of deploying low-latency, high bandwidth and RDMA capable networks.

These two trends seem to point in opposite directions. The former suggests 'compute' will eventually be pulled back into these 'hyper-converged'... things... because latency between processing and cache makes sequential computation less efficient. But the latter suggests that the components of computation (processing <--> communication bus <--> cache/volatile memory <--> storage) can be physically separated, and thus scaled separately, without incurring the sequential-computation penalty. Simply put, as internal 'networks' become giant PCIe buses, it makes less and less sense to physically converge all of your computing, memory and storage hardware into a single appliance.

So to me, the more interesting question is: when will high performance networking be sufficiently standardised to allow 'hyper-deconvergence', maybe even retro-fitting of existing under-utilised hardware resources? If I were an appliance-maker, or a CPU/storage manufacturer for that matter, this would be the question keeping me up at night. The other question keeping me up at night would be: 'how can I best derail high-performance networking standardisation'...

NBNnigel

"Hyper-converged infrastructure (HCI) systems combine servers controlled by or running hypervisors converged with storage and networking."

So... it's the whole server bundled as a vendor-specific appliance? Sounds awesome, especially if you're a vendor. And I guess it would be easy to scale, but probably not very cost-efficient. And given that economies of scale still rule the day in data centres...

Daisy-chained research spells malware worm hell for power plants and other utilities

NBNnigel

Re: Errr

I think that quote is assuming power-plant owners and the like are not stupid enough to expose their PLCs on an internet routable IP (and hey, stupider things have happened). Also assumes the employees at the power-plant aren't dumb enough to pick up a USB key they find lying on the ground and plug it in to a work computer. And assumes someone at the plant doesn't get spear-phished etc. etc.

I'm sure there are a bunch of plausible attack-vectors that don't involve internet routable IPs. I think the dude being quoted was just picking a random one (not the only one).

Industrial control kit hackable, warn researchers

NBNnigel

Huh. So this is why HP all of a sudden released a security advisory for their server management interfaces: iLO (and brethren) runs on Moxa hardware (http://www8.hp.com/us/en/business-services/it-services/security-vulnerability.html#!&pd1=1_2). Anyone running HP 'Integrity Superdome' should be firmware-updating ASAP. Assuming you're not out of warranty. Or that you've paid the HP firmware ransom (i.e. purchased an 'HP Care Pack'). And assuming you can find the firmware and instructions, cleverly concealed somewhere on the HPE website/monstrosity.

From HP's sec advisory page: "This vulnerability could allow an unauthenticated, remote attacker to perform man-in-the-middle attack (MITM) or redirect outbound traffic to an arbitrary server that can cause disclosure of sensitive information."

So of course it makes perfect sense that HP have marked the mitigating firmware as 'entitlement required'. Everyone else running HP server management interfaces (e.g. iLO): HP's suggested 'workaround' is to "disable System Management Homepage".

Good luck and godspeed!

Australian health records fed into big data maw ... because insight

NBNnigel

Fools and clowns

If they actually cared one bit about the public interest, they'd be handing the data to the ABS for free (given the ABS have decades of experience and expertise in confidentialising unit record data, and legislation that would see the Chief Statistician thrown in jail if they screwed up).

But evidently they just want to recover part of the $1bn or so that they've wasted due to their utter incompetence when it comes to IT. These clowns just don't get it. If they want to involve the private sector so badly, why not expose some simple APIs with a mandatory user authorisation step (i.e. OAuth2/OpenID Connect)?

Oh that's right, they screwed that part up too with the abortion known as myGov.
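
For what it's worth, the user-authorised API approach being suggested is roughly this shape. A minimal Python sketch of the OAuth2 authorization-code token exchange, where every endpoint, ID, scope and path below is invented purely for illustration:

    import requests

    # Hypothetical endpoints and credentials, purely illustrative.
    TOKEN_URL = "https://auth.example.gov.au/oauth2/token"
    API_URL = "https://api.example.gov.au/v1/records/summary"

    # The third party exchanges an authorization code -- which only exists because
    # the citizen explicitly clicked 'allow' -- for a short-lived, scoped token.
    token = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": "code-returned-after-user-consent",
        "redirect_uri": "https://thirdparty.example.com/callback",
        "client_id": "registered-client-id",
        "client_secret": "registered-client-secret",
    }).json()["access_token"]

    # The token only unlocks what its scope allows, and can be revoked by the user.
    response = requests.get(API_URL, headers={"Authorization": "Bearer " + token})
    print(response.status_code)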

HTTPSohopeless: 26,000 Telstra Cisco boxen open to device hijacking

NBNnigel

Doublespeak

Hah, I love the way Cisco has worded their explanation to make it seem like the device is still secure. Yes, SSL MITM is an attack on the client. But they've worded it to be misinterpreted as "hard-coded uniform SSH keys & SSL MITM are both attacks on the client." Nope. Wrong.

I assume they made the disclosure because someone other than the NSA obtained the SSH key (yes, the ONE key). And, just a wild guess here, I'm guessing it's the SSH key for root. Which means those routers are wide open; an attacker could do literally anything they wanted to with a root shell into those routers. Like sniffing non-HTTP traffic, exploiting trust relationships, injecting (seemingly) signed code into Windows updates, etc.

These are just consumer routers right? Not backbone routers? Otherwise I might be staying off the net for a few days...

It's almost time for Australia's fibre fetishists to give up

NBNnigel

Many bad tradeoffs

It's not surprising they'd achieve such speeds on a SINGLE strand of pristine copper in lab conditions. Hell, they'd probably achieve that without the fancy g.fast/x.fast/whatever.fast technologies, since most of the bandwidth gains come from clever algorithms that neutralise cross-talk between MULTIPLE strands of copper ('vectoring'), so that bandwidth is not eaten up by error correction (there's a toy sketch of the cancellation idea after the list below). While it's very clever technology, it isn't free:

- The nodes will need exponentially increasing processing power as more subscribers are added to a node. It becomes much harder to calculate the correct 'inverted phase' frequencies for each line (to cancel out the cross-talk). So the variable costs from adding more and more processing power increase in proportion to uptake levels, doubly so because increased computation means higher node electricity consumption (and cooling).

- Additional costs get pushed down to the end consumer. Vectoring only works if ALL modems on the end of the copper run are coordinating with the node; otherwise one 'noisy' strand messes up the inverted-phase calculations and error rates jump. This means consumers will be forced to buy modems that have sufficient processing power to implement g.fast, which means higher fixed and ongoing costs for consumers, even if it's partially hidden in their quarterly electricity bill.

- Vectoring, effectively fancy noise cancellation, only works when the 'noise' is constant or predictable. It doesn't work when the noise is variable or unpredictable. So when it rains, and Telstra's wonderfully maintained copper pits have water sloshing around in them, you'll start to see drop-outs or significant reductions in bandwidth as increased error correction is required. Or when a bird lands on your copper line. Or when there's a strong gust of wind...

- Bandwidth gains from things like time-division multiplexing come at the cost of increased latency. This is very, very bad, and puts Australia at a big competitive disadvantage when it comes to distributed computing, hybrid cloud, etc. Simon, ask some of your colleagues who cover 'data centre' and 'networks' why this is bad if it doesn't make sense to you (especially in the context of the cloud computing and financial industries, which are increasingly 'latency sensitive')...
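
To make the 'inverted phase' / cross-talk cancellation idea above a bit more concrete, here's a toy numpy sketch of downstream vectoring as zero-forcing-style precoding. The channel numbers are invented, and real vectoring works per-tone on estimated channel matrices, but it shows the basic trick and why the node's workload grows with the number of lines in the bundle:

    import numpy as np

    rng = np.random.default_rng(0)
    n_lines = 4  # subscribers sharing the same copper bundle

    # Toy channel: strong direct path on the diagonal, weak cross-talk off it.
    H = np.eye(n_lines) + 0.1 * rng.standard_normal((n_lines, n_lines))
    signals = rng.standard_normal(n_lines)  # what each modem is supposed to receive

    # Without vectoring: every modem hears its own signal plus everyone's cross-talk.
    plain = H @ signals

    # With vectoring: the node pre-distorts using (an estimate of) the channel
    # inverse, so the cross-talk cancels at each modem. Building and applying this
    # precoder scales with n_lines^2 and worse, hence the node processing costs above.
    P = np.linalg.inv(H) @ np.diag(np.diag(H))
    vectored = H @ (P @ signals)

    ideal = np.diag(H) * signals
    print(np.round(plain - ideal, 3))     # residual cross-talk on each line
    print(np.round(vectored - ideal, 3))  # ~zero: cross-talk cancelled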

So yes, it does squeeze more bandwidth out of our ageing copper, under certain conditions and with some bad trade-offs (and increased long-term costs). But at some point in the very near future, unless we figure out a way around the laws of physics, we'll hit a hard limit and have to rip out the copper and consumers will have to replace their modems. And in the meantime we'll have hobbled our economic productivity...