* Posts by SecretBatcave

18 posts • joined 14 Mar 2013

Super-slow RAID rebuilds: Gone in a flash?


Real world usage

A RAID 6 group of 14 disks, using 4 TB 7200 RPM drives, takes about 72 hours under medium load to rebuild, depending on how you have it set up and what controller you are using (Dell/NetApp/IBM/LSI/3460 60-drive 4U jobby).

The problem with spinning disk is that the heads take ages to move. So if you are having to serve real data during a rebuild, your throughput per disk drops from 100+ MB/s to tens of MB/s.

But that's because it's got to blindly copy the data from all disks to rebuild a disk *image*. If your RAID/EC scheme is content aware, there is no need to rebuild the zeros from the unused part of the disk. That's partly how XIO and GPFS do their super-fast rebuilds: because they are vaguely aware of where the data is, they can rebuild just the parts that matter. Crucially, they can in some cases pull good data from the dying disk.
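The arithmetic behind those rebuild times is simple enough to sketch. The function and throughput figures below are illustrative assumptions for a back-of-envelope check, not vendor data:

```python
def rebuild_hours(disk_tb: float, mb_per_s: float, used_fraction: float = 1.0) -> float:
    """Hours to rebuild one failed disk, given the effective per-disk
    rebuild throughput and, for content-aware schemes, the fraction of
    the disk that actually holds data."""
    bytes_to_rebuild = disk_tb * 1e12 * used_fraction
    return bytes_to_rebuild / (mb_per_s * 1e6) / 3600

# Blind image rebuild of a 4 TB disk at ~15 MB/s under serving load:
print(round(rebuild_hours(4, 15)))        # ~74 hours, same ballpark as the 72h above
# Content-aware rebuild of the same disk when it's only half full:
print(round(rebuild_hours(4, 15, 0.5)))   # ~37 hours
```

The same model also shows why an idle array rebuilds so much faster: at an uncontended 100 MB/s the blind rebuild drops to about 11 hours.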

Reader suggestion: Using HDFS as generic iSCSI storage


Yes, but what do you want to do with it?

I mean, let's be honest, HDFS isn't a generalised block storage system. It's not even particularly well designed for the job it was intended to do.

If you want to cluster bits of block storage into one namespace, there are many, many better ways of doing it.

For a start, HDFS only really works for large files, specifically large files that you want to stream. Random IO is not your friend here, which makes it useless for VMs.

If you want VM storage, and you want to host critical stuff on it you need to do two things:

* capex some decent hardware (two 4U 60-drive iSCSI targets) and let VMware sort out the DR/HA (which it can do very well). 60 grand for hardware plus licences. That'll do 300k IOPS and stream 2-3 gigabytes a second.

* capex some shit hardware, run ZFS, block replicate, and spend loads of money on staff to support it. And the backups.

Seriously, there are some dirt cheap systems out there that'll do this sort of thing without the testicle ache of trying to figure out why your data has been silently corrupting for the last 3 weeks while your backups rotated out any good data.

So you want a custom solution:

1) GPFS, with support from Pixit (don't use IBM; they can hardly breathe they're that stupid) <- fastest

2) Try Ceph, but don't forget the backups <- does fancy FEC and object storage

3) Gluster, but that's just terrible <- supported by Red Hat, but lots of userspace bollocks

4) Lustre, however that's a glorified network RAID 0, so don't use shit hardware <- really fast, not so reliable

5) ZFS with some cheap JBODs (zfs send for DR) <- default option, needs skill or support

Basically you need to find a VFX shop and ask them what they are doing. You have three schools of thought:

1) NetApp <- solid, but not very dense

2) GPFS <- needs planning, not off the shelf; great information lifecycle and global namespacing

3) Gluster <- nobody likes Gluster.

Why VFX? Because they are built to a tight budget, to be as fast as possible and as reliable as possible, because we have to support that shit, and we want to be in the pub.

Stale pizza, backup BlackBerrys, payroll panic: Sony Pictures mega-hack



I work in VFX, so it's a bit funny to hear Sony bleat on about security. The consensus here is that it was an inside job. The person who did this *hated* Sony. To me it sounds like someone wanted to bring the house down.

But the nub of the matter is this: Sony appears to have failed to follow its own security advice. When a VFX house applies to work on certain shows, it has to be audited to make sure that no footage will leak. Since Expendables 3 leaked (which couldn't have come from a post house, as it was the full movie, with sound, something none of us have), they've gone super Nazi on the requirements: segregated data and management networks, air gaps between the internet and internal networks, all data in and out of the building moved by hand, all USB/DVDs disabled.

All internet access is done via terminal services. We had to battle to allow copy and paste...

And yet, depending on the narrative you subscribe to, either someone stole HR data/email backups/restricted file services via USB, or it was malware.

Either way, it should have been impossible if they'd implemented their own guide.

This of course assumes that it wasn't a rogue sysadmin. From the noise I've heard about the malware, it brute-forces passwords. Do they not have account lockouts? (another requirement...)

Either way, they couldn't have given a shit about security, not in any meaningful way. Judging by some of the characters I've met from that neck of the woods, I can imagine the higher-ups were extremely resistant to even the most basic security measures.

From what I understand they had byzantine VPN authentication, and yet people appear to have been able to gain access to the email server/backups.

Chromecast video on UK, Euro TVs hertz so badly it makes us judder – but Google 'won't fix'


"very hard"


There is more to this than meets the eye. All the major video wrappers have metadata that tells the player what fps it's encoded at. Seriously, it's in the spec of MPEG-1, 2 and 4 (including H.264), and H.265 as well, even if the container doesn't carry it.
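That frame-rate metadata is typically stored as a rational rather than a float (for example, ffprobe reports a stream's `r_frame_rate` as a string like `30000/1001`). A minimal sketch of turning such a value into an fps number; the function name here is mine:

```python
from fractions import Fraction

def fps_from_rate(rate: str) -> float:
    """Convert a container frame-rate rational (e.g. "30000/1001",
    the form reported by ffprobe's r_frame_rate) to frames per second."""
    return float(Fraction(rate))

print(fps_from_rate("25/1"))        # 25.0 -- PAL/European broadcast
print(fps_from_rate("30000/1001"))  # ~29.97 -- NTSC
```

The rational form is exactly why 29.97 content exists at all: NTSC is 30000/1001, not a round 30, and a player that ignores the metadata and assumes 30 (or 25) will judder.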

Shrek-as-a-service: DreamWorks and Infosys team up


Looks like The foundry missed a trick there

This is the sort of thing The Foundry tries to do: come in, take over in-house software, commercialise it and support it.

The original company gets free software, free support/updates and trained potential employees; The Foundry gets money from commercialisation.

Quite how Infosys is going to monetise the dregs that are left in DreamWorks, I don't know. Most of it is off-the-shelf software with badly written glue code.

It's tough at the top: Yet another hybrid startup knocks EMC


Having tested the Nimble

They have good support; however, there were two problems for me:

1) At the time it was iSCSI only

2) If you have parallel streaming reads and try to write, performance collapses like a sack of shite (9:1 read-to-write ratio, 1 Gig Ethernet to a 10 Gig switch)

I can see that it's grand for VDI and the like; for us, however, it wasn't a good fit.

I, for one, welcome our VMware VSAN overlord


But is vSAN any good? Is it fast? Is it cheaper than alternative block storage?

Or even an NFS share exported from GPFS/ZFS/StorNext/*

Why not build a cluster out of WORKSTATIONS?


If you're using HP workstations it's probably cheaper to use DL380s than a beefy Z820.

I know you can get at least one K5000 into a DL380, and I can't see why you wouldn't be able to get two. Supermicro can definitely do it.

As for "servers aren't optimised for graphics", that's patently bollocks. What do you think most workstations are? Server motherboards in a fancy box.

Tech today: Popular kids, geeks, bitchfests... Welcome back to HIGH SCHOOL, nerds


Gah, I just want stats

I've just bought a new storage system, and the hardest part is cutting through the sheer magnitude of bullshit.

For example, my requirements are fairly simple: 14,000 IOPS peak for 10 ESX hosts via FC, plus a 1 gigabyte-a-second streaming burst, 75% read / 25% write, around 100 TB of total storage.

Firstly, I don't take too well to salesmen telling me that I can replace my current array (75x 15k SAS) with 2x 100 GB SSDs in RAID 1 and 12x 10k SAS drives. "We've modelled your dataset." You've done fuck all, sonny Jim; you don't even know what we are doing in ESX. I take even less well to being quoted for their rack after I've specifically told them I'm not going to buy it.
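A back-of-envelope spindle count shows why that kind of quote doesn't add up. The per-spindle IOPS figures below are my own rule-of-thumb assumptions, not measured numbers or vendor specs:

```python
def aggregate_iops(spindles: int, iops_per_spindle: int) -> int:
    """Rough aggregate random IOPS for a stripe of identical spindles
    (ignores RAID write penalty, controller cache and SSD tiers)."""
    return spindles * iops_per_spindle

# Rule-of-thumb figures: 15k SAS ~ 180 IOPS, 10k SAS ~ 130 IOPS per drive.
print(aggregate_iops(75, 180))  # 13500 -- the existing 75x 15k array
print(aggregate_iops(12, 130))  # 1560  -- the quoted 12x 10k spinning tier
```

So unless the two SSDs absorb nearly the whole working set, the proposed spinning tier is an order of magnitude short of the existing array, never mind a 14,000 IOPS peak requirement.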

After testing 6 different suppliers we gave up on the unified block/filer and went balls deep into a GPFS/V7000 combo.

Google will barge into enterprises as IT titans squabble, Apple snoozes


They might be, but only if they stop randomly changing stuff every 5 minutes.

Seriously, trying to support a Google Apps domain in anything other than a small company is a massive arse wedge. We use Google Hangouts to run meetings; in the space of three months, the layout changed no fewer than 5 times.

Did we get a warning? Did we fuck. I could understand it if I were a freetard: then we'd just have to shut up and swallow Google's milky goodness. But we're not; we pay for the privilege and get fuck all support.

As soon as I have time I'm migrating us the hell away from Google.

Don't bother competing with ViPR, NetApp - it's not actually that relevant


EMC may have good kit

But they are expensive and the sales team don't listen.

I'm currently in the middle of a "storage off": 6 vendors, only one will succeed. EMC failed at the first hurdle.

The brief is/was clear: 150 TB of unified storage, 1 TB or thereabouts of flash, 15 TB of 15k (currently we have 75 spindles of 15k), the rest NL-SAS. NFS/CIFS at 10 Gig, block on 8 Gig FC.

What was the spec they recommended? 3x 100 GB SSDs and 15x 600 GB 15k drives, with the rest of the space on ~40 spindles of 3 TB NL-SAS.

The cost? ~£300k

Plus I have to buy their stupid branded rack, even after I expressly told them no.

When I quizzed them, they effectively shrugged.

Autogyro legend Ken Wallis hangs up wings at 97


He is/was the only person I've met to have a scanning electron microscope *and* missiles in his garage.



this one:

It's not that powerful; however, it's fairly immune to network-based attacks:


How NSA spooks spaffed my DAD'S DATA ALL OVER THE WEB


Your dad rocks.

Local heroes == *awesome*

HP's plunging storage revenues could yet be saved


I was looking at the midrange "converged" offerings from HP, and they really aren't that compelling.

Don't get me wrong, if you want a fast lump of block storage that's solid and reliable, then HP all the way (we have two fully loaded P2000s). However, if you want to do file exports then, on the face of it, it all falls apart. You have the choice of the "StoreVirtual" stuff, which looked pretty weak when we evaluated it at the beginning of this year (especially compared to the NetApp FAS32** and VNX5***).

If I were going to take a risk, I'd plump for Nexenta or similar. ZFS is starting to look really strong; once the AD integration is sorted, it'll be a killer app. (I'm planning on using it for our nearline to see how good it is.)

The other issue is HP software. If they seriously want me to buy software (or "appliances") from them, they need to actually *test* it first. The prime example is Data Protector: it would choke on a million files. Bear in mind it was shipped with a 48-tape LTO5 library; you'd have thought it would be able to handle such things.

If I were HP, I'd be thinking about a strategic link-up with Nexenta to flog pre-configured D6000s or the like. It'd be nice to get software support from people who actually appear to *test* things, alongside the proper hardware support HP normally delivers.

Reg man bested in geek-to-geek combat - in World War 3 nerve centre


Re: Where it is?

Top tip for correct pronunciations is to avoid the west country oooaarrrrrrr

for example, West Country Norwich == narrrRrrrisch

Norfolk == Nar ich, with a short sharp r

another example is roof. West Country == RooooOoooooooof

in Norfolk it's closer to "ruff", as in the dog noise: like "woofter", but swap the w for a short sharp r (and drop the "ter")

Also, I've been here as a kid, whilst it was still active (they'd moved out of the control room about 5 years earlier). The amount of power it used was/is mind-boggling: something like 10 megawatts.

Paying a TV tax makes you happy - BBC


I know its not fashionable

But I'd rather pay £120 a year for reasonable TV and radio with no adverts, than £25 (minimum) a month to get millions of channels of repeats, imports and cheap knockoffs.

Choice is rendered pointless if you only have the choice of crap, more crap and repeats.

I spend a lot of time in the US, and frankly TV there is abysmal. The actual picture quality is utter rubbish, and the amount of advertising is ridiculous. Four advert breaks in a 22-minute cartoon, really? No wonder things like Mythbusters have so many plot recaps in them.

Yes, there is choice, but it's the lowest common denominator. The Discovery Channel, for example, has descended into a reality TV channel, following fishermen, moonshiners, bike "builders" and people who buy stuff from other people. The best part is, they actually charge people to watch that crap.

Having said that, the BBC is not perfect. However, a not-perfect BBC is still far superior to American "premium" TV.

Ten pi-fect projects for your new Raspberry Pi


I made a project with the Pi

It measures how often the Shard is in cloud. From that you can then work out how much cash the bourgeoisie* have lost through rubbish views.

The comedy website is http://www.whatcaniseefromtheshard.com and the explanation is here: http://www.secretbatcave.co.uk/electronics/shard-rain-cam/



*No Marxist leanings intended.


Biting the hand that feeds IT © 1998–2022