Ahhh... as simple as a hand drier eh?
Personally I think OCS needs to take a look at what IBM Sametime 7.5 was like; it had a lot more features, though admittedly less integration.
In the T&Cs: "The Unlimited Wi-Fi and Web Bolt On is included at no extra cost for customers connecting or upgrading to any iPhone tariff until further notice. O2 reserves the right to withdraw or amend this offer at any time on reasonable notice. Participating customers will receive 30 days' notice via text message if changes are made to their disadvantage. Excessive usage policy applies, see Data Bolt Ons terms below."
And on another page:
Unlimited data on all smartphone tariffs is a promotion until 1 October 2010. Excessive usage policy and terms apply, see o2.co.uk.
So I think even customers with a smartphone or iPhone tariff that's 1-3 years old will also get the heave on 1st October.
Wahey someone who knows their history :)
You Sir are indeed correct; AS/400 made the transition later (somewhere in V4Rx?) and had the TIMI recompilation lark.
Didn't know he was at the forefront of AIX. Using my Google skills, he appears to be a consultant somewhere and an ex-ICL man (are they still doing mainframes under Fujitsu?); didn't know he was an IBMer!
Still, I know as much about AIX as I do about fly-fishing. Well maybe a little more than fly-fishing.
Oh I don't know. If you're properly organised you can get a 595 built up and running within 48 hours of it arriving in a datacentre, with everything prepared. Even less if you've got enough manpower. And the shops that need this sort of clout will, if they really push for it, get it before GA I would guess.
I'm just wondering how many large shops will really want to commit to buying a 570/595-sized box on Power6, rather than at least waiting until they can see feeds and speeds for Power7, its prices and when it'll GA. Heck, some of the Power6 boxes really have to have very specific applications to take advantage of all their CPU power, as many will be storage or memory constrained. The memory-per-core counts on some of the 570/575 boxes are a bit strange sometimes.
I wasn't considering i/OS or AIX as a single partition on this sort of setup: yes for HPC, but no for general commercial applications. I was wondering about systems this size purely for consolidating a lot of LPARs into one footprint, albeit you'd want another footprint elsewhere for HA or recovery.
I'm glad to see the switching architecture is meaty enough to start giving SAN-attached infrastructure a run for its money. Much of the Power5 and Power6 kit will not drive 4/8Gb SAN equipment to its true limits even if properly tuned in a commercial environment. At least this sort of equipment finally shifts the bottleneck off the host. I hope there'll be some sort of performance analysis tooling with the switching kit in these boxes though, as working out HSL bottlenecks is of course a little difficult.
Yes, I agree the zSeries does have a different processor architecture, but I do not think that is the current reason why IBM have not transitioned the zSeries to Power processors. If you look at zSeries, the biggest stumbling blocks are the I/O systems z/OS is built on, utilising HCD and cascaded FICON. I would bet IBM can already easily run z/OS on Power; it's a question of shops being able to port the attached infrastructure over to it. Just like going to V6R1 from V5R4: yes, there were little tweaks even with the TIMI, but it's not a huge pain. A fundamental I/O system change would be, though. I wonder myself (personal opinion) if z10 will be the last non-Power-based Z box.
Ahh, didn't mention Linux because once IBM generally have AIX running on something, Linux shortly follows. I bet availability will be AIX -> Linux -> i/OS -> z/OS, if at all.
The global address space scheme really does sound interesting; I guess it's like the ideas in EMC's V-Max.
I take it you're not the Peter Gathercole of fly-fishing fame? But probably the ex-ICL man? I grew up on a set of SY and SX ICL boxes, and shockingly I'm under 30!
I'm not sure where that info has come from, but certainly an IBM tape library (3584/TS3500) that can be used with mainframe and LTO/3592 media does not stop at 7,000 slots. An S54 frame fits 1,000 3592 tapes or 1,320 LTO tapes. Last I checked you could fit at least 14 expansion frames, giving roughly 14,300 to 18,780 slots, maybe more. That's a tad more than 7,000 or 10,000.
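A quick back-of-envelope check of those figures (a sketch using the per-frame slot counts quoted above; exact capacities vary by frame model and configuration, and the base frame adds a few hundred more):

```python
# Rough TS3500 slot-capacity check using the figures quoted above.
slots_3592_per_frame = 1000   # S54-style frame loaded with 3592 media
slots_lto_per_frame = 1320    # same frame loaded with LTO media
expansion_frames = 14         # "at least 14 expansion frames"

print(expansion_frames * slots_3592_per_frame)  # 14000 slots for 3592
print(expansion_frames * slots_lto_per_frame)   # 18480 slots for LTO
```

Either way, well past the 7,000 or 10,000 claimed.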
Well, to simplify my comment then:
Yes, I think there will be multi-vendor LTO-5 support. Tape formats, the biggest being LTO and 3592 (imho), will be required to support de-staging large backups from disk VTLs to longer-term storage.
I'd also think there'll be an LTO-6, as backup volumes grow linearly or faster with the amount of addressable storage on hosts... and well, that's not going down.
But you did say 'Customers prefer the faster backup speed of disk-based backup and also the very much faster restore speed from hard drive arrays.' Well yes, we'd love it if we were only doing one backup/restore at any given time, but that defies the idea of a library. In reality disk doesn't scale the way tape libraries do. In fact, with the LTO roadmap giving 270MB/s-capable drives, this only exacerbates the issues with disk-based VTL access. Tape is going to be around for a long time, I think. SSD is too expensive; DASD is too expensive for the lot. People make bigger spindles, but then host-attached disk usage just grows.
Viva la resistance!
It'll be the complete end of tape. Even with de-duplication.
With the mainframes you have colossal VTS libraries, which currently have a small disk cache and a large back-end tape store. Mainframe tape volumes are generally small in size and, yes, high in frequency, but with a low retention period for most. And if you look at how HSM has evolved on something like IBM's mainframe, the VTS as designed is very well suited to the OS.
If you move over to open systems and other platforms, things differ. With de-duplication many savings can be made, given a good de-dupe rate. But what if you have a library with, say, 30 LTO4 drives in it and your tape-space utilisation isn't that great? You've got up to 30 hosts writing at 120MB/s = 3600MB/s sustained for potentially hours, yet the disk array you'd need for your virtual tape doesn't justify the arm-to-storage ratio because of the size of data required. You'd need hundreds of arms, even though many arrays are looking to use slow-ish SATA compared to something like 15k rpm FC disk. Then pile on replication of your tape/disk cache and you've got a disk array with a very intense read/write profile, albeit quite a sequential one.
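Putting rough numbers on that drive-versus-spindle mismatch (a sketch: the LTO4 rate and drive count are from the post above, but the 25MB/s sustained per-SATA-arm figure is my assumption for illustration):

```python
# Sustained stream from a fully busy LTO4 library, and a rough count of
# the SATA arms needed to soak it up.
drives = 30
lto4_mb_s = 120                       # native LTO4 rate per drive
aggregate_mb_s = drives * lto4_mb_s   # 3600 MB/s sustained, as above

sata_spindle_mb_s = 25                # ASSUMED sustained rate per SATA arm
arms = aggregate_mb_s // sata_spindle_mb_s
print(aggregate_mb_s, arms)           # 3600 144
```

Well over a hundred arms just to absorb the write stream, before RAID and replication overheads, even though capacity alone might call for far fewer spindles.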
Yes, if you exploit good disk technologies you can replicate your storage and back it up completely separately from the host, but not for all operating systems or all environments. And that sort of disk, software and automation is costly to implement.
Also you have to think of large backups with a long retention but a low frequency, like yearly multi-terabyte backups. You don't want to keep a 10TB save on a VTS when its read profile is near zero and the next iteration, a year later, has changed so much that de-duplication hardly claws back any savings. You want to farm that sort of thing out to tape, preferably a tape held remotely. A 10TB save could take up maybe 15-18TB within a 13-month cycle. Or about 8 tapes with good compaction.
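Checking the cartridge count for that yearly save (a sketch: LTO4's 800GB native capacity is real, but the 2.5:1 compaction ratio is my assumption to stand in for "good compaction"):

```python
import math

# Cartridge count for the yearly multi-terabyte save described above.
save_tb = 16.0     # mid-point of the 15-18TB cycle figure
native_tb = 0.8    # LTO4 native capacity per cartridge
compaction = 2.5   # ASSUMED compaction ratio ("good compaction")
per_tape_tb = native_tb * compaction     # 2.0TB per cartridge
print(math.ceil(save_tb / per_tape_tb))  # 8 cartridges
```

Which lands right on the "about 8 tapes" figure; at a more pessimistic 2:1 compaction you'd need a couple more.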
Virtual tape on certain platforms still needs some work on the performance of a library serving multiple hosts, versus the storage/cache it requires to deliver the throughput, before it can replace large, heavily used tape libraries, even if de-duplication is not in-line and is an after-the-fact process.
I will be keeping a keen eye on how vendors deal with tape's development, or whether they think storage arrays will become so cheap, yet perform just as well, and networking costs will fall so far, that maybe... just maybe, tape could become redundant.
I agree with your points TPM; I think consolidation of vendors is going to be painful, as we lose the diversity that sometimes creates new / stronger technologies.
I concur that we could see Solaris running on Power. Heck, I'm still waiting for z/OS on power.
Maybe they'd just buy Sun to stop someone else (i.e. HP) from buying Sun. Not that HP, with the EDS issues at the moment, is in a strong position to absorb Sun.
We have got that. For instance, the DS6000/DS8000 series of IBM disk arrays run the same Power5/Power5+ cores that run in their iSeries/pSeries/Linux boxes. You've got a 4/8-core box with up to 256GB of RAM running the disk caching.
I like your latter idea, I admit :) You'd want a bit more resiliency though, and of course RAIDing across a cluster brings its own issues eventually when you scale. Buses in large systems can only span so much distance and throughput. Once you've exceeded those and gone outside the bus that all the CPUs chat on, to some sort of interconnect, you're going to slow down.
Bring on the idea of a BlueGene/L type of machine with 64/128-bit addressing for storage, and then we're talking.
I think you two are both missing the point. We also run big iSeries Power5/Power6 boxes. The whole idea of PCIe direct to the switch would bypass your requirement for an IOP or an IOA.
Unfortunately this doesn't solve the issue that IBM have no tooling (or very little) to help you with HSL performance, i.e. bus speeds to the CEC.
And yes, everything eventually does come back to the bigger platforms that consolidate well: mainframe, iSeries, etc. It's (sometimes) cheaper per CPU/GB/etc. to run a big footprint with lots of LPARs than to run multiple small cheap footprints and then try to cram more LPARs onto those.
Interesting article Chris. I've got my money on one of these formats becoming more pervasive in the datacentre for large-scaling non-x86 systems. Wanna wager? :)
The PCIe concept is intriguing; I can't see it working at a director level for storage though. It's hard enough now, on huge servers running multiple partitions, to see how hard the interconnecting buses are running; there's literally no tooling on some platforms for bus utilisation. I mean, a switch has an ASIC for the traffic, and the HBA is obviously an ASIC on a board to pull/push the traffic to/from its bus. Doesn't this mean that the switching kit inside a switch would have to have a lot more throughput?
Extend that out to storage and SANs and I'm wondering what tooling they could come up with, though I imagine it'd shed some latency because you'd be negating the need for an HBA or a NIC. Maybe this would work for small hosts running a single HBA or two, but we have some LPARs running 20+ HBAs, with a processor footprint probably having 60+ HBAs across 10 high-speed buses to the processing unit.
Hmmm I need more coffee. My head hurts.
36TB? Where does that figure derive from?
A full 5-bay DS8300 will support far more than 36TB at RAID-5 or RAID-6. Then again, you have to weigh up capacity against performance; looking at this box, you need better-performing spindles for the Diligent box's metadata. And bigger spindles / FC-SATA are not much good in any box if you need good throughput.
Still, it's got an impressive in-line throughput going by the whitepapers' theoretical maximum. I think most people will be waiting for a few more key features.
The XIV kit is very interesting and the file system looks great; shame the boxes just don't scale and the RAID rebuild times could be better.
When I see it.
If the current generations of de-duplication technology (NA, Diligent et al) can do about 900MB/s on small-spindle (150GB FC) 15k drives with a *shed* load of cache at the front end of a disk array, how on earth are they expecting to outperform that with transactional data on SSD?
I mean the logistics of having an SSD array with multiple hosts on it and not having a bottleneck somewhere whilst having a very powerful de-dupe engine working in-line are just staggering.
Whilst it may eventually make SSD more economical, I bet it'll mean you'd have nearly as much cache for the processors in the array, or for the de-dupe engine, as you have SSD storage.
I am also a bit confused as to why you state de-dupe is done at the file level for archival storage. I thought most companies were looking at fixed- or variable-length strings of data at the block level, negating files. I could be downright wrong here though.
This is just the next flavour of outsourcing isn't it?
Really though, how many large companies will trust MS or many other vendors to run their remote DR etc. if they've got stringent data retention or security policies in place?
I can't foresee any company that has gone through an outsourcing exercise, found out what a **** up it is, and insourced once again, ever going for cloud computing.
There are companies now who, when they have a drive fail in an array, don't let the vendor take the failed drive away when it's replaced, but rather spend the $$$ destroying it.
Until I see IBM and EMC wading in, I doubt many are going to look at it. Outsourcing already is cloud computing, with the outsourcer no doubt trying to run the solution in the most efficient (read: cheapest) way possible. Yes there's virtualisation, yes there's de-dupe, but with the holistic policies of many companies regarding data retention, and the paranoia, large shops will just never go for it.
I also find it quite ironic that we have one article on here sounding the death knell of storage management and another with a whitepaper harping on about the growth of storage management. It's like the argument about the mainframe platform being a dinosaur: admittedly it's quite complex and outdated in some ways, but it is extremely efficient at what it does, and does it with a robustness that many platforms can't even glimpse. Storage will not eventually go to the cloud and to just a handful of vendors, because the cockups that could be made would be of monumental proportions, leaving many companies to pick up the pieces of what they thought was data being managed with a high level of integrity.
Buy cheap, pay twice.
Interesting article, but is Power7 really going to be 8-core, or more like 2 x 4-core, as per the QCM / dual-processor cards/books that IBM pushed out for 595 machines and will probably do for Power6+?
Good reflection and reaction to your article Ash by TPM...
It's not the 'hipness' of the platform, I think; it's that the opportunities younger people get to work on these platforms are literally nil. Nada. Zilch.
I have been lucky in that I've come into a big IBM shop at a young age; hell, the average age demographic of iSeries or zSeries users is 45+. In my team I am the youngest member by a good 20 years... hell, I'm probably one of the youngest working on Power systems.
Most zSeries people are now 50-55+ and looking towards retirement; the iSeries/Power crowd is a little younger. Most are now looking at retirement or contracting, so younger people are few and far between. And eventually, when all the permanent employees of most mainframe/midrange shops retire, contract rates will be big $$$ and businesses will be scrambling for ways to get cheaper people (hmm... any in India or China yet?)
Not many shops will take the risk, or the apprenticeship costs, of getting young people onto the z/i/p platforms. Mainframes generally run big systems, and if you do anything that'll fubar one, you're out the door. Once one employee does that, the business is normally very loath to allow inexperienced people on again, even in dev/QAT environments, where there is big pressure to maintain very high levels of stability and availability.
Big platforms don't have as much flexibility as the distributed platforms for learning. Vendors and big shops don't invest money in ways that could allow younger people more opportunities to get their foot in the door.
My god, disks aren't the only component in a storage subsystem that fails? *gasp*
Cables? Power supplies? Backplanes? My god, who knew that the other components weren't infallible either.
*sighs* It's taken them 44 months to find this out? Could they not have talked to the top-tier vendors (EMC, HDS, IBM et al) and asked what components fail the most?
Or possibly asked most storage administrators; we'd tell you straight away about HBAs, duff cables, dodgy backplanes, replication faults, etc.
Quite frankly, if a dodgy power supply is killing your storage system, you really must be buying budget kit.
Is there going to be a paid 44-month study into what fails with tape backups? I'll quite happily get paid to help out on that one... I'm guessing it's not the tape 100% of the time.
An iSeries Windows-capable? What's new about that at all? Come on, you could attach an xSeries years ago when the i5 originally came out and use disk assigned from the iSeries, be it internal or SAN-attached. The only way IBM could really push the i5 line is to drop the ridiculous licensing costs of i5/OS and actively promote the OS more.
God knows why you'd want to run an xSeries off an i5 with only 2-8 drives; given the nature of i5/OS single-level storage, I would dread to see the I/O performance on anything bigger than 73GB drives. Yes it's capable, but you'd really have to ask yourself... is it worth it?
Biting the hand that feeds IT © 1998–2021