* Posts by Adam 73

19 publicly visible posts • joined 22 Feb 2010

Black Friday? More like Blackout Friday for HSBC's online and mobile banking

Adam 73

The reason most banks are shutting branches is that after the 2008 crash most of them decided to sell the branch freeholds and lease them back. They took "advice" that branches would soon be dead and they wouldn't need them: take the sorely needed cash injection to prop up the balance sheet and then watch online replace them. Treble bonuses all round!

Unfortunately someone forgot to tell the customers, so they kept going! Branches are now expensive because the lease deals they signed are coming up and the inevitable cost spiral is at play. It's a situation the banks have caused themselves, so I have little sympathy when they try to deflect!

Convenience trumps 'open' in clouds and data centers

Adam 73

Convenience to whom?

Interesting article, but I think you missed the real driver behind public IaaS adoption: developers!

Anyone who is using AWS in a big way is usually a developer who went there because their own internal IT platforms were slow and a bit rubbish. Developers couldn't give a crap about the platform they run on; all they want are programmatic APIs, instant response, and the ability to get on with their job. So you're absolutely right, it's all about convenience for the developer.

Couple that with the fact that most private clouds aren't even clouds. VMware on a bit of tin (even if it's "converged") is not a cloud; show me a programmatic API, then I'm listening. The complete shambles of traditional vendors and SIs pushing "private" cloud over the past three years has helped drive people out.
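To make the "programmatic API" point concrete, here is a minimal sketch of what that developer experience looks like against AWS (illustrative only; the region, AMI ID and instance type are placeholder values, not a recommendation):

```python
# Minimal sketch of "programmatic API, instant response": one call and a
# server is on its way, with no internal ticket queue in sight.
# Region, AMI ID and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image
    InstanceType="t3.micro",           # placeholder size
    MinCount=1,
    MaxCount=1,
)

print("Launched:", response["Instances"][0]["InstanceId"])
```

Any internal platform that wants those developers back has to offer something equally scriptable.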

Now for the "but": there are numerous organisations who, having started in public cloud, have come across the big problem. Cost! When your application hits mainstream and you have a nice base load, you're suddenly hit with the realisation that you are paying an awful lot, especially when finance start crunching the numbers.

That's when people start to look at building internal clouds. The difference is they now have to build platforms that look and feel to their developers like a public platform (i.e. API-driven), and that's when the lack of openness becomes a big problem. Ever tried to move an AWS app on-premise?

So now the conversation about open platforms becomes important. It's also true that customers are smart about not being locked in: first "open" Unix killed the mainframe, then x86 killed Unix. Developers themselves are mostly using open source tools and technologies, so it makes sense to have the platform open as well!

By the way, the whole "OpenStack is hard" argument is a bit old and quite frankly smacks of a lack of research. There are plenty of decent, supportable, installable distributions out there now that mean a customer doesn't have to get their hands dirty or worry about how it works.

Hospitals snap up cloudy storage as disk space runs out

Adam 73
FAIL

So what about this is cloud exactly?

I was expecting an article about true cloud-based storage and how the authority overcame issues around data security, latency, migration and all the other challenges in that area.

Instead I was presented with an advert for an EMC array, and not even a particularly impressive one at that! (Ooohhh, data tiering... how very 2009.)

Mind you, the article came from a public sector magazine, so it's hardly surprising they don't have a clue!

HP Project Moonshot hurls ARM servers into the heavens

Adam 73
Thumb Up

Re: Hmmmmm.....

These are not aimed at traditional DC/Enterprise workloads.

They are aimed at true hyperscale customers (read: 100,000-plus servers), say Facebook or LinkedIn. These customers perform HUGE amounts of web-facing tasks (hence the mention of, and drive in, this space) and you have to completely rethink your architectures (from application design and storage, through networking, and now to the server). All the intelligence is in the software, read MASSIVE parallelisation: lots and lots of low-power, low-performance nodes are perfect here, thanks!

1. See my first point. Maybe an issue when you need to access lots of memory, but web servers and scale-out apps don't (the scale-out gives you access to a large resource pool, more than you will ever get with scale-up), so 32-bit is perfect here, thanks (seeing as most web apps are still 32-bit anyway!). These are NOT aimed at traditional workloads.

2. Why not? Cheap and cheerful; all that's needed is a bit of metal to let me rack it. I don't need fancy management or clever interconnects, just a bit of shared power and cooling (I don't really need full HA in these subsystems either) and the density. Traditional blade chassis are aimed at solving a different problem (and are very good at that).

3. Why is that a problem? To be hyperscale you need to run 100,000-plus servers; do you really think losing four is going to be a problem? You will also note that you can't hot-swap boards either. When you want to swap them (simply wait until over half of them have failed), just take the whole tray out (72 servers), slap a fresh one in and chuck the other in the bin.

Also you will notice these don't have storage either; they are truly stateless, i.e. the OS is pulled from an image server on boot and run in memory. I need lots and lots of the same thing, so why stick it on local disk for every server? (It also makes tasks like reimaging a complete doddle: change the image on the master server, then just reboot whole racks!)

These hyperscale customers shunned enterprise-type architectures (hence no blades) as there is simply too much control and intelligence in them (which adds cost and complexity), and quite frankly things like separate storage arrays just DO NOT scale big enough. All of that lives in their software layers. This is why the likes of Dell DCS and the HP SL line were initially brought out: the benefits of shared power and cooling (not quite as efficient as blades, but good enough) and the density, but no excess "crap". However, the weakness was always the processor, too good for what is needed.

These things draw about 3W a core; even the best Intel Atom draws about 15W! (Now scale this across tens of thousands of servers and see why people are getting excited!)
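To put rough numbers on that (a back-of-the-envelope sketch; the fleet size and cores-per-server figures below are illustrative assumptions, not vendor data):

```python
# Back-of-the-envelope sketch of why watts-per-core matters at hyperscale.
# Fleet size and cores per server are illustrative assumptions; the
# watts-per-core figures are the ones quoted above.
servers = 100_000
cores_per_server = 4

arm_w_per_core = 3
atom_w_per_core = 15

arm_mw = servers * cores_per_server * arm_w_per_core / 1e6    # 1.2 MW
atom_mw = servers * cores_per_server * atom_w_per_core / 1e6  # 6.0 MW

print(f"ARM fleet:  {arm_mw:.1f} MW")
print(f"Atom fleet: {atom_mw:.1f} MW")
print(f"Difference: {atom_mw - arm_mw:.1f} MW of CPU power, before cooling")
```

That difference is measured in megawatts before you even count the cooling needed to remove it.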

Think of it this way: we only introduced virtualisation (hypervisors, that is) because CPUs got too powerful and we needed a way of carving them up. We initially started using virtualisation in a big way on web front ends, where the problem first appeared. All we did was add cost and complexity to mask the real problem (which begs the question: if we can perform much more granular hardware partitioning, why ultimately stick a software layer in the way?).

So why not fix the actual problem? I'm not suggesting these will replace computing as we know it, but it's a damn fine way of solving a large percentage of the workload in the simplest way possible!

All that was required was for someone to think a little bit outside the box...

Oracle whips out private cloud with blades

Adam 73
FAIL

Sorry, how is this a private cloud?

I'm sorry, is this really a private cloud, or just a load of hypervisors running on some tin?

Where is the self-service portal, where is the resource templating tool and, more importantly, where is the workflow engine to handle service deployment and lifecycle?

Oh that's right, Oracle don't have any of that!

I'm sick and tired of vendors and IT departments calling a bunch of hypervisors on some tin a "private cloud". No it's not: it's a pile of hypervisors on some tin!

It's also a bit of a dubious comparison! The major costs are actually in the hypervisor stack, so remove that, do a straight-up comparison of hardware to hardware, and HP wins (again).

Why Cisco should merge with Dell

Adam 73

Re: Re: I nearly choked on my coffee here

Actually, the drive towards cloud computing/convergence will push all the one-trick-pony companies out of that market (that's Juniper, Brocade, EMC, NetApp, you name it). Converged stacks will eventually dominate the enterprise space, and those companies building big clouds (truly big, 100,000-plus servers) will buy cheap and cheerful servers and stack 'em high (which is why Dell DCS has done well); they do not use current enterprise tin because it just doesn't have the scalability model.

That's the reason Cisco moved into the server space: they saw the end was coming. The problem is they lack the experience (IP), know-how and, more importantly, the volumes to compete at the moment; Dell might give them two out of three. Time is running out though, and we may have another Sun situation.

The biggest problem Cisco have is that their business model is all wrong. It has been built on arrogance (look at their attempt to enter the FC space) and high margins, and it's not easy to change a business like that.

Also, they do not do proper research; like most companies they just develop (designing out-of-date ASICs does not count as research (they still use 90nm, FFS)). The only two IT companies that do pure research (stuff that genuinely changes the game) are IBM and HP, as you will find if you go through their lists of patents.

It's that printer "ink manufacturer" who is the threat (the one that's currently handing Cisco its ar*e on a plate according to market share figures), as they have not only the IP but also the economies of scale that are required. Remember that storage is going the same way: it doesn't matter how innovative companies are, cheap is cheap!

Cisco: 'World head over heels for convergence'

Adam 73

Re: If Cisco Had A Clue

You do realise that if you had just taken one more little jump you wouldn't be far off something that exists today! If someone is going to run and manage your environment, then why host it on your site? Why not stick it in their DCs?

If you did, then they haven't missed a trick; they have just become one of the many outsourcing companies out there! Go and speak to anyone who uses an outsourced IT contract to see how that works out for them (clue: usually not very well!).

Oh, those ready-to-use DCs? Yeah, they've been around for a while now; most of the major vendors will quite happily sell you one! (Really useful for HPC work!)

Most companies DO and WILL use multiple sourcing for IT: some things that suit it will live in the cloud, some will be outsourced and some will be self-managed. It's not all or nothing; it's about choosing the right strategy for each particular service. The trick is to build a service provider model that lets you do that effectively.

(Oh, and for the record, most of the internet's routing doesn't use Cisco gear; they no more built the internet than anyone else (thank the DoD for that, if anyone!))

HP buffs up its new big iron

Adam 73
Thumb Up

Re: But it will still be rubbish

Why don't you try looking at the technology and understanding it before criticising, eh?

The biggest challenge with pushing more cores into a NUMA architecture is CPU and memory locks and, more precisely, keeping the impact of those locks to a minimum (which is why x86 has traditionally been stuck in the two-socket and four-socket space, where that's easier to control). In layman's terms, the more components in a shared architecture, the more lock negotiation has to occur to manage those components, and often that negotiation pulls resource away from the actual task of executing application code.
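A toy illustration of that effect (nothing to do with the Superdome hardware itself; the thread counts and work sizes are made-up numbers, and this is just shared-lock contention in miniature): as more workers have to negotiate the same lock, more time goes on lock management and less on useful work.

```python
# Toy illustration of lock-negotiation overhead: every worker must win the
# same shared lock for each update, so adding workers adds contention
# rather than throughput. Counts below are arbitrary.
import threading
import time

def run(workers, increments_per_worker=200_000):
    counter = 0
    lock = threading.Lock()

    def work():
        nonlocal counter
        for _ in range(increments_per_worker):
            with lock:               # every update negotiates the shared lock
                counter += 1

    threads = [threading.Thread(target=work) for _ in range(workers)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    print(f"{workers:2d} workers: {counter / elapsed / 1e6:.2f}M updates/sec")

for n in (1, 2, 4, 8, 16):
    run(n)
```

Real hardware tackles this with clever crossbars and directory logic rather than a single lock, but the shape of the problem is the same.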

We used to call it "blue droop", after a famous three-letter company's (begins with I, ends with M) "expansion" technology for x86 (recently launched as v5) that allowed you to link multiple four-socket nodes together. It all used to work lovely up to about three nodes, above which you would actually get diminishing returns due to lock management.

Now, this problem of resource management has been well understood and conquered in the UNIX space (think Itanium/PA-RISC, POWER et cetera) with clever crossbars and management chips (OpenVMS actually has the best methodology, but that's a conversation over a pint!).

So seeing as HP has some pretty funky technology in that UNIX space, wouldn't it be nice if they took some of it and applied it to the x86 space... hmmm? (I'll give you a clue: they have.)

So maybe you want to actually look at the technology in the product before criticising!

Researchers: Arctic cooled to pre-industrial levels from 1950-1990

Adam 73
Stop

@Loyal Commenter

If you have a scientific background (like most people on here, no doubt) then you're aware that volume of research is not proof on its own. Everything is a theory which could at any point be disproved by one experiment or research paper.

The whole idea of the scientific method is that everything is fully documented and performed without intent to gain a specific result (other than maybe a guess about what you think will happen). Based on the results you then generate or further validate a theory; this can then be backed up by additional research (with more advanced methods) or potentially disproven (again with more advanced methods). What that means is that everything is to be considered until it is scientifically disproven (there is no such thing as certain scientific proof, only a theory not yet disproven).

The problem that most of us have is not the actual outcome but instead the political nature of what is happening. I have no doubt that most scientists involved are just trying to perform good science. The biggest problem with studying the earth's climate is that there are so many variables that it is almost impossible to model all of them and their interaction with each other.

However, on both sides of the camp (and recently the pro-climate-change side has been worse!) there have been inroads by non-scientists (I'm referring to you, Greenpeace and Friends of the Earth, to name two!) who have hijacked the science to generate and validate political statements REGARDLESS of what the science is actually saying (more often than not it's actually saying "we don't really know!"). Greenpeace pushing a more socialist (quick, everyone run back to the caveman days!) mentality, the Westminster politicians sensing fresh taxable blood to fund their expenses, et cetera; at which point science is no longer in control of the debate and will be twisted and bent to the political will.

This isn't helped by the likes of the BBC's so-called "scientific" explanations dumbing everything down to the point of near uselessness (not to mention they don't seem to give equal airtime), leaving the masses in confusion (and you know what happens if you spook the herd!).

Pillar and Xiotech whip Exchange competition

Adam 73
Stop

Re: Re: What??

OK, I'm going to try and be polite here. (Takes a deep breath.)

You really, really need to go and understand how Exchange 2010 works.

I did not ask for War and Peace on how great XIV is! What I asked was why I would pay for an XIV (in fact, ANY storage array) rather than a DAS solution, when I will not gain anything other than extra cost. (Please note that in a DAS solution we use technology equal to or better than yours: midline SAS and 6Gb SAS in a fully redundant, enterprise-class architecture for a start. But I digress!)

I'm going to try and highlight some areas for you. (Not all; I don't have time.)

"rebuild cause on your Staples DAS solution"

Exchange 2010 handles availability on the premise of mailbox database copies: effectively different LUNs presented to different servers, but with the same data on each one. This is the ONLY availability model you can use! A common approach these days is a single mailbox database per disk. If a disk fails, that copy obviously goes offline, but you have multiple other copies being served by different servers (so you are protected against server failure as well), so a disk failure doesn't actually affect users; they merely get served by a different mailbox server. The rebuild then happens in the background, away from the users, between the servers (and since you probably have more than one remaining copy, even multiple drive failures in quick succession aren't an issue; what happens in yours if you have multiple drive failures in quick succession?). So: zero impact to response time, which can't be said for yours, as at some point your array will have to dedicate cycles to rebuilding.
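As a toy model of that logic (this is not Exchange code, just the reasoning sketched in Python; the server and database names are made up): each database has copies on several servers, one copy is active, and a dead disk simply shifts service to another copy while the failed one is reseeded in the background.

```python
# Toy model of the availability logic described above: multiple copies of
# each mailbox database live on different servers; losing one disk just
# activates another copy while the lost one reseeds in the background.
# Names are made up for illustration.
databases = {
    "DB01": ["MBX1", "MBX2", "MBX3"],   # copy locations, first entry active
    "DB02": ["MBX2", "MBX3", "MBX1"],
}

def fail_disk(db, server):
    copies = databases[db]
    copies.remove(server)               # that copy is offline
    print(f"{db}: copy on {server} lost, now served by {copies[0]}")
    copies.append(server)               # reseeds in the background

fail_disk("DB01", "MBX1")   # users carry on, served from MBX2
```

The array never enters into it; the application is doing the protection itself.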

"Even if you still don't like XIV for Exchange you must give some props for the management UI (which has no licensing fee to use nor a separate server requirement to implement)"

OK, maybe I'm not being clear. Even if the XIV is as good as you say, you still have to set up the server side AFTER you have set up the storage (so at least two interfaces to play with!). In a DAS model you just set up the server and away you go (which also simplifies troubleshooting, as it's all server side, but I digress).

"So may I ask what type of Snapshot technologies, consistency groups for application integration does the Staples solution give the test/dev and back up teams?"

OK, this is the bit where you show your ignorance. (I'm guessing you're either a salesperson (not presales) or product management? Actually, the bit about performance gave it away; you clearly don't understand what sequential performance means for a cached array architecture.)

ALL of this is provided in the application itself; it has ZERO understanding of any third-party implementation of it, so all your array is doing is providing raw space. (You could use thin provisioning, but I wouldn't recommend it: if Exchange thinks it has space it WILL use it, which is not a particularly good idea if you overprovision!)

Please, please, please read this before coming back and posting:

http://www.microsoft.com/exchange/2010/en/us/overview.aspx

PS: As a bit of a friendly dig, have a look at LeftHand's P4000 technology. You almost got your architecture as good, but you forgot to think outside the box (no pun intended!).

Adam 73

Re: What??

"Reliability, management, and consistent predictable performance would just be a few benefits to XIV."

Errrr, sorry how?

I'll take them one at a time.

Reliability: a SATA disk is a SATA disk, so the drive failure rate is not going to change. Also, as I pointed out, all the 2010 HA and DR features are server side, so there is no benefit from the SAN.

Management: well, what you're going to have to do with XIV is 1. go down to the array and carve off some LUNs; 2. present those LUNs to the mailbox servers; 3. zone them in the fabrics and so on; 4. then go to the mailbox server and set up Exchange 2010 (where it will treat your expensive, carefully provisioned LUNs in exactly the same way as DAS)! Or, with a DAS solution, you could just go straight to your mailbox server and perform step 4.

Predictable performance: hahahahahahahahah (enough said). Sorry, how does an array designed for random IOPS workloads perform better with an application that is extremely sequential? Also, as you'll note from the link I sent you, the DAS solution outperformed your beloved XIV, particularly on latency (mostly because the DAS was sitting closer to the server)!

"Sorry I must have accidentally been put into the SMB realm. XIV handles enterprise class customers"

As I'm sure you're aware, the ESRP is designed for enterprise-class solutions that are certified by Microsoft... so I suggest you take it up with them as to why they consider DAS to be just as reliable, more performant and a damn sight cheaper than a SAN.

Oh, and you raised one valid point about existing SAN investment. I have many customers (yes, I work for a large vendor) who rightfully point out that they only just bought a SAN, so can they use that? Of course they can, and sometimes it makes sense, particularly if they have excess capacity sitting around. They all accept, however, that it isn't the most cost-effective or even the most performant option. I have yet to meet a customer who, when buying storage for 2010, actually buys brand new arrays. I recently had a situation where the cost of just the extra disks for a SAN (to support 2010) was twice the price of our solution, which was the servers AND the storage all in! Guess which one they went for? This was an extremely large FTSE 100 company.

Adam 73
Stop

Re: Because...it works and it is ESRP approved

You obviously don't understand Exchange 2010!

There are plenty of DAS options as part of the ESRP as well:

http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA1-5736ENW.pdf for example.

I notice with much amusement that the XIV performs worse than the DAS option! How much did that solution cost by the way?

So explain to me: what is the difference/benefit between running a SATA drive behind an XIV and a SATA drive connected directly to the mailbox server?

SATA drives are SATA drives. Exchange 2010 is extremely sequential in its read/write layout, which pretty much negates the need for a large cache, and it handles its own replication and availability groups, so there is nothing a SAN can do to improve on that.

So to repeat the question above, why would a customer willingly pay extra money to host his data on an XIV when he will get no benefit over a much cheaper DAS solution?

Oh and the "Staples" approach is being used by large corporations, mostly because Microsoft are telling them that is the approach (DAS directly to the mailbox server)!

Adam 73

And the point of this is...?

So Microsoft releases a new version of Exchange that doesn't understand or even need SAN storage (no more single-instance copies, no ability to use SAN failover), which means most people are planning on going back to DAS storage for Exchange (and it is hilarious watching the SAN vendors see that business go down the pan!).

So what is the point of crowing about how well your storage runs the previous-generation product, which very few people are going to be considering or upgrading to?

Dell kicks out new blades and racks

Adam 73
FAIL

Nothing new here!

"The M610x blade has two full-length, full height PCI-Express 2.0 x16 slots - which is a new thing for blade servers."

No it isn't, TPM! HP have been doing PCI expansion blades for years. They sit next to a blade and are connected via the PCI buses and, guess what, you can plug two full-height cards into them... just like the ones you've been talking about!

"Fusion-io's ioDrive Duo flash disk to accelerate calculations or I/O for specific workloads."

Again, HP have been doing this for years; they are called IO Accelerators (OEM versions of Fusion-io). The difference is HP managed to work out a way of fitting them into a standard mezzanine card format, meaning a space saving. Guess Dell couldn't be bothered to put the R&D effort in!

Intel: best days are ahead for servers

Adam 73
Stop

@Nik: It's not just about using the same parts

It's actually about using the same platform!

Just using x86 chips does not make a product a commodity.

The P4000 approach is far better: they actually use x86 servers, which means the same box could be used as either a storage controller OR a server platform. That lets HP drive economies of scale far better, as they only need one production line.

Also, the CLARiiON is not based on commodity hardware; as far as I was aware the disks are still EMC-only? Why does a 450GB 15k enterprise disk from EMC cost me four times more than a 450GB 15k enterprise disk off Dell or HP from their x86 line?

HP dons blades to scale Superdome 2

Adam 73
FAIL

Re: FCoE

Actually, AC, you're missing a quite valid point.

You are right that the FCoE standard has been approved. Unfortunately it relies on CEE to work, and that standard has NOT been fully ratified; more importantly, the multi-hop capability is not there yet, which means FCoE throughout the datacentre is not feasible for at least another year (when all the CEE protocols are fully ratified and I can start getting certified kit)! So actually Matt is right!

Unfortunately no one seems to have told Cisco this!!

Dell gets flexible with server memory chips

Adam 73
Stop

8GB or 16GB

Actually, TPM, if you work out the cost per GB on the 8GB and 16GB sticks you'll find they are the same price/GB. So you'd have to be mad to buy the 8GB (assuming you need the capacity), seeing as you would need more 8GB sticks, which means power consumption goes up and performance drops from filling the DIMM slots! Might as well skip the 8 and go straight to 16!
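A quick worked example of that arithmetic (the stick prices below are made-up placeholders, purely to illustrate the equal price-per-GB point):

```python
# Worked example of the price-per-GB argument above. Stick prices are
# made-up placeholders; the point is what equal cost-per-GB implies.
target_gb = 192

price_8gb = 100               # hypothetical price per 8GB stick
price_16gb = 200              # hypothetical price per 16GB stick (same £/GB)

sticks_8 = target_gb // 8     # 24 DIMMs
sticks_16 = target_gb // 16   # 12 DIMMs

print(f"8GB route:  {sticks_8} DIMMs, cost {sticks_8 * price_8gb}")
print(f"16GB route: {sticks_16} DIMMs, cost {sticks_16 * price_16gb}")
# Same total spend, but the 8GB route fills twice as many slots, drawing
# more power and dropping memory speed once channels are fully populated.
```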

Cisco beefs up fixed port Ethernet switches

Adam 73
WTF?

Errrrr.............

"to share each others' power supplies through a feature called StackPower. This provides even more resiliency in the network gear, and it will be interesting to see if server and storage array makers offer similar capability soon."

Errrr, sorry to break this to you, but isn't this what blade servers do...?

Adam

IBM flicks out HS22V Xeon blade

Adam 73
FAIL

Shame

It's a shame IBM don't seem to be able to keep up in the blade world.

HP have had a virtualisation blade for a lot longer (ever since Nehalem came out, IIRC, and technically they launched the BL495c about two years ago, which was also a virtualisation-optimised platform: read SSDs, 10GbE NICs as standard and lots of DIMM slots).

Also, they now support 192GB of memory in the box as well (OK, admittedly using 16GB DIMMs, which are a bit pricey!), so this doesn't even equal what's out there now!

I'd be interested to see if IBM have embedded any 10GbE NICs as standard à la HP; it's one thing to give you all the memory, another to be able to use it!

Adam