Re: Everybody knows COBOL and FORTRAN
I also did my year out at IBM Havant working on software to control the automated production line using EDL on Series/1.
Our paths must have crossed!
Unfortunately, the vast majority of IT execs are not looking beyond Unix to see how they can reduce the costs of doing IT business.
There continue to be reasonable and very valid needs for a Unix architecture, but for the majority of applications, the migration to an x64 industry standard platform is now more valid than ever.
With this comes not only lower server hardware costs, but also lower costs for software, management and peripherals, and better staff availability and rates.
The aim of any blade vendor is to sell something that has proprietary lock-in in the design, which in turn means less competition and, ipso facto, higher margins on the sale for the vendor.
Why do you think that HP was pushing blades so hard? They couldn't make sufficient margin off the rack-dense product alone in the face of stiff competition from Dell.
Similarly, IBM is about 40% higher than the Dell or HP equivalent on their blades (your street pricing may vary) because of the perception of IBM as the blade standard and the speed with which the blue adherents signed up for more IBM tin.
This is not an iSCSI versus FC argument.
1. If your shop refuses to entertain any protocol other than FC, you're doing your org a disservice. If only FC counts, are you saying that NFS is not enterprise class? CIFS? How about InfiniBand - is that not enterprise class? How about FCoE, if/when that becomes a standard? Constraining one's choice of storage to a single protocol is at best naive and at worst an indictment of a storage administrator who isn't looking for the optimal alternatives for his business. Dreamworks is an enterprise, and they chose not to look at traditional enterprise storage at all - yet it was obviously an enterprise solution. http://www.theregister.co.uk/2009/03/29/hp_ibrix_monsters/
2. "Enterprise class" is defined by a number of aspects of a storage solution architecture - redundancy, manageability, scalability, performance, functionality, acquisition cost, TCO. Dell's Equallogic product satisfies these criteria up to about the 100TB level.
3. iSCSI does not today offer the potential bandwidth of 8Gb FC, but the chances are that your servers are not using that bandwidth anyway. If you need (not merely perceive that you need) more than 1Gb FC performance and more than 100TB of scalability for a single app, then your choice is unlikely to be iSCSI. However, do your own research and don't take the advice of storage vendors who have a vested interest in telling you otherwise.
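For a rough sense of the bandwidth gap, here's a back-of-the-envelope sketch; the line rates are nominal figures I'm assuming, and real throughput depends on protocol overhead, NICs/HBAs and the workload:

```python
# Nominal line rates in Gbit/s; crude assumption: ~80% of line rate is
# usable payload after encoding and protocol overhead.
links = {
    "1GbE iSCSI": 1.0,
    "10GbE iSCSI": 10.0,
    "4Gb FC": 4.25,
    "8Gb FC": 8.5,
}

for name, gbps in links.items():
    usable_mb_s = gbps * 1000 / 8 * 0.8   # Gbit/s -> usable MB/s
    print(f"{name:12s} ~{usable_mb_s:5.0f} MB/s usable")
```

Unless a single server is actually pushing hundreds of MB/s sustained, the FC8 headroom is theoretical.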
Flame, for the firestorm this topic has set off.
I have literally dozens of customers who have successfully implemented virtualisation.
One customer recently went down from 44 old servers to 4 new ones, using VMware with VMotion on shared storage. Performance is up, the refresh cost less than 50% of a like-for-like physical refresh, and the maintenance costs are a fraction of what the customer was originally looking at.
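For illustration only, the sort of sums involved; every figure below is an invented placeholder, not this customer's actual pricing:

```python
# Hypothetical sums for a 44-to-4 consolidation; all prices are
# invented placeholders - plug in your own quotes.
old_hosts, new_hosts = 44, 4
server_price = 8_000            # assumed cost of one new server
hypervisor_per_host = 3_000     # assumed VMware licence + support per host

physical_refresh = old_hosts * server_price
virtual_refresh = new_hosts * (server_price + hypervisor_per_host)

print(f"1:1 physical refresh: ${physical_refresh:,}")
print(f"Virtualised refresh:  ${virtual_refresh:,} "
      f"({virtual_refresh / physical_refresh:.0%} of the physical cost)")
```

Shared storage and the rest of the stack eat into that raw hardware saving, which is how you land nearer the 50% figure above - still a bargain.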
As for the mainframes and AIX, I'm keeping food on my table and clothing my family by taking these dinosaurs out of customers' data centres and migrating them to x86 platforms. It's old school to think that SAP, Oracle etc cannot run well - and cost-effectively - on anything but big iron. Replacing Sun/HP Unix with AIX is out of the frying pan and into the fire.
Even Oracle runs Oracle and hosts Oracle on x86. Watch your back...
We use
inches, feet and yards
miles, not kilometres
Fahrenheit, not centigrade
pounds, not kilograms. No one knows what a stone is here.
And also:
bushels
cups
fluid ounces
pints (the cheap US pint = 16 fl oz, not the more generous Imperial measure of 20)
gallons (8 pints, but the smaller US pints, not Imperial pints - see the quick sketch after this list)
quarts
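Since the two systems' fluid ounces don't even match, here's a quick sketch of how far apart the measures really are (the conversion factors are the standard definitions):

```python
US_FLOZ_ML = 29.5735    # 1 US fluid ounce in millilitres
IMP_FLOZ_ML = 28.4131   # 1 Imperial fluid ounce in millilitres

us_pint_ml = 16 * US_FLOZ_ML      # US pint = 16 US fl oz
imp_pint_ml = 20 * IMP_FLOZ_ML    # Imperial pint = 20 Imperial fl oz

print(f"US pint:         {us_pint_ml:.0f} ml")
print(f"Imperial pint:   {imp_pint_ml:.0f} ml "
      f"({imp_pint_ml / us_pint_ml:.2f}x the US pint)")
print(f"US gallon:       {8 * us_pint_ml / 1000:.2f} litres")
print(f"Imperial gallon: {8 * imp_pint_ml / 1000:.2f} litres")
```

So an Imperial pint is about 20% bigger, and an Imperial gallon nearly a litre bigger.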
With no exposure to the metric system, Americans are lost in the rest of the world. There are exceptions - scientists (not including the NASA scientists who missed Mars a few years ago), engineers, etc - but the regular person on the street is confused. There are only 3 countries that do not use the metric system as the norm: the USA, Liberia and Burma. I believe this was Reagan's fault in the 1980s.
Paris, because she still measures certain things in inches.
Intel's moving to 6 cores per socket.
In a 4-socket box, that's already 6 x 4 = 24 cores of processing power, before you bring in technologies such as Oracle RAC etc for intelligent clustering and load balancing.
I've come across dozens of customers in the last few years who claim to have a need for 8-ways, and invariably it's just ill-informed server techs who want the prowess of a big box to play with [there are exceptions, but very few]. We all know that CPU performance does not scale linearly beyond 4 sockets, and this is why HP and Dell pulled out of this market in the first place.
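One crude way to see the diminishing returns is Amdahl's law; the 10% serial fraction below is an assumption for illustration, not a measurement of any particular workload:

```python
# Amdahl's law: speedup is capped by the serial fraction of the workload.
def speedup(sockets, serial_fraction=0.10):
    parallel = 1 - serial_fraction
    return 1 / (serial_fraction + parallel / sockets)

for s in (1, 2, 4, 8, 16):
    print(f"{s:2d} sockets -> {speedup(s):.2f}x "
          f"(per-socket efficiency {speedup(s) / s:.0%})")
```

By 8 sockets you're paying for twice the silicon of a 4-socket box for roughly 1.5x the throughput - and that's before NUMA and interconnect overheads bite.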
I have a feeling that HP has not done its homework on this, because if anything the game has changed even more with virtualisation technologies such as VMware and Xen.
Remember that you are not a Cisco shop but a <fill in the name of your company> shop. Your IT Directors are right to consider alternatives and evaluate them from a business perspective. Unless Cisco is doing something special for you, there's really no reason why you shouldn't evaluate other products - and at the same time save your company a boat load of moolah.
Yes, Cisco does things well, but there are many other extremely good switch products out there from independents like Extreme, Force10, Foundry, Nortel and Enterasys, and from manufacturers like Dell PowerConnect and HP ProCurve, all of which conform to the industry standards set out by the IEEE (not Cisco!). 95% of those standards are implemented in Cisco IOS too, so it's not a stretch.
1. Good luck finding data centres that can provide 15,000 watts of power to each rack. Most are lucky to have 3,000 watts per rack right now.
2. How are you going to cool it? Underfloor cooling will be insufficient to keep the top units cool. Hope you have extra Euros/Dollars/Shekels/Dinars in your budget for some very exotic APC/Liebert cooling solutions.
3. 4 sockets are great, but I already need 64GB of memory for a 2-socket box. For a 4-socket solution, I would need 128GB or even 256GB in this configuration for anything useful (databases possibly, VMware especially).
Assuming you solve for all of the above, this is a niche product for HPCC environments only.
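To put rough numbers on the power and memory points; the per-blade wattage and chassis density here are my own assumptions for illustration, not HP's specs:

```python
# Rough rack maths for a dense 4-socket blade deployment.
rack_budget_w = 3_000     # typical per-rack power available (per the post)
blade_power_w = 500       # assumed draw of one loaded 4-socket blade
blades_per_chassis = 16   # hypothetical chassis density

chassis_w = blades_per_chassis * blade_power_w
print(f"One full chassis: {chassis_w:,} W vs a {rack_budget_w:,} W rack "
      f"budget -> only {rack_budget_w // blade_power_w} blades powerable")

# Memory: holding GB-per-socket constant from a 64GB 2-socket box.
gb_per_socket = 64 / 2
print(f"4-socket sizing: {int(4 * gb_per_socket)}GB minimum at the same "
      "ratio - realistically double that for VMware or database duty")
```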