Hmm
Was it the move away from the FusionIO kit that increased performance and lowered their cost?
I love object-based storage. Databases have been using it for years. Don't forget some of the enterprise wrappers, Documentum, FileNet, etc. They all used an object identifier to speed reference and search.
Moving to a software-only model is not that much of a stretch. It is still all about execution, though... perhaps Mark Hurd could help Dell realize their potential once all this buyout talk ends.
The relationship got off the ground - the challenge was the limitation to BCS server products only, the DL980 or HP-UX platforms. The HP folks were great to work with.
As for Violin latency (you can get this from the website):
PCIe direct offers the lowest latency: 100µs at a 70/30 mix, up to 250K IOPS, for the 3205/3210 that HP re-sells.
FC operates at 200µs at the same 70/30 mix, up to 250K IOPS, for the same gear.
There will always be a latency overhead for any shared fabric (IB, iSCSI, FCoE, FC); that is not uncommon.
But you can also look at the TPC-E and TPC-C benchmarks at tpc.org, where our storage was leveraged for HP and Cisco postings for SQL Server and Oracle use cases. Or you can look at the VMware VMmark scores for HP, Cisco, Dell and Fujitsu. Or any other data on the benchmark link on our site.
As for expectations: IOPS is about capacity; what matters for VDI, virtualization and data-intensive applications is latency. Other solid-state or flashy solutions may deliver decent IOPS. That's nice. Now deliver the same latency across the board for mixed read/write IO.
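To put those two latency figures side by side, here is a minimal sketch (Python, my own illustration; nothing here comes from Violin beyond the 100µs/200µs and 250K IOPS quoted above) using Little's Law: the outstanding I/O an application has to keep in flight to reach a given IOPS rate scales directly with latency.

```python
# Little's Law: average outstanding I/Os = IOPS * latency.
# Inputs are only the figures quoted above: a 250K IOPS ceiling at 100us (PCIe) vs 200us (FC).

def outstanding_io(iops: float, latency_us: float) -> float:
    """In-flight I/Os an application must sustain to reach `iops` at `latency_us`."""
    return iops * latency_us / 1_000_000

for path, latency_us in [("PCIe direct (100us)", 100), ("FC fabric (200us)", 200)]:
    print(f"{path}: ~{outstanding_io(250_000, latency_us):.0f} outstanding I/Os for 250K IOPS")

# PCIe direct (100us): ~25 outstanding I/Os for 250K IOPS
# FC fabric (200us): ~50 outstanding I/Os for 250K IOPS
```

Same IOPS ceiling, but the shared fabric needs twice the queue depth to get there, which is exactly why latency, not the headline IOPS number, is the figure to watch.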
That is almost something - considering that clients running heterogeneous environments that need 28GB/s of bandwidth today can buy four 6616s from Violin (32GB/s of bandwidth) and don't have to wait for any software releases from EMC. Heck, they can go ahead and install the other eight 6616s in the single 42U rack and provide 48GB/s of bandwidth to customers who need it today.
And thanks to Cisco, we have already proven our ability to deliver high performance for VMware, and with HP and Cisco our ability to deliver that performance for Oracle and MS SQL Server database applications as well.
No EMC software licenses, no EMC support costs, no Oracle Exadata software license/support costs, no additional Oracle Database license increase for the newer X3 gear....
Just plain ol' simple, fast, reliable, highly available storage... (for density, switch out the 6616s for the 6232s).
Project Sage - Hot Cache was tried and dismissed as not effective. EMC Thunder is still a bit of a drizzle; just not enough energy to create the critical mass to drive a storm.
The old HP embraced EMC Symmetrix and helped them become the storage company they are. After a few courthouse meetings, HP shuns EMC, turns eastward (westward, depending on your location) and embraces Hitachi, including a deep co-engineering/development effort.
Then, as with most things (forgo the flames, etc.), DEC (creator of some cool technology, much like Sony) is acquired by Compaq and later ingested by HP with a little Pepto-Bismol and a bit of pixie dust to boot. The Compaq EMA becomes the EVA....
Then, when no one is looking, HP hires people from EMC who have no future growth to shape the future at EMC, so they move out - to move up, except they cannot seem to determine a direction.... a vector, if you will.
Let's see (a short list of technology owned or OEMed by HP):
PolyServe
Ibrix
Veritas (no, HP didn't buy them, nor did they think to buy them)
LSI
Fusion IO
Violin
TMS
Hitachi
Outerbay
OCZ
..
The engineers at HP came up with a partial flash array technology (yep, partial: if you have 1900 slots but can only fill 512, that is not a full anything...).
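Just to put a number on "partial" (using only the slot counts quoted above, which are this post's figures, not something I have checked against a spec sheet):

```python
# Fraction of the flash slots that can actually be populated, per the figures above.
total_slots, usable_slots = 1900, 512
print(f"{usable_slots}/{total_slots} slots usable = {usable_slots / total_slots:.0%}")  # ~27%
```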
HP needs to set a direction or maybe they should get out of the storage business they never clearly understood.
Though I did like the AutoRaid 12H. It was fun to watch it rebalance itself once you set up AutoRaid and then pulled out a disk and re-inserted it before the rebuild was done....
While the information is currently up for discussion, until numbers are readily available the "shrinkage" is merely hypothetical.
However, would anyone really be surprised if Hurd/Fowler and crew really could not figure out how to sell and grow the SPARC and x86 business?
Fowler couldn't do it at SUN; why would he be able to do it at a revenue/margin-focused company like Oracle?
Hurd never did it, not at NCR and certainly not at HP. HP's server business "growth" stems from the group down in Houston... Digital? No. Tandem? Uh, no... ProLiant. Well, that is close enough. Ever since HP acquired the ProLiant company based in Houston, they have been riding the market momentum of that brand, and they will continue to do so into the future.
Tandem, DEC (OpenVMS/TRU64), HP-UX, MPE - not since Lou and the few before him has anyone been able to make those groups grow.
Hurd does understand margins; ask any HP investor or HP employee. About investing in engineering and making a product better, or coming up with a new product and bringing it to market, Hurd knows little (NeoView).
I tried listening to the call while driving yesterday.
At fiscal year-end last year, Oracle was going to push the bar from 1,000 Database Machines installed to 3,000 by the end of this fiscal year. Then there was a note at the end of the 2nd quarter that stated revised numbers for combined Exadata (Database Machine) and Exalogic appliance installs. Revised to a slightly lower number.
During the call, how many Database Machines and how many combined systems (Database Machine, Exalogic and Exalytics) were said to be installed at the end of Q3? Did anyone hear?
Still not sure how Fowler is STILL employed... of course, not sure how Martin Fink is still employed, for that matter. Then there is that Hurd guy. He helped the HP stock number by killing R&D and cutting costs rather machete-like. How is he going to turn a low-margin business like x86 gear into a raging money maker for Oracle? Forget SPARC (both platforms); Hurd couldn't deliver margin on in-house developed products either....
There are two Exadata products:
X2-2 (1/4, 1/2 and full), made of 2, 4 or 8 x4170 dual-socket servers.
As delivered, a full X2-2 is 96 database cores and 768GB of RAM (12 cores and 96GB of RAM per server).
What is the max bandwidth of a Gen2 PCIe slot?
How many QDR HCAs are in each server?
X2-8 (full only): 160 cores and 4TB of RAM (80 cores and 2TB of RAM per X4800).
Now the Storage -
Each storage cell has ...
12 cores and 24GB of RAM, 12 disk drives (SAS or SATA) and four 96GB SunFire F20 PCIe flash cards.
A full rack then has.....
168 additional cores and 336GB of RAM across 14 storage cells.
A full X2-2: 264 Intel cores (what, ~35 watts per socket?) and ~1.1TB of DRAM.
or
A full X2-8: 328 Intel cores and ~5TB of DRAM.
Each with 5.3TB of NAND flash, albeit from Sun.
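Here is the same rack arithmetic in one place, a back-of-the-envelope sketch using only the per-node figures listed above (the 14-cell count follows from 168 storage cores at 12 per cell; none of this comes from an Oracle data sheet, so treat the totals as approximate):

```python
# Back-of-the-envelope tally of a full Exadata rack from the figures listed above.

# Database tier
x2_2_db_cores = 8 * 12      # 8 x4170 servers, 12 cores each   -> 96 cores
x2_2_db_ram_gb = 8 * 96     # 96GB of RAM per server           -> 768 GB
x2_8_db_cores = 2 * 80      # 2 X4800 servers, 80 cores each   -> 160 cores
x2_8_db_ram_gb = 2 * 2048   # 2TB of RAM per X4800             -> 4096 GB

# Storage tier: 14 cells (168 cores / 12 cores per cell)
cells = 168 // 12
cell_cores = cells * 12     # 168 cores
cell_ram_gb = cells * 24    # 336 GB
flash_gb = cells * 4 * 96   # four 96GB F20 cards per cell     -> 5376 GB

print(f"X2-2 full rack: {x2_2_db_cores + cell_cores} cores, "
      f"~{(x2_2_db_ram_gb + cell_ram_gb) / 1024:.1f}TB DRAM")   # 264 cores, ~1.1TB
print(f"X2-8 full rack: {x2_8_db_cores + cell_cores} cores, "
      f"~{(x2_8_db_ram_gb + cell_ram_gb) / 1024:.1f}TB DRAM")   # 328 cores, ~4.3TB
print(f"Flash per rack: {flash_gb}GB (the 5.3TB quoted above)")
```

Worth noting that with those per-node inputs the X2-8 DRAM total lands closer to 4.3TB than the ~5TB quoted above, so one of those figures is being rounded generously.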
Check out the latest top-ten TPC-C results (yes, it is a generic benchmark; forgo the obvious). A two-socket server is driving 1 million TPM (a lot of 8K IO), all in what, 8-12 rack units, all drawing less power/cooling and performing better than NEC with 10 Virident cards, or Exadata.
When it comes to scale, keep in mind that Oracle Exadata is three clusters (how much complexity do you need?):
RAC for the inter-node database cluster, Oracle Grid Infrastructure (ASM is not required for RAC; there have always been other options), and the Exadata storage grid.
Look at all the specialized hardware above, all to make disk drives go faster or to scale, as you stated.
What if you just started with a better design?
Hm. Seems like you buy a company that excels at losing market share (SUN), then hire a president to be in charge of that group who excels at cutting costs and R&D efforts to improve the margin, if you sell any... and don't forget that same president's secondary skill of losing market share..
Seems like the numbers are right on target. The question is, can either Fowler or Hurd do anything different?
In the words of the farmer's cat (Watership Down): "I think not."
Benchmarking is a game, absolutely. Most know this. And oftentimes the environments are tuned for maximum performance and not specifically scaled to run SAP R/3 or even a quarter-end close for E-Business Suite.
However, it does provide us a measure: the TPC-C code is unaltered, and the OS, the DB and the drive configuration have to be disclosed, etc. Sometimes a bit of a yawner.....
The key takeaway: a two-socket server with the appropriate IO and memory configuration can take Oracle's lowest-cost database (Standard Edition One) and drive 1 million transactions per minute.
As configured: $600K.
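For a rough sense of scale, here is the price/performance those two numbers imply (this is just the quoted $600K and 1 million TPM divided out, not the formally audited TPC-C price/performance metric, which follows its own pricing rules):

```python
# $/tpm implied by the figures above: a $600K configuration driving 1M transactions per minute.
price_usd = 600_000
tpm = 1_000_000
print(f"~${price_usd / tpm:.2f} per transaction-per-minute")  # ~$0.60
```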
In a given data center, is there a customer that could chuck their massive M9000 platform with Hitachi USP storage for a 2U server from Cisco running Linux and a Violin array, and run their entire back-office application?
I believe it is possible. That is what the benchmarks are for, though some of us in the HW business would prefer you bought the whole kit. In reality, it shows that the OS kernel/memory management, IRQ management and occasionally a TOUCH of NUMA tuning can handle massive throughput and workload.
What exactly is Exadata?
I get more and more confused each day. An 8-node database cluster, with spinning media and flash.
I am not discounting the Exadata Software, just looking at what the elements are.
You go to Oracle's website and buy all of these items; you build this hardware mirror of the DB Machine. Your clustering skills are a bit rusty, so you hit the RAC SIG to get the latest tweaks.
Is this Exadata?
You say no, you have to have the software; OK, you get the software and install it.
What do you have?
As the owner of the report, the website, the application, the business unit, et al., I just see a database, not the bandwidth, not the latency, not the cluster cost or parallel execution, smart scan offloading, etc.
I just notice that what took an hour now takes less time, maybe 2x or 10x faster, etc.
Do I really care what delivered it faster?
Take a look at the HP gear: it is not reinventing the storage tier, it is simply offering something that goes faster, that delivers that report, that web session, that response faster.
Since Sun went the way of American Motors Corporation, the leadership team managed to drive most of the leadership from the company prior to the Oracle Borg process. Once that was under way, Oracle managed to push what was left of the leadership out the door.
Search LinkedIn and see how many former Sun employees work for some company other than Oracle.
What is interesting is that Oracle does need a storage strategy, similar to HP. If a NetApp acquisition is under way, who at Oracle, or at NetApp for that matter, can drive a storage strategy?
That is not Mark Hurd; he knows how to wring cost out of every process to drive better profit, but he does not integrate or drive strategy.
And if Oracle buys NetApp, would anyone other than current NetApp clients, who will be directed to MetaLink for support.... know?
Almost "Who Cares"
Oracle knows how to acquire a revenue stream and maintain it; look at how long businesses have waited for Fusion Apps to arrive.....
top 10 things for Hurd to work on?
1) Expense Report Management - it's always the pesky details that get in the way.
2) Program managing the next sailboat race in San Francisco
3) Working his Rolodex, meeting with clients and telling them that what he said at HP was wrong and they really should buy Exadata, even if it only sort of works.
4) Develop an in-memory analytics engine that will help drive better results from John Fowler's HW group
5) Stay out of Safra Catz's way
6) Stay out of Larry Ellison's way
7) Look at acquiring Teradata
8) Start a marketing program with outside consultants to greet executives at Oracle events
9) Read up on Bill Clinton biographies to better handle those difficult questions
10) Maybe hold on to that HP stock a tad longer.....
it's a working list - feel free to edit or add to it
Most TPC runs last 120 minutes. Since the audited run starts from a clean platform and you cannot pre-stage data in the Smart Flash Cache prior to the run, you would see how fast 8 dual-socket (6-core) DB servers with 96GB of RAM each would run Oracle while the back end is all JBOD (grid disks).
What is more interesting: since they added the Flash Cache to support mixed-mode workloads, they should still be able to go and run a TPC-H test. Though there seems to be no interest in that.
Now, Oracle should have kept the same mantra as that of the former Sun. Instead of proving how poorly CMT and SPARC processors run compared to the other bits of silicon out there, they should just stop running benchmarks.
The 30-million number is based on 27 SPARC T3 servers delivering 1.1 million tpmC per server. An HP DL580 running SQL Server peaked at 1.8 million tpmC.
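A quick per-server sanity check on those two results, using only the rounded figures quoted above:

```python
# Aggregate vs. single-server throughput, from the figures quoted above.
sparc_servers = 27
sparc_per_server_tpmc = 1_100_000
dl580_tpmc = 1_800_000

sparc_total = sparc_servers * sparc_per_server_tpmc            # ~29.7M in aggregate
print(f"27 x SPARC T3: ~{sparc_total / 1e6:.1f}M tpmC total, "
      f"~{sparc_per_server_tpmc / 1e6:.1f}M per server")
print(f"1 x HP DL580:  ~{dl580_tpmc / 1e6:.1f}M tpmC on a single server")
```

Roughly 30M in aggregate, sure, but each SPARC T3 box is posting well under what the single DL580 managed on its own.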
Oracle's license agreement prevents anyone (users, corporations, independent auditing groups) from publishing any performance-related data, benchmarks, etc. without the written permission of Oracle Corporation.
What does Oracle gain from not signing off on the TPC audit report? Here is a system that shows the Oracle database running very fast and at a reasonable price.
I can only think of two reasons: a petty spat with HP over Mark Hurd, and/or that you cannot expect to close a 10-year, $23 billion pipeline for the Exadata Database Machine if there are solutions that run Oracle FASTER and CHEAPER than Exadata.
My guess is that it is both.......