Wildfire did work
The Wildfire interconnect did make it out of Sun, and it did work. I clustered 8 Sun Fire 15Ks and ran software across 800 cores. We were not the only site using it.
The word coming out of the Sun portion of software giant and seemingly enthusiastic hardware supplier Oracle is that the axe has fallen on the company's HPC group. While details are scarce — just the way Oracle likes it — El Reg hears from sources familiar with the matter that layoffs took place last week, with most of the HPC …
Yes, I can confirm Wildfire / WildCat / Sun Fire Link making it into the wild and actually not being that bad. To be fair to Sun, they weren't the only vendor struggling to make a profit from a high-speed interconnect solution. Remember hp's Hyperfabric? Sure, when hp announced it Hyperfabric sounded fantastic - it was going to take over the World! - but by the time they managed to get it sorted we had cheaper and almost-as-good gigabit ethernet and fibre channel, and they were multi-vendor standards. Instead of using Wildfire or Hyperfabric or other proprietary solutions to build big, multi-node RAC instances with lots of cheaper, little servers, we ended up just using bigger SMP frames with gigabit and fibre SANs. HP stubbornly went on to repeat the mistake and developed Hyperfabric to a version 2, which can be spotted hiding in the odd supercomputing list entry and very few other places.
The problem for all these proprietary interconnects is that the individual vendors can't put as much development into them to keep them ahead of multi-vendor standards, which have been racing along at a gallop. I can remember sorting out negotiation problems between the then-new 100Base-T products and thinking that was fast, yet it seemed like only minutes later we were buying gigabit products. The vendors realised it was easier and safer to back developing industry standards, which meant development of those standards happened faster and more rigorously (the 100Base-T standards were so loose you could get two "compliant" bits of networking kit that would never talk to each other at a rate higher than 10Base-T, yet nowadays I can virtually guarantee two vendors' gigabit kit will work out-of-the-box).

I think the big problem was Sun had gone overboard on the promises it made around Wildfire (like it did with UltraSPARC V, Rock, Niagara.....) and didn't seem able to stop shouting about it even when it was obviously not going to take over the World. The reason most people haven't heard of Hyperfabric is that hp used to be just downright awful at marketing!
I think one thing you can guarantee is that during the webcast there will be a lot of talking but not much will actually be said.
I don't think Oracle have much of a clue what to do with the hardware business they bought from Sun and in the meantime IBM are eating their breakfast, lunch and dinner.
At this moment in time, it's pretty unclear exactly why Oracle bought Sun at all. They supposedly wanted Java, but not badly enough to keep the guy who invented it. They supposedly wanted Solaris, but not badly enough to keep its top developers, or encourage the OpenSolaris community to help them develop it. They supposedly wanted a server business, but not badly enough to keep its HPC division.
What exactly *do* they want?
I don't see any future SPARC64 chips on the market after the lame 65nm 3GHz part. Power7 is 8X more powerful and Nehalem is 4X; nothing Fujitsu can do will close the technology gap.
Even if Oracle changes the pricing so Fujitsu SPARC64 is on par with x86, it will do nothing for hardware sales and will only hurt Oracle's maintenance stream. Power7 is 4X more powerful per core than SPARC64 and is only 33% more expensive. A .5 multiplier for SPARC64 will not change the game. They would only do it to try and stave off the migrations as they trick customers into ULAs which cement the maintenance stream.
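A back-of-envelope sketch of this licensing arithmetic, using only the figures claimed above (the 4X per-core performance ratio and the hypothetical 0.5 SPARC64 multiplier are the commenter's numbers; the 1.0 Power core factor is an illustrative assumption, not Oracle's official table):

```python
import math

def licenses_needed(workload_units, perf_per_core, core_factor):
    """Processor licenses = cores required for the workload * core factor,
    rounded up at each step (a simplification of Oracle's per-core model)."""
    cores = math.ceil(workload_units / perf_per_core)
    return math.ceil(cores * core_factor)

workload = 32  # arbitrary units of work

# Power7: 4X per-core performance, assumed core factor of 1.0
power7 = licenses_needed(workload, perf_per_core=4.0, core_factor=1.0)
# SPARC64: baseline per-core performance, hypothetical 0.5 multiplier
sparc64 = licenses_needed(workload, perf_per_core=1.0, core_factor=0.5)

print(f"Power7 licenses:  {power7}")   # 8 cores  * 1.0 -> 8
print(f"SPARC64 licenses: {sparc64}")  # 32 cores * 0.5 -> 16
```

Even with the halved multiplier, the slower chip ends up needing twice the licenses for the same workload, which is the point being made: a core-factor discount alone does not offset a 4X per-core performance deficit.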
>> What exactly *do* they [Oracle] want?
> I'd be willing to wager that a not-insubstantial portion of what Oracle wanted from Sun is leverage over MySQL.
Oracle already owned the dominant storage engine for MySQL (InnoDB) - not to mention MySQL is open source. Buying Sun was too large a price to pay for that.
Oracle was a large database vendor without a completely vertical solution, competing in the data warehousing space without control over the entire stack... not to mention a huge portion of Oracle's revenue was coming from deployments on Sun's hardware & OS stack.
With Sun at risk of being purchased by a competitor, it was an easy buy for Oracle.
Power, Itanium and SPARC are the last 3 RISC/UNIX survivors. With the Oracle/Sun buyout there may eventually be only 2 long-term UNIX system houses, as Oracle is still having problems convincing long-time Sun customers that hardware is more than a hobby. HP and IBM are going after those customers with real gusto.
Oracle is successful because of the 'shark' mentality drilled into their sales folks and the fact that if it doesn't make the company money, the project is killed off.
IBM is a 'services' company, Oracle is a 'sales' company. Oracle became the 800lb Gorilla not because of the product but because of the sales team.
Steve Jobs and Larry Ellison are best buddies. There was once a port of Oracle to Apple's Mac OS X machines. The port probably still exists in some fashion, but it was killed because it didn't make money... The point is to drive home the attitude that they will try things, but will just as quickly kill them off if they don't make a profit.
This "per core" argument is getting tiresome, to say the least. Scalability and overall system performance are what matter, and I do not see high-end P-series systems coming to market for some time due to known scalability issues. On the other hand, while not relevant to this HPC-related article, businesses do need the RAS features of the M-series, where P-series has little to show for itself. Even HP's Superdomes fare better than P-series in the RAS arena.