* Posts by David Halko

468 publicly visible posts • joined 4 Aug 2008


China picks MIPS for super-duper super

David Halko
Black Helicopters

MIPS re-emergence; Trade Deficit

MIPS has been pretty much considered dead - with the major advocates migrating to other processors - so this offers a good opportunity to employ more Chinese citizens, with their master's & doctorate degrees, and really move an old technology (MIPS) forward again!

I do find this interesting, not because China may necessarily get into the system design & building business to compete with Intel/AMD/SPARC/POWER - but because they can choose to spend the money on systems which they can design from the ground up, by their own citizens, and keep more of the money that is flowing into China and give less of it back to the West.

Keep in mind, China is a big market. If they standardize on MIPS with some level of recompilation to get Intel compatibility, they can choose to write their own ticket. They don't need to necessarily sell their ticket to the outside world.

Microsoft finally cuts Bing data retention time to six months

David Halko
IT Angle

With billions of people, this is all about money...

With billions of people, the reduction of historical data means a serious reduction in cost.

Cut your data retention period and smaller long-term storage is required, smaller database engines are required, less power is required for the long-term storage, and movement to SSD storage (cutting down on electricity and HVAC costs) becomes more feasible.

Less data retention means less accurate ads for consumers & marketing - meaning people become more annoyed with useless ads and marketing gets lower takes on their advertising click-throughs.

This discussion of anonymity is more about finding a positive spin for the user & advertising communities.

Sun, Fujitsu juice entry Sparc box

David Halko
FAIL

RE: all a bunch of sales drones

> Nice to see the usual suspect sales people from the large Unix vendors all flaming each other anon.

Yes, nice to see the anonymous suspect sales drone asdf!

Just got another SPARC about 2 weeks ago...

David Halko
IT Angle

A little reason.

Anonymous Coward posts, "We have put SPARC64 on the divest list..."

Neil Davis posts, "Sign a NDA with Sun and they will gladly show you the SPARC roadmap. Roadmaps from Sun are never public."

Anonymous Coward posts, "I don't believe/trust/give credibility to roadmaps that are not public."

Anonymous Coward posts, "find us a Fujitsu roadmap that has a SPARC64 VIII that does not say 'uncommitted'"

Several points of reason here:

1) I don't remember anyone telling Intel that they were going to drop HP when they did not commit to building systems on the Intel Pentium Pro socket/platform. Such a statement is clearly as silly as Sun not committing to support a socket profile for 5 years.

2) You can get your own NDA if you are really a customer and see the roadmap. Clearly, you are not a customer divesting hardware, or you are such a low-level analyst that you don't have access to the information.

3) Providing an NDA roadmap of SPARC64 that you do not consider credible is a waste of your time and the provider's time.

Seriously now... An anonymous poster, making silly statements, who appears to be a low-level analyst (or someone who can't tell the truth), who asks for information which he has clearly communicated he will not consider credible... is not going to make a significant decision regarding platform architecture.

David Halko
IT Angle

Anonymous does not "believe/trust/give credibility" to his/her own statements

Anonymous Coward posts, "The current roadmap shows the SPARC64 will cease to exist after the current generation."

Neil Davis posts, "Sign a NDA with Sun and they will gladly show you the SPARC roadmap. Roadmaps from Sun are never public."

Anonymous Coward posts, "I don't believe/trust/give credibility to roadmaps that are not public."

Smells like FUD. Let's think about this exchange...

If Anonymous Coward does not "believe/trust/give credibility to roadmaps that are not public" and "roadmaps from Sun are never public" - then Anonymous Coward does not "believe/trust/give credibility" to any of his/her own statements regarding "the current roadmap shows the SPARC64..."

If Anonymous Coward does not "believe/trust/give credibility" to his/her own statements, then no one else should, either.

Best to get an NDA to listen to the vendor, instead of believing an Anonymous Coward who does not believe himself/herself.

Sony BDP-S760 Blu-ray disc player

David Halko
Go

@Brian 6: BLU-RAY PLAYER

Brian 6 posts, "But it IS A PREMIUM PLAYER..... Did u even read the article ?"

uhhh... yea... that's why I commented. I even did a word search in the article looking for the key features I was interested in...

Brian 6 posts, "Sony describes the S760 as its new ‘top of the line’ model"

That's why one should expect more. When people have bought and continue to buy premium content on commercial media, one should expect that a top of the line player will play it back.

Brian 6 posts, "Top of the line...How PREMIUM can u get ???"

The Sony player has composite outputs for video - I have not seen any composite signals carrying crystal clear 1080p lately. By your line of reasoning, the player should not have composite output.

Brian 6 posts, "And it WILL play CD's and DVD, Even VIDEO CD's."

Thanks for clearing up what the reviewer did not cover, regarding Video CD's!

VideoCD was pretty much the standard in China and South East Asia for a decade. There could be close to as many VideoCD players as there are DVD players in active global use today that need to be replaced with a player like Blu-ray.

Good to hear Sony did not abandon half the global premium media market!

David Halko
FAIL

VideoCD, DiVX, PhotoCD, etc.

Author writes, "Yet it’s disappointing that the S760 doesn’t support playback of digital music and video file formats other than the obvious audio CD, DVD and Blu-ray."

Scott Mckenzie posts, "Criticising a top flight player for not being able to playback shoddy home recordings or divx etc misses the point so much"

I want to know whether the VideoCD's that I have created over the past 10 years will run on it.

How about the Kodak Photo CD's, from the weddings I attended and wish to remember with friends on anniversaries?

When people videotaped and burned to CD the weddings, graduations, and various other events in the lives of their friends, families, and church - they pull them out to show others, the same way they pull out photo albums. Nowadays, we burn MPEG2 to DVD, but before DVD, that was not an option.

I purchased legal video from Asia as well as in North America on the VideoCD format. Philips had released VideoCD's that I have in my collection. Sometimes I want to watch these videos with friends. In particular, I have entire video seasons recorded in Japan of some of my favorite shows! Some of the older media is re-released on DVD, but will occasionally have poorer translation or poorer subtitles, so many prefer the former VideoCD releases.

If a top-flight player can not play back former standard (VideoCD) media, then it is no better than a low-end player.

If a "top-flight" player can not play back not-so-standard (DiVX) videos, then it is certainly not "high-end" (doesn't a high-end car offer more options than a low-end car???)

If I can't even transfer the old media to shared network drives to play them on this player, then it is useless.

Sony misses the point - a Blu-Ray player was supposed to be backward compatible with former media. If they can't charge more on the "top-flight" players for more features, then they are not worth spending a dime on.

It is not about the player, it is about the media - the media costs people WAY MORE than a player does. I would rather spend an extra £100 on a player that will play back my £10,000 in media!

Ion add-on to equip iPhone with full Qwerty keyboard

David Halko

OK - Where's the Bluetooth Keyboard

Hey Apple! Where is the Bluetooth Keyboard Profile?

This is a really long way for a partner to have to walk in order to get around a missing Bluetooth profile!

Blu-ray capacity to increase by a third

David Halko
IT Angle

Blu-Ray 3D could be a hit, if done right!

Avatar on the IMAX was THE BEST 3D I HAVE EVER SEEN!

Most other 3D seemed somewhat cheesy, but if other 3D movies could be made to the quality of Avatar, they could have a hit!

The glasses required in Avatar were not cheesy red-blue glasses and the movie glasses will fit over other glasses (not requiring contact lenses.)

If a special TV/monitor AND special optical player will be required for 3D, it could be a big flop - unless glasses are not required.

Lack of standards adoption by the Blu-Ray hardware vendors will slow the adoption of Blu-Ray media. If the industry wants to sell more Blu-Ray media and players - the Blu-Ray 3D players had better make sure that former standard media (CD-Audio, Photo-CD, Video-CD, DVD, Blu-Ray etc.) will play back in those newer players!

People will not keep around armies of multiple players (i.e. Blu-Ray 3D, Blu-Ray, DVD, VideoCD, PhotoCD, CDText, etc.) and multiple television sets (i.e. Blu-Ray 3D, HDTV, NTSC/PAL, etc.) in the same room - the intention is to replace the older with the newer and maintain the former investments.

Newspaper e-reader launched

David Halko
Go

Newspaper is almost dead - could be a way to resurrect them!

The newspaper industry is almost dead, it seems. They are consolidating into larger entities that have declining readership.

What they have not figured out is that there is a reason why papers have different readership... it is the content. Consolidating the different papers into a unified paper, to save costs, winds up alienating the readership.

If newspapers could find a way to distribute the content without the printing & distribution costs, this could keep them from having to consolidate and lose readership... especially if they can see some recurring revenue from it. These readers may be the trick.

Of course, someone astutely suggested that a laptop is the answer. I think that is the answer for now, since laptops are much more portable than an old PC. It is still inconvenient to pick one up off of a chair, plug it in, boot it up, supply a password, and deal with virus software, software upgrades, crashes, etc. A laptop is just a nightmare in comparison to an embedded system.

A flat machine with the same form factor (and weight) as a book is really what is needed - something you can easily turn on and off (a proximity sensor, like an iPhone while on a call?) An inductive pad to charge through would be perfect. No connectors.

Let's see if they do it right.

iSlate? I spy more control from Cupertino

David Halko
Unhappy

Missing Bluetooth Profiles on iPhone, iPod Touch, and hopefully not on iSlate

Author writes, "Apple vets every application, through its obscure and sometimes inconsistent approval process, and who wouldn't want a desktop computer free of malware?"

This process is very tedious and does not need to be as stringent, but it does offer the ability to yank back malware upon discovery - which is a very good thing. People run the service patches from Microsoft, which have the malware scanner & remover inside of them. If Apple added a little "yes/no" dialog saying "Apple detected possibly malicious software titled... please press 'Yes' to remove it" - I think this would make everyone happy.

Author writes, "Stick a Bluetooth keyboard on the iSlate and it's a laptop replacement, extending the manufacture-controlled model into desktop computing."

A Bluetooth keyboard, Bluetooth mouse, and Bluetooth headphones were the profiles missing from the iPhone (and presumably from the iPod Touch.) This delayed people from purchasing the iPhone (until they just broke down) and is delaying me from buying an iPod Touch. Apple should release them for the iSlate, iPhone, and iPod Touch.

I want to use one of those nice aluminum Apple bluetooth keyboards on some of these portable devices - but Apple has to get their act together.

Apple angling to transform TV?

David Halko
Go

Internet TV - will Apple get the consumer desire correct?

I purchased a PC & equipment which I tried to run scanners, printers, and PostScript-to-PDF rip engines on - just to find out that Microsoft Windows NT only supported one rasterizing engine at a time, meaning I had to de-install my normal printer in order to run my PostScript rip... not to mention that my scanner would not work at the same time as my laser printer, which meant I needed to unplug one before using the other... I also had zip drives at the time, which also required juggling - even though I had enough ports - due to silly driver issues. Well, I bought a Mac and all my previously purchased components (with the exception of the Windows NT PC) worked out-of-the-box. Never looked back at a PC.

I had 2 different phones, which had internet connectivity, and subscribed to data packages. They required some special markup language that made regular web sites look terrible. I started the process of building my own web site in order to get some business applications working. I canned it all, because the portable telephone vendors and network providers could not make it easy. Bought an Apple iPhone (first generation), it worked GREAT, out of the box. No special markup languages.

Cable companies and satellite providers both provide hundreds or thousands of channels I don't want, charging an outrageous fee for what amounts to about 3 channels that I am interested in. I shut all that junk off and moved to purchasing DVD sets of the seasons of the shows that I want - because it is cheaper. Telephone companies started offering television packages, but they use the same scam to give you content you do not want to watch. If Apple does just the channels the consumer wants for 1/2 the price (not hard to do)... or adds decent content control (scan for PG-13 or lower) at a slightly lower price - they will give the consumer what they want and do well, just as they did in other markets.

The industry has not been able to get it right, yet, with many individuals feeling like they are held captive to whatever service they are participating in. (If this was not the case, many advertising campaigns like "ditch the disk" or such would not be so effective.)

If Apple does not get the streaming video right, well - someone else will.

iPhone gets a decent keyboard

David Halko
Happy

Re: Why?

Richard posts, "Whatever next? HD camcorder and external display?"

The iPhone already has an external display adapter. It plugs into the dock adapter.

The Bluetooth keyboard with a dock display adapter makes it a portable media center!

A little silver Apple Bluetooth keyboard on an iPhone would be perfect with a Bluetooth mouse.

Data Robotics CEO change no big deal

David Halko
Happy

Maybe they'll leverage ZFS...

Maybe Data Robotics will leverage free ZFS - so their system can get the features that none of their market competitors are offering: compression, dedup, Ethernet bundled alongside the USB & FireWire, native iSCSI support, kernel-based NFS & SMB file sharing, correction of silent data corruption, flash read or write acceleration...

Free open source software in conjunction with management "change" could offer Data Robotics a lot of free features which would make them best in class.

IBM's XIV roadmap includes multiple frames and InfiniBand

David Halko
Thumb Up

IBM XIV looks like a mini-version of Sun's Open Storage...

The IBM XIV looks pretty promising!

This looks a lot like Sun's release of Open Storage years ago (leveraging large form factor SATA drives with the ability to tack on additional frames.)

http://www.sun.com/storage/openstorage/

With Sun's Open Storage offering: no RAID write hole, full GUI (showing real-time graphs of drive, link, frame performance), Flash Acceleration (on reads and writes), multiple failed drive support in a striped LUN, block level de-duplication... how close does IBM XIV come to the maturity of Sun Open Storage?

'Steve Jobs' dupes blogosphere with AT&T protest hoax

David Halko
Unhappy

@Jeremy Chappel: Re: AN HOUR!

Jeremy posts, "run a data intensive app for an hour? Will the battery in an iPhone do an hour like that?!"

Yep! One hour is easy with the iPhone, multiple hours are OK, too.

Applications associated with TV.COM allow users to watch television over the network... and they are not the only streaming video applications on the iPhone (i.e. YouTube, uStream, etc.)

Of course, if everyone turns on TV.COM, I highly suspect that TV.COM will crash before AT&T's network will!

This is really a silly idea.

Microsoft urges Flash makers to pay fat dollar for exFAT format

David Halko
Unhappy

Re: ext2/3/4 solution

It seems there are some pretty significant limitations with the ext2/3/4 file systems under other OS's.

For example:

- under Windows, only ext2 is available (via a third-party driver); here are the current limitations:

http://www.fs-driver.org/faq.html

- other OSes, like Solaris, only have read-only visibility into ext2:

http://blogs.sun.com/pradhap/entry/mount_ntfs_ext2_ext3_in

- there is not a lot of visibility for ext3 or ext4

Considering how old ext2 is compared to the other ext file system revisions, and the move to modern file systems which leverage flash for what it does best and avoid the limitations of flash, I don't see ext2 getting much development under operating systems other than Linux.

David Halko
Thumb Up

Re: UDF

Dan 55 posts, "Windows, Linux, and Mac all support UDF out of the box, right back to Windows 98. Not sure why device manufacturers are reluctant to use it, unless they're scared of being the first one."

Out of everything that I have read in this blog, this makes the most sense.

Is there a reason not to use UDF with flash or other portable media, that is not optical???

Apple said to snub Intel's next-gen mobile chip

David Halko
Dead Vulture

Apple: Motorola, IBM, Intel, and Self Suppliers...

Rik Myslewski writes, "Apple has asked Intel to build for them an Arrandale equivalent without the offending integrated GPU... Apple dumped integrated Intel graphics in late 2008 and moved to NVIDIA's GeForce 9400M for their integrated systems... Apple's current reliance on NVIDIA integrated graphics adds an odd twist to today's rumor, seeing as how NVIDIA and Intel are currently involved in a legal dispute over NVIDIA's right to produce platform chipsets for upcoming Intel processors..."

It sounds like Apple is defending their supplier (NVIDIA).

Apple seemed to defend their long-time CPU supplier (Motorola) through the AIM relationship when PowerPC for Apple was first released... instead of just dumping 68K processors, they seemed to work in a process to give Motorola the ability to provide PowerPC processors to Apple rather than dump them for a 100% IBM supply.

At the same time, there has been very little discussion (lately) about that CPU designer (P.A. Semi) that Apple purchased some time back... that seems vaguely related to this article and the position that Apple has taken with Intel.

http://www.computerworld.com/s/article/9079918/Apple_to_buy_processor_designer_P.A._Semi_says_report

The history of suppliers, supplier relationships, and recent Apple acquisition seems to be a much more interesting story here than a decline to use one of dozens of chips that Intel is slated to produce.

Intel puts cloud on single megachip

David Halko
IT Angle

Sun SPARC 64bit T2 with 64 Threads Today or Intel IA-32 with 48 Cores tomorrow...

'The SCC's 48 IA-32 cores were described by Rattner as "Pentium-class cores that are simple, in-order designs and not sophisticated out-of-order processors you see in the production-processor families - more on the order of an Atom-like core design as opposed to a Nehalem-class design."'

The in-order design means the IA-32 cores will sit idle more of the time than they are working. Sun attacked that issue in Niagara by having 4 and then 8 threads per core - giving the CoolThreads T2 processor 64 threads of fully compliant SPARC instructions.

I am curious to see what the throughput performance difference is between a 64 thread T2 and 48 core Intel SCC.

Will people be interested in lots of 32 bit cores when the world has been moving to 64 bit for a decade?

I don't see the benefit...

Sun VirtualBox gets live migration

David Halko
Dead Vulture

Sun VirtualBox and Sun xVM Hypervisor

Timothy Prickett Morgan writes, "Considering that VirtualBox came from a German company (Innotek) that Sun bought in February 2008 because its own Xen-based virtualization efforts were woefully behind..."

ummm... no... that is untrue... personal pet speculations should not be conveyed as fact by a reasonable writer.

There has never been any published statement from Sun confirming this writer's opinion. To make a truthful statement, one would need to have a reference from Sun; none has been presented, and I have personally never seen such a statement from Sun.

Let the reader try to understand this odd line of thinking - VirtualBox was purchased last year, Sun VirtualBox just gets Live Migration, and Sun xVM Hypervisor had live migration for some time... Sun xVM Hypervisor is not "woefully behind".

Sun xVM Hypervisor is bundled with OpenSolaris, paid production support is available, Sun xVM Hypervisor has been able to do live migration for some time, and Sun xVM Hypervisor hosts Solaris 10 x64 operating systems... Live Migration is even available with the Xen volume sitting on top of an NFS file share on top of ZFS - that is certainly not "woefully behind"!

http://hub.opensolaris.org/bin/view/Community+Group+xen/virtinstall
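
As a rough sketch of that kind of setup - the host names (storagehost, dom0-b), pool name (tank), and guest name (guest01) below are made up for illustration, not taken from the article - live migration over an NFS-on-ZFS backing store boils down to:

# on the storage host: a ZFS dataset, shared over NFS, to hold the Xen guest disk images
zfs create -o sharenfs=rw tank/xenvols

# on both dom0 hosts (after mkdir /xenvols): mount the same share so the guest's backing file is visible from either box
mount -F nfs storagehost:/tank/xenvols /xenvols

# live-migrate a running guest from this dom0 to the other one
# (assumes guest relocation is enabled in xend on the destination dom0)
xm migrate --live guest01 dom0-b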

Sun announced development is continuing on xVM Hypervisor under x64 servers with more features due... OpenSolaris will continue to be the place to get it.

http://netmgt.blogspot.com/2009/11/opensolaris-next.html

Sun VirtualBox runs under MacOSX, Linux, and Windows - the OpenSolaris or Solaris Operating System support teams are the wrong place to put this product development. Is Sun VirtualBox "woefully behind" because VirtualBox was not bundled in Solaris 10? Doubtful...

Contrast this to Hardware Domains, Logical Domains, and xVM Hypervisor all being consistently supported at the OS level. Since these features are not offered under MacOSX, Linux, or Windows - this is the right place to put this product development. Is Sun xVM Hypervisor "woefully behind" because it was not released as a separate product? Doubtful...

If a company is thinking about deploying Linux with an x86 Xen hypervisor to run Solaris 10, there is far less risk in considering Xen under OpenSolaris with Sun xVM Hypervisor - paid production support is available directly from Sun for OpenSolaris.

Clearly, Sun xVM Hypervisor is out, is being developed, and offers very nice features that Type 2 Hypervisors like Sun VirtualBox are starting to include. The xVM Hypervisor for x64 was never billed as a Solaris 10 feature. Contrast this to ZFS, which did not make the first cut of Solaris 10. Clearly, Sun xVM Hypervisor is not "woefully behind" if it was never scheduled to be in Solaris 10.

The Sun xVM Hypervisor for x64 was merged into the OpenSolaris source code base. One would logically conclude the xVM Hypervisor is being groomed as a feature in the next major release of Solaris (i.e. perhaps Solaris 11?) for those companies who don't want to mess with pure Open Source operating systems like OpenSolaris.

Let's watch virtualization progress!

Apple sues over knock-off power bricks

David Halko
Flame

Hypocrisy of Earning...

Author asks, "If you're wondering why Apple would go through the time and expense of dragging a small company into court for a design infringement..."

Perhaps it is because Apple had spent a great deal of time paying designers to come up with a unique design.

People always want to be paid for their work, then they get upset when someone else gets paid for their work - the hypocrisy of humanity...

Drobo restrings boxes to double-up product range

David Halko
Welcome

Drobo + & -

I honestly love the Drobo... especially being able to plug in SATA drives directly without extra chassis housings! It is cool how you can use drives of any size.

I would like to see the eSATA and FireWire compared - FireWire beat USB hands-down in the benchmarks. Trying to use eSATA drives has been an absolute nightmare for me, on some of my equipment. Everyone I have spoken to tells me their external hard drives which use eSATA are fast, but have issues requiring power-cycling of the storage units or computers every so often - I wonder whether eSATA is really prime-time yet for Linux & Windows.

Adding pairs of drives to my Solaris ZFS RAID1 file system server seems to have worked better for me: no 16 Gig Drobo limits, I can actually read from the OS how much capacity is really available with ZFS (in contrast to the Drobo), it is cheaper, and other neat features are available:

zfs read flash acceleration, zfs write flash acceleration, and deduplication with the latest OpenSolaris build.
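
For the curious, a minimal sketch of that workflow, assuming a hypothetical pool named "tank" and example device names:

# grow the pool by another mirrored (RAID1-style) pair of drives
zpool add tank mirror c2t0d0 c2t1d0

# flash read acceleration (cache device) and flash write acceleration (log device)
zpool add tank cache c3t0d0
zpool add tank log c3t1d0

# deduplication, in recent OpenSolaris builds
zfs set dedup=on tank

# and, unlike the Drobo, the OS reports the real capacity
zpool list tank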

I hope Drobo upgrades its internal infrastructure to use the latest ZFS - then I will consider the product again - I would really like to use the Drobo if it had enterprise & managed-services solid ZFS on board!

Cisco pumps out iPhone security app

David Halko
Happy

Most iPod Touches have Microphones

I believe only the first generation of the iPod Touches do not have microphones - most of the newer units include the microphone or at least the ability to use microphones that you buy!

QLogic gets vendor quartet for quad data rate gear

David Halko
Dead Vulture

Sun with quad-rate IB is seeing competition...

Chris Mellor writes, "possibly Sun's IB product development is in limbo due to..."

What does "in limbo" refer to? Sun had been moving ports, cards, and switches to quad-rate InfiniBand since April 2009.

http://search.sun.com/main/index.jsp?qt=quad+rate+IB

The common reader just can't comprehend how competitors ("Dell, HP, IBM and SGI") reselling quad-rate equipment from a third party vendor ("QLogic"), to start the process of catching up to Sun, means that "Sun's IB product development is in limbo".

That being said - it is nice to see other vendors join the party! The more, the merrier on Quad Rate IB!

IBM gives big discounts on Power engines

David Halko
Happy

Hint of a new POWER chip coming early next year?

Dropping the cost of activating cores may be the sign of a new POWER chip coming soon... get people to spend money on old technology before getting them to buy the new stuff!

I can't wait to see the new POWER systems!

Author writes, "The PowerVM hypervisor (program product 5765-PVE) sells for $2,099 per core on this box."

Dang... a customer could buy an old T1000 OpenSPARC platform with a hypervisor for less money than it costs to buy the hypervisor for an existing POWER platform.

US carrier in shock 'wireless pipes make money' claim

David Halko
Thumb Up

A good life?

T-Mobile Wireless Pipes + Google Android + Sun Java + Sun Java Store = Good Life?

http://www.t-mobile.com/ + http://www.android.com/ + http://java.com/ + http://store.java.com/

Time will tell!

ScaleMP cuts InfiniBand out of virtual SMP clusters

David Halko
IT Angle

I think this is really neat technology!

The question in my mind is... how tolerant is the hypervisor to node failure?

Blade servers are hot!

David Halko
Dead Vulture

Redundant Power Supplies on the Sun blades...

Hermes Conran posts, "Why are these things still running their own power converters?"

Captain TickTock posts, "Redundancy and resilience, my dear boy. A single power converter would be a single point of failure, much like putting lots of servers in one box. Oh, wait a minute..."

The Sun platforms offer redundant power supplies with their racks... but El Reg decided to ignore Sun blades, again, in their articles.

http://www.sun.com/servers/blades/6000chassis/gallery/index.xml?t=1&p=1&s=1

PayPal opens 'embed everywhere' APIs to world+dog

David Halko
Happy

Sun has adopted PayPal X for the Java Store

The Java Store - paying for Java applets on Java phones via PayPal - is one of the new customers for the platform!

http://www.reuters.com/article/pressRelease/idUS154305+03-Nov-2009+BW20091103

"Java Store Beta Payment Mechanism Powered by PayPal"

"Sun now supports for-fee applications submitted by developers for distribution

in the Java Store Beta. Developers can price their offering anywhere from $1.99

to $200.00 (USD)... Developers will receive 70 percent of any for-fee application sold

through the Java Store Beta. Utilizing the new Adaptive Payment API from PayPal,

consumers can authorize the Java Store Beta to bill against their PayPal account

so they can simply click the "Buy" button and never have to leave the store. In

addition, when a customer makes a payment in the Java Store Beta, the

application owner also gets paid at the time of the purchase."

ZFS gets inline dedupe

David Halko
Thumb Up

@AC: ZFS - Sweet mother of Buddha

Anonymous Coward Posted 21:52 GMT post, "Isn't the whole freakin' *PURPOSE* of backups to duplicate data? So in case the "original" data gets deleted, destroyed, overwritten, etc, *YOU HAVE ANOTHER COPY?!?!!*"

Robustness and Data Integrity are at the core of ZFS !

- data CRC checking

- silent data corruption is corrected

- user selectable RAID1, RAID5, RAIDZ, RAIDZ2 redundancy

- user selectable [virtually] unlimited snapshots, to take as many historical backups as you want

All of these mechanisms take care of original data which may get: deleted, destroyed, overwritten, etc.

Now, all of this is possible while speeding virtualization for dozens, hundreds, or thousands of virtual machines off of a very small storage system, for any operating system, using a relatively small quantity of memory with dedup in ZFS... leave the user data on an external ZFS server, and with dedup, hundreds of disk images can reside on a very small piece of local or remote storage.
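
As a rough sketch of what that takes at the command line (the pool and dataset names are hypothetical):

# a dataset holding the virtual machine disk images
zfs create tank/vmimages

# dedup collapses the nearly identical guest OS blocks down to a single stored copy
zfs set dedup=on tank/vmimages

# compression shrinks whatever is left
zfs set compression=on tank/vmimages

# see how well both are paying off
zpool get dedupratio tank
zfs get compressratio tank/vmimages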

With compression, [virtually] unlimited snapshots, double parity, no RAID5 write hole, [virtually] unlimited volume size, native iSCSI support, native CIFS support, flash read acceleration, flash write acceleration, dedup nearly here, and the Lustre clustering in 2010... Absolutely nothing production quality is lining up to seriously compare with ZFS in the industry.

Anyone running virtualization under any other file system other than ZFS is really at a disadvantage...

Intel aims 30W Nehalem at 'microservers'

David Halko
Happy

Virtualization and Moving Virtual Machines From a Dead Server with Internal Storage

Brian 62 posts, "Ahem, you do not NEED central storage for virtualization.."

Anonymous posts, "One of the major plus points of virtualisation is the ability to move running VM's between hosts. To do this you NEED shared storage."

Joe Greer posts, "No, you can have shared storage and just use it when you Svmotion from local SCSI/SAS to FC/iSCSI and then you can do your work on the physical host."

Can Svmotion move a VM between 2 hosts when the source host (and storage) has dropped dead?

Virtual Machine redundancy and migration can be done (for free) under Solaris with internal-only storage even if the source machine is completely inaccessible using ZFS and COMSTAR.

http://netmgt.blogspot.com/2009/08/multi-node-cluster-shared-nothing.html
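
As a rough, simplified sketch of the general idea - the host, pool, and volume names are hypothetical, and this is no substitute for the walk-through linked above - the replication piece is plain ZFS send/receive, with COMSTAR exposing the replica when needed:

# snapshot the zvol backing the virtual machine and seed the standby host
zfs snapshot tank/vm01@rep1
zfs send tank/vm01@rep1 | ssh standby zfs recv -F tank/vm01

# from then on, ship only the changes since the previous replica
zfs snapshot tank/vm01@rep2
zfs send -i @rep1 tank/vm01@rep2 | ssh standby zfs recv -F tank/vm01

# on the standby host, COMSTAR can present the replica as an iSCSI LUN
stmfadm create-lu /dev/zvol/rdsk/tank/vm01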

David Halko
Alert

I can't wait...

I can't wait to see the price/performance/form-factors of the 30W Nehalem processors on these little boards!!!

IBM: Power7 to rollout throughout 2010

David Halko
Happy

RE David Halko on Ellison whips out his Sparc TPC-C test

Hi Jesper,

You know, I think we are just about done with this line of discussion - I think there is one last piece left in regard to the conversation from this thread which carried to this commentary:

http://www.channelregister.co.uk/2009/10/12/oracle_sparc_tpc/comments/

David earlier posts, "I don't think I made fun of multi-chip modules, at least I didn't try to. I merely said it was a business-man's approach. MCM's require less engineering, get to production faster, use older technology effectively, reduce risk, but do not perform as fast as if engineered onto a single piece of silicon."

David later posts, "Doubtless, there are many benefits to multi-chip modules, as well as drawbacks"

Jesper posts, "And the drawbacks are ?... I suspect that the only reason you say it's not innovative is cause SUN doesn't do MCM's."

David later posts, "I covered the main drawback above in a previous quote. I can add additional arguments (production line manufacturing of a single-chip solution is normally less expensive than an MCM - the consumer market size drives the most profitable implementation.)"

Jesper posts, "No your original comment was 'Some may argue that the MCM's were innovative engineering, disagree and suggest MCM's are a pragmatic business-man's short term solution to a technical problem.' That is not 'technology is what it is and I appreciate it as it is.'"

I copied the relevant posts, in the order I made them, into this message. Let me highlight the first example of a disadvantage that I mentioned:

"MCM's ... but do not perform as fast as if engineered onto a single piece of silicon."

I added an additional drawback, since I could still not understand what you were trying to get at.

"production line manufacturing of a single-chip solution is normally less expensive than an MCM - the consumer market size drives the most profitable implementation."

I am not sure what you are trying to get at, honestly. My discussion was surrounding how multi-chip modules were a pragmatic solution based upon benefits and drawbacks that required less innovation at the engineering level than a single chip solution.

This does not mean that I believe innovative ideas are absent from multi-chip modules. I did not mean my position to be mutually exclusive, merely a generalization to give an impression of my leaning.

Have a good day Jesper!

David Halko
Go

It is nice to see IBM...

It is nice to see IBM coming to the Octal-Core show!

That means there are 2 major vendors in this area:

Sun - first to the octal core, engineering everything in a single piece of silicon

IBM - second to the octal core, cobbling their cores on multi-chip modules

With the runners-up still building their projects:

Sun - first to the hex core, engineering a single piece of silicon

Intel - second to the hex core, cobbling their cores via 3 chips onto a multi-chip module

AMD - third to the hex core, engineering a single piece of silicon

On the horizon...

AMD - possibly the first 12-core in a socket, using 2 chips in a multi-chip module in 2010???

Sun - possibly the first 16-core in a socket, using a single piece of silicon in 2010???

I look forward to the benchmarks of the Power7 and the system lineup from IBM!

It is GREAT to see the competition and innovation!

Ellison whips out his Sparc TPC-C test

David Halko
Happy

RE: Afara, Multi-Chip Modules, Linux

Hi Jesper,

Jesper posts, "So you are saying that you don't feel that one of more of Itanium, POWER, Xeon or AMD based servers are not general purpose servers ?"

(I think one would be hard-pressed to suggest Itanium VLIW as general purpose. I think I would be willing to make the case for AMD as being truly general purpose, but it is REALLY far off the original topic of OpenSPARC and TPC-C benchmarks!!! LOL! Another day, perhaps!)

I am saying that the different architectures do offer different advantages and I am quite happy with the competition on a heterogeneous computing environment based upon standards. A general purpose computer is not optimal for all situations. An optimized architecture is not optimal for all applications. Good applications, however, are not always available for the architecture that may suit it best. The market takes care of the latter situation, over time.

David earlier posts, "I don't think I made fun of multi-chip modules, at least I didn't try to. I merely said it was a business-man's approach. MCM's require less engineering, get to production faster, use older technology effectively, reduce risk, but do not perform as fast as if engineered onto a single piece of silicon."

David later posts, "Doubtless, there are many benefits to multi-chip modules, as well as drawbacks"

Jesper posts, "And the drawbacks are ?... I suspect that the only reason you say it's not innovative is cause SUN doesn't do MCM's."

I covered the main drawback above in a previous quote. I can add additional arguments (production line manufacturing of a single-chip solution is normally less expensive than an MCM - the consumer market size drives the most profitable implementation.) The AMD-Intel battle for quad-core illustrated it best in the most recent near-term. If you need me to explain this, I can. I am really not trying to pick an argument - technology is what it is and I appreciate it as it is.

Jesper posts, "But it still doesn't change that the original innovation was done by another company."

Sun successfully worked similar concepts in the MAJC architecture a decade earlier and productized them. Afara brought another (i.e. SPARC) implementation where the former multi-core and multi-threading implementation lessons were leveraged. To suggest that the innovation originated solely from the external company (Afara) would be incorrect.

Jesper posts, "My problem is that when people say things like this, it treated as a sacrilegious act, by the followers of the SUN. Sometimes I feel like Solaris and SPARC are religous icons, when speaking to the followers of the SUN."

Sacrilege is not the issue, accuracy is. Also, Solaris and SPARC are historically based upon community efforts, as Open Communities, which have been guided by Sun, external companies, and external organizations - so inaccurate information used in a slanderous way offends many people who invested their university, research project, master's, PhD, and/or life works into it. Offense is to be expected when inaccurate information is used to slander large groups of people.

Jesper posts, "And I can say {various slang terms}. It kind of softens things up between the different fractions."

Some would suggest that the behavior merely objectifies a group of supporting individuals and slanders their life work. There are lots of people who feel they can slander groups of people calling them "blood sucking..." (fill in the blank) - it is just the same behavior. It shuts down inter-group dialog instead of fostering healthy inter-group competition and later teamwork.

Jesper posts, "So it's not like SUN has invested a great deal of effort in Linux...Still doesn't stand. It is simply not true, period."

Top 30 is pretty good to me... out of thousands of companies benefiting and billions of possible contributors. I bet Intel Linux people are not terribly interested in IBM mainframe POWER contributions (actually, the bloat may upset more of them - Linus is very concerned about kernel bloat!) Top 30 out of thousands or billions sounds like a pretty good investment to me, and I would expect that many others would agree, even if you don't.

Linux is more than the kernel. The kernel must have a surrounding ecosystem to be viable. When you look into the areas where the Linux community benefits, Sun is also top-tier. (i.e. OpenOffice, VirtualBox, NFS, Lustre, Xen, etc.) When you consolidate it all together, Sun contributes a lot.

Jesper posts, "I think it's very sad that SUN has to let more people go."

I agree. I don't like it when any company (i.e. HP, IBM, etc.) has to do this. I prefer a lot of companies driving innovation in the market.

Nice chatting with you Jesper!

David Halko
Happy

Afara, Multi-Chip Modules, Linux

Jesper posts, "We were talking about inovation. So you would agree with me that the actual inovation behind the TX000 and came from Afara. SUN did what you should do as a big company that buys a small inovative company, they turned the inovation into a product."

Afara had an idea, but they did not have silicon, nor did they have the expertise to gain the funding - because the people with the money were not certain they could make it past the technical hurdles. They did bring some old Sun SPARC expertise in, before they were purchased, to work past those hurdles.

Anyone can have a good idea, but when there are technical hurdles, it takes innovation to overcome them, to bring an idea to fruition. It also takes an innovative spirit in order to select a good idea to bring it to fruition.

Jesper posts, "I am tired of having fanatics trying to get me to use these servers in my designs where it isn't appropriate."

Join the club. I feel the same way about various other servers.

Jesper posts, "if you have read some papers on MCM technology, you would see that there are quite a few benefits."

Doubtless, there are many benefits to multi-chip modules, as well as drawbacks.

Jesper posts, "And you might make fun of MCM's but for example the 505Q servers were just as fast as the T2000."

I don't think I made fun of multi-chip modules, at least I didn't try to. I merely said it was a business-man's approach. MCM's require less engineering, get to production faster, use older technology effectively, reduce risk, but do not perform as fast as if engineered onto a single piece of silicon.

David posts, "Had Sun been trying to assault Linux, code contributions to Linux from Sun would have ceased contributions"

Jesper posts, "what code contributions to Linux?"

Sun is in the top 30 corporate contributors, according to the August 2009 update of the Linux Kernel Development report. They are listed in the printed hard copy, pages 11 & 19.

Have a great day Jesper! Always great chatting with you!

David Halko
Happy

@VirtualGreg,@Jesper Frimann: Afara & Innovation

DavidHalko posts, "That's OK - Sun pioneered in heavily multi-cored and multi-threaded CPU architecture... and the rest of the market continues to emulate, with Intel being the fastest to catch up."

VirtualGreg posts, "calling Sun an innovator with respect to Niagara - do some fact checking - Sun bought Afara to get Niagara technology - its no Sun innovation!"

Jesper Frimann posts, "Well it wasn't really SUN, SUN bought the company that made the TX000 servers, it was AFAIR one of the 'original' SUN guys who made that company."

VirtualGreg and Jesper Frimann, you may not be aware, but I wrote the original article with most of the content and references on Wikipedia for Afara!!! ;-)

http://en.wikipedia.org/wiki/Afara_Websystems

Afara did not have a piece of SPARC silicon when Sun invested in them. Sun brought SPARC CoolThreads to fruition. Almost a half-decade later and Sun is still the most significant octal-core CPU vendor on the market.

DavidHalko posts, "Sun = Innovation ; IBM = Business"

Jesper posts, "I think you are very wrong."

Your opinion is very fair - I can appreciate your point of view, I think we just disagree.

While Sun was doing the heavy-lifting by engineering heavily multi-core and multi-threaded processors into a single piece of silicon, others were cobbling together multi-core using multi-chip modules.

Some may argue that the MCM's were innovative engineering; I disagree and suggest MCM's are a pragmatic business-man's short term solution to a technical problem.

David Halko
Welcome

@Jesper Frimann--- cutting teeth, SMIT, FACE, FMLI, XFMLI, Volume Mgmt, ZFS, and Linux

Jesper posts, "I lobbied for a HP or SUN box, but actually I got to like AIX with it's logical volume manager, SMIT menu interface, and easy to use cli... us in the AIX admin department could "outadmin" the other admins, cause while they were editing files, and sending signals to deamons we just did a chxx and then a refresh -s subsystem. Or when they were formatting drives, and tying in sectors and sh*t, you simply did an extendvg and then that was done."

I think the SMIT interface is great! It was just like the FACE interface bundled in AT&T SVR3!

I cut my professional teeth under AT&T UNIX SVR3 with FMLI-based system configuration and it was awesome. I thought NCR UNIX SVR4 with XFMLI was a great addition. Sun's X-based "admintool" was pretty neat for a while, but it was never extended sufficiently (I think this is a failing of Sun - slowing the uptake of new users in the community.)

I really wish the open market and Linux would have standardized on a character-based menu interface for administration (i.e. like FMLI), extended it to include X Windows (i.e. like XFMLI), and extended it again to "HTML" (no such FMLI extension that I am aware of) - to provide a standard way for all platforms across all protocols.

With FMLI and Admintool leaving the Solaris source base, it would be awesome if IBM open-sourced SMIT! woo hoo!

I agree that the un-unified configuration file bit is pretty ugly. The movement of Solaris 10 to Services (from inetd, inittab, /etc/rc, and various cron jobs) makes a very nice unified CLI. The XML basis makes it very open.
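
For anyone who has not used it, the unified CLI looks roughly like this (ssh is just an example service):

# list every service, then inspect one in detail
svcs -a
svcs -l svc:/network/ssh:default

# enable, disable, or re-read configuration - no hand-editing of rc scripts
svcadm enable ssh
svcadm refresh ssh

# the underlying service description is XML and can be exported or re-imported
svccfg export network/ssh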

I am honestly glad I don't have to deal with Volume Managers any longer. I really like ZFS and the benefits it continues to bring - it may not be perfect, but it is really helpful. Storage management operations are becoming unified under Solaris ZFS. It will be nice to see ZFS appear on other operating systems, much as NFS did, due to Sun's friendly licensing model.

Jesper posts, "Simply great, buy the rights to make Solaris code public"

Sun has been continually going back to MANY Solaris contributors to buy or ask them to release their rights to licensed code to open source Solaris... or re-writing the code that could not be released, from scratch - it was not just SCO. The C-Net article you cited also indicated, "Sun needed the software for its version of Solaris that runs on Intel servers". When Linux code could not be used for Intel Solaris, there were very few commercial vendors left to go to, to get more code.

The Linux community looking at Sun in a negative light for "buying the rights" to open-source Solaris seems rather myopic. Sun has always been a significant Linux contributor, contributed A LOT of their own code & engineers into Linux, and ships a significant number of Linux platforms. From the C-Net article you cited, Sun paid to protect their Solaris & Linux's customers, "Sun's complete line of Solaris and Linux products...are covered by Sun's portfolio of Unix licensing agreements. Solaris and Sun Linux represent safe choices for those companies that develop and deploy services based on Unix systems."

The conclusion that Sun was the "very first to finance SCO's attack on Linux" from purchasing their own rights appears sensationalist to me. Great headlines for journals. Great propaganda for Sun competitors. The people who gain from that propaganda refused to go through the pain of open sourcing their OS's or move their proprietary OS's to Intel/AMD.

Had Sun been trying to assault Linux, code contributions to Linux from Sun would have ceased (instead of continuing to increase), and Sun would not have paid to protect their Linux customers.

Have a good day, Jesper!

David Halko
Happy

@Jesper Frimann RE: Kebabbert

Jesper posted at 09:36 GMT, "The reason why oracle's price end up being lower for the whole solution is that they use flash drives, something that wasn't really available when the power 595 benchmark was done."

IBM used racks of drives in an unorthodox way, that would never typically be used in a production environment, to gain a benchmark advantage.

I think your defense is quite good, in this case, indicating that Sun & Oracle may be using new technology in an unorthodox way, to gain an advantage.

I think your argument weakens in one area: other vendors could have used racks of hard drives in a similar way to IBM to compete, but they did not -- Sun & Oracle have invested in innovative software (Oracle 11g, OpenSolaris, and ZFS) technology to bring a new technology (Flash) to existing markets (databases through Oracle 11g), as well as general application performance enhancements to operating systems (Solaris and OpenSolaris), as well as performance enhancements to other non-native applications and competing vendor operating systems (through OpenStorage.)

In short, the technological way that IBM used to achieve its benchmarks could have been emulated by others, but it was not, because it was cheesy... and the technological way Sun/Oracle used to achieve their benchmarks will be emulated by other application vendors, can be leveraged by applications which run under Solaris, and can even be leveraged by applications which can not run under Solaris!

Sun pioneered in this arena, Oracle saw the investment possibility (by trying to purchase Sun), and now the rest of the market will need to play catch-up.

That's OK - Sun pioneered in heavily multi-cored and multi-threaded CPU architecture... and the rest of the market continues to emulate, with Intel being the fastest to catch up.

Sun's innovation benefited and will benefit the entire global computing community for years to come, while IBM's innovation benefited only themselves and a few customers for a short period of time.

There is good reason to argue that this behavior is a primary reason why Sun was not consistently profitable and IBM typically is.

Sun = Innovation ; IBM = Business

David Halko
Thumb Up

RE: T2+ is a "network facing" chip not for databases <=Sun quote

Anonymous 2 posts, "T2+ is a 'network facing' chip not for databases <=Sun quote"

Well, if the T2+ scores the highest TPC-C benchmark recorded in the world - I guess Sun was underestimating the capability of their own processor.

Refreshing to see a vendor who does not over-hype their product!

Anonymous 1 posts, "Nearly the same number of sockets - who would have thought a Sun socket would out-perform an IBM socket."

Anonymous 2 posts, "A p595 can easily beat 12 sparc systems with SSD"

I think it will be nice to see the IBM benchmark! I am more interested in seeing how IBM licks the write limitations of flash - I expect it will be different from the way Sun architected around the limitations of flash. Competition is good for all system customers!

Anonymous 1 posts, "Items 5 & 6 are probably related. Implementing ZFS mitigates flash reliability issues."

Anonymous 2 posts, "Flash has a very limited write capability so is not good for high transaction rate systems...unless it is a benchmark"

This is true, unless you have ZFS. ZFS architects worked around this issue. This is the beauty of ZFS, and why ZFS makes flash so disruptive, even in a high transaction rate system.

Anonymous 1 posts, "If T2 out-performs IBM POWER and Intel Itanium, socket per socket, why wouldn't someone run T2, especially in an Oracle Standard license?"

Anonymous 2 posts, "Isn't this about EE? Standard only goes to four sockets and has very limited software which is available"

Clustering is available using Standard licensing with up to 4 sockets. The T2+ with 4 sockets is able to cluster without EE. This means, you can get a substantial performance boost via clustering with T2+ without EE, for most standard Oracle applications.

http://netmgt.blogspot.com/2009/03/oracle-database-license-change-ibm.html

The limitation of the software products is not that substantial for common Oracle database users.

1) Few people use partitioning - some enterprise applications that I have used will actually do the equivalent of partitioning, at the application layer, so you don't have to buy EE for that option.

2) Compression can be done at the ZFS level, instead of the Oracle level (to save I/O, flash, and disk utilization).

3) Scalability can be done by boosting 1x4 socket servers to nx4 socket servers, etc. Under Solaris 10, Capped Zones can be legally used to limit the number of CPUs used by the Oracle database, leaving the other processors for applications on the same platform. Multiple capped zones can be used on the same SMP platform in a RAC cluster, to avoid the EE pricing penalty, consolidate hardware, and add a level of RDBMS redundancy (a minimal zonecfg sketch follows below).
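
A minimal zonecfg sketch of capping a zone under Solaris 10 - the zone name and CPU count are examples, and whether a capped zone satisfies a particular Oracle licensing agreement is, of course, between the customer and Oracle:

# define a zone limited to 4 CPUs' worth of processing
zonecfg -z oradb1 <<EOF
create
set zonepath=/zones/oradb1
add capped-cpu
set ncpus=4
end
verify
commit
EOF

# install and boot it, then install the database inside the zone
zoneadm -z oradb1 install
zoneadm -z oradb1 boot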

I have not been forced to run any applications that require EE in any enterprise under Solaris, unless there was some company restriction forcing people not to use capped zones.

Sun, Fujitsu crank Sparc64-VII clocks

David Halko
Happy

RE: mattie's opinions matter little in the real world until he provides references

Matt Bryant posts, "the majority of today's business applications don't work they way Niagara wants them to"

A couple dozen benchmarks say otherwise:

http://blogs.sun.com/BestPerf/entry/bestperf_index_14_october_2009

Care to show us a reference to a couple benchmarks? ;-)

Matt Bryant posts, "if you want to compare benchmarks, we can have some fun. After all, we've been here before with Novatose's cherry-picked SAP benchmark results"

No one named Novatose posted the benchmark. You incorrectly accused someone else of posting that benchmark, too. You couldn't understand that Sun ran the newer, more CPU intensive, Unicode benchmark when you compared that score to the older, less CPU intensive, non-Unicode benchmark.

;-)

David Halko
IT Angle

Matt Bryant doesn't understand multi-core technology! HA HA HA HA!

Matt Bryant posts, "...between SPARC generations is still true, you just lose 30% performance when you run an old binary on new cores unless..."

Anonymous 1 posts, "this has typically been the case with intel chips"

Matt Bryant posts, "Sorry, but that's complete male bovine manure."

David Halko cites the Intel web site, which explains Intel multi-core technology: "...each execution core in an Intel multi-core processor is clocked slower than a single-core processor..."

Anonymous 2 cites a Redmond Magazine article, which explains, "In fact, the cores in today's multi-cores generally run more slowly gigahertz-wise than advanced single-core processors. Unfortunately, many Windows-based applications are built serially with..."

Anonymous 3 posts, "poor n00b matt troll bryant - intel chips lose per-core performance in multi-core chips - duh!!!! better get your mommy to teach you the facts of life!"

Matt Bryant responds to Anonymous 3 with, "Oh yeah? So where's the bit that says the Intel cores go 30% slower then, you moron?"

It seems Matt Bryant finally, passively, concedes to the Intel technical article and the Microsoft advocate magazine that Intel cores in multi-core chips are slower than single-core performance. The only question now is the percentage.

Matt suggested a 30% decline in performance associated with SPARC when running old SPARC software on new cores. Can you find a SPARC (or SPARC64) web site (i.e. Sun or Fujitsu) and a Solaris advocate publication that suggest a SPARC core from a multi-core chip is 30% slower "when you run an old binary on new cores", related to the SPARC64 processor, which was the object of the article and the comment you targeted?

The ball is in your court... give the reader something reasonable with which to consider your original speculation.

David Halko
FAIL

RE: @Matt Bryant -- RE: But what compilers were you using?

Matt Bryant posts, "...between SPARC generations is still true, you just lose 30% performance when you run an old binary on new cores unless..."

Anonymous posts, "this has typically been the case with intel chips"

Matt Bryant posts, "Sorry, but that's complete male bovine manure."

http://www.intel.com/technology/advanced_comm/multicore.htm

"...each execution core in an Intel multi-core processor is clocked slower than a single-core processor..."

While Anonymous Coward did not cite external data, he seems to know more than you on this topic.

This has been a well known fact in the computer industry for many years. The addition of more Intel cores helps to increase overall throughput, to compensate for the decrease in core clock rate with Intel.

It has more to do with thermal envelopes & physics than "male bovine manure".

A neat new feature in the latest Intel processors tries to mitigate the problem and provides short clock rate boosts to busy cores when other cores are idle. Intel can be pretty creative, sometimes!

David Halko

RE: hmm....

sT0rNG b4R3 duRiD posted, "Always one to look at other architectures, what specifically is the problem with the sparcs with regard to the stated problem of performance compared intel stuff? Is it something intrinsic to the actual cpu? or is it more to do with the supporting stuff ie not being able to handle bandwidth? or is it a question of code?"

Actually, there is no problem with the SPARC architecture. People are just not that familiar with how computers work.

Every CPU architecture from every vendor has optimizations where some code will run faster than other code, depending on which platform the compiler optimizations were done for. It just happens that, sometimes, unqualified people are doing the comparisons.

For example, you may get a significant difference in performance against Intel chips, depending on the family of Intel chip, due to their optimizations.

http://en.wikipedia.org/wiki/Intel_C%2B%2B_Compiler

The compiler knows the chip, cache, instruction sets, and nuances in the pipelines - and when code is compiled for the correct family, the resulting code will run optimized.
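
A hedged illustration of what "compiled for the correct family" means in practice - representative flags only, not a tuning guide:

# Sun Studio on SPARC: generate and schedule code for the chip actually in the box
cc -fast -xtarget=native -o app app.c

# GCC on x86: likewise, target the build machine's own micro-architecture
gcc -O2 -march=native -o app app.c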

This is the same for every computing architecture and every operating system, from the beginning of time, including Intel and Linux.

Sun slashes (another) 3,000 jobs

David Halko
Dead Vulture

There is another way to earn $1.5bn...

Author writes, "And once Oracle does acquire Sun, you can bet the numbers will get even lower. There is no other way for Oracle to extract $1.5bn in operating earnings from Sun in the first full year of the deal"

Unless Oracle believes their position as an approved vendor at all of the existing Oracle customers will be good enough to sell new SPARC or Intel iron into those companies... especially with the Exadata2 systems.

After all, no one else has released any real Flash-based systems yet, nor have they released any real competing operating system enhancements to compete with Flash added to ZFS. (The closest company was Apple, but they pulled their ZFS port just before the last MacOSX release!)

I think Oracle thinks they will be able to ride Flash on Sun Intel/Solaris ZFS into next year!

The benchmarks seem to indicate it...

Sun tunes its VirtualBox

David Halko
Go

Virtual Box, xVM Server, LDoms, Zones, Xen, and such...

Author writes, "Like the commercial xVM Server was supposed to be in relation to commercial Solaris 10. And before you send comments to El Reg, I know that xVM has been embedded in the latter releases of OpenSolaris and that technically you can get tech support for OpenSolaris."

OpenSolaris is the production operating system shipped in Sun's Open Storage appliances. If people are happy with compiling & running Linux in their production environments, OpenSolaris should be OK. For companies that don't run Linux, I agree that waiting for xVM Server to be released in Solaris 11 may be a reasonable plan. Nearly everyone is running Linux, though - not sure who really needs to wait until Solaris 11.

Author writes, "Does the world need another bare metal hypervisor? IBM certainly needs one for the X64 platforms it sells, and so does Hewlett-Packard, which has software aspirations. Dell doesn't seem to want a software business, but having its own low-cost (or free) hypervisor and services to sell might be an attractive idea."

If IBM, HP, and Dell want to compete in the commercial market place for Intel/AMD based proprietary servers on price point, then yes, they may be interested in their own bare metal hypervisors. Without bare metal hypervisors, their prices start to rise when comparing their systems to competitors.

Xen is free, not sure what they need with the exception of a few resources on the sub-continent or in the far-east. They could always bundle OpenSolaris with xVM.

Author writes, "Novell needs a free-standing and open source hypervisor that is not Xen or KVM. Maybe Oracle can sell it for a few bucks?"

Sun VirtualBox is free - not certain why Novell would want to buy the product group. Not sure what they need, with the exception of a marketing engine. Kind of like how Sun OpenOffice, Sun Java, and Sun MySQL are just about everywhere in the Open Source world. Sun is doing the heavy lifting for their competitors, right now.

- - -

I am really surprised that people have not realized the benefits of a hypervisor running under [Open]Solaris yet.

The benefits of xVM Server on OpenSolaris are absolutely astounding in comparison to the competition (see the small sketch after this list):

- ZFS support for unparalleled virtual machine cloning speed

- ZFS snapshots for unparalleled virtual machine backups (try doing on-line hourly backups for a couple of years)

- ZFS with flash support for terrific disk performance

- ZFS & COMSTAR for no-cost (and no additional management software) replication of the entire environment to another standby system with internal disks

- DTrace for superb resource debugging on the production overlaying operating system

- no-cost acquisition for initial experimentation, with paid support for production deployment
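
To make the cloning and hourly-backup points concrete, the underlying operations are just snapshots and clones, which are nearly instant and initially occupy no extra space (the dataset names are hypothetical):

# a "golden" guest image, frozen at a point in time
zfs snapshot tank/guests/gold@installed

# cloning a new virtual machine from that snapshot takes seconds
zfs clone tank/guests/gold@installed tank/guests/vm42

# on-line hourly "backups" of a running guest are simply more snapshots
zfs snapshot tank/guests/vm42@hourly-2009-11-20-13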

Running VirtualBox on Solaris 10 or OpenSolaris gives most of the same capabilities as Sun xVM Server - perhaps the supported disk sizing capabilities are different between VB and xVM; I am curious about the performance differences.

LDoms and Zones in conjunction with Sun OpsCenter seem fairly nice. Automatic load balancing of applications across clusters using pools of LDoms is very nice; it would be nice to see xVM Server support in the future. I was unaware of how extensive the cross-platform integrations with third-party vendors can be with OpsCenter until I read:

http://blogs.sun.com/barrettblog/entry/oracle_openworld_2009

Oracle revs Xen VM to 2.2

David Halko
Go

Oracle VM Server similar to Sun xVM Server

Author writes, "Oracle VM Server 2.2 is distributed for free, as is the new V2V conversion tool for VMs created to run on Virtual Iron hypervisors... But support for the Oracle hypervisor is not free."

Bundling Xen into Oracle Enterprise Linux and charging only for the support is a similar model to the one Sun took with bundling Xen into OpenSolaris and only charging for OpenSolaris support.

Things will be interesting if the Sun purchase by Oracle is approved!
