@Jesper Frimann
"...Ehh ? What the f word are you talking about. The POWER server platform actually has good scaling. Well at least compared to anything Oracle can muster..."
Sure, you talk about one benchmark: SPECint. And you show that POWER scales well on SPECint, which is an easily parallelisable benchmark. Does that prove that POWER scales well in general? I don't agree with you:
I read here on this site that AIX needed to be rewritten to handle the P795 with 256 cores. So IBM could not handle 256 cores until just recently? Isn't that bad scaling?
I also heard that IBM cannot scale its TPC-C clusters to counter Oracle's TPC-C world record. Isn't that bad scaling? Will IBM never be able to break Oracle's TPC-C record?
I also heard that IBM's biggest mainframes only have 24 CPUs. Why not bigger? A problem with scaling? I am just asking.
If we talk about Solaris 11, it has been rewritten to handle big Oracle servers with 16,384 threads. Even the old Solaris 10 handled 256 threads. Sun sold old SPARC servers with up to 144 CPUs.
.
.
"...You do know that POWER have managed to do all three things. Put more cores on a chip AND increase the per core throughput and socket throughput..."
No, that is not what I am talking about. I am not talking about whether IBM increased throughput and needed to lower the clock rate to stay within a reasonable power envelope.
I am referring to when IBM explained that the future lay in 1-2 super-fast cores at 5 GHz and beyond, because databases prefer strong cores with good single-thread performance. When I look at POWER7, I don't see 1-2 cores clocked higher than POWER6, at 6 GHz or 7 GHz. Instead, I see many cores, clocked lower than POWER6, under 5 GHz. I wonder if POWER8 will have even more cores than POWER7, at even lower clocks, and thus stray even further from the "1 super-fast core at 6-7 GHz" idea? Don't you agree that IBM has abandoned the "1-2 super-fast cores" approach and followed Sun's "many lower-clocked cores" approach instead?
"...Again you have absolutely no clue what so ever..." - does this mean you think that POWER7 is more similar to a single-core CPU at 6-7 GHz than to a CPU with many cores at 3-4 GHz?
.
.
"...I think the guy that put this best was Linus, when he asked why does a filesystem have to do that ?... And cool as it is.. I have to say I agree with Linus, this is perhaps taking the role of the filesystem one step to far..."
I certainly don't agree with you. As ZFS creator Jeff Bonwick explained:
"The job of any filesystem boils down to this: when asked to read a block, it should return the same data that was previously written to that block. If it can't do that -- because the disk is offline or the data has been damaged or tampered with -- it should detect this and return an error...Incredibly, most filesystems fail this test. They depend on the underlying hardware to detect and report errors. If a disk simply returns bad data, the average filesystem won't even detect it."
I also know that several large institutions, such as the physics centre CERN (which stores large amounts of data from its Large Hadron Collider, the LHC), are very concerned with this. If CERN stores experiment data and that data is silently corrupted, maybe CERN will not detect the Higgs boson. You know, there are thousands of researchers spending years on this project. That is the reason CERN is very concerned with silent corruption:
http://indico.cern.ch/getFile.py/access?contribId=3&sessionId=0&resId=1&materialId=paper&confId=13797
Or, suppose you encounter silent corruption in your database. When did the corruption take place? How far back do your backups go? Half a year? One year? Here a database admin talks about silent corruption:
http://jforonda.blogspot.com/2006/06/silent-data-corruption.html
Or in this case: a flaky switch was corrupting the data. ZFS was the first to notice, because ZFS protects its data:
http://jforonda.blogspot.com/2007/01/faulty-fc-port-meets-zfs.html
"As it turns out our trusted SAN was silently corrupting data due to a bad/flaky FC port in the switch. DMX3500 faithfully wrote the bad data and returned normal ACKs back to the server, thus all our servers reported no storage problems."
If you and Linus don't agree that stored data should stay intact, then you cannot really trust your data. I hope you realise this? ZFS does for disks what ECC RAM does for memory: it protects your data against bit flips, hardware problems, etc. I really do hope you have ECC RAM in your servers, but maybe you think ECC RAM is unnecessary, just as you think protective filesystems are unnecessary?
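To illustrate the principle Bonwick describes, here is a minimal sketch in Python (my own toy code, not actual ZFS internals; real ZFS stores the checksum in the parent block pointer, forming a Merkle tree, rather than next to the data as I do here):

```python
import hashlib

# Toy block store that keeps a SHA-256 checksum for each block and
# verifies it on every read. This is only the end-to-end principle;
# real ZFS keeps the checksum in the parent block pointer.
class CheckedStore:
    def __init__(self):
        self.blocks = {}  # block_id -> (data, checksum)

    def write(self, block_id, data):
        self.blocks[block_id] = (data, hashlib.sha256(data).digest())

    def read(self, block_id):
        data, checksum = self.blocks[block_id]
        if hashlib.sha256(data).digest() != checksum:
            # An ordinary filesystem would happily return the bad data here.
            raise IOError(f"silent corruption detected in block {block_id}")
        return data

store = CheckedStore()
store.write(7, b"experiment data")

# Simulate a flaky FC port flipping a byte on the way to disk:
data, checksum = store.blocks[7]
store.blocks[7] = (b"experiment dat4", checksum)

try:
    store.read(7)
except IOError as e:
    print(e)  # the corruption is detected instead of being returned as valid
```

The point is that the check happens at the filesystem level, end to end, so corruption introduced anywhere below (disk, controller, cable, switch) is caught on read.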
Modern enterprise SAS disks have 1 unrecoverable error in every 10^16 bits read; just look at the spec sheets. Protection such as ZFS provides is necessary, in my opinion. But of course, you and Linus may have differing opinions, and that is fine with me. Still, I would be careful, and I suggest you read more about ECC RAM and silent corruption. The CERN study above is a good place to start. If you want, I have plenty of research papers on this; just ask me if you want to start worrying about protecting your data. Here is one link on ECC RAM, in case you are not familiar with it:
http://en.wikipedia.org/wiki/Dynamic_random_access_memory#Errors_and_error_correction
To exemplify its importance: Microsoft found that many Windows crashes were caused by non-ECC RAM, which is why Microsoft wanted everyone to use ECC RAM when running Windows. So yes, ECC RAM is important. Read the link above.
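To put that 1-in-10^16 figure in perspective, here is a back-of-the-envelope calculation (the 100 TB workload is my own illustrative assumption, not from any spec sheet):

```python
# Unrecoverable bit error rate quoted for enterprise SAS disks:
uber = 1e-16           # expected errors per bit read

tb_read = 100          # assume 100 TB read over the disk's service life
bits_read = tb_read * 1e12 * 8

expected_errors = bits_read * uber
print(f"{expected_errors:.2f}")  # prints 0.08
```

So even an enterprise-grade disk gives you roughly an 8% chance of at least one unrecoverable read over that workload, and consumer disks are typically specified at 1 in 10^14, a hundred times worse. Without end-to-end checksums, you may never know which read was the bad one.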
.
.
"...As for putting computers into a container... well... I would hardly call that innovation. We have them.. it's stupid in most cases IMHO, but as a hack to have variable capacity that you can move from location to location it's ok..."
I am just pointing out yet another case where IBM copied Sun/Oracle. And the Blackbox has its uses; for those cases it is a perfect fit.
.
.
Kebabbert: "What techniques has IBM created, that Solaris copied? You talk about "recent years". Can you give an example?"
Jesper Frimann: "...You have to be kidding ?..."
No, I am not kidding. Let me repeat my question: in RECENT years, what has Solaris copied? I know that IBM did great things in the 1960s and so on. But in recent years? To me it seems that IBM is the one copying from others, but maybe you have some counter-examples?