IBM is supporting Brocade CNAs
IBM is supporting both QLogic and Brocade CNAs for its System x servers. That makes Brocade equal to QLogic and Emulex in server vendor CNA qualification terms.
The story originally said IntelliPower varies spin speed. Some commenters said it doesn't. We checked, and Western Digital doesn't actually say it does, although the impression has been given that spin speed variation is the secret sauce in IntelliPower.
In fact, WD is not transparent here, not supplying a spin speed range or even a single number for drives with IntelliPower. It appears that, within a WD model range which has IntelliPower, the spin speed might vary with the drive's capacity.
We've asked WD to clear this up. If it tells us what the IntelliPower score is then we'll pass that information on.
Good comments. Here's a comment on the comments:
1. The story says: "Each core is expected to be an x86 core, and each will be paired with a vector processing unit." A comment says: " "Each core is expected to be an x86 core" No, each core *is* an x86 core, plus a *wide* vector unit (16-way SIMD) with predication and scatter/gather load support. This is all in the SIGGRAPH paper you linked to."
Here's a quote from an Intel paper (Larrabee: A Many-Core x86 Architecture for Visual Computing): "Larrabee uses multiple in-order x86 CPU cores that are augmented by a wide vector processor unit"
Can't see the difference here: "plus a", "paired with", "augmented by" - they all mean each x86 core gets a vector unit alongside it.
2. A comment says: ""Larrabee will have a shared pool of cache memory" Not quite. It has 256KB of dedicated L2 cache per core and cores can read each other's L2 caches. Each core has 32KB of dedicated L1. This is in ... the SIGGRAPH paper you linked to."
The Intel paper again: "A coherent on-die 2nd level cache allows efficient inter-processor communication and high-bandwidth local data access by CPU cores." We're in the same ballpark again here, surely? The cores share a cache memory resource.
3. It's not an Atom CPU, as a comment points out: ""We wonder if Intel is using its Atom processor design as the Larrabee core to meet the chip real estate limitations" No, they are not; they are using the P54C processor design, as they have previously stated at their GDC talks."
Yes, granted, but I wanted to play with the Atom and molecule idea and you spoiled my little game :-( ... That'll learn me. I changed the text.
4. A comment says "And after all that the only actual *news* is that Intel have released a die image ... so where is the link for that?"
The only news is that Intel has released a die image? Er, not quite. As the intro says: "Intel has opened up a corner of its kimono and shown a picture of the upcoming Larrabee chip, indicating it will be a 32-core graphics processing engine." The 32-core confirmation is newish. We also get to hear that the ship date is now the first half of 2010, and hear a bit about Intel's software development efforts to help Larrabee. Since the Reg hadn't covered Larrabee since December last year, adding the background info seemed reasonable.
Yes, the die image reference should have been there, it got lost somehow, and is there now.
I dunno if Larrabee is the right way to go, having a combined engine executing both standard x86 apps and graphics apps, rather than a separate multi-core x86 and GPU combo. It sounds sexy enough, but will it be fast enough, and will software development technology keep up with all its attributes?
Sent to me:
Actually Chris, we firmly believe that the Dynamic Cube is strong enough to take on the competition in data centers – especially because of our clear lead in dramatically lowering lifetime power and cooling costs. During an operational lifespan of several years, the savings alone from a PRIMERGY BX900 system in comparison to hungrier machines from competitors provides a compelling reason to switch. This is a good example of where we think customers will have an issue with HP!
Bernhard Brandwitte - Fujitsu Technology Solutions.
Sent to me by mail and posted here: It is not correct to say that you don't need to defrag SSDs. Mapping fragmented files consumes kernel pool memory, and in extreme cases with 32-bit Windows this can be fatal to the OS.
Reading or writing a fragmented file is inherently less efficient than with a contiguous file because each fragment has to be accessed via its own IO operation. More IOPs, more system overhead.
The point I think you were trying to make is that SSDs have zero seek time and do not suffer from rotational delay and missed rotations etc. Crappy applications or a badly fragmented filesystem can still make an SSD based system look bad. The example there being Vista.
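To illustrate the "more IOPs" point in the mail above, here's a toy model (my own sketch, not from the correspondent) of why reading a fragmented file needs more I/O requests than a contiguous one, even when seek time is zero. The 128KB maximum request size and the fragment layouts are assumptions for illustration only.

```python
def io_ops_to_read(fragments, max_io_bytes=128 * 1024):
    """Each contiguous fragment needs at least one I/O request; large
    fragments may need several, but a fragmented file can never need fewer."""
    ops = 0
    for length in fragments:
        # ceil(length / max_io_bytes): one request per max-sized chunk
        ops += -(-length // max_io_bytes)
    return ops

one_mb = 1024 * 1024
contiguous = [one_mb]         # one 1MB extent
fragmented = [4096] * 256     # the same 1MB split into 256 x 4KB fragments

print(io_ops_to_read(contiguous))   # 8 requests
print(io_ops_to_read(fragmented))   # 256 requests
```

Same data, 32 times the request count - which is the overhead the correspondent means, quite apart from any seek time.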
Sent to me by a QLogic spokesperson: "While at Hitachi, Genereux headed up over 60% of the company’s resources, including Product Management, Global Marketing, Global Channels, Channel Marketing, Service & Support, Product Marketing and other key functions in addition to Worldwide Sales operations. Any executive responsible for managing over 60% of the company’s resources surely has significant operations experience."
Interesting. Is Genereux a potential Desai successor then?
(Sent to me anonymously - Chris)
Cost cutting likely???
We have been asked to take a 5% pay cut until the end of December.
Compulsory where they can enforce it, voluntary where they can't.
Still 1400 jobs to be let go worldwide as a result of the "restructure" that took place at the beginning of the year.
Happy employees we are not.
Do you realize what you are saying?
First, PCM writes at 1 megabyte per second (based on inadvertent performance disclosures made public by certain Numonyx officers), while modern DRAM writes at GIGABYTES per second (i.e. a 3,000x or so performance hit)!
Second, despite Numonyx's claim, they DO NOT offer any PCM commercially (just try to buy a chip or 10,000, or even try to request a datasheet). PCM was supposed to be commercial by the end of 2007 (if we were to believe Numonyx's management, previously with Intel). Never happened. PCM will never be commercialized in volume, for obvious reasons - it is inferior to current Flash in terms of write speed, density, and costs. It probably has power consumption issues in write as well. It is a technoPonzi.
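For what it's worth, the correspondent's roughly 3,000x figure is just the ratio of the two bandwidths. A quick check, taking the 1 MB/s PCM figure quoted above against an assumed ~3 GB/s DRAM write bandwidth (my assumption, not the correspondent's):

```python
pcm_write_bps = 1 * 10**6     # 1 MB/s: the PCM write figure quoted above
dram_write_bps = 3 * 10**9    # ~3 GB/s: an assumed modern DRAM write rate

ratio = dram_write_bps // pcm_write_bps
print(ratio)  # 3000
```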
Interesting view.... Chris.
I wish you were wrong about eSata being killed off by USB 3.0. eSata is so much better when dealing with external hard drives because you don't get the "delayed write failure" problem that you get with USB. USB was never meant to be a solution where a device like an external hard drive used for backups was hooked to the computer for long periods. It's mainly for short use like with thumb drives. With eSata the external drive is connected the same as if it were installed in the computer.
Xyratex said it actually announced its 2U24 drive SP1224 about a year ago. Yesterday's announcement is about Xyratex being the first to qualify and support Seagate's new Savvio 15K2 SFF drive that others have not qualified for release. The slower Savvio HDD was available but Xyratex is the first to release the faster 15K2.
The original Barracuda drive firmware failures were concerned with ST31000340AS drives and then ST3500320AS ones, Barracuda 7200.11 desktop drives.
This IBM story involves ST31000340NS, ST3250310NS, ST3500320NS and ST3750330NS drives - high-capacity, business-critical Tier 2 enterprise drives, Barracuda ES.2 enterprise drives. There was no suggestion originally as I recall that enterprise Barracudas were affected.
Also, the IBM note says "IBM strongly recommends applying the firmware update to prevent data loss", suggesting that data loss can occur - IBM did not say apply the firmware update to prevent data unavailability, but to prevent data loss.
Here is a Seagate statement on the issue:-
"Seagate dismissed its patent infringement case against STEC and STEC dismissed its counterclaims against the company and Mr. Watkins. The economic conditions today are drastically altered from those that existed when we filed the litigation and the impact of STEC's sales of SSD's has turned out to be so small that the expenditures necessary to vindicate the patents could be better spent elsewhere."
The spokesperson added that it's also worth noting that this settlement doesn't preclude Seagate from taking action in the future to defend its intellectual property.
What an enjoyable set of angry, cross, bemused, and ironic comments has come from the story. Yes, it was not precise enough about fractals and hadrons and bosons and mesons, and, yes, thank you CERN for the WWW, which is priceless compared to the US space program's non-stick frying pan spin-off. But, well, it's almost as if fundamental physics research is a religion and shall not be questioned. Why ever not? How much does a research experiment in fundamental physics have to cost before it should be stopped?
Sent to me by Dan Conlon:-
Thanks for your call, we read your article.
Just to let you know - part of our service is a desktop/laptop client. The software syncs users' data between their local disk and the cloud storage.
This means that users using this software still had access to all their data. Changes they have made to it will sync up to the cloud when they are next online.
The cloud will never be there 100% of the time, whether that be because of the user's Internet connection being down or our data centre being down. Equally, a user's local hard disk will not be there 100% of the time either - hard disks fail, computers get stolen.
So we think that our sync based solution brings the best of both worlds. Your local hard disk is there when the cloud is not and Humyo will be there for you when your local hard disk is not.
This was sent to me by Bob Thibadeau:-
The TCG specifications are designed for all non-volatile storage devices. We have something called the "core" specification, and the "security subsystem classes (SSCs)". The core provides the generic means of capturing flash and tape, just as was demonstrated by the SSCs for hard drives and removable optical drives. While you are correct that recently the flash drive and tape drive vendors have not been active, several that you mention did contribute to and participate in the core specification.
... On the topic of astro-turfing / paid-for user reviews, I think the user discussion in this Tech Republic column might be of interest to you:
Ho ho ho ho ho ........
On June 30 2008 Sun's shares were $10.88 and its market capitalisation was $8.04bn. Red Hat shares were $20.69 and its market capitalisation was $3.93bn. The coming together has been happening over many months and, at the moment, Red Hat has actually passed Sun in market capitalisation.
Sent to me: "Unfortunately, the tools that Seagate provides to obtain the firmware rev level do not run under Windows Vista 64 bit. Not being able to speak to a person at Seagate, nor wanting to tear apart one of my other systems to test a 1 week old drive, I returned it to the vendor (Best Buy). They agreed that it was best just to let Seagate resolve the issue, and happily exchanged the drive for a WD Caviar."
A poster sent this to me anonymously:
"Re: Seagate Firmware issues.
This will interest you greatly I suspect. Think of it as me helping out in whatever little way I can.
My email is invalid, by the way. Sorry, but it's necessary."
It's an interesting post from a Seagate employee (or a facsimile of one) about the firmware process inside Seagate.
Sent to me:-
You reported on the 18th that Seagate was offering free data recovery services for their dead drives. Apparently that's no longer the case. When they issued that firmware update that bricked all their 500GB 7200.11 drives (apparently they don't believe in testing) they also amended their firmware update page: http://seagate.custkb.com/seagate/crm/selfservice/search.jsp?DocId=207951.
"As Seagate does not warrant the data on your drive, in addition to regular back-ups, if possible, your data should be backed up before upgrading the drive firmware."
This is echoed in this forum post: http://forums.seagate.com/stx/board/message?board.id=ata_drives&thread.id=4457.
"This is a good place to reiterate part of Seagate's warranty policy: -- What Does Our Warranty Not Cover? Our warranties do not cover any problem that is caused by (a) commercial use; accident; abuse; neglect; shock; electrostatic discharge; heat or humidity beyond product specifications; improper installation; operation; maintenance or modification; or (b) any misuse contrary to the instructions in the user manual; or (c) lost passwords; or (d) malfunctions caused by other equipment. Our limited warranties are void if a product is returned with removed, damaged or tampered labels or any alterations (including removal of any component or external cover). Our warranties do not cover data loss – back up the contents of your drive to a separate storage medium on a regular basis. Also, consequential damages; incidental damages; and costs related to data recovery, removal and installation are not recoverable under our warranties. -- This is not unique to Seagate. We know of no storage company that includes data recovery as part of their product warranty.
Again, please make sure that you always have a backup of all important data. A backup is defined as a copy of data in a second, separate storage media of whatever kind. More information on backups found here.
Feel free to discuss do-it-yourself fixes. Please be aware that some, many, or all may void the drive's warranty, so if you have any questions about the method you see here, your best bet is to contact Customer Service directly whether by phone, email, or chat."
Looks like they're not offering it after all. Which I suppose is logical from a business standpoint. Providing data recovery for all those bricked drives wouldn't be cheap.
Sent to me:-
I've got a couple of the ST31000340AS drives and the original upgrade wasn't working for them at all - it couldn't detect the right drive when you tried running the upgrade util. I checked back to the same page on Seagate's site and saw they've released an updated version of the fix at 22:15 UK time today (19th Jan).
Thought you might be interested to know.
The updated version appears to have worked successfully with the drives now showing the updated firmware version.
Cheers .... Chris.
Sent to me:-
You mentioned in this article that Vista doesn't have a real backup facility. However, Vista Ultimate (and, I think, the Enterprise version) definitely has an outstanding backup ability.
The backup in Vista goes to any location you want: HDD, network share, or DVD. I'll ignore DVD due to the limited size.
The backup uses a single-instance-store of any given sector. Thus, the first backup takes a bit of time, but the remaining backups are quite fast. Also, the restore process actually works. You can restore your machine from a standard Vista installation DVD.
Please update your article, and try it out. It's already saved my bacon a few times.
Caveat: Yes, I work at Microsoft. No, I did not work on the backup solution. Yes, I know there are shortcomings. But, at least give partial credit... :)
= = = = = = =
Happy to do so with this comment.
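The single-instance-store idea the reader describes can be sketched roughly like this (my own illustration, not Microsoft's implementation): each fixed-size block is stored once, keyed by its hash, so a repeat backup that shares most blocks with the first adds very little new data - which is why the first backup is slow and the rest are fast.

```python
import hashlib

class BlockStore:
    """Toy single-instance store: each unique block is kept exactly once."""
    def __init__(self):
        self.blocks = {}  # sha256 hex digest -> block bytes

    def backup(self, data, block_size=512):
        """Record data as a manifest of block hashes.
        Returns (manifest, number of blocks newly stored)."""
        manifest, new = [], 0
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            h = hashlib.sha256(block).hexdigest()
            if h not in self.blocks:
                self.blocks[h] = block
                new += 1
            manifest.append(h)
        return manifest, new

    def restore(self, manifest):
        """Rebuild the original bytes from a manifest of block hashes."""
        return b"".join(self.blocks[h] for h in manifest)

store = BlockStore()
first = bytes(1024) + b"report-v1".ljust(512, b"\0")
m1, new1 = store.backup(first)    # everything is new on the first pass
second = bytes(1024) + b"report-v2".ljust(512, b"\0")
m2, new2 = store.backup(second)   # only the changed block is stored
print(new1, new2)
```

The second backup only stores the one block that changed, yet both manifests restore their full images.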
Before criticizing Poland, ask yourself why Limerick got that plant in the first place: Because Irish labor used to be the cheapest in the Union!
This is why the EU exists. I have observed "enlargement" since the Six. Each time, the new members have benefited from investors looking for cheap labor. Each time, those labor costs have risen faster than the EU average, until they (almost) catch up with the "core countries". Economic growth in the core countries has been a bit slower than it might have been, while economic growth in the enlargement countries has played catch-up.
Now it is Ireland's turn to feel dismay at the end of 6% growth, while Slovakia takes off.
That's what the Union is SUPPOSED to do!
In reply to 'You've confused me' - the four drives being replaced will, I understand, be short-stroked ones, and the used capacity will be less than the nominal capacity. I don't know what Samsung thinks the used capacity will be but, if one 100GB SSD can replace four HDDs, then the used capacity would be 25GB per HDD. That's my understanding, and we all know that Samsung is putting its absolute best case forward.
This came as a mail to me from a Register reader:-
Right after reading the article, the following appeared in email, with only 4 days' notice:
This is to notify you that Centera Developers Portal (Lighthouse) is moving to the new EMC Community Network site. The new site has improved functionality, features, blogs, discussion forums and enhanced search engine.
Please note that the following will no longer be available after December 19th, 2008:
* Lighthouse website: http://lighthouse.developer.emc.com
* Centera Developers Portal Forums http://lighthouse.developer.emc.com/developer/devcenters/CAS-Centera
Hmm, end-of-life (EOL) for Centera? If all the software developers are gone, and the project managers and top Centera people too, what does this mean? It would seem that EMC is getting ready to announce an EOL for Centera. They say one thing but actions speak louder than words: corporate staff gone, developers being let go, website discontinued, err, moved...
No crack about solace - why not? I thought about it, but it seemed too obvious for the sophisticates of the El Reg community. Out of consideration for the feelings of said highly-valued community, I thought we'd better give you some solace from the great quantity of Quantum of Solace jokes...
My understanding, from NetApp, is that the 3040 and 3070 were replaced by the 3140 and 3170. The 3020 was kept on as the entry level to the 3000/3100 line. It is going away leaving a 2-model line-up and the previously presumed gap for a mid-range 3100 model. Now along comes the 3160 and we have a 3-model 3100 line.
Christopher Scholz writes: I enjoyed the article on extending microlithography into the sub-10 nanometer range, but I feel I must correct you on a few things. Firstly, in the article you mentioned that light is used to harden a photoactive, etch-resistant film. While so-called "negative" photoresists, which do harden on exposure, do exist, they are very infrequently used in the micro-fabrication industry, being reserved for larger items like circuit boards and lithographic prints. The problem with them lies in the fact that their primary mode of action is polymeric chain linkage. As the polymer chains grow in size and begin to intertwine, the resist film begins to swell. As this happens, it changes the ultimate size of features due to lateral swelling, and alters the focus and exposure dose characteristics non-linearly in the up-down direction. As I'm sure you can imagine, a change of only a couple of nanometers is a huge swing when you're talking about <11nm features.
In the industry we use "positive" resists. Their primary mode of action is a softening of the film where light hits, by "chain scission". This makes the film easier to dissolve and wash away with a developing compound such as TMAH (tetramethylammonium hydroxide). In order to keep the film in place during exposure, it is first soft-baked. After exposing, a hard bake firms up the film even more, and advances the reaction in areas exposed to light to make the TMAH (or similar developer) more effective.
Secondly, while reducing feature size below 11nm is theoretically possible, quantum effects begin to take over and the microchip as we know it will no longer be possible. The problem lies in quantum effects of electrons. At 11nm, it's no longer possible to predictably contain an electron within a gate channel. Tunneling effects become significant enough to make the narrowing of transistors a futile effort.
It's interesting technology, but I personally don't believe it will be useful in extending Moore's Law. Perhaps "Otellini's Maxim"? Even if this does allow us to double the count of transistors per unit die area, it will probably take more than 18 months from the previous node reduction.
This came to me anonymously: Another way to look at this is that during hard times NetApp is going to try to force its customers to upgrade to a system they don't need. "He wouldn't confirm a rumour that a new FAS 3160 model would be announced soon to fit between the 3100 entry-level FAS 3140 and high-end FAS 3170. The 3020, a smaller and older array than the 3140, will be end-of-lifed soon though, reducing the number of models in the FAS range by one and, we could read it this way, creating a space in the range for a new mid-range model."
Many times in the past NetApp has superseded Filers with newer models that have lower performance than the older model had. The best example of this was the FAS980, after it was EOL'ed it took NetApp almost two years to market a system that had superior performance and capacity to the FAS980.