Intel: tweak Nehalem's knobs to hit high notes

Despite a tightened budget, Intel's information chief, Diane Bryant, claims the company saved $19 million by upgrading its older servers this year rather than deferring a hardware refresh until 2010. Chipzilla's pitch for its Nehalem-based chips during a year when most IT budgets are pancake flat is focused squarely on …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    But but but

    Hyperthreading is all about parallelisation, and all that Intel can bring to the table in terms of performance increases is more, more, more parallelisation. Now Facebook tell us that parallelisation isn't delivering for them.

    If we can't trust Intel and Facebook, who can we trust?

  2. DrZarkov

    Developers need faster cores, not just more of them.

    For the last several years Intel seems to have thrown in the towel on making individual cores significantly faster, instead simply relying on process shrinks to cram more of the same into a chip package. Unfortunately for the rest of us, most software on planet Earth does not take much advantage of additional cores; in fact, some software actually slows down marginally as more cores are added!

    Taking multi-core out of the equation and looking at single cores only, Moore's Law ran out of steam at Intel somewhere around 2004.

    That said, how a platform like Facebook with its thousands (tens of thousands?) of concurrent users can fail to take at least some advantage of hyperthreading within a core and a shift from 2 to 4 to 6 cores is beyond me. Perhaps they need better developers? Consider a move to developing in Erlang?
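
    To put a number on those diminishing returns: Amdahl's Law says that if a fraction s of a program is stubbornly serial, n cores can never speed it up beyond 1/s. A minimal sketch in Python (the 40% serial fraction is purely illustrative, not a measured figure):

        # Amdahl's Law: speedup on n cores when fraction s of the work is serial.
        def amdahl_speedup(s, n):
            return 1.0 / (s + (1.0 - s) / n)

        # A program that is 40% serial barely benefits past a handful of cores.
        for cores in (1, 2, 4, 6, 8):
            print(cores, round(amdahl_speedup(0.4, cores), 2))
        # 1 -> 1.0, 2 -> 1.43, 4 -> 1.82, 6 -> 2.0, 8 -> 2.11; the cap is 2.5x.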

  3. Richard 22
    Thumb Down

    @DrZarkov

    No, developers don't need faster cores, they need to get better at efficiently parallelising their software and using multiple threads, or just make their algorithms more efficient. Only lazy devs would assume that the only way to speed something up was to run a single thread faster.
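
    For instance, a minimal sketch of that kind of parallelisation in Python; the workload function `crunch` and its inputs are invented for illustration, and processes are used rather than threads because CPython's GIL keeps threads from running bytecode in parallel:

        from concurrent.futures import ProcessPoolExecutor

        def crunch(n):
            # Stand-in for a CPU-bound task (hypothetical workload).
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            inputs = [2_000_000] * 8
            serial = [crunch(n) for n in inputs]    # one core does everything
            with ProcessPoolExecutor() as pool:     # same work, spread over cores
                parallel = list(pool.map(crunch, inputs))
            assert serial == parallel               # same answers, less wall time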

  4. peter ashworth
    Megaphone

    so when are we going to see the manual?

    if Intel ships the chips set up a certain way, and that setup may not provide the best performance, then what the user needs is a manual suggesting beneficial tweaks

    so......... where's that manual?

  5. DrZarkov

    @Richard22

    You are seriously suggesting developers re-code to take advantage of new hardware/architecture? I am not a developer, I'm more a sysadmin, but I can't remember a single time in history when developers radically changed their methods/practices to better fit the hardware.

    Everyone was going to re-code for Itanium, weren't they? Too hard.

    How about the incredible power of cheap, fast GPUs? Even software that would suit their particular kind of performance doesn't get re-coded. I do know that parallel programming is very hard, and most languages don't make it any easier... (hence the Erlang ref). We have had multi-core in the x86 market for 5-6 years, and still Microsoft and Apple are announcing "some special multi-core features coming soon in the next version of their OS". At this rate we should have multi-core/multi-thread applications in widespread use by about 2050...
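
    Since the Erlang ref keeps coming up: the share-nothing, message-passing style it encourages can be approximated in mainstream languages too. A toy sketch using Python's multiprocessing (the squaring workload is made up; the point is that the worker shares no state and only receives and sends messages):

        import multiprocessing as mp

        def worker(inbox, outbox):
            # Share-nothing worker, Erlang-style: all communication is via messages.
            for item in iter(inbox.get, None):   # None acts as the shutdown signal
                outbox.put(item * item)

        if __name__ == "__main__":
            inbox, outbox = mp.Queue(), mp.Queue()
            p = mp.Process(target=worker, args=(inbox, outbox))
            p.start()
            jobs = list(range(5))
            for n in jobs:
                inbox.put(n)
            inbox.put(None)                      # tell the worker to stop
            results = [outbox.get() for _ in range(len(jobs))]
            p.join()
            print(results)                       # [0, 1, 4, 9, 16]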

  6. Anonymous Coward

    @Richard22

    Look Richard, I don't know what your experience is, but my multiprocessor experience dates back to the 1990s. See, the thing is, good designers aren't common, but bad coders are. If you increase the complexity of things by adding multiprocessor synchronisation to the coder's challenge, benchmark performance may appear to increase when the circumstances are right, but real-world productivity (which is harder to measure) and customer satisfaction (ditto) will generally decrease because the apps and systems will crash more frequently than they already do.
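
    To make the synchronisation hazard concrete: the classic lost-update race, as a toy Python sketch rather than anyone's production code (the loop counts are arbitrary):

        import threading

        counter = 0
        lock = threading.Lock()

        def increment(times, use_lock):
            global counter
            for _ in range(times):
                if use_lock:
                    with lock:       # the lock serialises the read-modify-write
                        counter += 1
                else:
                    counter += 1     # read-modify-write is not atomic: updates get lost

        threads = [threading.Thread(target=increment, args=(100_000, True))
                   for _ in range(4)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(counter)  # 400000 with the lock; often mysteriously less without it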

