FIFTEEN whole dollars on offer for cranky Pentium 4 buyers

Intel will fork over fifteen whole American dollars to folks who feel that it and HP misrepresented the performance of Pentium 4 CPUs released way back in the year 2000. The reason for the payment is a class action that alleges Intel knew the Pentium 4 was a dog, worried AMD would eat its lunch, and therefore cooked up new …

  1. John Tserkezis

    Oh well, at least the lawyers got rich.

    <sarcasm>That's the important thing.</sarcasm>

    1. Mark 85

      Re: Oh well, at least the lawyers got rich.

      and some said that justice wouldn't be served. <rolls eyes>

    2. ecofeco Silver badge

      Re: Oh well, at least the lawyers got rich.

      Lawyers in Love

      - Jackson Browne, 1983

      (the original music video. funny how it's even more relevant)

  2. Destroy All Monsters Silver badge
    Mushroom

    Trusting Intel on anything ever

    Sadly these people don't get paid with greenbacks sporting trollface.

  3. Mage Silver badge

    Rats

    I got one in 2002, someone else paid for it and I don't live in America. Foiled again.

    At least it still works.

    1. chivo243 Silver badge

      Re: Rats

      I'm not in Amerika either; we had half of our Windows environment on HP/Compaq P4s, maybe 300 in total...

  4. Ralph B

    Mass Protest Opportunity

    > The claim form doesn't require a receipt.

    $15 doesn't sound a lot, but if the entire population of the USA (except Ohio) could be persuaded to make a claim ... ?

    1. Jeff Parker

      Re: Mass Protest Opportunity

      ... if the entire population of the US could be persuaded to make a claim ... the figure wouldn't be too far off Intel's reported profit for 3 months of this year. Scary numbers there.

      ref: http://www.bbc.co.uk/news/business-29622482

  5. Gordan

    Pentium 4 didn't suck.

    It is merely a case of developers (including most compiler developers) being too incompetent to leverage its capabilities efficiently.

    See here for relevant performance comparison data, with well-written C code (no assembly), of P3 vs. P4 using different compilers:

    http://www.altechnative.net/2010/12/31/choice-of-compilers-part-1-x86/

    Note that with crap compilers the P4 did indeed perform relatively poorly. OTOH, with a decent compiler (which annihilated a crap yet ubiquitous compiler on any CPU), the P4 shows a very significant per-clock throughput increase over the P3.

    The point being that software is written for hardware, not vice versa. Don't blame the hardware manufacturer if you are too incompetent to use the equipment to its full capability.
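
    (Illustrative aside: a minimal sketch, not taken from the linked article, of the kind of loop where this shows up. An SSE-aware build, e.g. gcc -O2 -ftree-vectorize -march=pentium4, can vectorise it with packed SSE instructions, while a lowest-common-denominator i386 build grinds through it one scalar operation at a time.)

        #include <stddef.h>

        /* Textbook saxpy: y = a*x + y. A vectorising compiler targeting
           the P4 can emit packed mulps/addps over four floats per
           iteration; a generic i386 build cannot, which is much of
           where the per-clock gap comes from. */
        void saxpy(float *restrict y, const float *restrict x,
                   float a, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }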

    1. Peter Gathercole Silver badge

      Re: Pentium 4 didn't suck. @Gordan

      You make a very good point, but you ignore that compiling for a particular processor, using all of the features of that processor breaks the "compile once run anywhere" ubiquity of the Intel x86 and compatible processors.

      If this class action lawsuit is providing relief for home users, these are people who will buy a system and install code that is compiled to a common subset of instructions for the processors it is expected to run on. They are certainly not going to re-compile the applications they buy, let alone the operating system and utilities (you have to admit that dominant players providing x86 operating systems do not make it easy for a user to recompile the code even if they wanted to).

      Imagine if, when buying a program, you had to check not only which versions of Windows it would run on, but which processor (I know, some games did, but they are a special case).

      I also know that it is perfectly possible for an application or OS provider to provide smart installers that identify the processor at install time, and install the correctly compiled version for the processor. Or even put conditional code in that detects at run time which libraries to bind, or which path through the code to select.

      Each of those last alternatives leads to significant bloat in either the install media or, even worse, the disk and memory footprint of the installed code. And that is not to mention the support nightmare of having several different code paths to do the same thing on different processors.

      No, the shrink-wrap application providers will write their code for a common subset of features, and that is what the Pentium 4 was weak at. The same binaries often ran slower on Pentium 4 than on Pentium III processors at the same clock speed (and when launched, the Pentium 4s did not run at the high clock speeds they later achieved). And later processors such as the Pentium M and Core architecture processors, which used more of the Pentium III architecture with the 'good' bits of the Pentium 4 grafted on, show that Intel eventually got the message that the Pentium 4 was a dead end. I'm surprised they contested this, although I guess that this case is all about benchmark deception rather than the ultimate speed.

      1. Nigel 11

        Re: Pentium 4 didn't suck. @Gordan

        You make a very good point, but you ignore that compiling for a particular processor, using all of the features of that processor breaks the "compile once run anywhere" ubiquity of the Intel x86 and compatible processors.

        Not broken. It'll run, just less efficiently.

        Most programs that seriously tax a modern CPU, or even a decade-old one, have a small fraction of the code that accounts for a large fraction of the CPU usage. So it's very valuable for the 90% of the code that isn't executed so intensively to run, albeit inefficiently, on any CPU with that architecture.

        As for the other 10%, distribute it as a separate library, containing something that determines which processor it is running on, and multiple compilations of the same code with different optimizations. Then dispatch to the appropriate one. As CPU and compiler technology advances, you can ship an update that replaces the library with better code (yet derived from the same source, on the unlikely assumption that no bugs needed fixing).

        These days you may well also bundle versions that don't run on the CPU at all but on a GPGPU, if the dispatcher can find one and if the gain is worth having.
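
        (A minimal sketch of that dispatch scheme, assuming GCC's CPU-feature builtins; the hot_loop_* names are made up for illustration. Probe the CPU once at startup, then call the hot routine through a function pointer.)

            #include <stdio.h>

            static void hot_loop_generic(void) { puts("generic i386 path"); }
            static void hot_loop_sse2(void)    { puts("SSE2 path"); }

            typedef void (*hot_loop_fn)(void);

            /* Probe the CPU once; everything else calls through the pointer. */
            static hot_loop_fn pick_hot_loop(void)
            {
                __builtin_cpu_init();                /* GCC: populate feature flags */
                if (__builtin_cpu_supports("sse2"))  /* the P4-era feature of interest */
                    return hot_loop_sse2;
                return hot_loop_generic;
            }

            int main(void)
            {
                hot_loop_fn hot_loop = pick_hot_loop();
                hot_loop();
                return 0;
            }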

        1. Peter Gathercole Silver badge

          Re: Pentium 4 didn't suck. @Nigel 11

          I don't claim to be an expert in Intel x86 architecture, but I believe that some of the more specific features may have led to additional instructions being added to the ISA. That is certainly the case in other processor families I have used.

          In order for code that uses these instructions to run on processors that do not implement the instructions, it is necessary to be able to trap the 'illegal instruction' interrupt, and do something appropriate.

          If you did not trap the illegal instruction, the OS would at best kill the process, or at worst, crash the whole system.

          In the case of the MicroVAX and early PowerPC processors, you would call code that emulated (slowly) the missing instruction, which had to be part of either the OS, or the runtime support for the application. I've not heard of that happening in the Intel/Windows world, although I'm not discounting that it may be there.

          In the S/370 world, instead of emulation code, it was possible to trap such things in alterable microcode, this being the method that IBM used to 'add' additional instructions to the S/370 ISA for specific purposes to allow application speed-ups.
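
          (On Unix-like systems the trap surfaces as SIGILL, so the plumbing looks roughly like this sketch; the decode-and-emulate step, which is the hard part, is only stubbed out here.)

              #define _POSIX_C_SOURCE 199309L
              #include <signal.h>
              #include <stdio.h>
              #include <stdlib.h>

              /* Invoked on an illegal-instruction trap. A real emulator would
                 decode the opcode bytes at info->si_addr, emulate the missing
                 instruction, advance the saved instruction pointer in the
                 ucontext, and return so execution carries on. */
              static void on_sigill(int sig, siginfo_t *info, void *ucontext)
              {
                  (void)sig; (void)ucontext;
                  fprintf(stderr, "illegal instruction at %p\n", info->si_addr);
                  exit(1);   /* this stub merely avoids the default process kill */
              }

              int main(void)
              {
                  struct sigaction sa = {0};
                  sa.sa_sigaction = on_sigill;
                  sa.sa_flags = SA_SIGINFO;
                  sigaction(SIGILL, &sa, NULL);
                  /* ... run code that may use instructions this CPU lacks ... */
                  return 0;
              }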

        2. Aitor 1

          Re: Pentium 4 didn't suck. @Gordan

          ¿Are you serious?

          That would be VERY expensive, as you would have to test all the binaries... and support them.

          When I still wrote C++ and VB programs, we just compiled for the minimum processor we expected as the target. That would be a 486, but with the Pentium div bug 'fixed'. And I'm talking up to 2001, compiling for 486 or Pentium.

          This is also the reason why now, at the end of 2014, many programs are still compiled as 32-bit: it will work in both the 32- and 64-bit worlds, and you just have to test one of them.
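
          (That "compile for the minimum target" discipline can even be enforced at build time. A sketch, assuming a 32-bit build with GCC-style predefined macros and a baseline that predates SSE:)

              /* baseline_check.h - include from every translation unit.
                 GCC-compatible compilers predefine __SSE__/__SSE2__ only when
                 -m flags raise the target above the agreed baseline, so a
                 stray -msse2 breaks the build loudly instead of crashing on
                 a customer's 486 or Pentium. (On x86-64, SSE2 is always on,
                 so this check only makes sense for 32-bit builds.) */
              #if defined(__SSE__) || defined(__SSE2__)
              #error "Build exceeds the agreed minimum target; drop the -msse flags"
              #endif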

      2. Gordan

        Re: Pentium 4 didn't suck. @Gordan

        "You make a very good point, but you ignore that compiling for a particular processor, using all of the features of that processor breaks the "compile once run anywhere" ubiquity of the Intel x86 and compatible processors."

        This would be an excellent point if it were the case - but it isn't. When I was doing the above testing I found that the code in question, built with only P3 optimisations using Intel's compiler, performs near identically on the P4 to the code optimised for the P4.

        P4 was more sensitive to really bad binary code, which happens to be what most compilers produce even today, but if the developers had done their homework during the years of the previous generation of CPUs (P3) it wouldn't have been a problem. Unfortunately, the disappointing reality is that the vast majority of software sucks, compilers included.

  6. Doctor_Wibble

    That quote...

    For those like me who recognised the 'movie' quote but couldn't remember where it was from, or the original wording...

    Trading Places - a response to getting a Christmas Bonus:

    http://en.wikiquote.org/wiki/Trading_Places#Dialogue

    Randolph Duke: Ezra. Right on time. I'll bet you thought I'd forgotten your Christmas bonus. There you are.

    Ezra: Five dollars. Maybe I'll go to the movies... by myself.

    Mortimer Duke: Half of it is from me.

  7. rblythe

    Thank You El Reg!

    I still run an HP-EPC42 connected to a network test lab at home. HP/Intel will be buying my monthly tea tin.

  8. This post has been deleted by its author

  9. Rol

    Intel, from the very beginning, took the x86 architecture and made it their very own. Instruction sets that should have been agreed on a universal basis were constantly added to by Intel without any consideration for the future.

    The result meant newcomers like AMD had a very unfair fight on their hands, as Intel would purposefully use up unallocated opcode space to frustrate competitors' attempts to challenge Intel's dominance.

    As consumers, we have been very poorly served by the uncontrolled dominance of one chip flinger, to the point that the x86 architecture is now exceedingly inefficient in the way it handles instructions.

    I hope any future processor technology that looks to be gaining market share is quickly tied to an overseeing body that stops the likes of Intel from "weaponising" the instruction set to defend their dominance.

    1. unitron
      Boffin

      Wasn't Intel the reason...

      ...that there was an x86 architecture in the first place?

    2. Nigel 11

      Intel, from the very beginning, took the x86 architecture and made it their very own. Instruction sets that should have been agreed on a universal basis were constantly added to by Intel without any consideration for the future.

      Really? Sun, MIPS, IBM, Digital, DG, and numerous others all consulted their customers and competitors before adding to or changing their instruction sets?

      I've speculated before that Intel is a smart company and recognises the danger of becoming a regulated monopoly. It therefore needs AMD to remain in business, so it has competition. Once Intel tripped, AMD got ahead (with the Opteron and the 64-bit instruction set), and Intel had a few tough years catching up (including some unfair marketing). Now Intel is so far ahead it is in danger of putting AMD out of business. If I'm right, Intel is about to find some way to give AMD a break. Perhaps it might purchase the rights to use ATI graphics technology? Intel's graphics architecture is most definitely not up there with its CPUs. (Edit) Hardware-wise, that is. Intel's graphics driver software and Linux support are excellent.

    3. P0l0nium

      Inefficient ???

      "x86 architecture is now exceedingly inefficient in the way it handles instructions"

      So ask yourself ... If it's so damned inefficient then why is it the dominant server architecture??

      Why is it on a par with all but the most exotic ARM based tablet SOCs??

      Why are 80 percent of Chromebooks X86 based??

      Why are 85 percent of the worlds top500 supercomputers X86 based ??

      Answer: because it's NOT "exceedingly inefficient" by any reasonable measure.

      1. the spectacularly refined chap

        Re: Inefficient ???

        Answer: because it's NOT "exceedingly inefficient" by any reasonable measure.

        No, the real answer is much simpler than that: Intel's chequebook. Look at how many companies adopt the latest generations of semiconductor fab - there used to be a dozen or more at the cutting edge; now it is essentially Intel, and everyone else is at least a generation behind. No-one else can afford to roll out a number of $10 billion plants. All that manufacturing tech is going on one thing: redressing the huge disadvantage that the x86 architecture brings.

        This is old news: consider that almost 30 years ago, Stanford MIPS and the Intel 386 were released in the same year. On the one hand you had the dominant commercial player with the deepest pockets; on the other, a small university research team. Guess which of them produced the processor ten times faster than the other? If x86 is so efficient, how is that even possible?

        Of course things have moved on since then: Intel have gone down the route of ever longer pipelines, ever smarter branch prediction and ever larger caches. A lot of that silicon area isn't being used in a particularly desirable way; it's simply engineering their way out of a corner. If you have a pipeline 20-odd stages long that isn't good, it's insanity - as the pipeline gets longer, the number of difficult corner cases that need to be addressed (branch prediction misses, operands whose values are unknown, etc.) increases exponentially. Fixing them all requires yet more silicon and yet more engineering, resources that, given a more efficient architecture, could be used to greater effect elsewhere.

        Again, that isn't news. Cast your mind back now only ten years. The engineers at Sun looked at how pipelines, branch prediction and caches were steadily becoming more and more complex and they asked themselves if that was really a sensible approach. They decided not and went for a much simpler design with a large number of threads and a large number of cores. At the time of its release the result, Niagara, was the world's fastest microprocessor bar none. Again, from a design team with a small fraction of Intel's resources. Why can't Intel tell a similar story?

        Intel have some of the best VLSI engineers out there. Year after year they manage to make processors that perform reasonably well essentially by throwing money at the problem, both the design budget and investment in first-rate manufacturing facilities. That does not make the design itself efficient: give them a clean slate and the same level of resources and just imagine what they could come up with.

        1. david 12 Silver badge

          Re: Stanford MIPS ???

          "This design eliminated a number of useful instructions such as multiply and divide ... the chips could run at much higher clock rates."

          In practice, the advanced compiler design, much higher clock rates and cheaper silicon didn't translate into a fundamental advantage in end-user speed.

          It turned out, firstly, that you could get the same clock speed on silicon that did include "multiply and divide", and secondly, that you could implement in silicon the compiler techniques that Stanford MIPS pioneered to work around the limitations of their simplified-instruction, deeply-pipelined design.

          1. the spectacularly refined chap

            Re: Stanford MIPS ???

            "This design eliminated a number of useful instructions such as multiply and divide ... the chips could run at much higher clock rates."

            Is Wikipedia the best you can quote? Especially when it is wrong? For reference I cite this instruction set summary, specifically page 59 (page 5 of the PDF). Do you notice how multiply instructions are included?

            In practice, the advanced compiler design, much higher clock rates and cheaper silicon didn't translate into a fundamental advantage in end-user speed.

            Err... no. You have also bypassed the greatest single argument of the RISC/CISC debate: the argument for RISC isn't that compilers are smart, but that they are dumb, and indeed still are, as the Niagara example demonstrated. Give me a single example of a C language statement that will be compiled down to an XLAT instruction. If you can't, what is it still doing there except for backwards compatibility?

            And yes, it was that much faster in real-world conditions. MIPS wasn't appreciably faster than the 386 in terms of clock speed; the difference was a simple 1 clock/1 instruction rule as opposed to an average of 7 or 8 clocks per instruction on the 386. The Programmer's Reference Manual is still out there if you want to look that up.

            It turned out, firstly, that you could get the same clock speed on silicon that did include "multiply and divide"

            Which MIPS did.

            that you could implement in silicon the compiler techniques that Stanford MIPS pioneered to work around the limitations of their simplified-instruction, deeply-pipelined design.

            Do you actually have any clue what you are talking about here? MIPS was a five-stage pipeline. How is that deeper than 20+ stages? You've used a clearly incorrect reference to "debunk" one example and in the process shown yourself to be pig-ignorant of the entire discipline.

            Once again: reality does not change according to what you want to be true. Next time try coming up with some valid arguments.

          2. This post has been deleted by its author

            1. david 12 Silver badge

              Re: Stanford MIPS ???

              >It blows the 386 out of the water

              For certain values of "blows out of the water". But the points I was trying to make are repeated in the article you reference:

              >RISC processors couldn’t tap into the huge, non-portable software installed base except under emulation

              That is, they ran slower.

              >This allowed the Pentium Pro to reach a clock speed of 200 MHz

              The simplified die design did not, as expected, allow the MIPS machine to clock faster than CISC machines

              >The Pentium Pro combined an innovative new out-of-order execution superscalar x86 microprocessor

              Those were the compiler ideas which were the other half of how RISC computers started out faster than existing CISC computers.

              1. the spectacularly refined chap

                Re: Stanford MIPS ???

                That is, they ran slower.

                Which confirms the original point - the design of the instruction set is not an abstract decision, it has real world consequences. That is true whether instruction decoding takes place in silicon or in software. It's better to eliminate that overhead with an ISA that doesn't require such complex decoding.

                The Pentium Pro combined an innovative new out-of-order execution superscalar x86 microprocessor

                So, in illustration of your point, you use the Intel processor that was slower than its predecessor on that same installed software base. Even the Pentium before it failed to shine on legacy code that was not Pentium-optimised, because of early issues with that long pipeline, i.e. the very issue raised in my initial post.

                The simplified die design did not, as expected, allow the MIPS machine to clock faster than CISC machines

                Clock speed was never a claimed improvement of RISC; rather, the optimised metric was instructions per clock. Clock speed is only partly down to architectural design; a lot of it is the process used.

  10. John Savard

    Pipelines

    Of course, nowadays people understand pipelined computers better. The fact that a Pentium 4 had twice the megahertz of its predecessors, but its instructions took twice as many cycles to execute, did not mean that it was no more powerful than they were: it could still be executing twice as many instructions in parallel at the same time.

    But this was before "Hyperthreading" and multi-core processors, so a recompile might well have been needed to take advantage of the extra power.

    1. Gordan

      Re: Pipelines

      Deep pipelines were the very reason hyperthreading (and, more generically, SMT) was invented. Context switching requires a full pipeline flush: any instruction that hasn't completed gets reset and stacked away. If the pipeline is deep, many instructions could have been in flight, so resetting is very expensive.

      Adding an extra hardware thread means that you halve the number of context switches.

      The other reason is memory latency. At the internal speeds of the P4, the wait for RAM became very expensive in relative terms. With twice as many processes scheduled to run, there is twice as high a chance that the data for at least one will be in the on-die cache.
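
      (The latency-hiding argument is easy to demo. A sketch with POSIX threads - compile with -pthread; the sizes are illustrative. Each thread does dependent loads through a huge shuffled ring, so it stalls on memory constantly; on an SMT core, two such threads overlap their stalls, and the two-thread run finishes in not much more time than the one-thread run.)

          #define _POSIX_C_SOURCE 199309L
          #include <pthread.h>
          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>

          #define N (1u << 22)      /* working set far larger than any cache */
          #define STEPS 20000000L

          static size_t ring[N];

          /* Each load depends on the previous one, so a single thread
             cannot overlap them: it simply stalls on memory. */
          static void *chase(void *arg)
          {
              (void)arg;
              size_t i = 0;
              for (long s = 0; s < STEPS; s++)
                  i = ring[i];
              return (void *)i;
          }

          static double run(int nthreads)
          {
              pthread_t t[2];
              struct timespec a, b;
              clock_gettime(CLOCK_MONOTONIC, &a);
              for (int k = 0; k < nthreads; k++)
                  pthread_create(&t[k], NULL, chase, NULL);
              for (int k = 0; k < nthreads; k++)
                  pthread_join(t[k], NULL);
              clock_gettime(CLOCK_MONOTONIC, &b);
              return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
          }

          int main(void)
          {
              /* Sattolo's shuffle: one big cycle, which defeats the prefetcher. */
              for (size_t i = 0; i < N; i++) ring[i] = i;
              for (size_t i = N - 1; i > 0; i--) {
                  size_t j = (size_t)rand() % i;
                  size_t tmp = ring[i]; ring[i] = ring[j]; ring[j] = tmp;
              }
              printf("1 thread:  %.2f s\n", run(1));
              printf("2 threads: %.2f s\n", run(2));
              return 0;
          }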

    2. Solmyr ibn Wali Barad

      Re: Pipelines

      "Instead, it could still be executing twice as many instructions in parallel at the same time"

      Could, but with generic i386 code, it was just idling twice as fast. And no amount of marketeering could change that. Blaming the lazy developers didn't work either.

      What a comedy it was. I'm almost tempted to pay them $15, for having watched the show over many years.

  11. Valerion

    Is there a limit?

    I definitely recall buying 1,000,000 of these. Can I claim $15m?

    I've lost the receipt but you say it isn't needed.

  12. Shane McCarrick

    I've half a dozen chips sitting here in a box.

    What do you reckon Intel would give me for them if I bring them back (and I mean bring them back - Intel's factory is just up the road from me here in Leixlip, Co. Kildare, Ireland........)

    15 quid apiece would be my groceries (for a family of 4) sorted for the week........

  13. Crazy Operations Guy

    If you buy something because of benchmark figures...

    ...then you get what you deserve.

  14. JustNiz

    This is bullshit. You have to have bought a pre-built PC; it does nothing for people like me who actually bought Intel P4s and other components separately to build their own PCs.

  15. ecofeco Silver badge

    $15 dollars? WOOT!!

    Real money you say? Let's see, they should get the check... sometime next year.

    Maybe. If they call every month to remind the bastards to send it. If they can prove they actually bought said hardware 15 years ago.

    ...and weren't born on a day that ends in the word "day." And own a white buffalo.

    Nope. No fascism here!

  16. Steve 129

    Better than the $0.12 I received from an AT&T class action 'payment'!!! Postage was $0.35.

    Class action lawsuits are a complete waste of everyone's time, except for the lawyers representing the case.

    1. chris lively

      I agree.

      Total fees should be capped at something like 10% of the bounty whenever a lawsuit reaches class action status.

    2. Aitor 1

      Disagree

      It means the consumers can fine badly behaved companies.
