Core blimey... When is an AMD CPU core not a CPU core? It's now up to a jury of 12 to decide

A class-action lawsuit against AMD claiming false advertising over its "eight core" FX processors has been given the go-ahead by a California judge. US district judge Haywood Gilliam last week rejected [PDF] as "not persuasive" AMD's claim that "a significant majority" of people understood the term "core" the same way it did …

  1. Hans 1
    Boffin

    I think the single FPU unit might pose a problem. How about Ryzen cores, do they each get an FPU?

    1. diodesign (Written by Reg staff) Silver badge

      Re: Ryzen

      Ryzen CPU cores have their own FPU. AMD's Zen architecture is like Intel's in that the cores are fully featured and separate. AMD's approach with Bulldozer to pool some resources within a module has upset customers.

      C.

      1. 9Rune5

        Re: Ryzen

        Ah, but the largest Threadrippers do not have their own memory controller...?

        When I was young... I remember being excited at the prospect of buying a 486DX -- the first Intel x86 CPU to include an on-die FPU. As we all know, prior to the 486DX, the FPU (x87) had its own socket (assuming the mobo OEM had even bothered putting in a socket in the first place). And even then there was soon a 486SX version with the FPU disabled.

        While I sympathize with the plaintiffs, I can't help but think that if you have certain needs (e.g. decent FPU performance) then you find some relevant benchmarks and use that information to determine what you buy.

        Each manufacturer is going to push the one number that makes them look good. We all remember the megahurtz wars that culminated with an ill-designed P4 that, despite an astronomically high clock, performed like sh--. Yes, Intel "won" the clock race, but...

        Caveat emptor.

        1. fredds

          Re: Ryzen

          As I recall, AMD won the race to get a working 1GHz chip to market. Some fuckwit Oz PC mag tested it, and then had the gall to say "do we really need such fast CPUs?" I kid you not. This was back in 1999/2000.

        2. Anonymous Coward
          Anonymous Coward

          Caveat emptor.

          Caveat emptor is today just another word for legalized fraud. A similar claim was made by the financial industry a decade ago when the world was made to pay for their fraudulent activities.

        3. admiraljkb

          Re: Ryzen

          @9Rune5 - They ended up with some early QC issues in the 486DX line. To add insult to injury, the "487" was actually a full 486DX that disabled the 486SX and took over.

          For the Threadrippers, the current-gen parts have their own memory controllers. The new-generation Ryzens separate out what is effectively the northbridge (which AMD integrated into the CPU with K8, and Intel did with Nehalem) into a separate unit shared across the compute units. It makes sense with ever-expanding core counts, but we'll have to see what it does in real life. Since it's on the package, it's faster than an external northbridge and reduces the complexity of the cores, but it could introduce some extra latency for RAM and I/O.

    2. Anonymous Coward
      Anonymous Coward

      It's only a problem if you do a lot of stuff that uses the FPU. For most uses, it won't matter.

      The lawsuit is silly because "core" is not a term with a precisely defined meaning or measurement. You have grounds to sue if you are sold 4GB of RAM but the CPU installed only allows accessing 2GB of it, because it is understood you are buying the use of that RAM, not its presence. You could sue if you were sold an SSD but it was a hard drive with a small NAND cache, because it is understood that "SSD" stands for "solid state drive" and the hard drive you got has moving parts.

      There was nothing preventing Intel from selling a dual-core CPU with hyperthreading as a quad-core CPU - you could have four active tasks simultaneously. Today they sell CPUs with up to 28(?) cores, but there are a lot of potential resource conflicts for stuff like cache access, DRAM access etc., and power/heat limits prevent all 28 cores from running at full speed at once, so it is never close to 28x faster than a single core. AMD's "8 cores" isn't 8x faster than one core either; the difference is simply a matter of degree.

    3. Spazturtle Silver badge

      "I think the single FPU unit might pose a problem. "

      Modules only shared an FPU in 256-bit workloads. Each module had 2 cores; each core contained 2 ALUs and a 128-bit FPU, so during 128-bit workloads they operated as normal cores with ALUs and FPUs of their own.

      During 256-bit workloads the two 128-bit FPUs in a module would combine into a single 256-bit FPU, so you would have 2 cores with 2 ALUs each and a single 256-bit FPU shared between them.
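
      In code terms, that's roughly the difference between SSE (128-bit) and AVX (256-bit) instructions. A minimal sketch, assuming GCC/Clang and compiling with -mavx (my illustration, not from the post above):

      #include <immintrin.h>
      #include <stdio.h>

      int main(void) {
          /* 128-bit SSE add: each core's own 128-bit FPU can handle this */
          __m128 a = _mm_set1_ps(1.0f), b = _mm_set1_ps(2.0f);
          __m128 r128 = _mm_add_ps(a, b);

          /* 256-bit AVX add: per the post above, the module's two 128-bit
             FPUs fuse to execute this, so the sibling core may have to wait */
          __m256 c = _mm256_set1_ps(1.0f), d = _mm256_set1_ps(2.0f);
          __m256 r256 = _mm256_add_ps(c, d);

          printf("%f %f\n", _mm_cvtss_f32(r128), _mm256_cvtss_f32(r256));
          return 0;
      }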

    4. NoneSuch Silver badge
      Joke

      Is it a jury of twelve?

      Or only six jurors sharing the same table?

      1. ccc13481

        Re: Is it a jury of twelve?

        6 persons with multiple personality disorder...

        1. drewsup

          Re: Is it a jury of twelve?

          My uncle had multiple personalities. It's OK though, he was good people!

      2. Tridac

        Re: Is it a jury of twelve?

        No, dining philosophers, where only one gets to use both tools at once...

    5. E_Nigma

      Sun UltraSPARC T1 had, if memory serves, 6 cores (with 4 logical cores each) all using one shared FPU. Nobody sued them and I don't remember anyone saying that it wasn't a 6-core CPU.

      It's worth saying that Sun was fairly up-front about the limitations, and I believe the official info said that performance suffered if the share of floating-point instructions in the code exceeded something like 6%. In the intended application scenarios - the so-called enterprise loads that mostly just shuffle data around for a bunch of concurrent users - it ran circles around the Xeon and Itanium competition with a comparable number of sockets, and that was good enough for people.

      But it wasn't that different in AMD's case either. The Bulldozer (and Piledriver) CPUs performed very well under specific workloads and so-so in others, and that too was well known, as a huge array of benchmarks and reviews was widely available.

      It's also hard to claim having paid a premium for the chips when they were cheaper than Intel's mid-range (i5), not to mention higher-end (i7), CPUs. They were pretty much budget CPUs; some of them even had a launch price as low as $110.

      The only possible exceptions are the 9000-series models, as those were expensive, but it's hard to claim that buyers didn't know what they were getting: they were merely factory-overclocked models which launched almost 2 years after the first Bulldozers, and they were also reviewed fairly extensively on their own.

      Additionally, FWIW, with AMD's share in pre-built systems being what it is (and what it was at the time), the people who bought FX-8000 (and 9000) series CPUs were generally people who built their own systems, not some uninformed poor souls who bought a box because AMD slapped it and said "This bad boy can fit so many threads!" So, IMO, this is either some buyer's remorse, or someone smelling free money.

  2. Anonymous Coward
    Anonymous Coward

    So when CPUs didn't have dedicated FPUs and you had to buy it and plug it in, were they zero core processors?

    1. Anonymous Coward
      Anonymous Coward

      They sure felt like it when you had to use them :)

    2. Fading
      Thumb Up

      Ahhhh the famous intel 8087

      Showing my age a bit and I suspect the survey results may also reflect the age of the contributors.....

      1. Dave K

        Re: Ahhhh the famous intel 8087

        I certainly remember the incredible performance boost in certain applications when my dad fitted an 80387 to our first PC...

        1. Mage Silver badge

          Re: Ahhhh the famous intel 8087

          Almost none were used by ordinary mortals on DOS. You practically had to write your own floating-point application, using a compiler that could target the FPU.

          Also, even later, DSP code and games cunningly used fixed-point arithmetic mapped to integers, because integer maths on the CPU was faster than FP on the FPU of a 486.
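
          (For anyone who never saw the trick, here's a minimal 16.16 fixed-point sketch - my illustration, not actual period code:)

          #include <stdint.h>
          #include <stdio.h>

          typedef int32_t fix16;                  /* 16 integer bits, 16 fraction bits */
          #define FIX(x) ((fix16)((x) * 65536.0)) /* constant conversion to fixed point */

          static fix16 fix_mul(fix16 a, fix16 b) {
              /* widen to 64 bits so the intermediate product doesn't overflow */
              return (fix16)(((int64_t)a * b) >> 16);
          }

          int main(void) {
              fix16 pi = FIX(3.14159), r = FIX(2.0);
              fix16 area = fix_mul(fix_mul(r, r), pi); /* pi * r^2, integer ops only */
              printf("area ~= %f\n", area / 65536.0);  /* ~12.566 */
              return 0;
          }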

          However, even leaving FP aside, the AMD definition of a core on Bulldozer does sound a bit like part of a CPU, whereas traditionally licensing and users might have regarded a 4-core chip as being approximately like the old server mobo with four CPU sockets. Never mind a board of Transputers, each with its own everything, including DRAM and four communication links. A wonderful design considering that Intel was on 386 CPUs then.

          1986 - 1987 was maybe the peak?

          https://en.wikipedia.org/wiki/Transputer

          The 80386DX came out in late 1985 (with a bug) and was mainstream but very expensive from 1986 to 1990; the 486 came out in 1989, but wasn't widespread till 1990-1991.

          I don't remember mobos with multiple 386s, but boards with four 486DXs existed. My only multiple-CPU system had two Pentium Pros - the chip that Win95/98 killed, because it had no real swap to 8086 mode, making it basically a 32-bit-only CPU. NT 4.0 used NTVDM and WoW, so it ran 16-bit code on the Pro MUCH better than Win9x could. Also, Win9x itself had some 16-bit code - not a true 32-bit OS like NT.

          https://en.wikipedia.org/wiki/Intel_80386

          Sadly most 286 and 386 computers only ran 8/16-bit software in 8086 mode. The original 8088/8086, and IBM's choice of it, held back most PC-based computing for nearly 10 years with its evil, almost-8080, 8-bit architecture and segment registers for addressing more than 64KB. That made it easy to port CP/M and CP/M applications to CP/M-86 and the cloned MSDOS.

          MS did offer Xenix on the 286 and later the 386, though there was not much interest in it outside servers and education.

          1. Tom 7

            Re: Ahhhh the famous intel 8087

            I was lucky enough to be into chip design, and I remember getting a numeric co-processor to speed up circuit simulation (PSpice on a 286/287) - and boy did it speed it up! ISTR it was faster than our VAX 780. Agree with you on segmentation, though; I think it put back PC computing by 13, possibly 15, years. I played with CP/M-68K around the same time and it was a dream to code for. I still wonder who the twats at IBM were that wanted Kildall to sign a non-disclosure agreement and buy CP/M outright, and yet didn't do the same to Gates.

        2. BebopWeBop
          Headmaster

          Re: Ahhhh the famous intel 8087

          As did my kids with theirs....

      2. GX5000

        Re: Ahhhh the famous intel 8087

        You think you're the only one over fifty-five?

        Retirement is for the young...LOL

        1. mladoux

          Re: Ahhhh the famous intel 8087

          I'm less than 40. The 486DX was released in 1989, and the 386 continued to be produced until 2007. With the prices of computers back then, people didn't immediately upgrade their processors to the latest and greatest - hell, they still don't. Windows 95 would run on a 386 CPU with as little as 4 megabytes of RAM (though sluggish; 8 is better). It hasn't even been 30 years since the FPU was not necessarily part of the CPU. IMHO, these people got what they paid for, but they failed to understand what they were paying for. The FPU cores and the CPU cores are two different units that are now commonly burned into the same chip. AMD said they got 8 CPU cores; it never said they also got 8 FPU cores.

      3. Ian Michael Gumby

        Re: Ahhhh the famous intel 8087

        I'll see you 8087 and raise you an 8080A.

        Yeah, 8080A, 6502 and 6800 back when men were men and 8 bits was all you had.

        But I digress.

        While the term 'core' doesn't have a specific meaning unless you're talking about core memory, you have to remember the context. Intel has cores and hyperthreading. AMD also uses the term 'core', but its cores are different in design and in how they are described; they aren't exactly the same as what Intel marketed as a core.

        Had they called a pair of their cores a single core - so it was a quad core - they would have smoked Intel in terms of per-core performance. But they didn't do that. Hence the confusion.

        Will the lawsuit be successful? Who knows; it's up to a jury to decide. However, the money is on AMD settling - there's a good chance that it would lose. And it's all because of some silly marketing.

        1. Jedipadawan

          Re: Ahhhh the famous intel 8087

          >"Now AMD uses the term Core, however their cores are different in design and the way they are described, they aren't exactly the same as how Intel marketed its core."

          And yet the Intel Pentium D had two physical cores but shared memory access, bottlenecking the CPU something awful!

          Originally Intel DID market 'Core' the AMD way.

          Marketing is marketing. Read the reviews and not the advertising. Same for everything.

    3. Remy Redert

      Definitions change over time. For the past 2 decades it's been a given that a CPU includes an FPU because other than the issue with Bulldozer, all of them did.

      1. DuncanLarge Silver badge

        "Definitions change over time. For the past 2 decades it's been a given that a CPU includes an FPU because other than the issue with Bulldozer, all of them did."

        That is only applicable to certain use cases. There are plenty of CPU designs in wide use today that don't need or have an FPU. You can do floating-point maths using integers just fine; an FPU just lets you do it faster.

        Plenty of microcontrollers and low-power devices don't have an FPU. And before anyone mentions it, a microcontroller has a CPU. It's just part of a chip that includes the other bits that make a microcontroller, such as onboard RAM/ROM and IO.

        As an example, a car from 1950 is still seen as a car even if it does not come with seatbelts, heating, electric windows, an ECU, ABS brakes etc. It's still a car. Modern cars tend to have more stuff, but only tend to; it's not a requirement.

      2. Brewster's Angle Grinder Silver badge

        And the CPU did. (Four, in fact.) But the issue is whether an FPU is part of the central "core" or part of the outer periphery.

        This could go either way, based on the quality of the lawyers. The AMD argument is that it had eight units that could execute instructions. The plaintiff's argument is that a bunch of instructions he reasonably expected to execute without contention had to contend with those from another execution unit. I think that's a slightly harder case to make; there are many instructions which could face contention, and if it mattered that floating-point instructions were uncontended, then the plaintiff should have read the small print. Still, I could easily see a jury viewing that naivety as reasonable, particularly as even a majority of El Reg readers seem never to have had to issue a WAIT/FWAIT instruction in their lives.
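
        (For the youngsters: FWAIT is the x87 instruction that stalls the CPU until the coprocessor has finished - back when the FPU was a separate chip, you had to synchronise with it explicitly. A rough GCC inline-asm illustration of mine, x86 only and purely for nostalgia:)

        #include <stdio.h>

        int main(void) {
            double x = 2.0, y;
            /* load x onto the x87 stack, take the square root, FWAIT to
               synchronise with the FPU, then pop the result into y */
            __asm__ volatile ("fldl %1; fsqrt; fwait; fstpl %0"
                              : "=m" (y) : "m" (x));
            printf("sqrt(2) = %f\n", y);
            return 0;
        }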

  3. KD_

    A trial should not decide what a CPU core is. A CPU core is very different now from what it was 15 years ago, and it will continue to change. This is nonsense.

    1. defiler

      The Clinton connection

      Is that like defining "sex" when Bill Clinton was on trial?

      1. jmch Silver badge
        Devil

        Re: The Clinton connection

        "Is that like defining "sex" when Bill Clinton was on trial?"

        It's like defining what "is" is when Bill Clinton was on trial

        1. 9Rune5
          Mushroom

          Re: The Clinton connection

          Wait, did you just say "ISIS"?

  4. Anonymous Coward
    Anonymous Coward

    I'm not sure why the focus is on the FPU; plenty of important bits are not per-core but per-module. I'm not an x86 expert, but things like branch prediction and ifetch/decode are used by all programs, while only some programs are FP-intensive.

    "Within each module, alongside the two x86 cores, is a single branch prediction engine, shared instruction fetch and decode circuitry, a single floating-point math unit, a single cache controller, a single 64KB L1 instruction cache, a single microcode memory area, and a single 2MB L2 cache."

  5. Anonymous Coward
    Anonymous Coward

    That description of the 2 cores does make it sound more like Intel Hyperthreading than two independent cores

    1. YARR

      They're different but they both look the same to applications.

      Hyperthreaded cores are virtual - 2 HT cores are really just 1 physical core that switches state quickly between 2 virtual core states. This is done to keep the physical core occupied when there is a pipeline stall, such as a branch-prediction fail.

      The Bulldozer architecture has two full integer execution units that run two x86 threads in parallel. They share an FPU on the basis that, in general use, a single thread would under-utilise a dedicated FPU. If both threads need to run a floating-point op at the same time, one will have to wait. A shared FPU saves on CPU die area, which could be used to make a better FPU that executes floating-point ops in fewer clock cycles, resulting in better overall performance than 2 independent FPUs for optimised code.

      All Intel x86 chips preceeding the 486SX had no integrated FPU, so would not be a valid "core" by the definition of these plaintiffs. In fairness, AMD probably should have called them integer cores to avoid confusion.
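
      (If you wanted to see that sharing empirically, here's a rough Linux probe of mine: pin two FP-heavy threads to two logical CPUs and time them. That CPUs 0 and 1 are the two cores of one module is an assumption - check your topology - and per the posts above, the contention mostly shows with 256-bit AVX work rather than scalar chains like this one. Build with gcc -O2 -pthread.)

      #define _GNU_SOURCE
      #include <pthread.h>
      #include <sched.h>
      #include <stdio.h>
      #include <time.h>

      /* burn cycles on a dependent floating-point chain */
      static void *fp_burn(void *out) {
          double x = 1.0;
          for (long i = 0; i < 200000000L; i++)
              x = x * 1.0000001 + 0.0000001;
          *(double *)out = x;               /* defeat dead-code elimination */
          return NULL;
      }

      /* spawn a thread pinned to one logical CPU */
      static pthread_t spawn_on(int cpu, double *out) {
          pthread_t t;
          cpu_set_t set;
          CPU_ZERO(&set);
          CPU_SET(cpu, &set);
          pthread_create(&t, NULL, fp_burn, out);
          pthread_setaffinity_np(t, sizeof set, &set);
          return t;
      }

      int main(void) {
          double a, b;
          struct timespec t0, t1;
          clock_gettime(CLOCK_MONOTONIC, &t0);
          /* same module (assumed): CPUs 0 and 1; compare with 0 and 2 */
          pthread_t ta = spawn_on(0, &a), tb = spawn_on(1, &b);
          pthread_join(ta, NULL);
          pthread_join(tb, NULL);
          clock_gettime(CLOCK_MONOTONIC, &t1);
          printf("elapsed %.2fs\n",
                 (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
          return 0;
      }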

      1. katrinab Silver badge

        I'm just surprised they didn't call it a 12 core chip - 8 integer cores and 4 fpu cores.

      2. Anonymous Coward
        Anonymous Coward

        "All Intel x86 chips preceeding [sic] the 486SX had no integrated FPU"

        At the risk of being downvoted into a pedant's ball of flames, that's not quite right. The original 486 had an integrated FPU. The 486SX came soon after which did not. (Well, initially had a busted and disabled FPU.) The original 486 was then rebranded 486DX. The 487 was just a 486DX in a different pinout that disabled the 486SX it sat next to.

        1. Jedipadawan

          >"The original 486 had an integrated FPU. "

          Correct! And that FPU was nearly useless - really just a marketing gimmick, which brings us right back to where we started!!

    2. E_Nigma

      It's something in between. AMD's module (2 cores) has two execution pipelines; Intel's hyperthreaded core has one. So, assuming a four-stage pipeline, in Intel's case that would be

      thread1_instr1 | thread2_instr1 | thread1_instr2 | thread2_instr2

      whereas AMD's case would ideally look like this:

      thread1_instr1 | thread1_instr2 | thread1_instr3 | thread1_instr4

      thread2_instr1 | thread2_instr2 | thread2_instr3 | thread2_instr4

      The problem appears when both threads need a shared resource at the same time, forcing one of the threads to skip a beat. That's not always too bad. If 10% of instructions are conditional, then on average there will be a collision every 100 "steps" (0.1 x 0.1), due to both active threads needing branch prediction at once; so in those 100 "steps" one core will be utilized fully and one 99/100, making it a very small loss and still practically a lot closer to 2 cores than to one hyperthreaded core. The problem was that both cores needed the fetch and decode unit pretty much all the time, and that apparently did hurt performance, but not to the point that it got reduced to hyperthreading. Indeed, in well-threaded tests of the time, the eight-core AMDs compared well to Intel's quad-core, hyperthreaded i7s, despite a generally significantly lower single-core performance (and especially if the price-to-performance ratio was considered, although that is an economic and not a technical parameter).
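
      (A toy Monte Carlo check of that arithmetic - my sketch, nothing more: two threads each need the shared predictor on 10% of steps, and a collision costs one thread one step.)

      #include <stdio.h>
      #include <stdlib.h>

      int main(void) {
          const long steps = 10000000L;
          long collisions = 0;
          srand(42);
          for (long i = 0; i < steps; i++) {
              int t1 = (rand() % 100) < 10;  /* thread 1 needs the predictor */
              int t2 = (rand() % 100) < 10;  /* so does thread 2 */
              if (t1 && t2)
                  collisions++;              /* one thread skips a beat */
          }
          /* expect ~0.01: the module delivers ~199/200 of two full cores */
          printf("collision rate: %.4f\n", (double)collisions / steps);
          return 0;
      }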

  6. Nate Amsden

    what kind of apps were impacted?

    From the previous article https://www.theregister.co.uk/2015/11/06/amd_sued_cores/

    "it claims it is impossible for an eight-core Bulldozer-powered processor to truly execute eight instructions simultaneously – it cannot run eight complex math calculations at any one moment due to the shared FPU design"

    This article seems to be referring to desktop processors, though I assume the Opterons at the time were affected as well? (I have several Opteron 6176s and 6276s in service still as vmware hosts - though, checking now, at least Wikipedia says only the 4200/6200 Opterons were Bulldozer.)

    So if desktop processors were affected, I am curious what sorts of apps would be seriously impacted by this? I mean, I expect that in most games and 3D-rendering-type apps, GPUs are far more important than the FPU for math calculations. Perhaps media encoding? I think that is often accelerated by MMX/SSE-type instructions.

    I would assume that CPU(FPU) based math would be more common in the HPC space (even with GPUs), and I can certainly see a case for an issue there - however at the same time I would expect any HPC customer to do basic testing of the hardware to determine if the performance is up to their expectations regardless of what the claims might be. Testing math calculation performance should be pretty simple.

    I want to say I was aware of this FPU issue years ago when I was buying the Opterons, and then, as now, I didn't care about the fewer FPUs; I wanted more integer cores (for running 50-70+ VMs on a server). I really have had no workloads that (as far as I am aware at least) are FPU-intensive. Though it certainly would be nice if it were possible to measure FPU utilization specifically on a processor, much like I wish it were easy to measure PCI bus bandwidth utilization (not that I have anything that seriously taxes the PCI bus - again, that I am aware of - but having that data would be nice).

    I think back to when Intel launched their first quad-core processor, or one of the first; I think it was around 2006-2007. They basically took two dual-core processors and "glued" them together to make a quad core. I remember because AMD talked shit about Intel's approach, as AMD had a "true" quad-core processor... fast forward a decade and it seems everyone is gluing modules together.

    1. diodesign (Written by Reg staff) Silver badge

      Re: FX, Opteron, etc

      FWIW, the processors specifically covered by the class-action lawsuit are the FX-8120, FX-8150, FX-8320, FX-8350, FX-8370, FX-9370, and FX-9590. No Opterons are named.

      The mention of Opteron in the previous article was purely to indicate how widely used the designs were.

      C.

    2. Dave K

      Re: what kind of apps were impacted?

      Well, the Intel Core 2 Quad was 2 separate dual-core dies mounted in the same CPU package; AMD's quad-core offerings were a single die with four cores on it. That was the basis of AMD's comments at the time. Of course, at the time it meant diddly squat to most users, as both CPUs contained four processing cores at the end of the day. Current CPUs are all single-die to my knowledge; the issue is more about which internal components are shared between cores on the die.

      Of course, where do you stop, though? Intel's Core Duo had a shared L2 cache between the cores, for example. Similarly, modern CPUs usually share the memory controller as well (otherwise you'd need separate sticks of RAM for each core). It'll be interesting to see how much sharing is allowed before you're no longer allowed to call each execution unit a "core"...

      1. Jedipadawan

        Re: what kind of apps were impacted?

        Well, I'm not an engineer, but it sounds, kinda as you say, like we are talking about eight cores - just not very efficient ones.

    3. ivan5

      Re: what kind of apps were impacted?

      With all this clawing over definitions etc., I can't help wondering whether Intel stirred the pot and how responsible they are for the case.

      In other words, if AMD lost the case how far back would it put their R&D and would it allow Intel to catch up?

      1. Jedipadawan

        Re: what kind of apps were impacted?

        On the one hand I can see your point, and it does seem a bit suspicious.

        But at the same time, Intel has been far WORSE in its claims in the past. If the courts find against AMD, then Intel can be sued for 16-bit chips that weren't true 16-bit (the 8086), dual-core chips that 'weren't dual core' (the Pentium D), FPUs that were not FPUs (the i486), the bottlenecked Atom chips; I could even sue over my Atom/Celeron not having hyperthreading, by the 'logic' in this case.

        This is a crazy suit in which, in essence, anyone can sue over what they expected from a chip rather than what was actually supplied and reviewed. "I expected hyperthreading and hardware video encoding - you didn't give it to me, so it's not a real CPU - pay up!"

        ???!

        If you can see eight cores on the die it's 8 cores. Sharing resources is allowed. Research before you buy.

        So I don't get this lawsuit as Intel would be next. Unless the courts have been paid off already...?

  7. cb7

    The proof

    Firstly, wtf have they waited 8 friggin years to think they'd been shortchanged?

    Anywho, the proof of the pudding is in the eating as the old saying goes.

    So just run some benchmarks, and whilst "8" cores might not yield exactly twice the performance of 4, if it gets remotely close, that ought to be enough to get the case simply thrown out.

    1. Anonymous Coward
      Anonymous Coward

      Re: The proof

      Except they optimised it to do well in the standard tests; it was in real life that the FX series turned out to be a bag of shit.

      I know, I "upgraded" to a 3.6GHZ FX6300, and found it far worse than the 3.2GHz Phenom II chip it was replacing.

      I had to o/c the FX to a frankly insane 5.01Ghz to get anywhere near the real life results of the older PII chip running at a modest o/c of 3.6GHz.

      In the end I went back to the PII chip - and am still running it today, although if I get a share of the money, AMD will get it back for a Ryzen CPU.

      1. wasptube1

        Re: The proof

        I still have an old 1055T Phenom II 4GHz Black Edition somewhere; it was epic. I remember live-streaming a game on it that AMD insisted wouldn't work - streamed Doom 2016 in fully maxed 1080p HD with my old Phenom II CPU - and AMD tweeted me some really colourful curse words. Lol.

        1. Anonymous Coward
          Anonymous Coward

          Re: The proof

          AMD, the company, cursed at you on Twitter because you said one of their processors was actually better than they were advertising it?

          Why do I have some doubts about this story?

      2. Brewster's Angle Grinder Silver badge

        Re: The proof

        Yeah, I'm still running a Phenom II 3.2GHz. It's lovely. With a modern graphics card it has no trouble playing games on my modest screen. (And Doom 2016 was a lot of fun.) If I resume doing serious amounts of compiling, I'll get something newer. But I don't see the need for webdev.

        And I'd've avoided the FX chips precisely because I'd expect real-world workloads (*cough* gaming *cough*) to perform as badly as you suggest. Maybe compiling could use all eight cores, but I'd expect most modern apps to lean heavily on the FPU. Case in point: JavaScript - the kids today barely know what an integer is, let alone how to ensure JavaScript doesn't devolve into floats or, worse, denormals.

    2. jmch Silver badge

      Re: The proof

      "wtf have they waited 8 friggin years to think they'd been shortchanged?"

      That's just the 'speed' of the court system

    3. herman
      Devil

      Re: The proof

      "wtf have they waited 8 friggin years" - That just shows you how slow these processors are.

      1. Michael Wojcik Silver badge

        Re: The proof

        Has it really been 8 years, or just 4 years that sometimes seemed like 8?

    4. Jedipadawan

      Re: The proof

      But would even that prove anything?

      The original Atom chips would be outrun by any other dual-core processors of the time, and even by some high-end single-core machines. Performance does not define a core. I know my humble N3350 Atom/'Celeron' laptop is totally outclassed by any i3 or any dual-core i7, but that does not make my N3350 a 1-core processor.

      It seems to me we're talking about 8-core machines in the same way the Intel Pentium D was a 'nearly' two-core processor in its day - and nobody sued Intel then!

      1. Tom 7

        Re: The proof

        I had a 50MHz 486 system that was faster than 70MHz Pentiums in almost all the applications I was interested in - which wasn't word processing.

  8. kaseki

    None of the above

    A computer core is a single ferrite toroid used as a single bit of random-access memory. This type of memory was in use in the 1950s on such machines as the Remington Rand 1103A. When something went wrong, a "core dump" could be performed, printing out the data in all of the addresses in the core memory, called "core" for short. As memory moved to semiconductor devices and grew in size, it was still possible to do a core dump, but eventually impractical to use in print-out form for debugging.

    The name core somehow came to be attached to the silicon chip performing the processing functions listed hereinbefore, and I think an 8086 could fairly be called a core, although I don't recall that usage at the time it was introduced.

    More to the point here, buying a processor module without performing a minimal evaluation of how it works, and in particular whether its performance is suitable for its intended purpose, is not due diligence. Unless AMD hid the architecture details so that the number of FPUs and ALUs wasn't known to the public, I would find for AMD.

    1. choleric

      Re: None of the above

      "More to the point here, buying a processor module without performing a minimal evaluation of how it works, and in particular whether its performance is suitable for its intended purpose, is not due diligence."

      THIS ^^

      Buying processors involves careful evaluation. I come back to the processor market every 2 to 3 years, and I have to re-educate myself each time to understand the technology of the day and its pros and cons.

      The terminology varies over this time period, with terms changing their meaning. Additionally, AMD and Intel simultaneously call similar features by different names. It's always like comparing apples to oranges.

      The key factor is never the marketing guff, it's the real world experience of running your particular workload.

      If you know enough to understand what "cores" means, then you know enough to understand that implementations vary, even sticking with the same manufacturer from generation to generation. And if you don't dig into the specifics and their significance to you, then you're one core short of a full die, and probably sharing an FP unit too.

      1. Tom 7

        Re: None of the above

        Alas, the careful evaluation of the different processors would require different copies of Windows, and presumably two copies of all the software you need to test on them, and the cost of that could far exceed the cost of the two different chips you are testing.

  9. Mephistro

    I was under the impression...

    ... that similar tricks - e.g. shared caches, predictive units, memory controllers and other elements - were common in most multi-core processor designs. If this is the case - not totally sure, as I'm not a microprocessor fetishist expert - then this could lead to a precedent that would allow every processor maker in the world to be sued into non-existence.

    1. Dabbb

      Re: I was under the impression...

      Not really. Right now I'm looking at a 26-core Intel CPU on which, under Linux, I see 52 CPUs. Intel never marketed it as a 52-core CPU.

      1. Spazturtle Silver badge

        Re: I was under the impression...

        That is an SMT CPU with 2 threads per core; Bulldozer didn't have SMT, it only had 1 thread per core. And even on your 26-core CPU, those 26 cores share some resources.

    2. Remy Redert

      Re: I was under the impression...

      Shared L2 cache? Sure. Not L1 cache, and certainly not the FPU. Both are vital to performance. 16KB of dedicated L1 cache per core is stupidly small.

      Memory controllers vary a little more, but can generally be shared across multiple cores without much of a performance hit. More importantly, they have been shared since the dawn of the multi-core CPU era.

      This lawsuit, if it goes to jury trial, is going to have to establish the basic per-core features that need to be present on a CPU and in doing so, will probably look at competing CPU designs of that time to decide that.

      1. Anonymous Coward
        Anonymous Coward

        Re: I was under the impression...

        I believe the Bulldozer core "complex" provided 2 cores that allowed the majority of instructions to be processed independently by each core within the complex. The exception was the FPU, which was split into 4x 64-bit adders/multipliers that allowed 4x 64-bit, 2x 128-bit, or 1x 256-bit instructions per operation. Wikipedia has diagrams showing the similarities and the differences.

        AMD's design argument was that 256-bit FP ADD/MUL were rarely required, but performance of 256-bit FP operations would suffer compared to an "equivalent" number of Intel cores. While some are comparing this to hyper-threading, it is more like Sun's UltraSPARC T2+.

        The biggest issue with the design was the requirement to introduce new compiler optimizations to fully use the processor's capabilities - the Bulldozer chips using these designs offered little performance increase over the previous generation of AMD CPUs, and even provided less performance in certain benchmark tests, while Intel increased performance during this time period.

        The court case will be interesting - it can be clearly demonstrated that each argument has some merit (i.e. code excluding 256-bit FP will run as though there are two cores, while 256-bit FP code will act as though there is either 1 core, or 2 cores running at 50% performance...), but given the widely publicized performance benchmarks at the time showing that Bulldozer didn't perform as well as expected, and that the systems were (significantly) cheaper than competing systems, it will be up to the lawyers to provide a story...

  10. Big Al 23

    Another frivolous lawsuit

    Siren chasers trying to cash in on the technical ignorance of a jury.

    AMD did not hide anything. Their architecture was always clearly stated. It's just technically challenged people looking to cash in. All one has to ask is whether an AMD dual-core CPU can process two threads at the same time. End of discussion. You don't need an individual FPU per core to have a multiple-core CPU. Intel 286/386 CPUs did not have any FPUs. If you wanted one, you had to buy another chip called a math co-processor. So were Intel 286/386 CPUs not actual CPUs? Obviously they were CPUs.

    1. Dabbb

      Re: Another frivolous lawsuit

      Not so fast. How did Intel market their dual-threaded CPUs at the same time - by total number of separate physical cores, or by number of threads? If it's the former, then AMD deliberately chose to mislead customers by going against established practice and expectations.

      1. Ian Michael Gumby
        Boffin

        Re: Another frivolous lawsuit

        "Not so fast. How did Intel market their dual-threaded CPUs at the same time - by total number of separate physical cores, or by number of threads? If it's the former, then AMD deliberately chose to mislead customers by going against established practice and expectations."

        Bingo! Give that man a Cigar

        This is the core reason why the lawsuit is allowed to continue.

        The claim is based on the misrepresentation.

        Look at today's Ryzen chips. They count cores the same way now.

        (At least that's how they appear to stack up.)

        Its the misrepresentation of the older chip that is the issue.

        If I had to bet, I'd say AMD settles before the end of the trial.

      2. nagyeger

        Re: Another frivolous lawsuit

        I thought hyperthreading was basically a set of alternative registers for the same hardware?

        My definition of core is probably wrong, but I'd have thought a core was defined by a patch of silicon that would continually (barring interrupts) burn cycles given "NOOP; JMP -1" and had its own set of general-purpose registers.

        Hyperthreading fails at the "continually" bit; it's just clever time-sharing. Branch prediction units etc. don't have a complete set of their own registers (unless I'm wrong), so they're not cores either. On-chip caches, FPU, prefetch and all the rest of the fluff that keeps the bits flowing and feeds Spectre/Meltdown so well cannot be part of the definition of a CPU core, otherwise you're saying that a 6502 / Z80 / 286 / ATmega328P doesn't have a core.

        They bought X houses, and it turned out they were semi-detached. Sorry, you should have read the spec; it's still a house. Round here you can't even guarantee indoor plumbing in a house. Maybe you want it, maybe (because of where you're from) you expect it, but it's still a house without it.

        1. The First Dave

          Re: Another frivolous lawsuit

          The difference between Hyperthreading and multi-core can be seen under a microscope / from a circuit diagram: Hyperthreading leaves almost no visible presence in silicon, whereas multi-core is visible as a separate/specific area.

          The fact that each core is on the same die is where the difference lies between cores and CPUs.

          Is it really that difficult to grasp this?

      3. Spazturtle Silver badge

        Re: Another frivolous lawsuit

        These CPUs don't have SMT, so they only have 1 thread per core, and core count and thread count are the same. A Bulldozer CPU has 8 physical cores.

  11. Long John Brass
    Paris Hilton

    Keep lawyers out of engineering please

    If I build a CPU that executes 8 simultaneous execution streams at once, but shares a *large* pool of integer, FP, and move/store units, such that any given execution path can get any execution unit or units - then is that eight "cores" or one? I'm not talking hyperthreading here; I'm talking 8 simultaneous threads.

    If you then add out-of-order execution into the above mix, it gets really interesting. You can argue that a given CPU doesn't have enough FP units or wastes load/store units.

    But then modern CPUs already do a lot of what I described above - so where do you draw the line?

    Especially as the line keeps bloody moving :)

    Keep the lawyers out of it, otherwise we all end up as buggy-whip makers.

    1. Ian Michael Gumby
      Boffin

      Re: Keep lawyers out of engineering please

      Sorry but you need to keep marketing out of software and computers.

      It's not the lawyers who caused the issue, but marketing.

      Marketing, like sales reps, tends to exaggerate the truth.

      Like since when did simple ML processes become AI?

      1. quxinot

        Re: Keep lawyers out of engineering please

        I utterly agree that a lawsuit isn't required.

        But I'd love to see AMD's marketing get lined up against the wall and slapped, with teams of customers running along the row of faces, hands upraised, slapping each in turn.

        The FX-8xxx were a fancy quad-core, and honestly a very, very good one. Tons of clock-speed headroom and pretty good performance for the price, but a 'real' 8-core they were not. Finding that out required reading quite a few reviews and/or a purchase, as it was absolutely not even hinted at in the marketing that was available at the time, as far as I can remember.

        Now that I think about it, can we get all marketeers to line up for a slapping, not just AMD's? And the lawyers as well? I just came up with the best income stream ever.....

        1. Spazturtle Silver badge

          Re: Keep lawyers out of engineering please

          They were not quad-core CPUs; they had 8 cores. Each core had 2 ALUs and a 128-bit FPU; when given a workload that needed 256-bit floating-point calculations, the 2 cores in a module could work as 2 cores (with 2 ALUs each) with a shared 256-bit FPU.

  12. Dabbb

    "There could be hundreds of thousands of people potentially impacted, and so any damages would stretch into millions of dollars."

    Sorry, but you got it wrong: it's not damages, it's lawyers' fees. Plaintiffs will be lucky to get a dollar.

  13. Chung Leong

    Let's sue the NFL!

    The length of an American "football" is only 11 inches.

    1. iron Silver badge

      Re: Let's sue the NFL!

      It's also not ball shaped!

      1. Anonymous Coward
        Anonymous Coward

        Re: Let's sue the NFL!

        >It's also not ball shaped!

        And they mostly use their hands, not feet - unlike football, which septic tanks insist on calling by a stupid name that the rest of the world ignores, calling it football instead.

        1. Korev Silver badge
          Joke

          Re: Let's sue the NFL!

          Not here it's Fussball

  14. VoidConstructor

    Completely frivolous

    AMD wanted to fit more cores on a chip, and they made some sacrifices to do so. Nevertheless, just because some resources were shared in an innovative fashion doesn't mean they cheated customers or deceived them.

    New products come out all the time in tech, so to argue "this isn't the way others were doing it [at the time]" is silly.

  15. John Savard

    Another Factor

    Although the floating-point unit was shared between two Bulldozer cores, it was a vector floating-point unit.

    Both cores could be simultaneously doing something with a single floating-point number just fine, each one using half of the shared unit. It was only when they were using the specialized vector instructions for full-width vectors (the earlier half-width vector instructions would also not conflict) that a conflict would arise.

    And, of course, historically, most computers that had hardware floating-point just had instructions that worked on one number at a time. Even MMX only allowed vectors of integers when it first came out, not vectors of floating-point numbers.

  16. drankinatty

    /proc/cpuinfo Never Lies (or does it?)

    What did AMD encode as the label within the chip itself?

    $ grep model\ name /proc/cpuinfo

    model name : AMD FX(tm)-8350 Eight-Core Processor

    ... <snip 7 more>

    No qualification there about modules/cores. A core on a multi-core processor has always been understood to provide an independent processing unit, including an independent floating point unit and independent caches. An FPU for one core should never affect another. Seems like the Eight-Core reporting by /proc/cpuinfo is not quite right here.
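
    (One hedged cross-check of my own, not in the original comment: the kernel's topology files show which logical CPUs it treats as sharing a core - on later kernels, the two cores of a Bulldozer module may show up as siblings there, depending on how the kernel models the module.)

    #include <stdio.h>

    int main(void) {
        char path[128], buf[64];
        for (int cpu = 0; cpu < 8; cpu++) {
            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                     cpu);
            FILE *f = fopen(path, "r");
            if (!f)
                break;                      /* no such CPU: stop */
            if (fgets(buf, sizeof buf, f))
                printf("cpu%d shares a core with: %s", cpu, buf);
            fclose(f);
        }
        return 0;
    }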

    1. imanidiot Silver badge

      Re: /proc/cpuinfo Never Lies (or does it?)

      "A core on a multi-core processor has always been understood to provide an independent processing unit, including an independent floating point unit and independent caches."

      - Citation needed -

      1. defiler

        Re: /proc/cpuinfo Never Lies (or does it?)

        Strikes me that there's no reason for that downvote. A citation is exactly what's going to be available if this runs to a verdict. A citation is exactly what's required to prevent it going to trial (or to finish the trial in opening statements).

        For me, a core on a multi-core CPU has always meant that I can run an additional thread without impacting performance. And it appears from other comments that there are very few instances where the shared FPU would actually impact performance. So squeezing extra instructions into unused silicon (like HT) would not justifiably be a core, but the extra integer units in Bulldozer would be. But that's entirely my opinion. We shall wait and see if it's backed up by law.

        1. gnasher729 Silver badge

          Re: /proc/cpuinfo Never Lies (or does it?)

          "For me, a core on a multi-core CPU has always meant that I can run an additional thread without impacting performance. "

          Doesn't work like that on modern Intel processors. The cores share a very important resource - the processor cooling. So if you run an additional thread, temperature goes up, and you need to reduce the clock rate.

          Doesn't work for any modern processor. The cores share a very important resource - RAM. If you add more cores, your performance per core goes down when the cores start fighting over who can access RAM first. (L2 and L3 cache are also often shared).

    2. iron Silver badge

      Re: /proc/cpuinfo Never Lies (or does it?)

      So the 8086, 286 and 386 were 0 core CPUs?

      1. katrinab Silver badge

        Re: /proc/cpuinfo Never Lies (or does it?)

        And the 486SX

      2. Remy Redert

        Re: /proc/cpuinfo Never Lies (or does it?)

        No, we changed the definition after we moved from the 66MHz 486 to the early Pentiums and equivalent AMD designs, which no longer offered an FPU-or-not choice because they simply always had an FPU.

        1. Bush_rat
          Facepalm

          Re: /proc/cpuinfo Never Lies (or does it?)

          Man it must be easy to score goals with the posts strapped to your ankles.

    3. katrinab Silver badge

      Re: /proc/cpuinfo Never Lies (or does it?)

      By the time you are looking at it in /proc/cpuinfo, you have already bought it. It is the information you were given before you bought it that matters.

    4. DuncanLarge Silver badge

      Re: /proc/cpuinfo Never Lies (or does it?)

      "A core on a multi-core processor has always been understood to provide an independent processing unit, including an independent floating point unit and independent caches"

      Sorry, but no. "Core" at minimum would refer to a CPU plus L1 cache (which may not be present). A CPU is an ALU plus clock generators, instruction-decoding logic and other glue logic, plus some registers - maybe even just one. The CPU has been defined since the days of the very first computers, which were constructed from valves, but I'm only going to consider going as far back as the transistor-based microprocessor, the Intel 4004.

      Nothing has changed that definition since then. The Intel 4004 is a CPU like any other, and thus a single core. Put 4 of them in one chip and you have a 4-core chip.

      What I'm saying is that the term "core" is not a defined term, and is very flexible. Its definition thus varies between manufacturers, who each provide their own kind of "cores". If those cores were "modules" that shared an FPU between two integer CPUs, then that is the core. A core with a CPU+FPU+L1 cache is just a different kind of core, and an 8-core offering would thus have 8 of THOSE TYPES of core.

      Thus I argue that the definition of a core is one of a set of CPUs supplied in a single chip package. It's CPUs I'm talking about: at minimum they only do integer maths and don't have L1 cache. All a CPU needs is an ALU, some registers, and logic to fetch and decode instructions (opcodes) and data (operands) from external memory. Learning a bit of machine code is very enlightening.

      So if I give you a chip with 4 6502 CPUs on it and a bit of logic to manage them all, that's a 4-core chip. If I give you one with 4 Pentium CPUs, each with its own L1 cache and a shared FPU, that's a 4-core chip. If I give you a new design of that chip that adds 3 more FPUs, so one is dedicated to each Pentium CPU, that's a 4-core chip with the potential to beat the previous offering.

      Thus this AMD chip was an 8-core chip. It had 8 of what AMD offered as cores. An 8-core Intel chip would have been of a different design and, as we know, a better one.

      The term core is not defined. It is marketing speak at best. This lawsuit is just nitpicking by people who don't know the terminology. If anything comes out of this, it may be a formal definition of what a core is, as defined by non-technical people.

  17. Adrian Midgley 1

    The core is not the whole apple

    If a core has to have all the parts of a CPU, then the word is redundant.

    Core means the central essential parts.

    1. Charles 9

      Re: The core is not the whole apple

      Which should include the floating point units which have been standard issue since the 80486. Earlier chips didn't expect an internal FPU so can be excused, but not anything since. It'd be like saying you have a bathroom for men and women when you only have one unisex toilet. So what happens when a couple come in with simultaneous toilet emergencies?

      1. katrinab Silver badge
        WTF?

        Re: The core is not the whole apple

        If you say you "have a bathroom for men and women", then you have a bathroom that both men and women can use.

        If you want facilities that two people can use simultaneously, then you need two bathrooms. And if you make both of them available to both men and women, that greatly increases the chances that both members of the couple can use them simultaneously.

      2. Jedipadawan

        Re: The core is not the whole apple

        >"Which should include the floating point units which have been standard issue since the 80486."

        So 8 bit controller chips including the good old Z80 which is still available are 0 core?

        You run up eight Z80s in a single machine, running in parallel (and the old SCaMP allowed for multiprocessor use in the 1970s), and it's still a zero-core CPU?

        Does a 'core' have to include hyperthreading to be a core? Or does it have to be 128 bits now? If a CPU does not have SSE4 instructions, is it not a core? The 6502 processed data without an FPU. In fact, including the carry bit, it could count 0-511 (or +/-127 signed) and no further. So my old Commodore PET had no CPU?

        A CPU is a processor that can carry out instructions according to a program.

        A core does the same, in parallel, using the same instruction set as the other cores.

        No other definitions work.

        Once you go beyond that, it all just boils down to performance, and that's down to the user to decide on price and use case. AMD shared some stuff - as Intel did with the Pentium D - so you get a lower-priced CPU.

        My Atom N3350 processor is the slowest laptop CPU on the market today (and it's been superseded by the N4000! I bought the last N3350 in the store!), but I'm not going to sue Intel for not supplying hyperthreading, a 'proper' GPU, or more than a limited cache RAM. I knew what I was buying, because I researched before I bought!

        1. Charles 9

          Re: The core is not the whole apple

          "So 8 bit controller chips including the good old Z80 which is still available are 0 core?"

          From my earlier post: "Earlier chips didn't expect an internal FPU so can be excused, but not anything since."

          The Z80 is pre-80486, so it falls under that exemption. Times change, as do expectations. Thus my benchmark: the 80486, with its internal FPU.

          Put it this way: I didn't consider the Cell CPU to be 8-core because it shared too much.

      3. Jedipadawan

        Re: The core is not the whole apple

        >"So what happens when a couple come in with simultaneous toilet emergencies?"

        One waits as a CPU does.

        Then, after the emergencies, the two get back to work in parallel.

        Bottlenecks do not decide the definition of a core.

        You are talking performance, not core here.

        By this logic Intel should be sued for selling the 'two core' Pentium D, which only used a single memory interface, causing a massive bottleneck.

        1. Charles 9

          Re: The core is not the whole apple

          "One waits as a CPU does."

          But it's still a false expectation. A single unisex toilet is one for men OR women. One for men AND women implies both can be accommodated at once, meaning you get sued because someone ends up peeing their pants.

  18. Herby

    From what I hear, a "core" is...

    ...anything the marketing literature says it is. No more, no less.

    Who knows, I may even get on the jury given my locale. That would be a treat! Of course, for all I know, they are talking about "Apple Cores" (boo hiss...).

    1. Jedipadawan

      Re: From what I hear, a "core" is...

      >"...anything the marketing literature says it is. No more, no less."

      Outside of Acorn/ARM, every single CPU manufacturer could be sued over their definitions of "RISC" since 1987!

      1. Mage Silver badge

        Re: From what I hear, a "core" is...

        RISC and CISC are very misleading terms, as neither CPU type is really about how many instructions the compiler or assembly language can use.

        Load-store architecture is maybe a better, but less snappy, term!

        https://en.wikipedia.org/wiki/Load%E2%80%93store_architecture

        RISC instruction set architectures such as PowerPC, SPARC, RISC-V, ARM, and MIPS are load–store architectures.

        So no, the Acorn-designed ARM is not the only RISC, nor even the first. Acorn didn't even invent the idea; I think they looked at something Western Digital did.

        1. Jedipadawan

          Re: From what I hear, a "core" is...

          Hmmm... Load/store applied to the 6800 and 6502, but even Chuck Peddle, who put the 6502 together, denied the 6502 was RISC.

          Load/store alone is not held to be RISC. My understanding of actual RISC is zero microcode. Now, I will not pretend to know whether the likes of the 6800 or 6502 used microcode at all, but neither Motorola nor MOS/Rockwell described their processors as RISC.

          The design of the ARM does seem to have been genuinely innovative - such that Intel RUSHED to claim the 486 was (somehow!) RISC at the time! That also explains, in part, the low power consumption of ARM compared to x86 chips, which have to translate x86 into RISC-type instructions.

          Load/store is RISC-like in terms of instruction set but, as I understand it (and I have to bow to the engineers on this one), not in terms of the actual physical implementation on the die. And that's what Chuck Peddle said back in the day!

          And I was a 6502 man!!!

          Ok, I gotta sleep. It's 00:15AM where I am!

          1. sgrier23

            Re: From what I hear, a "core" is...

            8o)

            Initially I was a Z80 man, then a 68000, which was accelerated to a 68010, and on to a 68030 with a 68882 FPU a few months later.

            Those were the days...

            1. Jedipadawan

              Re: From what I hear, a "core" is...

              They WERE the days because back then you were in control of your CPU and your computer!

              Then the dark days came and everything went Windows, and even Apple only survived because Microsoft made a conscious decision NOT to exterminate Apple - just so Microsoft could claim it was not a monopoly, even while knowing Apple was a hardware manufacturer, not a software one.

              Worked too.

              For years it was expensive Mac or Microsoft.

              Then Linux became usable and got decent software, while laptops became affordable and reliable (unlike the kit from the mid-to-late 90s that was guaranteed to be dead in 18 months!), and computing became fun again!

              Although smartphones also came along at roughly the same time, turning Google into another monopoly and people into glass-screen zombies...

          2. Tom 7

            Re: From what I hear, a "core" is...

            RISC is RISC - it could theoretically have microcode*; it's just that people discovered that a lot of the time a compiler could optimise stuff and make things a lot faster than the same source compiled for the microcoded stuff. When memory was as valuable as the CPU, the microcoded stuff seemed like a good idea. Sophie Wilson seems to have spotted that the simplicity of the 6502 could be combined with a limited instruction set and 32 bits, and still burn through source code faster than CISC, using a lot less CO2!

            The meaning of core will disappear as number-crunching chips for AI come online - my Raspberry Pi's ARM CPU has vector-processing units which are surprisingly nippy for doing some maths stuff, and I can see more generalised arrays of this sort of thing, optimised in different ways for AI rather than, say, graphics, becoming very important in the future. But IBM seems to have shown that 8-bit maths is more than adequate for a huge amount of AI stuff, so how many 8-bit cores does a 64-bit 'core' count as?

            * you could argue that memory cache is a form of microcode but for memory access rather than ALU access.
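
            A minimal sketch of the 8-bit-maths point above: a dot product - the basic operation of neural-network inference - computed with int8 inputs and a 32-bit accumulator, next to a float version. Illustrative only; real quantised inference also needs scale and zero-point handling, and the values here are made up.

                #include <stdio.h>
                #include <stdint.h>

                #define N 8

                /* int8 dot product with a wide accumulator to avoid overflow */
                int32_t dot_int8(const int8_t *a, const int8_t *b, int n) {
                    int32_t acc = 0;
                    for (int i = 0; i < n; i++)
                        acc += (int32_t)a[i] * (int32_t)b[i];
                    return acc;
                }

                /* reference float dot product */
                float dot_float(const float *a, const float *b, int n) {
                    float acc = 0.0f;
                    for (int i = 0; i < n; i++)
                        acc += a[i] * b[i];
                    return acc;
                }

                int main(void) {
                    int8_t qa[N] = {1, -2, 3, -4, 5, -6, 7, -8};
                    int8_t qb[N] = {8, 7, 6, 5, 4, 3, 2, 1};
                    float fa[N], fb[N];
                    for (int i = 0; i < N; i++) { fa[i] = qa[i]; fb[i] = qb[i]; }
                    printf("int8 dot:  %d\n", dot_int8(qa, qb, N));
                    printf("float dot: %g\n", dot_float(fa, fb, N));
                    return 0;
                }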

  19. Will Godfrey Silver badge
    Linux

    Just a thought

    I wonder if any of the plaintiffs have any connection with Intel.

    I bought a Bulldozer unit and knew exactly what I was buying. I had no difficulty understanding the tradeoff and it did exactly what I wanted at the time. Later I re-purposed the computer for more fp work and it didn't perform so well. I was not the least bit surprised by that.

  20. Terje

    I think the most damning thing here is the shared L1 cache - how can that not be a bottleneck? I can't imagine not having a rather significant number of collisions there.
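
    (Worth noting, hedged against memory of the design docs: Bulldozer shared the L1 instruction cache per module, while each core kept its own L1 data cache.) The kind of collision being worried about here is easy to provoke deliberately. A minimal sketch, assuming a POSIX system with 64-byte cache lines, compiled with -pthread: two threads hammer counters sitting in the same cache line, then in separate lines; the first case is typically much slower because the line ping-pongs between cores.

        #include <pthread.h>
        #include <stdio.h>
        #include <time.h>

        #define ITERS 100000000L

        /* counters sharing one 64-byte cache line */
        struct { volatile long a; volatile long b; } same_line;
        /* counters padded apart into separate lines */
        struct { volatile long a; char pad[64]; volatile long b; } padded;

        static void *bump_same_a(void *p) { for (long i = 0; i < ITERS; i++) same_line.a++; return p; }
        static void *bump_same_b(void *p) { for (long i = 0; i < ITERS; i++) same_line.b++; return p; }
        static void *bump_pad_a(void *p)  { for (long i = 0; i < ITERS; i++) padded.a++;    return p; }
        static void *bump_pad_b(void *p)  { for (long i = 0; i < ITERS; i++) padded.b++;    return p; }

        /* run two workers in parallel and return elapsed wall-clock seconds */
        static double timed(void *(*f)(void *), void *(*g)(void *)) {
            pthread_t t1, t2;
            struct timespec s, e;
            clock_gettime(CLOCK_MONOTONIC, &s);
            pthread_create(&t1, NULL, f, NULL);
            pthread_create(&t2, NULL, g, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            clock_gettime(CLOCK_MONOTONIC, &e);
            return (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) / 1e9;
        }

        int main(void) {
            printf("same cache line:      %.2fs\n", timed(bump_same_a, bump_same_b));
            printf("separate cache lines: %.2fs\n", timed(bump_pad_a, bump_pad_b));
            return 0;
        }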

    1. Jedipadawan

      But bottlenecks do not equal "Not a core."

      See the 'nearly' dual core Pentium-D.

      Then we have the non-OOO original Atom processors - more bottlenecked than any other CPUs of the time.

      A slow CPU does not equal a non-CPU.

      And if anyone automatically believes marketing then... one has not been out in the real world long.

  21. jms222

    SSE

    Regardless of your definition of core, the claim that a module can't execute floating-point operations in parallel is shaky, simply because MMX/SSE lets a single core execute several of them per instruction.

    Even when you can execute them in parallel, it's very easy for that to be throttled by cache/memory bandwidth, because that is of course shared (beyond L1).
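
    A minimal sketch of that point, assuming any 64-bit x86 toolchain (the intrinsics below are the standard SSE ones from xmmintrin.h): one instruction performs four single-precision additions at once, so per-core FLOP counts depend on vector width as much as on how many FPUs are on the die.

        #include <stdio.h>
        #include <xmmintrin.h>   /* SSE intrinsics */

        int main(void) {
            __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
            __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);
            __m128 c = _mm_add_ps(a, b);   /* one instruction, four additions */
            float out[4];
            _mm_storeu_ps(out, c);
            printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
            return 0;
        }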

    And all this stuff has been documented for years and one should always test performance before investing etc etc.

  22. Maelstorm Bronze badge

    This lawsuit is about definitions.

    This lawsuit is about definitions, nothing more, nothing less. I actually have an FX-8350 chip in my main workstation. It runs just fine for what I use it for. But I do understand the false advertising claim.

    To be technical, in my mind I consider a core as having its own instruction/data caches, fetch circuitry, instruction decoder, branch predictor, register file, ALU, register-forwarding unit, and FPU. This dates back to the era when 80486DX machines were common - the 486DX being the first x86 CPU to have both integer and floating-point units on the same die.

    So if some of those resources are shared, I can see how that can be an issue. But my philosophy is: if it works, I'm not going to complain. Most of my work is coding anyway, which could be done on an 8088.

  23. imanidiot Silver badge

    Uhhhmmmm

    "Or it can go to a jury trial in which members of the public (albeit likely tech-savvy ones since the case is taking place in Silicon Valley)"

    You guys seriously overestimate the tech-savvyness of the general public. Even those living in a supposedly "high tech" area are not that likely to be super tech savvy. Most people are idiots, independent of location or supposed level of education and intelligence.

  24. iron Silver badge
    Stop

    paid a premium?

    As a happy owner of an FX-8350, which is still easily running the latest games and applications without problems, I disagree with the statement that I paid a premium for the chip. It was a damn sight cheaper than the four-core Intel equivalents at the time!

  25. TeeCee Gold badge
    WTF?

    Really?

    ... and paid a premium for that.

    Er, no. If they'd paid a premium for it, it would have been an Intel product.

  26. DAHISTRIMUS

    I'm with AMD - it has 8 cores, even if they are all crap ones.

  27. SonOfDilbert
    Mushroom

    without interference or interruption between cores

    The <ahem> *core* of the problem would appear to be that there is a reasonable expectation that an 8-core CPU can carry out eight tasks *simultaneously* - without interference or interruption between cores. This is what most people understand a modern CPU "core" to be, and it seems that this AMD CPU cannot guarantee that pedigree of operation under a number of circumstances (floating-point ops, for example).

    Regardless of whether you or some other random person thinks that this is acceptable or not, the cold, hard facts are simply cold and hard and rather fact-like.
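
    That expectation is testable. A hedged sketch, assuming a POSIX system compiled with -pthread (the workload size and thread counts are arbitrary illustrative choices): time one thread of floating-point work, then eight in parallel. On eight fully independent cores the wall-clock time should barely move; if FP resources are shared, the eight-thread run may take noticeably longer than the one-thread run.

        #include <pthread.h>
        #include <stdio.h>
        #include <time.h>

        #define WORK 50000000L

        /* a chain of dependent FP multiply-adds; volatile keeps it honest */
        static void *fp_work(void *arg) {
            (void)arg;
            volatile double x = 1.000000001;
            for (long i = 0; i < WORK; i++)
                x = x * 1.000000001 + 0.000000001;
            return NULL;
        }

        /* start n workers, wait for all, return elapsed seconds */
        static double run_threads(int n) {
            pthread_t t[8];
            struct timespec s, e;
            clock_gettime(CLOCK_MONOTONIC, &s);
            for (int i = 0; i < n; i++) pthread_create(&t[i], NULL, fp_work, NULL);
            for (int i = 0; i < n; i++) pthread_join(t[i], NULL);
            clock_gettime(CLOCK_MONOTONIC, &e);
            return (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) / 1e9;
        }

        int main(void) {
            printf("1 thread:  %.2fs\n", run_threads(1));
            printf("8 threads: %.2fs\n", run_threads(8));
            return 0;
        }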

    1. Jedipadawan

      Re: without interference or interruption between cores

      But, in reality, all manner of multi-core CPUs have had bottlenecks and clashes, for price reasons.

      Intel should be sued for their bottlenecked RAM access in the Pentium-D, for instance, and their non-OOO original Atom designs.

      Performance is not the issue; it's the savviness of the customer. If performance becomes the benchmark, I could sue Intel for not supplying dual-core i7 performance on a dual-core Atom N3350 laptop. Of course, if that were to happen, laptop prices would go up at least threefold!

      We can't measure laptops by performance. As it stands, and as I understand it, the 8 cores in question DO operate in parallel... except for FPU operation.

      I knew when I bought my N3350 - just two weeks ago, actually - that I was buying THE minimal-spec laptop, CPU-wise (the last in the store, actually. Yes, it was in a store; it was an emergency and I had to see the unit for myself to test that the hardware would run KDE Neon), and that it would not realistically run games.

      But I knew what I was buying, why I was buying it and the compromises needed for price and form factor.

      Dat's life!

      Performance cannot be the definition of a core.

      1. SonOfDilbert

        Re: without interference or interruption between cores

        > Performance cannot be an definition of a core.

        That's not what I'm suggesting. Overall performance is a combination of factors.

        I am suggesting that the term "core" means to many people a unit that is capable of executing a workload simultaneously with other "cores" uninterrupted.

        1. Anonymous Coward
          Anonymous Coward

          Re: without interference or interruption between cores

          So what happens when two cores want to write different values to the same memory location at the same time? Memory access is part of a CPU's basic workload, is it not?
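
          For what it's worth, that question has a concrete answer you can watch happen. A minimal sketch, assuming a C11 compiler on a POSIX system built with -pthread: two threads increment one counter. The plain int loses updates to the race, while the C11 atomic is serialised through the cache-coherency protocol - shared memory is a contended resource no matter how independent the cores are. Exact lost-update counts vary run to run.

              #include <pthread.h>
              #include <stdatomic.h>
              #include <stdio.h>

              #define ITERS 1000000

              int plain = 0;                 /* racy: increments can be lost   */
              atomic_int safe = 0;           /* coherent: increments serialise */

              static void *bump(void *arg) {
                  (void)arg;
                  for (int i = 0; i < ITERS; i++) {
                      plain++;                        /* unsynchronised read-modify-write */
                      atomic_fetch_add(&safe, 1);     /* one core at a time wins the line */
                  }
                  return NULL;
              }

              int main(void) {
                  pthread_t t1, t2;
                  pthread_create(&t1, NULL, bump, NULL);
                  pthread_create(&t2, NULL, bump, NULL);
                  pthread_join(t1, NULL);
                  pthread_join(t2, NULL);
                  printf("plain:  %d (expected %d)\n", plain, 2 * ITERS);
                  printf("atomic: %d\n", atomic_load(&safe));
                  return 0;
              }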

  28. Anonymous Coward
    Anonymous Coward

    FX8350 4.2ghz OC Black Edition

    Bottlenecking does not always occur with the Bulldozer models of AMD's 8-core CPUs. Yes, on many pre-built retail PCs it can occur, but in home-built gaming PCs, depending on your build, often it doesn't occur at all.

    My PC, for example, is home-built: an FX8350 4.2GHz Black Edition, 24GB of RAM, a GTX 1070 OC Edition, a Creative sound card and an AMD Black Edition motherboard. I have no bottlenecking, have tested positively against some major benchmarks, and play games in full 4K UHD with full HDR, including Deep Colour, at upwards of 60FPS.

    Many Intel fans have given AMD hate as a brand, but AMD CPUs can be great in the right rigs.

    1. defiler

      Re: FX8350 4.2ghz OC Black Edition

      That's kind of not the argument at all. The argument is that if you were performing a large amount of FP work the performance would be strangled by the availability of resources. There will always be a bottleneck, but the accusation here is that the bottleneck was purposefully introduced but not made clear to purchasers.

      Similar to putting a restrictor onto the inlet manifold of an engine - it reduces the airflow, and creates a bottleneck. The rest of the system could run much better, but when you're reliant on that inlet you're dragged down. If that were put in quietly by the manufacturer without updating the published spec then they'd be in trouble.

      I also have the 8350 - great CPU, but a bit thirsty on power. I remember reading about the shared FP units when it came out. I also saw the benchmarks and the price, and decided that it was the best poke I'd get for the money. I'd suggest that if these people were reliant on FP they should have paid attention to those aspects. Or just grabbed the great price (it was for me, at least) and sucked it up on the speed.

      1. Jedipadawan

        Re: FX8350 4.2ghz OC Black Edition

        But that does not define a core.

        If we're talking misleading marketing (but when is marketing NOT misleading???!) in terms of performance then, sure, that might be an issue... though it would still be hard to make stick, given the nature of marketing... but performance does not determine a core.

      2. Wellyboot Silver badge

        Re: FX8350 4.2ghz OC Black Edition

        That was my thought when I picked an 8350 as well: the price/performance numbers put the AMD 8 cores + 4 FPUs ahead of the similarly priced i5 with 4+4 on the benchmarks I was interested in. AMD's website even had pretty drawings showing the 2+1 module layout.

        After years of using cores with zero cache and/or zero FPU - does anyone else remember Lotus 1-2-3 needing to be told there wasn't an FPU on a 486SX? Nothing else seemed to care.

    2. Patrician

      Re: FX8350 4.2ghz OC Black Edition

      "Full 4K UHD with full HDR including DeepColour"

      You're very lucky to get that out of a 1070, no matter what CPU you're running.

  29. Simon Harris

    "up to a jury of 12 to decide"

    Probably a more interesting case than you usually get when called for jury service.

    Certainly more interesting than my jury service last year - I spent the whole week not being picked and had to hang around in the waiting room.

    1. Anonymous Coward
      Anonymous Coward

      Re: "up to a jury of 12 to decide"

      I got attempted assault with an ice cream van. Not sure a 20-year-old Transit could pull off the manoeuvres they described even if it wasn't full of compressors and frozen treats. Hey-ho.

      1. AIBailey
        Coat

        Re: "up to a jury of 12 to decide"

        Sounds like the evidence for the prosecution was a little Flake-y.

        (I wonder if the detective on the case looked like Magnum? That would have been Fab)

    2. Jedipadawan

      Re: "up to a jury of 12 to decide"

      If all twelve jurors have to share the same waiting room, does that mean there is only one juror...?

    3. Caver_Dave Silver badge
      Holmes

      Re: "up to a jury of 12 to decide"

      If it is anything like the UK court system, the barristers will object to anyone who is slightly technical being on the jury. In one case I sat on, they had to start again with a completely new pool of potential jurors, as almost all were objected to for one reason or another (most for having been fishing in the past 10 years)!

      1. Anonymous Coward
        Anonymous Coward

        Re: "up to a jury of 12 to decide"

        Some barristers object to people who know the subject because they read up the case the night before and are worried about looking stupid.

        Others object because they have built up a very complex and convincing sounding case which involves "educating" the jury in the barrister's version of reality.

        Really, I think the courts could do a much better job if juries were expert panels of retired people (so no corporate axe to grind). After all, people are supposed to be tried by a jury of their "peers", which presumably means that for a computer-architecture case the jury should be computer architects - not people who think the flat thing with the picture on it is "the computer".

    4. Anonymous Coward
      Anonymous Coward

      Re: "up to a jury of 12 to decide"

      Will that be 12 complete people, or 12 heads, with each pair sharing a body?

  30. Patched Out
    Meh

    FX8350

    I'm currently using an FX8350 Black Edition and I demand my $1.00 compensation (after lawyer fees, of course).

    I'll settle for a 20% off coupon for a Ryzen, though.

    TBH, after purchasing the FX8350, a new AM3+ motherboard and memory to support it, I was disappointed that I did not see much of a performance increase over my previous Phenom II system.

    1. Wellyboot Silver badge

      Re: FX8350

      >>>did not see much of a performance increase<<< Were you regularly maxing out the Phenom? If not, the raw speed increase before hitting the RAM/disk-storage access limits will be most of what you see.

      Moving from spinning HDD to a good SSD will make a far bigger improvement.

    2. boatsman

      Re: FX8350

      Hmmm...

      Now, what task did you do on the Phenom that did not improve on the 8350?

      I made the same switch and it made a big difference (= rendering videos and running multiple VMs).

  31. Anonymous Coward
    Anonymous Coward

    So how would Oracle/MS charge you on this Proc?

    1. bpfh
      Joke

      As much as they can... Probably 16 cores for Oracle - as many cores as they can count, times two for a vague definition of hyperthreading :p

    2. hititzombisi

      Ok, how many millions do you have in the bank?

      All of it, please. Yes, all of it I said.

  32. DuncanLarge Silver badge

    The way I have always seen it

    A core, as far as I know and care to define it, is at minimum an element of a modern CPU that can execute its own instructions on its own registers without interfering with other cores.

    This means that all the cores could end up sharing an FPU and caches, although I would expect a decent chip to give each core its own L1 cache.

    This means I have - being an AMD user for years - had no issue with how AMD's cores were designed. I knew their cores worked like this and understood that it was one of the main issues behind the performance difference between them and Intel CPUs. I just saw Bulldozer as a poor architectural design, forcing the cores to share too many elements, like the FPU, which impacted certain workloads.

    This article suggests that the marketing information may have misled peeps into thinking the cores were more independent than they actually are, so maybe there is something to be argued here. However, if AMD can show that the FX chips outperform the non-FX chips for most workloads, then I think that might win the case.

    I always saw the early multi-core cpu as a hybrid between the single core cpu and the multi-cpu systems I drooled over.

    1. boatsman

      Re: The way I have always seen it

      And here you touch on the reason why AMD will get off the hook:

      "This means that all the cores could end up sharing an FPU and caches, although I would expect a decent chip to give each core its own L1 cache."

      - Intel: yes, most of them, but not all (Core iX versus Xeon).

      - SPARC: it depends. Old stuff, yes; newer stuff, sometimes yes, sometimes no.

      - IBM POWER: no, never shared any processing resources - prefetcher, predictor, FP units, integer units: all dedicated. Costs you something, but hey...

  33. Anonymous Coward
    Anonymous Coward

    I'm with AMD on this one.

    My 286SX didn't have an FPU either.

    Are you telling me that my computer didn't have a CPU?

    1. Jedipadawan

      There was a 286SX?

      Really?

      Or is this a typo and you meant 386SX?

      [Even the 386DX did not have an FPU, and the 486's FPU was generally useless.]

      1. Mage Silver badge

        re: the 386DX did not have a FPU

        You could waste your money on an FPU for the 386, called the 387. You needed something very specialist to get value out of a 387, a 487 or a 486DX.

  34. aje21
    Coat

    Hmm

    If you sit the 12 jurors on two benches of six, does that mean you have only two jurors?

    (Perhaps too obvious an analogy to use in court)

  35. East17

    Nonsense from ambulance-chasing lawyers :(

    This is nonsense done by ambulance-chasing lawyers IMHO.

    Still, my opinion comes with a 22-year background in IT and three university degrees, one of which is in Law...

    So I guess until the judge and jury get at least a 22-year IT hardware tech handover, they can be swayed any way the lawyers want...

    A less-than-great implementation of a good idea, later sabotaged by compiler discrimination, is not really reasonable cause to redefine the "computing core"...

    Sharing one execution engine and doubling others, while sharing L2 or L3 cache, does not redefine the "core" any more than a core sharing the BPU and the cache does, IMHO...

    What a load of bull ...

  36. ChrisPVille
    Boffin

    Superscalar is not multicore

    I think the sharing of FPU resources is completely irrelevant. As everyone has noted, FPUs are sometimes shared among heterogeneous multicore CPU designs (or omitted entirely).

    The bigger problem is the shared instruction fetch and decode. What AMD has here is much, much closer to a conventional superscalar CPU - with independent execution resources able to run through a single instruction stream out of order - than to the usual definition of multi-core.

    Calling a single superscalar module two cores is definitely misleading, at least from a CPU-design perspective. By AMD's logic, many Intel, IBM and ancient MIPS chips would all have inflated core counts.

  37. Milton

    It's the performance, stupid

    Several BTL posters have already made the point: there is no watertight definition of what constitutes a separate CPU core, and certainly not one that wouldn't have had to be changed every 10 years or so in the last 40 years. Those of us who bought AMD CPUs some years ago (an FX-9590 is outputting these words right now) were, by virtue of their choice, somewhat tech-savvy and entirely capable of asking themselves about how and why AMD's architecture was the correct fit for their needs. For one thing, you'd have had to factor in water cooling, which tends to focus the mind wonderfully. (In my case, I was working on molecular modelling for a then-client and needed a zippy CPU with specific qualities for orchestrating a bunch of GPU routines; done a crypto project since then, for which it has also been useful: but others, I daresay, will have been looking at gaming and wotnot. Either way, my workstation still blasts along at 5GHz—and hasn't, fingers crossed, sprung a leak.)

    So of course it comes down to performance. Did the 4/8 core CPU justify whatever benchmarks and marketing were advertised for it? Can we say that the putative bottlenecks genuinely, generally and significantly reduced the system's performance below what should have been expected from a more independent 8-core implementation?

    My own experience makes me sceptical, as I have found the CPU rock solid and, even today, eyewateringly fast for my needs, which still sometimes include heavy lifting. But I am not a gamer, and certainly there may be use cases I am unfamiliar with where the differences being argued about will have a measurable effect. I don't think, though, that I'd want to be the plaintiff relying on an incremental performance angle in the absence of a universally agreed definition of what constitutes a core ... it's not like saying "This engine was marketed as a V8 but I only got a straight four".

    1. Jedipadawan

      Re: It's the performance, stupid

      As I understand it the 486 'FPU' was so useless as to be effectively broken.

      So Intel should be sued now for marketing a chip with a non-FPU?

      That was a far worse instance of dodgy marketing!

  38. Ziggy1971

    This lawsuit is someone trying to play stupid, claiming they didn't get what they expected.

    The vast majority of the plaintiffs would have known very quickly whether or not they got what they expected. Simply using their computer as they had in the past should have indicated any performance changes. And if the performance wasn't what they expected, they would have tried to find out why, either by their own means (research) or through technical advice from professionals. Either way, most, if not all, of the plaintiffs would have known about the performance and/or cores, and either returned the product for a refund or accepted it as it was.

    (Surely Windows Task Manager would have shown 8 cores. Even if it didn't, I'm sure the performance must have been well above a quad-core's - perhaps not double, as performance scaling is rarely linear.)
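
    That Task Manager point can be put in code: the OS simply reports how many logical processors it schedules on, and says nothing about whether FPUs or decoders are shared underneath. A minimal POSIX sketch (on Windows, GetSystemInfo's dwNumberOfProcessors field gives the same style of answer); the comment about the FX-8350's result is an assumption based on how these chips enumerate to the OS.

        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            /* logical CPUs the OS currently schedules on */
            long online = sysconf(_SC_NPROCESSORS_ONLN);
            printf("OS-visible logical processors: %ld\n", online);
            return 0;   /* an FX-8350 would report 8 here, shared FPUs notwithstanding */
        }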

    If they get a bunch of technical people - scientists and engineers - to testify at the proceedings, does anyone believe that the plaintiffs are going to understand any of it? If the plaintiffs really are ignorant about technology, how could they make any claims? How are they going to understand and accept the outcome, regardless of the ruling? I doubt that ALL the plaintiffs suffered the same "ignorant" affliction.

    As technology changes so does the definition of a core in a CPU (among other things). That's one of the great things about research and development.

    An entire computer has to share resources; that's how it gets work done. If a CPU didn't share its components with other parts, exactly how is it supposed to transfer information from one place to another?

    Back then (the Bulldozer era), I don't know if any company revealed EXACTLY how the inner workings of its products went, as much of that information is proprietary. The only things consumers want to know are how much faster/better it is and how much it costs.

    What's next - a hard drive doesn't respond quickly enough, five years after the warranty expires, so its specs are misleading and inaccurate?

    Does the exact number of cores (by definition of the plaintiffs) really matter?

    I don't believe that all those thousands or millions of people bought the CPUs without knowing what they were buying.

    Unless all cores are created equally, there will be discrepancies by definition.

    If AMD, or any company, has to reveal EXACTLY what the definition of a core is, then it is bound to that definition and cannot deviate from it.

    By the way, who's behind the plaintiffs? Intel? They've been struggling with development for years already, and this sounds like something they could pick on: CPUs that have long been out of development and that virtually nobody uses now, or at least only a very small percentage of people.

    I'm not clear on the new AMD CPUs yet, but don't they offload much of the shared plumbing onto the I/O die with the third-generation Ryzens?

    Anyway, enough ranting.

  39. Snowy Silver badge
    Facepalm

    At the time

    "The company said at the time that a Bulldozer module could average 80% of the performance of two complete cores." - https://www.tomshardware.co.uk/amd-fx-processors-lawsuit-continues,news-59824.html

    I cannot see what they have to complain about.

  40. sgrier23

    American Legality

    Greetings.

    I have just read this and thought "Oh my Gaud, will Americans sue for everything?"

    This is utterly bonkers and the court in California should have said to them "Don't waste my time."

    1. Jedipadawan

      Re: American Legality

      The key word here is "California."

  41. Richard Boyce

    Recent security lessons

    Aren't we now running processors that have had to be hobbled because of the resources they intimately share? Are the people who paid extra for hyperthreading and similar still getting the benchmarks they paid for?

  42. guyr

    Buyer wake up

    I bought two Opteron 4234 processors for a workstation. I was fully aware of the Bulldozer architecture, and that a module shared an FPU. As others have said, processors change over time. I hope this case goes down in flames. We don't want to hamper innovation by punishing vendors after the fact. AMD did not hide Bulldozer's architectural details. If it had, *that* would be grounds for a lawsuit. They were up front about the architecture, so I can't see where they did anything deceptive.

    1. Jedipadawan

      Re: Buyer wake up

      Exactly.

      By this logic I could sue Intel for not giving me hyperthreading on this n3350.

      The words "Bad precedent" do not even begin the describe the fall out if this goes against AMD and Intel could be sued for all manner of 'false claims' going right back to the 8086 being a 16 bit chip. [It was actually an 8 bit 8085 with bits soldered on in essence!]

  43. boatsman

    Sun is gone - fortunately for them - cuz they had 256-core CPUs

    which could not all execute simultaneously...

    So if AMD bites the bullet, Oracle can start counting the millions they are going to lose :-)

  44. tygrus.au

    A core is what I say it is, nothing more, nothing less

    Early CPUs never had an FPU and would still each be called a CPU core when counting. Would you call the FPU of an early Atom processor a full FPU? The Atom CPU core is far less powerful than Intel's normal desktop cores, but everyone accepts the differences. To misquote Humpty Dumpty from 'Through the Looking-Glass': "..it means just what I choose it to mean - neither more nor less". 8 CPU cores with some shared FPU resources perform better than an Intel Atom but worse than a modern Intel Core i7. Buyer beware.

    1. Uffish

      Re: A core is what I say it is, nothing more, nothing less

      ... and that little nugget of truth triumphs because there is no accepted definition of a CPU core, so everyone's definition can stand on its own merit.

  45. Amblyopius

    Just ask the jury to get out their phones and ask them how many cores each one has. Your options are:

    a) they don't know

    b) they know but the cores are not compliant with the definition pushed upon them

    c) they run out to sue their phone manufacturer for false advertising

  46. Andy8

    licensing costs

    Back in the day I remember the release, and AMD clearly stating their new CMT approach; I myself still run an FX8350. So my question is this: I worked for a large company that ran mainly AMD Opteron (Bulldozer) for our ESX and Citrix farms etc., and a lot of software was licensed by the number of physical cores - Oracle products being heavily used there. If a court says it's not an 8-core, can we all claim millions back in overpaid licensing? Most vendors counted the cores as advertised.

  47. David Roberts
    Facepalm

    Only in the USA

    See title.

  48. David Roberts
    Happy

    AMD FX-6300

    Just checked, and this is the chip in one of my machines.

    I bought it (after looking at a lot of benchmark data) to upgrade an ancient system, on the basis that it would match my old Core i5 2500K in general performance without costing anywhere near as much as even a cheap Intel processor of similar performance.

    I know that this lawsuit is USA only and that it doesn't cover this particular 6 (or 3) core chip but I have no issues with how the chip is/was described.

    In my general purpose PC the processors are mainly idle, anyway.

  49. rmstock

    AMD wins all the way

    AMD is clearly the winner over Intel these days, with its Ryzen Threadripper ripping Intel apart. At full load the CPU's temperature doesn't exceed 50°C (122°F). Clearly the NORTHERN DISTRICT OF CALIFORNIA judge has been pressured into allowing this ridiculous class-action lawsuit, which is only a fight over words. When was the last time that happened?

    "At Just Over Half The Power…?!

    Also, in that same test, it showed the system level power. This includes the motherboard, DRAM, SSD, and so on. As the systems were supposedly identical, this makes the comparison CPU only. The Intel system, during Cinebench, ran at 180W. This result is in line with what we’ve seen on our systems, and sounds correct. The AMD system on the other hand was running at 130-132W. If we take a look at our average system idle power in our own reviews which is around 55W, this would make the Intel CPU around 125W, whereas the AMD CPU would be around 75W." https://www.anandtech.com/show/13829/amd-ryzen-3rd-generation-zen-2-pcie-4-eight-core
