AMD says its FPGA is ready to emulate your biggest chips

The flexibility of field programmable gate arrays (FPGAs) makes them ideal for all kinds of applications, from smartNICs and telecom networks to emulating retro game consoles. However, AMD's – formerly Xilinx's – latest Versal FPGAs, unveiled Tuesday, can do a bit better than simulate a 30-year-old microprocessor …

  1. bazza Silver badge

    Rarely Useful

    Interesting to see the same old use cases being rolled out, again. Quite a lot of them are not very mass-market, much more niche-market.

    FPGAs remain ******* hard to get working effectively. There's a lot of talk of direct model synthesis, but that's never very good. Someone doing some proper coding gets better results, and the number of people who actually work in VHDL or Verilog and are good at it is vanishingly few. You've really got to need an FPGA to justify using one. Software on a CPU is a lot easier.

    They're also expensive, hot, and prone to having insufficient on-chip resources. If you thought Apple Silicon had expensive on-chip RAM, wait until you're paying Xilinx / Altera prices for it. They do not have that much on-chip storage – hundreds of megabytes, tops – and as soon as that becomes insufficient you're better off with a CPU and GPU. The thing I think is interesting is that with these monster FPGAs, they're firmly in the land of large and hot; i.e. a large GPU or CPU is volumetrically and thermally competitive. The problem is that for computational applications, the GPU or CPU can easily outperform an FPGA if the problem data set requires data storage in off-FPGA RAM (GPUs especially have huge memory bandwidths in comparison), and they're a whole lot cheaper to buy.

    That's the benefit of there being a huge consumer market for CPUs and GPUs - the cost of the chips gets to be quite low. With very little in the way of a market (in comparison) for FPGAs, they're always an expensive solution to a problem. There is a very small range of problems where it's worth it.

    1. short

      Re: Rarely Useful

      I'm with you on the painful dev tools. There has to be a better way (or many better ways).

      I don't think the RAM limitations are inherently worse than for CPU/GPU, are they? If customers want masses of RAM, I'm sure that Xilinx will oblige, probably not on-die, but chiplets of cache in the same package or nice fast DDRx interfaces - nothing that they haven't done before, right down to the cheapie Zynq devices.

      If the workload is such that a CPU is the right answer, an FPGA won't be, that's not going to change.

      1. bazza Silver badge

        Re: Rarely Useful

        The thing is that if you start needing off die ram, overall performance starts relying on the memory bus performance. Because GPUs and CPUs are all about memory performance and have a lot of it (Intel are up to, what, 200GBps memory bandwidth per chip? NVidia GPUs go beyond that I think), they win.
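        To make the point concrete, here's a back-of-envelope calculation of the ceiling that memory bandwidth puts on a memory-bound workload. The 200 GB/s figure is just the rough number quoted above, used purely for illustration:

        ```c
        #include <assert.h>
        #include <stdio.h>

        int main(void) {
            /* Illustrative numbers only: roughly the ~200 GB/s quoted
               above for a modern CPU memory subsystem, streaming
               8-byte doubles (one read per element). */
            double bandwidth_gb_per_s = 200.0;
            double bytes_per_elem = 8.0;

            /* For a memory-bound kernel, peak element throughput is
               simply bandwidth divided by bytes moved per element -
               no amount of compute (FPGA or otherwise) gets past it. */
            double gelems_per_s = bandwidth_gb_per_s / bytes_per_elem;
            printf("bandwidth-bound ceiling: %.0f Gelem/s\n", gelems_per_s);
            assert(gelems_per_s == 25.0);
            return 0;
        }
        ```

        The same arithmetic applied to an FPGA with only a couple of DDR channels gives a far lower ceiling, which is the commenter's point about off-chip data sets.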

        FPGAs are torn between providing acres of programmable logic and using the silicon real estate for things like DDR5 memory interfaces. CPUs and GPUs don't make the same trade, because they store their programs in off-chip memory. Generally speaking, this means FPGAs are pressured to skimp on things like DDR4 interfaces.

        FPGAs have always been slow but potentially heavily parallel devices, and become just slow devices as soon as something forces them to be more (or fully) sequential (like a lack of memory interfaces).

        Meanwhile, CPUs and GPUs have become much more parallel with specific instructions. SSE / AVX in Intel AMD CPUs are pretty good. Altivec in PowerPC / Cell was epic for the day.

        To compete at all, FPGAs have had to include hard cores for certain DSP routines like FMA, and similarly now for AI applications, because doing the same thing in programmable logic cells is very slow indeed. I think that rather spoils the point of the programmable logic because all it is doing is pushing data in and out of hard cores. In comparison, all that CPUs and GPUs are doing is pushing data in and out of their own SIMD vector units / cores. The difference is that programming the logic is hard and it runs at, say, 400MHz, whilst the software is easy and can easily clock along at 4GHz. FPGAs need a lot more hard DSP core hardware to be competitive. Thus they become large and expensive chips.
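        For readers unfamiliar with the DSP blocks mentioned above: their core operation is a fused multiply-add, a*b + c in one rounding step. A minimal C sketch (using the standard `fmaf` from `<math.h>`; the 4-tap filter is an invented example, not from the comment) shows why a chain of them maps naturally onto filter kernels:

        ```c
        #include <assert.h>
        #include <math.h>

        /* The core operation of a DSP hard block: a fused multiply-add,
           a*b + c computed with a single rounding step. */
        float dsp_fma(float a, float b, float c) {
            return fmaf(a, b, c);
        }

        int main(void) {
            /* A 4-tap dot product is just a chain of FMAs, which is why
               a column of DSP slices maps so naturally onto FIR filters. */
            float x[4] = {1.0f, 2.0f, 3.0f, 4.0f};
            float h[4] = {0.5f, 0.25f, 0.25f, 1.0f};
            float acc = 0.0f;
            for (int i = 0; i < 4; i++)
                acc = dsp_fma(x[i], h[i], acc);  /* acc += x[i]*h[i] */
            assert(acc == 5.75f);  /* all values exact in float */
            return 0;
        }
        ```

        A SIMD unit in a CPU issues a handful of these per cycle at several GHz, which is exactly the comparison the comment is drawing.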

        Also, FPGAs tend not to be keen on floating point (maybe they've got better?), and AVX / Altivec and GPUs love chewing through floating point arithmetic.

    2. Stuart Castle Silver badge

      Re: Rarely Useful

      I'm no FPGA developer, so could be very wrong, but it seems to me FPGAs are good for short product runs that require custom silicon, whether that silicon is a new design, perhaps in testing, or an emulation of an old design (for retro computing, or controlling old machinery).

      It does seem as though once the number of units required gets quite high, you'd be better off looking at making actual chips.

      1. Anonymous Coward
        Anonymous Coward

        Sometimes necessary

        Had just read an article where FPGAs were needed to do impossible math: Ninth Dedekind number discovered: Scientists solve long-known problem in mathematics. Kinda funny that the supercomputer was 'merely' the host for the FPGAs.

        1. Korev Silver badge
          Boffin

          Re: Sometimes necessary

          > Kinda funny that the supercomputer was 'merely' the host for the FPGAs.

          Exactly like they are for GPUs today too

      2. Anonymous Coward
        Anonymous Coward

        Re: Rarely Useful

        I'm pretty sure my Novation Summit synthesizer is built around FPGAs... I play retro music on it, so does that count as retro computing?

    3. martinusher Silver badge

      Re: Rarely Useful

      One reason why FPGA coding seems relatively crude compared to programming is that just getting the logic down and compiled is the (relatively) easy bit. The fun starts with layout and constraints. So there's no point in making them super easy to program if the only result is that nobody can lay out or time the things. So you don't really need to simplify the coding, 'consumerize' it -- sure, you could but what's the point when coding is the simple bit?

    4. IvyKing Bronze badge

      Re: Rarely Useful

      Considering what a top end FPGA costs, there's no way in hell that they would be used in any kind of mass market product. OTOH, there are a number of signal processing applications with extremely hard real time response constraints where the only alternative to an FPGA would be an ASIC.

    5. Justthefacts Silver badge

      Re: Rarely Useful

      That’s partly true. One obvious point is that for the mass market, any use case that benefits from “non-software-type” operations (e.g. bit-level manipulations) is implemented in an on-CPU accelerator hardware block.

      And where do you think the designers of that block implemented and verified their design? On FPGA. So, that’s a major market, even if niche.

      Counter-example: Xilinx and Altera are owned by AMD and Intel respectively. If both those companies took their FPGA toys off the table, where do you think any other CPU manufacturer could prototype and develop their CPU? They couldn’t. They literally couldn’t. No other FPGA manufacturer comes within an order of magnitude of the capability needed for prototyping on that scale.

  2. ocelot

    Always the same.

    Every time somebody sets out to emulate chips with FPGAs, by the time you have completed the circuit boards of the emulator product, partitioned your design and synthesised it to the FPGAs, there will be a general purpose computer available that can simulate your design at comparable speed.

    I used to dabble with this technology back in the 1990s: millions were spent on a commercial custom chip emulator with hundreds of what were state-of-the-art FPGAs fitted in a box. We bought bigger FPGAs and built an emulator with about 10 FPGAs, I hacked together some design partitioner code... and then the company bought a computer that simulated the new device at a sensible speed for a fraction of the price.

    1. DugEBug
      FAIL

      Re: Always the same.

      Not true. Today's emulation products are not the stuff you dabbled with in the 90s.

      If you are emulating a large SoC/ASIC, you buy a board with these monster FPGAs already on it from Synopsys/etc. They are very expensive, but not as expensive as taping out a buggy chip. Those boards already have the RAM/peripherals that you may need to fully emulate the chip. Setting it all up and getting it to work is a PITA, but you are rewarded with emulation that is many orders of magnitude faster than simulation - and the software integration/test can be done before you commit to silicon.

      Intel/AMD/etc. wouldn't dream of taping out a monster chip without emulation.

  3. Anonymous Coward
    Anonymous Coward

    Yes but can it

    emulate the 2901 bit slice processor …..

    1. Anonymous Coward
      Anonymous Coward

      Re: Yes but can it

      Yes. Several thousand of them.
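      For anyone who never met the Am2901: it's a 4-bit ALU slice that cascades through its carry pins, so wider datapaths are built from chains of identical parts. A toy C sketch of just the add path (hugely simplified - the real part has function and source selects, a register file, and more):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* One 4-bit adder slice in the spirit of the Am2901. Slices
         chain through carry_out -> carry_in to build wider ALUs. */
      typedef struct {
          uint8_t sum4;      /* low 4 bits of the result */
          uint8_t carry_out; /* ripples into the next slice */
      } slice_result;

      slice_result slice_add(uint8_t a4, uint8_t b4, uint8_t carry_in) {
          unsigned raw = (a4 & 0xF) + (b4 & 0xF) + (carry_in & 1);
          slice_result r = { (uint8_t)(raw & 0xF), (uint8_t)(raw >> 4) };
          return r;
      }

      int main(void) {
          /* An 8-bit add from two cascaded slices: 0x7A + 0x15 = 0x8F */
          slice_result lo = slice_add(0xA, 0x5, 0);
          slice_result hi = slice_add(0x7, 0x1, lo.carry_out);
          uint8_t sum = (uint8_t)((hi.sum4 << 4) | lo.sum4);
          assert(sum == 0x8F);
          return 0;
      }
      ```

      On a modern FPGA each such slice is a trivial handful of LUTs, hence "several thousand of them".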

  4. monty75

    I bought a cheap FPGA (Tang Nano 9k) to play around with learning Verilog. I enjoyed it but there didn’t seem to be enough money in it to really sink much time into.

  5. martinusher Silver badge

    They're very useful parts, but...

    We've used FPGAs extensively for custom peripherals. The smaller ones are cheap, easy to use and very flexible -- changing the firmware changes the product function. The soft processors you can build with them are a bit slow but perfectly adequate for running communications. That said, they do have limitations. The most insidious is mission creep -- you have a product that can do 'X' which the marketing people will want to do Y, Z and heaven only knows what else. Adding extra functionality is easy up to a point, but as soon as the part utilization gets above about 75% then all sorts of little glitches and snags start creeping in, especially if you're already pushing the speed of the part. (Then there's the whole subject of clock domains......).

    It's a fun way to earn a living but it's not particularly easy.
