
Slowly ousting proprietary junk from this world, one chip at a time ...
It's a kitten rather than a roar right now, but if the MIAOW project unveiled at last week's Hot Chips conference can get legs, the next year could see the launch of the world's first “open GPU”. The result of 36 months' development (so far) by a team of 12 developers, MIAOW – the Many-core Integrated Accelerator of Wisconsin …
FPGAs are very power hungry compared to ASICs, but they are real HW. Since the Verilog can be compiled to an ASIC design instead, it now just needs the IP and money sorted to get a user chip, a chip for a MoBo or card.
This is quite exciting. So anyone with a suitable FPGA dev kit and the knowledge to use it can test this out and adapt it?
If it uses an FPGA and can be utilised as a compute board as well as a GPU, this will be picked up by crypto miners all over.
We might see more efficient mining clients if the board truly does end up being open.
I suspect some proprietary chips will creep in somewhere.
After all, the Raspberry Pi was intended to be open source but it has proprietary chips on it. The Broadcom GPU still has fairly naff drivers.
Didn't the Raspberry Pi Foundation run a competition at some point to get Quake 3 running on the board at over 30 fps?
No, the FPGA is for prototyping. The same FPGA (or better, an ASIC) specially programmed for mining is going to be better than this GPU implemented on an FPGA.
An FPGA is programmable HW, a standard part anyone can buy for prototypes or low volume. (A custom chip needs 10K to 1M pieces to make sense, and you check the design works by doing an FPGA version first. An ASIC can cost 100K to 1M in NRE, or even more for multi-layer, large-die, small-geometry parts, etc.)
The design is meant to be as open source (SW & HW) as feasible. It's basically a now-obsolete ARM phone chip on a breakout board. Any complete, non-trivial computer design, even an open-source one, is going to have some (or mostly) proprietary chips; the USB/Ethernet chip is probably proprietary. The real issue is whether parts are fully documented so you can write drivers from scratch. Almost all chips are proprietary.
That's because the BBC don't show repeats of "The Goodies" ... though, I think I heard Tim Brooke-Taylor saying once or twice that they are now available on DVDs (including the "infamous" Apartheight episode that was banned for rebroadcast in case it upset the South African authorities)
From the reaction of people who actually were at the presentation, MIAOW hasn't been designed to steer clear of patents. Right now that's a non-issue, but should it take root, the big stick will come along and make its stamp... Really, not the best base for a patent-free open source GPU.
It's also missing some gfx functions such as texture mapping, and it has a single processing pipeline. When you start enabling more pipes you run into all sorts of caching and corruption issues you never spotted before, so it's not just a case of altering some parameters and resynthesising. Great start for a uni project, but there are better options already out there that aren't getting the publicity.
If you're interested in the subject of DIY GPU processors and FPGAs, check out Jeff Bush's amazing write-up of his open GPU on FPGA. You need something with a fair number of logic elements to load his design onto, so something like a DE0-Nano or similar, but you can check out the code/Verilog etc from GitHub right now, and his blog is amazingly insightful to read.
http://nyuzi.org/
I'm also interested in CPUs on FPGA, but it's a massive, massive rabbit hole of learning to fall down into. There is the venerable TG68 implementation of the 68000, FPGA Arcade, Vampire V1 and a few other interesting ones released under open-source licenses you can download the sources to and check out, plus there are further closed projects in this space. The above is focused on m68k because it's a chipset I knew well back in the day, so I can relate past experience to bootstrap learning for the bits I don't understand. But there are Z80/6502, even Manchester computers on FPGA...
The Chinese are getting in on the area with their own FPGA designs and fabs; Gowin Semiconductor has released two designs in the past two years to compete with Altera and Xilinx etc. Lattice is another with interesting developments, and now there is an open-source toolchain (IceStorm) covering the Lattice iCEsticks, which lowers the difficulty bar to getting started.
http://www.latticesemi.com/icestick
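
To give a rough idea of what that IceStorm flow looks like, here's a minimal sketch of an LED blinker for the iCEstick built with yosys, arachne-pnr, icepack and iceprog. The pin numbers (12 MHz clock on pin 21, LED on pin 99) and file names are my assumptions from memory, so check them against the iCEstick manual and your own constraints file before trusting it.

// blink.v -- minimal LED blinker for the Lattice iCEstick (iCE40HX1K)
module blink (
    input  clk,    // 12 MHz on-board oscillator
    output led     // one of the red user LEDs
);
    reg [23:0] counter = 0;     // free-running counter, wraps roughly every 1.4 s at 12 MHz
    always @(posedge clk)
        counter <= counter + 1;
    assign led = counter[23];   // drive the LED from the top bit, so it blinks at ~0.7 Hz
endmodule

// blink.pcf -- pin constraints (pin numbers assumed, check the iCEstick manual)
//   set_io clk 21
//   set_io led 99

// Build and flash with the IceStorm flow:
//   yosys -p "synth_ice40 -blif blink.blif" blink.v
//   arachne-pnr -d 1k -p blink.pcf blink.blif -o blink.asc
//   icepack blink.asc blink.bin
//   iceprog blink.bin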
It's like that guy on here that built his own DIY wire-wrapped discrete-gate computer that took up half his house (I think he is my hero after that El Reg article...), except you can pop the dev board and USB Blaster in the desk drawer between sessions without visitors spotting you're a raving loon, until you start to babble about von Neumann architecture and the like. And it's great fun giving the brain a bit of a workout compared to the level of thinking required for real work.
I think that from a software perspective, the problem isn't patented hardware. It's secret hardware. A lot of what's in current GPUs is undocumented and treated as a commercial secret or disclosed only under a non-disclosure agreement, meaning you can't write open software to use it unless you can reverse-engineer its function.
Why do GPU manufacturers keep their chip internals secret? One explanation is that since they can't patent the techniques they are using (because of prior art), they fall back on secrecy. Another is that they *know* that they are violating someone else's IP and don't want that someone else to find out! And of course there are closed-ecosystem cartel / monopoly / NSA-backdoor / conspiracy theories too.