why no Geekbench or Crysis stats?
how many coins can this thing mine in a day?
British AI chipmaker Graphcore has announced a new series of hardware products based on its latest second-generation Intelligence Processing Unit (IPU) known as the Colossus Mk2 GC200. Hot on the heels of its top competitor, Nvidia, which launched its AI-focused A100 GPU in May, the Bristolian upstart is also keen to flex its …
Crysis? Bah! Will it run Kerbal Space Program with umpty-zillion mods installed, including Kopernicus and multiple planet packs, in real time with all the graphics settings at maximum? If so, please earmark one for me, I should be able to save up for it sometime in the next millennium!
>....influencing students who will turn into professionals and be more comfortable going with what they know.
At which point the students (now employees!) will request the current generation of Nvidia hardware. And if the Graphcore stuff is even remotely cheaper in any way, the beancounters will ensure that they get that instead.
Otherwise yes, they have an excellent idea for marketing their products to a whole generation of developers. But beancounters will count beans, and screw everything up. Per usual.
My experience of Tensorflow:
"holy good grief, this is an utter pile of cobbled-together shit, I'll go roll my own rather than use this turd"
Training and production are two completely different systems: training on the fly wasn't supported, training is all Python-plus-compile, and production runs on Google Cloud or Android. To reconfigure the model, you had to reconstruct the empty model and retrain it from scratch from the training data, even if the change was neutral. It had a useless sigmoid but not a useful max_out.
Compile this, compile that, deploy...
I wanted a box and a library: CreateModel, estimate, backCorrect, serialize, deserialize, reconfigure.
I wanted a good comprehensive set of stock activation functions (+ add my own easily)
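For what it's worth, the "box and a library" being asked for fits in a page of plain Python. This is a hypothetical sketch only: the class shape and the method names (estimate, backCorrect, serialize, deserialize) are taken from the wishlist above, not from any real framework's API.

```python
import json
import math
import random


class Model:
    """A toy 'box': one hidden layer, trained one example at a time."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
        rng = random.Random(seed)
        self.lr = lr
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
        self.w2 = [[rng.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def estimate(self, x):
        # Forward pass; hidden activations are kept for backCorrect.
        self._h = [self._sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        self._y = [self._sigmoid(sum(w * hi for w, hi in zip(row, self._h))) for row in self.w2]
        return self._y

    def backCorrect(self, x, target):
        # One on-the-fly gradient step on a single example (squared error).
        y = self.estimate(x)
        d_out = [(t - yi) * yi * (1 - yi) for yi, t in zip(y, target)]
        d_hid = [hi * (1 - hi) * sum(d * self.w2[k][j] for k, d in enumerate(d_out))
                 for j, hi in enumerate(self._h)]
        for k, d in enumerate(d_out):
            for j in range(len(self._h)):
                self.w2[k][j] += self.lr * d * self._h[j]
        for j, d in enumerate(d_hid):
            for i in range(len(x)):
                self.w1[j][i] += self.lr * d * x[i]

    def serialize(self):
        # JSON round-trips Python floats exactly, so state survives intact.
        return json.dumps({"lr": self.lr, "w1": self.w1, "w2": self.w2})

    @classmethod
    def deserialize(cls, blob):
        state = json.loads(blob)
        m = cls.__new__(cls)
        m.lr, m.w1, m.w2 = state["lr"], state["w1"], state["w2"]
        return m
```

The point isn't that this toy is production-worthy; it's that create / estimate / correct / save / load is a small enough API surface for a developer to hold in their head, which is exactly what the big frameworks lack.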
Why does it come with all this useless baggage around it? Why would I take all the data I have and turn it into a format I can feed into an add-on library in Python, just so the Tensorflow Python library can access it? Tap tap tap on this crappy interpreted language just to get the data into the system, and tap tap tap to get the estimates out, or use a completely different API to access the thing I just created and was playing with, on a cloud I don't need.
It would be nice if you delivered a 2K entry-level box developers can plug into a PC, use your IPU to develop their skills on small models with a nice *SIMPLE* API in lots of languages, and learn YOUR kit directly. Then they can upscale as needed.
Get the developers and you've got the market.
Target the existing frameworks, and you get the breadcrumbs they leave you. Most Tensorflow models will end up running on Google servers. If your market is only Google, that's fine, but if you have bigger ambitions... developers, developers, developers.
> Tensorflow [ ... ] holy good grief this is utter pile of cobbled together shit
Not just Tensorflow. I could name two other well-known ML frameworks that are just as bad, if not worse. One of these other two doesn't believe in documentation of any kind.
Which makes me wonder if anyone has ever gotten anything useful out of these ML frameworks. When it comes down to it, everything seems to reduce to (a) multiplying two matrices and (b) AI'ing a cat photo.
Here's a cat picture. Tensorflow confirms it's a cat picture.
To that, add the mandatory overhead, anywhere from 4 hours to a full week, caused by debugging broken Python crap that needs fixing.
"Nvidia watches Brit upstart Graphcore swing into rear-view mirror waving beastly second-gen AI chip hardware"
OK, is it just me? If you're Nvidia and you're driving along the road and someone swings into your rear-view mirror, either you just passed them, or they're going in the opposite direction.
Or I guess that they are accelerating hot on your tail... but that's not what first comes to mind...
Biting the hand that feeds IT © 1998–2020