Oh great, we will finally have to argue with the toaster over our choice of bready snacks.
Eggheads have devised software that can automatically produce machine-learning models small enough to run inside tiny microcontrollers. Microcontroller units (MCUs) are pretty darn common: they can be found wedged inside everything from microwaves and washing machines to cars and server motherboards, running relatively simple …
Tuesday 4th June 2019 10:12 GMT DropBear
This post has been deleted by its author
Tuesday 4th June 2019 10:32 GMT Anonymous Coward
Tuesday 4th June 2019 10:48 GMT Tom 7
Re: And I do wonder how this would work on a Pi Zero
I'd steer clear of the pavement if I was you!
The nematode brain modelled as a NN managed character recognition using considerably fewer neurons than 'standard' methods. I was alluding to the idea that AI will probably come to be made from 'brain units', sort of pre-wired NNs that perform certain 'brain functions', from which we can produce far more reliable AI, and I thought the atmega328p might be worth looking at for hosting these components. Looking at the spec, though, it tops out at 20 MIPS, while a PiZero GPU can achieve 24 GFLOPS for ten times the price, so I deleted the post you were answering and decided to reply to yours!
I do however believe that AI is almost pre-nascent at the moment. When people look at actual brains in nematodes, insects and eventually mammals, we will be able to identify the processes that make up intelligence and behaviour, developed and refined over some 600 million years of evolution. We're barely modelling the AI equivalent of NAND gates at the moment!
Tuesday 4th June 2019 10:44 GMT Wellyboot
The paper gives training times as 1-11 GPU-days depending on the method used (the GPU setup is 4x NVIDIA RTX 2080). From this I can speculate that inference on the MCU will be a very slow process of looking at a picture stored in flash.
It's just an academic exercise at the lowest edge of computing and hats off to them for getting it to work at all on such a small footprint.
Tuesday 4th June 2019 11:26 GMT Luke McCarthy
The paper doesn't specify which STM32 they used, and the RAM available varies quite a lot by model: the highest-end parts have 1 MB, and some go as low as 16 KB. The image format could have been 8-bit or even 1-bit, since it's only character recognition, and the data could have been streamed so the whole image wouldn't have to fit in memory at once. Also, some STM32s have a DRAM controller, which would let them access several megabytes of memory.
Tuesday 4th June 2019 15:31 GMT fajensen
Maybe by using Stochastic Computing - https://spectrum.ieee.org/computing/hardware/computing-with-random-pulses-promises-to-simplify-circuitry-and-save-power
Very simplistically, data is striped into serial streams of bits, where randomised runs of '1's and '0's represent the values. Complex calculations can then be performed on the streams with very simple logic circuitry, thus using very little power. The tricky operation is the randomising process, but they are fixing that (and a PRNG will still work; it's just that it's complex and burns power).
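As a toy illustration of the principle (mine, not the circuit from the article): encode two values in [0, 1] as random bitstreams whose density of 1s equals the value, and multiplication reduces to a single AND gate per bit pair:

```python
import random

_rng = random.Random(42)  # fixed seed so the sketch is repeatable

def to_stream(p, n=10000):
    """Encode a value p in [0, 1] as a bitstream where each bit
    is 1 with probability p."""
    return [1 if _rng.random() < p else 0 for _ in range(n)]

def from_stream(bits):
    """Decode: the value is just the fraction of 1s in the stream."""
    return sum(bits) / len(bits)

a = to_stream(0.8)
b = to_stream(0.5)

# Multiplying two stochastic streams needs only a bitwise AND --
# no multiplier circuit at all.
product = [x & y for x, y in zip(a, b)]

print(round(from_stream(product), 2))  # ~0.4, i.e. 0.8 * 0.5
```

The catch the post mentions is visible here too: the quality of the result depends entirely on the randomness and length of the streams.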
Tuesday 4th June 2019 17:58 GMT Jason Bloomberg
At three one-byte values per RGB pixel, a 26x26 image fills your 2K RAM completely.
That 48x48 icon on the right is just 1,772 bytes, the 32x32 version in the selection options just 928, and quite easily recognisable ->
It should be possible to decode that into a bitmap on-the-fly, and I would guess there are other tricks which could be used if one has plenty of code memory and isn't so fussed about speed.
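For anyone checking the arithmetic, the figures above fall straight out of a one-line calculation (the sizes and bit depths are just the examples from this thread):

```python
def image_bytes(width, height, bits_per_pixel):
    """RAM needed for an uncompressed image, rounded up to whole bytes."""
    return (width * height * bits_per_pixel + 7) // 8

print(image_bytes(26, 26, 24))  # 24-bit RGB: 2028 bytes -- fills 2K RAM
print(image_bytes(26, 26, 8))   # 8-bit greyscale: 676 bytes
print(image_bytes(26, 26, 1))   # 1-bit mono: 85 bytes
```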
Tuesday 4th June 2019 10:45 GMT The Man Who Fell To Earth
Tuesday 4th June 2019 20:12 GMT Anonymous Coward
I once wrote a proof of concept Python program that plays rock, paper, scissors against a human opponent and learns their pattern of responses, so gradually winning more and more often. It would easily have fitted into quite a small microcontroller - in fact 16 bits and about 2k of RAM would do it, 1k if you had a dedicated display and buttons.
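The original program isn't shown, but here's a minimal sketch of one way such a learner could work (my guess at the approach, not the poster's actual code): keep a table of which move the opponent tends to play after each of their previous moves, and counter the predicted move.

```python
import random
from collections import defaultdict

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class RPSLearner:
    """Learn the opponent's transition pattern: which move tends to
    follow their previous move, then play the counter to it."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def move(self):
        # No history yet: play randomly.
        if self.prev is None or not self.counts[self.prev]:
            return random.choice(list(BEATS))
        # Predict the opponent's most frequent follow-up and counter it.
        prediction = max(self.counts[self.prev],
                         key=self.counts[self.prev].get)
        return BEATS[prediction]

    def observe(self, opponent_move):
        if self.prev is not None:
            self.counts[self.prev][opponent_move] += 1
        self.prev = opponent_move
```

The entire state is a 3x3 table of counters plus one remembered move, which is why the poster's estimate of a couple of KB of RAM is entirely plausible.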
Tuesday 4th June 2019 12:47 GMT Boring Bob
Tuesday 4th June 2019 12:55 GMT Lee D
Re: What is new here?
It's now cloud IoT hyper-convergence AI neural nets with fabric... and chintz and...
Pretty much nothing's changed in the meantime except that we have slightly faster computers; the end result is no different... we can just "afford" to bundle this junk into little speakers and your search menus where we couldn't before.
In terms of things actually *learning* or doing useful things, we're still stuck in the absolute dark ages of the technology, where really the problem is "unlearning". Having a machine that trains itself on a million images is all well and good until you spot something it's doing wrong, and then you have to basically retrain it from scratch, because it has 1,000,000 entries saying it's doing the right thing and one saying it's doing the wrong thing, and it has to resolve that somehow without losing all the subtle nuances it was trained on (i.e. you can't just weight the error 1,000,000 times more than the others).
AI, since the 80s through to today, learns and then plateaus just on the cusp of usability, and then *stays there forever*. It's almost a perfect PhD research topic: do it, write it up, and hope nobody ever asks you to apply it to anything else that doesn't involve literally starting from scratch every time a change is made.
And, even then, it's generally only 90-something % accurate which is pretty useless compared to even a trained dog.
Tuesday 4th June 2019 13:37 GMT heyrick
Tuesday 4th June 2019 13:54 GMT Anonymous Coward
Emperor's new clothes and neural network humbug
I can't help feeling that ANNs are essentially a symbol of computational laziness. The basis of rational scientific approaches to problems is to have some understanding of a mechanism; otherwise you are basically doing astrology: 'I know 5 people born in early March who are artists, so Pisces are creative.' Hence adversarial images, unintended biases in decision-support systems, etc. If a problem can be reduced to a solver in 2KB of RAM, then a bright human ought to be able to work out what the network is doing, and produce code that beats it for efficiency.
Tuesday 4th June 2019 15:46 GMT Lee D
Re: Emperor's new clothes and neural network humbug
ANN is just "a statistics based magic box".
You plug things in. It makes some kind of spurious and random correlation between what you're teaching it and the input data, which you can't really interrogate, understand, improve or modify.
If you plug enough things in, it might train itself enough to work a percentage of the time. Then untraining, retraining or anything else? It's pretty much throw it away and start again: its existing "superstition-based" intuition will trip over every exception, to the point that it becomes painful to make any significant change after the learning plateaus.
It's been an unsolved problem for decades; we just get to throw more and shinier hardware at the same problem.
You would otherwise notice, for instance, that Google searches would become personalised, almost-psychic, tailored to every user... Siri would know what you want before you ask it... because millions upon millions of users are training it daily and it doesn't sleep - it should be learning exponentially. It's not. And over time everything you interact with that has "AI" would get better and better... it doesn't. Improvements are microscopic at best after the initial plateau.
Literally Siri and Google should be Skynet by now. The reason they aren't... AI and ANN in particular just doesn't work like that.
What we need is a really, really radical re-think of the whole thing, with something entirely different. Genetic algorithms are the same; ANNs, everything we try, hits the same problem. And for every million images you train it on, it takes a significant percentage of those images again to retrain it for the ones it gets wrong. And again. And again. With diminishing returns.
Tuesday 4th June 2019 17:56 GMT DCFusor
Re: Emperor's new clothes and neural network humbug
I get the idea Lee has a clue here.
Somewhere around the '90s, Timothy Masters pointed out that you really wanted a sigmoid type activation function, else one neuron becoming wildly activated "too sure of itself" would cause errors.
As would using more than four layers; at most four is enough to solve any problem soluble by MLFF neural networks. He cautioned strongly about overfitting due to too large a network for the available training data, and even gave examples of the types of errors you get when you don't follow that advice.
While not as dramatic as mistaking turtles for guns and some of the funnier examples we see today, I kind of grin at the naivete of the new people discovering this all over again - (or failing to discover anything because they didn't learn the underlying math and intuition required).
So now we have to go "deep": too many layers. To do that, even on new hardware, we use ReLU activation functions (training seems to be faster), so we return to the "too sure of itself" errors predicted. And with too many layers and too many coefficients we now overfit even more, essentially committing every mistake Masters (and no doubt others) warned of, and getting exactly the results they predicted almost 30 years ago.
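A toy numerical illustration of the "too sure of itself" point (my example, not the poster's): a sigmoid saturates, so a wildly large pre-activation can never be more than "fully on", while ReLU passes the magnitude straight through to the next layer.

```python
import math

def sigmoid(x):
    """Bounded activation: output is always in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Unbounded activation: positive inputs pass through unchanged."""
    return max(0.0, x)

big = 1000.0  # one neuron receiving a huge pre-activation

# The sigmoid squashes it -- a single noisy neuron can't swamp
# everything downstream.
print(sigmoid(big))  # 1.0 (saturated)

# ReLU passes the magnitude through, so one over-confident neuron
# can dominate the layers after it.
print(relu(big))     # 1000.0
```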
At best, these things are classifiers; the rest is pure hype. When over-fit and given noisy data... we see the results. I need not go on.
This is not to say that you can't get useful results in a small CPU. It's just that you do better when you pay attention to how things work, and don't get distracted by quick training times that produce the odd lucky guess - because a lucky guess is all that is, not real world day in and day out performance.
Tuesday 4th June 2019 21:04 GMT Mike 16
Re: Pisces Artists
Just FYI, my wife is a Pisces, and an Artist (in the sense of "sells her stuff in a few galleries, albeit some orders of magnitude less expensive than a Damien Hirst slowly-rotting shark"). OTOH, Hirst is an Aries, so maybe the Rams get the Big Bucks.
As for "code that beats it for efficiency": Not gonna happen as long as managers approve the purchase orders for snazzy software tools and hold mere code-jockeys in lower regard than phone sanitizers. Remember that Time to Market is the _only_ metric.
Tuesday 4th June 2019 23:49 GMT martinusher
There's this book.....
...on my shelves about DIY Artificial Intelligence (or, more accurately, Machine Learning). It describes predictive algorithms and gives you some to try. Since it was published in 1983 or 1984, the examples it gives are in some early form of BASIC. It does explain in the introduction that, yes, researchers tended to use exotic logic programming languages, but you don't actually need that stuff to demonstrate how these algorithms work, and you can even get useful results out of them.
There's wisdom in that text..... I've often thought that trainee programmers should be given small, slow computers to work with so that they get a bit of a feel for how code gets executed.
Wednesday 5th June 2019 10:30 GMT heyrick
Re: There's this book.....
I completely agree. I believe that courses teaching programming (of any serious fashion) should include a stint with something like a BBC Micro (Apple II for Americans), something where you can actually probe and observe every single signal to understand how it actually works. None of this "little black lump of magic".
It's also a good exercise in demonstrating 40-bit (five byte) floating point numbers on a machine where the processor has only two registers and an accumulator, no FP, no multiply, and treats everything that isn't an address as an eight bit value.
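For the curious, here's a rough Python sketch of packing a float into five bytes: one byte of biased exponent plus a 32-bit mantissa. The BBC's actual format differs in detail (it folds the sign into the mantissa's top bit), so treat this layout as illustrative only.

```python
import math

def pack5(x):
    """Pack a non-negative float into a 5-byte (40-bit) form:
    one byte of biased exponent plus a 32-bit mantissa.
    Illustrative layout, not the BBC BASIC one."""
    if x == 0:
        return bytes(5)
    m, e = math.frexp(x)             # x = m * 2**e, with 0.5 <= m < 1
    mant = int(m * (1 << 32))        # scale mantissa to 32 bits
    return bytes([e + 128]) + mant.to_bytes(4, "big")

def unpack5(b):
    """Reverse of pack5."""
    if b == bytes(5):
        return 0.0
    e = b[0] - 128
    m = int.from_bytes(b[1:], "big") / (1 << 32)
    return math.ldexp(m, e)

print(unpack5(pack5(3.14159)))  # ~3.14159, to 32-bit mantissa precision
```

On the 6502 you'd do the same thing with shift-and-add loops across those five bytes, which is exactly the feel for the machine the exercise is meant to give.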
Wednesday 5th June 2019 05:26 GMT _LC_
2 KB - I wonder
These cost less than a buck and a half (including shipping!):
ARM® 32-bit Cortex®-M3 CPU core. 72 MHz maximum frequency, 1.25 DMIPS/MHz (Dhrystone 2.1) performance at 0 wait state memory access. ...
64 or 128 Kbytes of Flash memory. 20 Kbytes of SRAM.
2.0 to 3.6 V application supply and I/Os. ...
2 x 12-bit, 1 μs A/D converters (up to 16 channels) ...