
Can't resist...
Imaginary numbers? So they really are making it up as they go along...
(Yes, I know. It's more complex than that...)
Boffins from Duke University say they have figured out a way to help protect artificial intelligences from adversarial image-modification attacks: by throwing a few imaginary numbers their way. Computer vision systems which recognise objects are at the heart of a whole swathe of shiny new technologies, from automated shops to …
Well, it was not much better than 50% reliable at recognising a panda, and this technique makes it worse, though not quite as much worse as a slightly different technique. So it might be OK in a low-pressure situation where you don't mind too much if you get a few gibbons mixed in with your pandas, but it still doesn't sound great if you plan on setting it loose in a couple of tons of metal careening around the place mere metres from squishy meatbags.
Sure, we're already pretending we have AI, so why not throw gardening into the mix?
I really would appreciate it if El Reg, at least, could stop pandering to this marketing illusion. It's not AI; it's a statistical analysis machine.
In any case, call me when they need the lawn mowing. I'll make sure all the data points are cut down to the same height.
... and the neural net, it sort of makes sense. Instead of representing each point in the solution space as a plain magnitude, each point is now a vector (magnitude plus direction) pointing to the next most probable step in that space.
Once you hit the network with real data, it saves time (processor steps) by proceeding along the most probable path.
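For anyone who wants to see what "magnitude plus direction" looks like in code, here's a minimal NumPy sketch of a complex-valued activation. The weights, sizes and names are invented purely for illustration; this is a toy, not the Duke team's actual network.

```python
# Toy sketch of the "magnitude plus direction" idea: a complex number
# carries both a magnitude (abs) and a direction (phase angle), unlike
# a plain real-valued activation, which only carries a magnitude.
# Everything here is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# A complex-valued "weight matrix" and input vector.
W = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)

z = W @ x  # complex pre-activations: each entry has a magnitude and a phase

magnitude = np.abs(z)    # "how strong" the response is
direction = np.angle(z)  # "which way" it points in the complex plane

for m, d in zip(magnitude, direction):
    print(f"magnitude {m:.3f}, phase {d:+.3f} rad")
```

Each output value now tells you two things instead of one, which is the extra wiggle room the imaginary numbers buy you.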
“… correctly labelled by the object recognition algorithm with a 57.7 per cent confidence level, was modified with noise - making the still-very-clearly-a-panda appear to the algorithm as a gibbon with a worrying 93.3 per cent confidence.”
Great! I no longer need to say that the only innovation in the past 70 years is text annotations, or insist that everything depends on textual search.
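The article doesn't say exactly which attack produced the panda/gibbon confusion quoted above, but the famous demo comes from the fast gradient sign method (FGSM). Here's a minimal FGSM-style sketch against a made-up logistic classifier; the weights, epsilon and labels are invented for illustration and aren't from the article or the paper.

```python
# Minimal FGSM-style perturbation sketch: nudge every input value a
# little in the direction that increases the loss, using the sign of
# the input gradient. The "classifier" is a toy logistic model on
# random data, invented purely for illustration.
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier: p(label = 1 | x) = sigmoid(w . x + b)
w = rng.standard_normal(64)
b = 0.1
x = rng.standard_normal(64)   # stand-in for a flattened image
y = 1.0                       # true label ("panda")

# For this model the gradient of the cross-entropy loss w.r.t. the
# input x is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM step: add epsilon times the sign of the input gradient.
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad_x)

print(f"confidence on clean input:     {sigmoid(w @ x + b):.3f}")
print(f"confidence on perturbed input: {sigmoid(w @ x_adv + b):.3f}")
```

The perturbation is tiny per pixel, which is why the image still looks very-clearly-a-panda to us while the classifier's confidence falls off a cliff.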