Re: let's look at this a little sceptically
One potential attack is to perturb road signs in the real world to fool autonomous or driver-assisted cars
Yes. More generally, it shows that DNN-based classifiers are even more fragile under active attack than was previously acknowledged. Indeed, at least some of them are so fragile that they can plausibly misidentify an input[1] due to small random perturbations.
That has very serious consequences for use cases such as autonomous vehicles. It's a very important avenue of research.
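The fragility point can be made concrete with a toy sketch. For a purely linear model, a fast-gradient-sign-style attack perturbs each input coordinate by a tiny amount against the sign of the weights; the weights, bias, and input below are invented for illustration, not taken from the article:

```python
import numpy as np

# Toy linear "classifier": sign(w @ x + b) decides class 1 vs class 0.
# All numbers here are made up purely for illustration.
w = np.array([0.5, -0.3, 0.8])
b = 0.1

def classify(x):
    return 1 if w @ x + b > 0 else 0

x = np.array([0.2, 0.3, -0.1])   # original input, classified 1 (margin 0.03)
eps = 0.05                       # small per-coordinate perturbation budget
x_adv = x - eps * np.sign(w)     # nudge every feature against the weights

print(classify(x), classify(x_adv))  # 1 0: a 0.05 nudge flips the class
```

The same geometry carries over to deep networks, where the gradient of the loss supplies the sign direction; a perturbation that is tiny in every coordinate can still move the score a long way once it is summed over many dimensions.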
Recent research into understanding what's actually happening in DNNs[2] may help.
[1] That is, assign the input to the wrong class rather than reject it as unclassifiable. It's a precision failure, not just a recall failure.
[2] See e.g. https://blog.acolyer.org/2017/11/15/opening-the-black-box-of-deep-neural-networks-via-information-part-i/.