Re: how they want some attention...
It’s not a bug. It’s intentional. When you scale an image down you lose data, since the smaller image can only represent a fraction of the original. Something has to decide which parts of the data matter and which can be thrown away.
The side effect is that an attacker can craft a full-resolution image that downscales to something entirely different. The classifier is tricked into classifying the input incorrectly, while human auditors, who see the full-size image, are less likely to catch it (“hey, who flagged this cat as a traffic light?!”).
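A minimal sketch of why this works, assuming the simplest case of a nearest-neighbor downscaler: such a scaler keeps only one source pixel per output pixel, so an attacker only needs to modify those few sampled pixels to completely control the downscaled result. The image size, payload, and helper function here are all hypothetical choices for illustration.

```python
import numpy as np

def nearest_neighbor_downscale(img, out_h, out_w):
    """Downscale by keeping exactly one source pixel per output pixel."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

# "Benign" 64x64 grayscale image: uniform mid-gray.
big = np.full((64, 64), 128, dtype=np.uint8)

# Overwrite ONLY the pixels the downscaler will sample (every 8th),
# embedding a hidden all-white 8x8 payload.
for r in np.arange(8) * 64 // 8:
    for c in np.arange(8) * 64 // 8:
        big[r, c] = 255

small = nearest_neighbor_downscale(big, 8, 8)

# Only 64 of 4096 pixels (~1.6%) were touched, so the full-size image
# still looks uniformly gray to a human, but the downscaled version
# is entirely the payload -- that's what the classifier sees.
print((big == 255).sum())    # 64 pixels modified
print((small == 255).all())  # True: downscaled image is all payload
```

Real scalers (bilinear, bicubic) average over a few neighboring pixels instead of picking one, but the principle is the same: the output depends heavily on a small, predictable subset of the input, and that subset is what the attacker targets.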
Yes, it has limited practical use at the moment, but once people start selling data sets at larger scale, and for sensitive use cases, it could become a much more significant issue.