This is why you can't have an automated adult-image filter of any worth.
The second someone can stick something small onto an image and radically change its categorisation, without actually changing the overall nature of the image, you know people will exploit exactly that to dodge unwanted categorisation.
And vice-versa... some poor guy with a hacker's-conference sticker on his backpack gets flagged by an automated system as carrying a rifle as he transits an airport, for example.
Until we understand what the "AI" (pfft) is actually doing to categorise, which criteria it's using, we can't make any comment on its accuracy or otherwise. Train a human to recognise something like a banana and they can tell you they're looking for a particular shape, size, colouration and orientation, and they can apply those criteria using their learned knowledge of the object to identify bananas that are peeled, unpeeled, facing the camera or away, broken, twisted, ripe, unripe, etc. Train an AI and you literally have no idea whether it's just decided "if the centre pixel is yellow, call it a banana" or settled on some other arbitrary criterion that happens to fit "most" images of bananas, but also a huge variety of other images, and which can be tripped into false detections by anyone willing to experiment.
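To make the point concrete, here's a deliberately silly toy sketch (not any real system): a "classifier" that has latched onto exactly the kind of shortcut described above, and a tiny sticker-sized patch that flips its verdict without changing the overall nature of the image.

```python
import numpy as np

# Toy stand-in for an opaque learned classifier. It has latched onto a
# spurious shortcut -- "centre pixel is yellowish => banana" -- much as
# the text speculates a trained network might. Purely illustrative.
def toy_classifier(img):
    h, w, _ = img.shape
    r, g, b = img[h // 2, w // 2]
    return "banana" if r > 200 and g > 200 and b < 100 else "not banana"

# A plausible banana photo: a 64x64 image that is mostly yellow.
banana = np.full((64, 64, 3), (230, 220, 40), dtype=np.uint8)

# "Sticker" attack: a 4x4 blue patch over the centre. 16 pixels out of
# 4096 change, yet the label flips completely.
stickered = banana.copy()
stickered[30:34, 30:34] = (30, 30, 200)

print(toy_classifier(banana))     # banana
print(toy_classifier(stickered))  # not banana
```

Real networks aren't this crude, of course, but adversarial-patch research shows the same basic failure at scale: a small, carefully chosen patch can dominate the decision while leaving the image recognisably unchanged to a human.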
This kind of "throw data at it and call it AI" approach really is doomed to failure, except where accuracy genuinely doesn't matter and where a human would be cheaper to employ anyway (e.g. a banana factory).