> Hilton believes that it shows the model doesn't have a secret understanding of some unknown language, but instead demonstrates the random nature of AI.
It's not random, though. It's true that the words don't appear to have any specific meaning, but Daras' weird prompt really does produce images of birds eating bugs, consistently rather than randomly. Hilton's prompt doesn't produce bugs specifically, but it does produce animals and not, I dunno, cars.
To me, the results demonstrate the black-box nature of AI more than its randomness. We just don't know why the model behaves this way, but it's not random.
The reason this distinction could matter is that this behavior could be the basis for adversarial attacks. What if, by wearing a t-shirt with a seemingly nonsensical word on it, I could consistently fool face recognition?
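To see why a consistent (non-random) quirk is exploitable, here is a minimal sketch of an adversarial perturbation against a toy linear classifier. Everything in it is hypothetical (made-up weights, not a real face-recognition model), but it shows the core idea: if the model's response to an input is deterministic, a small, carefully chosen change to the input can reliably flip its decision.

```python
import numpy as np

# Toy linear classifier with hypothetical weights (stand-in for a
# real face-recognition model, which we do not have access to here).
rng = np.random.default_rng(0)
w = rng.normal(size=16)      # classifier weights (assumed)
x = rng.normal(size=16)      # an input the model currently classifies one way

def score(v):
    # Positive score -> one identity, negative -> the other.
    return v @ w

# For a linear model the gradient of the score w.r.t. the input is just w.
# Pushing x against the sign of that gradient flips the decision with a
# small, structured perturbation -- the analogue of a "nonsense word"
# that reliably steers the model. eps is chosen just large enough to flip.
eps = 2 * abs(score(x)) / np.abs(w).sum() + 1e-6
x_adv = x - eps * np.sign(w) * np.sign(score(x))

print(score(x), score(x_adv))  # the two scores have opposite signs
```

The point is that the perturbation works every time, not by chance: because the model's behavior is deterministic, an attacker who finds such a quirk can trigger it on demand.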