Reply to post:

No, OpenAI's image-making DALL·E 2 doesn't understand some secret language

Filippo

> Hilton believes that it shows the model doesn't have a secret understanding of some unknown language, but instead demonstrates the random nature of AI.

But it's not random. It's true that the words don't appear to have a specific meaning, but at the same time Daras' weird prompt really does produce images of birds eating bugs - consistently, not randomly. Hilton's prompt doesn't produce bugs specifically, but it does produce animals and not, I dunno, cars.
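That consistency claim is testable, at least roughly. Here's a minimal sketch of how one might quantify it, assuming a hypothetical generate_image() helper (a stand-in for whatever text-to-image API you can reach); the zero-shot scoring uses the real Hugging Face CLIP interface:

    # Rough consistency check: generate N images from the gibberish prompt,
    # then ask CLIP which candidate label each image looks most like.
    from collections import Counter

    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    LABELS = ["a photo of birds", "a photo of insects",
              "a photo of a car", "something else entirely"]

    def generate_image(prompt):
        # HYPOTHETICAL: swap in a real text-to-image API call here.
        raise NotImplementedError

    def classify(image):
        # Zero-shot: score the image against each candidate label.
        inputs = processor(text=LABELS, images=image,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(**inputs).logits_per_image
        return LABELS[logits.softmax(dim=1).argmax().item()]

    prompt = "<Daras' gibberish string>"
    counts = Counter(classify(generate_image(prompt)) for _ in range(50))
    print(counts)  # heavy skew toward one label = consistent, not random

If the label distribution comes out flat, Hilton is right; if it's heavily skewed, the behaviour is reproducible even though nobody can say why.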

To me, the results show the black-box nature of AI more than its randomness. We just don't know why it behaves this way, but it's not random.

The distinction could matter because this behavior might be the basis for adversarial attacks. What if, by wearing a t-shirt with an apparently nonsensical word printed on it, I could consistently fool face recognition?
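Nobody has demonstrated that specific attack, but the underlying mechanism (finding a small, structured perturbation that reliably flips a model's output) is well established. Here's a minimal sketch of the classic one-step FGSM attack (Goodfellow et al., 2014) against a stock ImageNet classifier, just to show the principle; for brevity it skips the input normalisation the model was trained with:

    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def fgsm(image, label, epsilon=0.03):
        # image: (1, 3, H, W) tensor in [0, 1]; label: (1,) true class index.
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel in the direction that most increases the loss;
        # epsilon keeps the change small enough to be near-invisible.
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

Physical-world versions of the same idea, such as the adversarial glasses and patches shown to fool face-recognition models in the research literature, suggest the t-shirt scenario isn't far-fetched.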
