For each pixel in the output, we can say the program added 1% of the pixel to the left, 2% of the pixel two to the left, and so on, but that doesn't really help us understand why the first photo looked good and the second looked awful.
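To make that concrete, here's a minimal sketch (in Python, with weights I've made up purely for illustration) of the kind of per-pixel accounting I mean. It is a complete description of *what* the filter does, yet it tells you nothing about why one result looks good and another looks awful:

```python
def blur_row(pixels):
    """Weighted sum over a 1-D row of pixels.

    The offsets and weights below are hypothetical, chosen only to
    mirror the "1% of the pixel to the left, 2% of the pixel two to
    the left" description above.
    """
    weights = {0: 0.97, -1: 0.01, -2: 0.02}  # offset -> contribution
    out = []
    for i in range(len(pixels)):
        total = 0.0
        for offset, w in weights.items():
            j = i + offset
            if 0 <= j < len(pixels):  # skip neighbours past the edge
                total += w * pixels[j]
        out.append(total)
    return out

print(blur_row([10, 20, 30, 40, 50]))
```

The weights table is the whole "explanation", and it's exhaustive, but it lives at the wrong level to answer questions about the quality of the output.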
Thanks, but that's not answering my question. I wanted to know "why can't an AI explain how it came to a particular decision?", not "was that a good or bad decision?".
In your example, explaining what happened to the pixels is simply describing what the program did. My question would then be "how did the program decide to do these things?".
Whether the processed photos look good or bad - or whether an AI's decision was good or bad - is not the question I was asking.