Someone can tell you the architecture of their AI, and all the weights of the trained network, but that doesn't tell you why it makes any particular decision. Perhaps we have to wait until AI is conscious enough to explain itself.
OK, everyone can call me naive and shoot me down in flames, but is there any reason why an AI CAN'T tell you why it has made a particular decision?
It's just a computer program. It must have followed a particular series of steps and decision points. Why can't it log these along the way? (Just like I did years ago, when I used to insert code into programs to help debug them.) Even if it has derived the rules it is using - rather than the rules being explicitly coded into it - those rules must be represented somehow, and its path through them must therefore be loggable.
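To make the point concrete, here is a minimal sketch (all weights and inputs invented for illustration) of a tiny two-layer network where every arithmetic step is logged on the way to its output. It shows that the logging itself is trivial - though what you get back is a list of weighted sums and activations, not reasons:

```python
import math

def forward_with_log(x, w1, w2):
    """Run a tiny 2-layer network on input x, logging every step."""
    log = []
    # Layer 1: each hidden unit is a weighted sum pushed through a sigmoid
    hidden = []
    for j, weights in enumerate(w1):
        s = sum(wi * xi for wi, xi in zip(weights, x))
        a = 1.0 / (1.0 + math.exp(-s))
        log.append(f"hidden[{j}]: weighted sum={s:.3f} -> activation={a:.3f}")
        hidden.append(a)
    # Layer 2: a single output score from the hidden activations
    out = sum(wi * hi for wi, hi in zip(w2, hidden))
    log.append(f"output: {out:.3f}")
    return out, log

# Invented weights and input, purely for demonstration
out, log = forward_with_log(
    x=[1.0, 0.5],
    w1=[[0.2, -0.4], [0.7, 0.1]],
    w2=[0.5, -0.3],
)
for line in log:
    print(line)
```

So the path through the "derived rules" is loggable, exactly as the post says; the open question is whether a log of millions of such lines constitutes an explanation a person can use.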
I know nothing about the size and complexity of today's AIs. Answers such as "it would slug performance" or "it would generate too much information" would be perfectly acceptable responses to the question "why DOESN'T an AI tell you why it has made a particular decision?".
But "why CAN'T it..." is a very different question. I see no reason why not - they are only finite state machines after all, albeit with an awful lot of states and state transitions.