"All I'm asking is why people think that that cannot be logged and output - ie why the AI cannot explain how it arrived at an outcome."
That log would be perfectly easy to generate. However, it would take you weeks (or more) to read it and you would be none the wiser at the end of the experience as to why the computer had said "no".
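To make that concrete, here is a purely illustrative sketch (a toy model, not any real system): a tiny two-layer network that records every arithmetic step it performs while producing a decision. Even at this miniature scale the "explanation" is dozens of multiply-adds; a production model would produce billions, none of which reads as a reason.

```python
def predict_with_log(x, w1, b1, w2, b2):
    """Run a toy 2-layer network on input x, logging every arithmetic step."""
    log = []
    hidden = []
    for j in range(len(w1)):
        s = b1[j]
        for i, xi in enumerate(x):
            s += w1[j][i] * xi
            log.append(f"h{j} += w1[{j}][{i}] * x[{i}] = {w1[j][i]:.3f} * {xi:.3f}")
        h = max(0.0, s)  # ReLU activation
        log.append(f"h{j} = relu({s:.3f}) = {h:.3f}")
        hidden.append(h)
    out = b2
    for j, h in enumerate(hidden):
        out += w2[j] * h
        log.append(f"out += w2[{j}] * h{j} = {w2[j]:.3f} * {h:.3f}")
    return out, log

# Invented weights for illustration: 4 inputs, 8 hidden units.
x = [0.5, -1.2, 3.0, 0.7]
w1 = [[0.1 * (i + j) for i in range(4)] for j in range(8)]
b1 = [0.05] * 8
w2 = [0.2] * 8
score, log = predict_with_log(x, w1, b1, w2, 0.0)
print(f"decision: {'yes' if score > 0 else 'no'}  ({len(log)} log lines)")
```

The complete log is a faithful trace of how the answer was computed, yet reading it tells you nothing about *why* the answer is what it is, which is the point being made above.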
Put another way, the computer does not have a reason, it merely has a very long calculation. Many moons ago, its designer discovered that the result of the calculation was fairly well correlated with their own prejudices, at least on a test data set, and that designer therefore decided to use it as a substitute for making the decision themselves.
As long as everyone understands that it is a mere correlation on a mere test dataset and is being used as a substitute for an equally (but differently) flawed process of human judgement, there isn't a problem.