"To derive a description of my credit rating from all the data about me, the program/filter/macro/neural net/AI must have followed a finite number of steps of sequence, selection and iteration."
Yes, and you can log them. However, there is essentially only a single step: a function call that takes as parameters your profile data - plus several thousand numbers that represent the network's weights. Those weights are the problem. The function's body is relatively simple; it just does some fairly trivial math on all of those parameters, producing a new set of numbers (which may be larger or smaller than the set you started with). That math is done in one big chunk, with no divide and conquer, and it is iterated a small number of times; the final output is your credit rating.
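To make that concrete, here is a minimal sketch of that kind of function in NumPy. Everything here is invented for illustration - the shapes, the `credit_rating` name, and the weights (random here, rather than learned in training; a real model would have far more of them) - but the structure is the point: one call, trivial math, a few iterations.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Several thousand numbers that represent the network's weights":
# here, two small layers of random numbers standing in for a trained model.
W1 = rng.standard_normal((8, 16))   # profile features -> hidden layer
W2 = rng.standard_normal((16, 1))   # hidden layer -> rating

def credit_rating(profile, weights):
    """One 'step': fairly trivial math over the input and ALL the weights."""
    W1, W2 = weights
    h = np.tanh(profile @ W1)       # the same kind of math, iterated...
    return float(h @ W2)            # ...a small number of times

profile = rng.standard_normal(8)    # your data, as 8 made-up features
rating = credit_rating(profile, (W1, W2))
```

You can log every multiply and add in that call, but nothing in the trace names what any weight is *for*.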
The function does not encode the "reasoning" that brought the decision. That "reasoning" is encoded in the network weights, the thousands of parameters. Unfortunately, those parameters are nameless and have no semantics attached, because no human set them; they were set by the network itself during training. That would already be enough to make the process inscrutable.
But it gets worse. Not only do you not know what each of those parameters means - they don't even *have* an individual meaning. There isn't one weight, or a few, that encodes "prejudice against black men"; there isn't one parameter that is the weight given to your age. Rather, that information is encoded in relationships between weights - and you don't know which weights, or which relationships. Which means that if you try to change one of them and run the function again to see what your change did, you will find that the output is different for *all* possible inputs, because by changing a single parameter you have changed the relationship it had with *all* of the others.
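That effect is easy to demonstrate on the same kind of toy network (again with made-up shapes and random weights): nudge one single parameter, and the output moves for essentially every input you try.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((8, 16))   # toy stand-ins for trained weights
W2 = rng.standard_normal((16, 1))

def rating(profile, W1, W2):
    return float(np.tanh(profile @ W1) @ W2)

inputs = rng.standard_normal((100, 8))          # 100 different profiles

before = [rating(x, W1, W2) for x in inputs]

W1_tweaked = W1.copy()
W1_tweaked[0, 0] += 0.1                         # change ONE parameter

after = [rating(x, W1_tweaked, W2) for x in inputs]

# Count how many of the 100 profiles now get a different rating:
changed = sum(abs(b - a) > 1e-12 for b, a in zip(before, after))
print(changed)    # typically all, or very nearly all, of them
```

There is no "run it again and diff the logs" experiment that isolates what the weight *meant*; the change smears across every input.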
Basically, yes, you can log everything the network does, and you can track the calculation step by step, but this gives you absolutely no information about *why* it does what it does.