Would you trust a stranger to make sensitive decisions now delegated to AI?
'AI' is a 'black box': what goes on inside, and how or why particular outputs arise, is largely impenetrable even to the people who design and train neural networks. Designers working at the level of code are little better able than anyone else to attach specific meaning to a given set of weights.
Bear in mind that it is the weights assigned by 'learning' that are the closest analogue of what human software coders produce: the weights are the program. The code specifying an untrained neural network and the transaction protocols among its 'neurones' is better regarded as background firmware, and few ordinary programmers ever need to delve into firmware code.
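The point can be sketched in a few lines of Python (an illustrative toy, not any real AI system): the function below plays the role of fixed 'firmware', and the same unchanged code behaves as two entirely different programs depending only on which learned weights it is given.

```python
def neuron(inputs, weights, bias):
    # Fixed "firmware": a weighted sum followed by a step activation.
    # This code never changes; only the parameters fed to it do.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Two weight sets for the identical code: the first makes the neuron
# compute logical AND, the second logical OR. The "program" is the weights.
and_weights, and_bias = [1.0, 1.0], -1.5
or_weights, or_bias = [1.0, 1.0], -0.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b,
              neuron([a, b], and_weights, and_bias),  # behaves like AND
              neuron([a, b], or_weights, or_bias))    # behaves like OR
```

Nothing in `neuron` reveals why a particular weight set yields AND rather than OR; that meaning lives entirely in the numbers, which is the nub of the opacity problem at realistic scale.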
Present-day AI has become established as a heuristic tool of value in assessing and classifying complicated patterns within the data submitted to it. However, current AIs offer no insight into how they arrive at their results. They can (supposedly) reliably draw inferences and make predictions, each within its own realm of operation, but they cannot explain the underlying 'reasoning'. That would require a higher order of functioning in which not only is incoming data processed, but an introspective mechanism examines some of the currently assigned weights as if they too were data; this is loosely called sentience.