Trust in black-box systems is overrated
"By erasing them completely, not only is a significant part of the history of AI lost, but researchers are unable to see how the assumptions, labels, and classificatory approaches have been replicated in new systems, or trace the provenance of skews and biases exhibited in working systems."
If you can't understand your black box, then maybe you shouldn't be trusting the results quite so completely...