Bunch of snake-oil salesmen
"With algorithms increasingly making key decisions about our lives, it’s important not only to be properly represented in the data they’re considering, but to understand how they’re reaching their conclusions."
No, that's not enough. The only thing that would make algorithmic decision-making in these areas acceptable is a special kind of algorithm. This kind of algorithm would not only be open and understandable. It would also be able to explain how and why it came to a particular decision. And over and above that, it would be able to _take responsibility_ for that decision.
There are approximately 4 billion of these algorithms moving about on the planet. (Slightly fewer, OK, if you exclude children, the senile and those with debilitating mental illnesses). And we already have heuristics, albeit imperfect ones, to select the most able of these algorithms and empower them to make decisions about sentencing, credit and so on.
What benefit is supposed to come from cracking this problem, the hardest AI problem of all, when we already have perfectly good techniques to do these jobs?
The answer is that the whole project is intellectually dishonest from top to bottom. For example, it's not actually trying to crack this hard AI problem at all, while simultaneously (and inconsistently) claiming that these algorithms can do the job not just as well as, but better than, the human equivalent.
It has no aim except to contribute to the general contemporary deskilling and disempowerment of humans, while making as much money as possible for the charlatans, who seem able to pull the wool over the rubes' eyes sickeningly easily.