Machines do not "make decisions" and are not likely to do so in the foreseeable future. They merely follow a pre-programmed algorithm, albeit one that may be quite complex.
And how, precisely, does that differ from a human brain making a decision?
A machine makes a decision when it operates on internally generated code, or on huge, continuously modified weighting tables applied to equally huge internally generated tables of data, heuristically pruned to fit in available memory. It becomes impossible for a human to understand why any particular instance arrives at a particular conclusion A or not-A. That may remain true even if we can dump a terabyte of internal state at the precise moment the decision was taken.
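To make the point concrete, here is a minimal sketch in Python of the simplest possible "internally generated weighting table": a toy perceptron that trains itself on data labelled by a rule it never sees explicitly. Everything here (the perceptron, the data-generating rule `x0 + x1 > 1`, the names) is an illustrative assumption, not a description of any particular system; real systems have billions of weights rather than three, which is exactly why the state dump stops being readable.

```python
import random

random.seed(0)

def train_perceptron(samples, epochs=100, lr=0.1):
    """Learn weights from (features, label) pairs; labels are 0 or 1."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Weights are adjusted by feedback alone; no human writes them.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Internally generated training data: random points, labelled by a rule
# (x0 + x1 > 1) that appears nowhere in the trained model itself.
points = [[random.random(), random.random()] for _ in range(100)]
data = [(x, 1 if x[0] + x[1] > 1 else 0) for x in points]

weights, bias = train_perceptron(data)

def decide(x):
    """The machine's 'decision': A (1) or not-A (0)."""
    return 1 if sum(wi * xi for wi, xi in zip(weights, x)) + bias > 0 else 0

# "Dumping the internal state" just prints numbers; it does not explain
# the decision in human terms.
print(weights, bias)
```

Even in this three-number case, the dumped weights do not state the rule; they merely happen to implement it. Scale the table up by nine orders of magnitude and the opacity described above follows.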
When you are running and are tripped by something you didn't see, you'll try to regain your balance. Can you tell us the details of your last success, or what you would do next time to avoid your fall, or even whether falling was avoidable? Yet clearly we do learn to run. Young kids fall over a lot more than adults. And if/when we advance from building intrinsically stable wheeled and tracked vehicles to bipedal "mechas", I have little doubt that the same will be true of their control systems.