A few points
If the driver had been in full, ongoing control of the vehicle - actually driving it - with no autonomous system involved, the driver video alone would be explanation enough of where fault lay. The driver was not looking at the road. That the pedestrian apparently was not looking either would be secondary.
The road-facing video cannot be sufficient to conclude that the pedestrian was impossible to see in those light conditions. The camera adjusts its aperture for an optimum overall exposure - had it been exposing for the darker areas, the fully lit areas would have been completely overexposed. So the video does not show what a human eye, adapted to the apparent areas of darkness, would have seen.
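The exposure point can be illustrated with a toy model - a sketch, not real camera firmware, with made-up luminance numbers: a camera exposed for the brightest region maps a dimly lit pedestrian below its noise floor and renders them black, even though a dark-adapted eye, with far wider dynamic range, would still resolve them.

```python
# Toy model of auto-exposure clipping (illustrative only, not real camera behaviour).
# Scene luminances are in arbitrary linear units; the hypothetical camera maps
# luminance to 8-bit pixel values with a gain chosen so the brightest region
# just reaches full scale, then clips everything below its noise floor.

def expose(luminance, gain, noise_floor=4):
    """Map a linear luminance to an 8-bit pixel value at a given gain."""
    value = min(255, round(luminance * gain))
    return 0 if value < noise_floor else value

# A scene with a streetlamp pool (bright) and a shadowed pedestrian (dim).
scene = {"lamp_pool": 2000.0, "road_surface": 300.0, "pedestrian": 12.0}

# Auto-exposure picks a gain so the brightest area sits at full scale.
gain = 255 / max(scene.values())

video = {name: expose(lum, gain) for name, lum in scene.items()}
print(video)  # {'lamp_pool': 255, 'road_surface': 38, 'pedestrian': 0}

# A dark-adapted eye behaves more like a wide-dynamic-range log sensor:
# 12 units is well above its threshold, so the pedestrian is visible to
# the eye while the video renders the same region as pure black.
```

The exact numbers are invented; the point is only that a single-exposure video is evidence of what the camera captured, not of what was visible.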
The autonomous system should have been able to detect the pedestrian regardless of lighting conditions; it is technically possible. My Audi would have flashed a 'pre sense' warning in this situation - day or night.
It looks like the pedestrian made a very bad decision. Let us say she did; let us even presume she was in some way compromised (distraught, drunk, stoned, out of her mind). On a highway out of town, it is true that you are moving too fast to react if the 'unexpected' occurs. That is what happens when you hit a deer, or a drunk on an unlit bike, on the freeway at night.
If you are driving in an urban environment, you have to drive far more cautiously. Never drive faster than you can see. The reason is that there are people about: in the dark, in the rain, children running, people making mistakes, distracted people crossing a wide road in the dark - even at a point where they should not. A good driver is watching out for this type of event all the time when driving in town, and modifying their driving moment by moment to stay safe.
This is called prediction, and I do not believe artificial intelligence is close to being 'intuitive' in this way. It is about more than safety, too: we predict what other cars are doing to help the traffic flow, contributing to everyone getting home on time and safe. In experiments, junctions stripped of stop lines and traffic controls flow better, with fewer incidents, than in their former configuration. Humans are very good at forming cooperative, self-organising systems.
I took the trouble to visit the location where this Uber accident occurred, via Google Street View. So I know the placement of the street lights (they are sufficient), that there is a sign warning pedestrians not to cross where this person did, and more. What I noted - and would have noted as a driver - is that this looks like an area where there may be isolated people about late at night, in the darkness. There are the riverside park areas, large covered spaces under the freeways, and a park with covered shelters for barbecues.
If I knew the town, I would know the nature of the area and its risks. If I did not, I would be cautious precisely because I was conscious of not knowing the area. That is what good human drivers do. They do not just follow GPS and the rules of the road.
I believe we are a very long way from having autonomous driving systems that are safer than good human drivers in these sorts of situations. I do find it strange that this vehicle appears to have failed to 'see' the pedestrian; detection should have been possible in such a simple circumstance. The human 'driver' obviously failed too, but that is the weakness of an early driver-assist system: a human's attention will not remain engaged enough to prevent such occurrences. A human is not a robust enough backstop.