These programs are mostly giant filters
The term "Artificial Intelligence" is misleading. Many of these programs are just very complex filters that take a mass of data, munge it into an internal form, and use it to find a near match to existing records. So 'facial recognition' is no different from fingerprint matching, ballistics matching, or any of the other forensic techniques that have evolved over the years. Where they go wrong is in their application -- popular media has portrayed DNA matching as utterly foolproof, for example, so juries tend to believe it even in the absence of corroborating evidence.
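That "giant filter" idea can be sketched in a few lines: reduce each record to a feature vector, then report whichever stored record sits closest to a probe. Everything here -- the vectors, the record names, the threshold -- is made up for illustration, not taken from any real recognition system.

```python
import math

def distance(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Pretend database: record name -> previously extracted feature vector
database = {
    "record_a": [0.1, 0.9, 0.3],
    "record_b": [0.8, 0.2, 0.5],
    "record_c": [0.4, 0.4, 0.9],
}

def nearest_match(probe, db, threshold=0.5):
    # Find the closest stored record. Note the match is only ever
    # "near", and the cutoff for declaring a hit is a policy choice,
    # not a property of the algorithm.
    name, vec = min(db.items(), key=lambda kv: distance(probe, kv[1]))
    d = distance(probe, vec)
    return (name, d) if d <= threshold else (None, d)

match, score = nearest_match([0.15, 0.85, 0.35], database)
```

The point of the sketch is that the filter always produces a nearest candidate; whether that counts as identification depends entirely on the threshold someone chose, which is exactly where the application questions live.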
So there really isn't any 'ethics' involved in AI itself. There are plenty of questions about specific applications, but they're no different from the questions we should have been asking ourselves since the dawn of time. For example, facial recognition has potential limitations due to training bias. But then, so do eyewitnesses -- they're notoriously unreliable. People -- especially people of color in the US -- have been wrongly convicted of crimes on the strength of poor witness testimony forever. The fault lies with a justice system that refuses to question imperfect data -- if he's black then he obviously did it, end of story.
The other big issue with the application of large databases is reducing people to a quality index produced by proprietary scoring algorithms. The index labels you 'good' or 'bad' -- and in the US there are invariably more 'bad' black people, that is, poor people who don't qualify for credit -- yet that one rating goes on to decide all sorts of other things that affect people's lives.