"A robot could be great if it improves the quality of life for an elderly person as a supplement for frequent visits and calls with family. Using the same robot as an excuse to neglect elderly relatives would be the inverse."
That's a good summary of the difference between intent (a term I'm using for something distinctly human) and function (something a machine can have).
A human can use all kinds of tools/measures/strategies to neglect their elderly relatives - but can also be taken to task for it. In other words, the human can think beyond the specified goal or function and place it in a wider context. The fact that humans don't always think of ethical implications, or often ignore ethical criticism, is no counter-argument whatsoever. The important thing is that they should (whether they do in practice or not - it's an aspiration). The machine can only serve the function, and be judged on how well it does so. Intent and meta-thinking are not relevant to it.
(As an aside, the ethical imperfection of humans is one of the most brazenly hollow and self-serving arguments made by Silicon Valley fanboys to justify replacing humans with machines.)
Using this intent/function distinction, a big problem with AI becomes clear. AI, far from being neutral, always carries a hidden payload of intent: the intent of those who designed it, those who market it, those who make money from it, and those who use it. It's not the machine's fault in any sense that it carries this payload, and it's no flaw from the machine's point of view (fitness for function). But until we get true strong AI, AI will always carry this hidden intent.
This is very different for humans. Although parents are sometimes blamed when someone does something terrible, no-one would ever describe the conception of a child as a design process over which parents have control. Even upbringing (which has more of an influence) is very different from the design of a machine.