As a roughly fifteen-year-old discipline, machine ethics is in its infancy. That's why I say it will be a great many years before we can even think about building proper ethical reasoning into drones--there are many complicated issues to tackle before it's viable, especially the inherent philosophical questions (which we've been trying to iron out since, oh, we started reasoning as a species at all).
Also: I speak primarily of automated drones. One could very well make the argument that even simple models have the potential to make better judgments than humans (see the rest of Dawes's research for a great stab at that line of thinking).
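To make "simple model" concrete: the Dawes result I have in mind is about improper linear models--models that skip fitting coefficients entirely, standardize each cue, and just add them up with equal weights. Here's a minimal sketch of that idea in Python; the cue names and numbers are hypothetical, purely for illustration:

```python
from statistics import mean, stdev

def improper_linear_score(case, cue_stats):
    """Equal-weights model: standardize each cue, then sum.
    No fitted coefficients at all -- that's the 'improper' part."""
    return sum((case[cue] - mu) / sigma for cue, (mu, sigma) in cue_stats.items())

# Hypothetical historical cases, each described by three numeric cues.
history = [
    {"cue_a": 3.0, "cue_b": 10.0, "cue_c": 0.2},
    {"cue_a": 5.0, "cue_b": 14.0, "cue_c": 0.9},
    {"cue_a": 4.0, "cue_b": 12.0, "cue_c": 0.5},
]

# Per-cue mean and standard deviation, used to standardize new cases.
cue_stats = {
    cue: (mean(c[cue] for c in history), stdev(c[cue] for c in history))
    for cue in history[0]
}

new_case = {"cue_a": 4.5, "cue_b": 13.0, "cue_c": 0.7}
print(improper_linear_score(new_case, cue_stats))  # higher score = stronger judgment
```

Dawes's striking finding was that even these unfitted, equal-weight sums often match or beat expert human judgment working from the same cues.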
Even so, will humans deem it ethical for drones to make ethical decisions on their own? Quite probably not; it just seems viscerally wrong to us. The kind of automated ethical reasoning I'm talking about may never gain traction--and maybe we'll decide that it shouldn't.
So maybe we'll decide that a human should stay in the loop, whether controlling a drone or supervising one. Ornstein has a great quote about the role of the human participant in such matters:
"[I]f the only role of a human participant is to watch a meter and push a button when the needle goes from green to red, then the participation is merely symbolic. Under such circumstances reflection and judgment are effectively eliminated, and these are precisely the qualities that constitute the crucially important human contribution."