"It's a sign of the times that commenters on a tech site should be concerned about how bugs in an experiment feel."
For me it's not so much about how the bugs might feel, but more about the dangerous slippery slope* that this kind of research and experimentation entails. Inevitably, research that starts with doing things to bugs soon moves on to frogs, then rodents, then monkeys, and eventually human beings. And of course there are all kinds of justifications for it; in this case "search and rescue," a nice populist application to soothe the uneasiness that people feel about the idea of developing technology that allows people to directly control the actions of other living things. After all, if it's a case of a bug being used to save a life, then what's wrong with it being a frog? Or a mouse? Or a monkey? Where does the justification stop?
Even if you say, well, it would stop short of human beings, there is still the fact that if it can be done to a monkey, someone somewhere in the world will apply it to human beings, legally or illegally, regardless of legislative frameworks. What matters is not whether it will be done; what matters is simply that it can be.
Furthermore, coupled with the advent of undetectable and invasive nanotechnology, this sort of thing has the potential to become something truly horrific. If you look at issues such as contemporary slavery, which is unfortunately prevalent even in supposedly free nations, you can begin to imagine some of the absolute horrors this kind of research could unleash.
For every worthwhile justification for such research, there are a dozen ways it can be misused. The question is, do the benefits it could confer outweigh the dangers it represents? This is the sort of thing that, like nuclear research, needs to be subject to strict controls imposed by an international regulatory body similar to the IAEA, and forcibly halted the moment it is applied to any creature more advanced than, say, a frog or a mouse - to ensure it can never be done to people.
*I find it interesting these days that the "slippery slope" argument is increasingly being dismissed as a logical fallacy, alongside such expressions as "ad hominem" and "appeal to authority". I suspect this is a particularly nasty piece of social engineering being employed by certain elements of society to dismiss concerns not only about the misuse of technology, but also about such things as invasive surveillance, far-reaching police powers, etc. etc.