"People [could do bad things...] It’s a realistic possibility, granted that a lot of machine learning software is open source"
I'd offer that these two statements form a non sequitur: openness making misuse *possible* doesn't make it *likely*, let alone net-harmful.
Isn't this what people used to say about encryption? Proprietary = more secure? That didn't work out too well, did it?
Better that vulnerabilities are out in the open, rather than being quietly exploited by those in the know.
This kind of research is possible specifically because the algorithms are so accessible.