"In addition, if I were a malware developer I'd LOVE to have a tool that allows me to check that my malicious code is well hidden from Apple evaluation. I'm surprised the developer cannot see why Apple would have a problem with an app that makes testing that possible."
Because refusing to let anyone check up on your security is actually more insecure than allowing invasive security tests. Especially when (like this) it's something a malware developer could knock up themselves in an hour or two. Among infosec bods, having the tools open to everyone is considered much better than letting the Bad Guys write their own tools to find exploits and probe your security while the good guys take it on faith that everything is just peachy because Tim Cook says it's fine. The idea is that we don't trust anything, not even our own brilliance.
The old "if we don't talk about how insecure X is, then it's secure" line - security through obscurity - is BS, and it's been considered BS for 30 years by everyone... except Apple and Cisco. Everyone else runs bug bounty programs for exactly this reason; Apple haven't got one because it would imply imperfection in their latest products (which only Apple are allowed to do, and only after the next iteration of the product is out). Cisco go further and will actually try to prosecute you for flagging up bugs, which is even worse.
There's an old rule in Sigint circles which says that no matter how clever you think you are, someone else is cleverer. If you think your security is unbreakable, all you're proving is that YOU don't know how to break it, not that no-one else does. Pretty much everyone else - Google, Microsoft, Oracle, Linux, hell, even Adobe - have grokked that idea and realised that open security testing results in far better security than trying to run a black box (and is usually cheaper, too). The bad guys will ignore your rules about not probing the eval detection. The good guys won't, and so won't be able to help you find bugs until they're already being exploited in the wild.
That's why modern encryption methods tend to be published, with people invited to try to break them, rather than hidden away. If you can check that your malicious code is hidden from Apple's evaluation, the problem isn't that you have an app that tells you so. The problem is that Apple's evaluation can't detect it. That is what Apple ought to be addressing, rather than banning a tool which any malicious app developer can easily re-create from scratch.
Ultimately, it's pretty typical Apple tbh - try to appear hardline on security without conforming to anything remotely approaching Best Practice, then remove anything which highlights the gap between their talk and their actions.