Re: The algorithmic model can be selectively tested with different types of input
Some research recently posted on the arXiv preprint server examined inserting backdoors into models during the training phase. The rationale was that training is often outsourced to the cloud for the compute resources, so the training data can be manipulated while it sits there. The attack worked well: the backdoor could not be detected by inspecting the trained model, and it survived additional training largely unscathed.
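To make the data-manipulation step concrete, here is a minimal sketch of the classic poisoning idea that work in this area builds on: stamp a small trigger pattern onto a tiny fraction of training samples and relabel them with the attacker's target class. Everything here (the toy dataset, `TRIGGER_VALUE`, `TARGET_LABEL`, `POISON_FRACTION`, the `poison` helper) is hypothetical and for illustration only, not from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: 100 8x8 grayscale "images" with binary labels.
X = rng.random((100, 8, 8))
y = rng.integers(0, 2, size=100)

TRIGGER_VALUE = 1.0      # pixel pattern the attacker controls (assumed)
TARGET_LABEL = 1         # class the backdoor should force (assumed)
POISON_FRACTION = 0.05   # small enough to leave clean accuracy intact

def poison(X, y, fraction, rng):
    """Stamp a 2x2 trigger in one corner of a random subset of samples
    and flip those samples' labels to the attacker's target class."""
    Xp, yp = X.copy(), y.copy()
    n_poison = int(len(X) * fraction)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    Xp[idx, :2, :2] = TRIGGER_VALUE   # the backdoor trigger
    yp[idx] = TARGET_LABEL
    return Xp, yp, idx

Xp, yp, idx = poison(X, y, POISON_FRACTION, rng)
```

A model trained on `(Xp, yp)` learns to associate the trigger pattern with the target label, while behaving normally on clean inputs, which is why inspecting the model or testing it on ordinary data reveals nothing.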