A paper from DeepMind to be presented at the International Conference on Learning Representations (ICLR) next month in April shows that studying these interpretable neurons alone isn’t enough to understand how deep learning truly works.
A quick look at how well the brain is understood, given that much of its chemistry and connectivity is known, might have provided such an insight. Discovered systems are always going to be more difficult to work with; properly engineered systems with decent models are a wee bit easier.
I speak as someone who has just come off a very frustrating, if eventually successful and extremely lucrative, consultancy investigating a strange fault in a distributed embedded system (can you say race conditions...).