Hi SeanC4S, I'm thrilled to finally see someone talking about the software consequences of this tech.

Here's my question: what kind of specs do you think it would take to run a convolutional net in real time instead of in batches? If 3DXP converges server architecture toward true storage-class memory (mind you, this may take 3-5 years or more), how many terabytes per core would it take to let all of this happen in memory?

Furthermore, if storage-class memory becomes a reality, is there any need for batch operations (à la standard Hadoop) in distributed jobs at all, or can everything become streaming (à la Spark, Storm, etc.), since we could just put all the files into shared data structures and keep them there forever?
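To make the batch-vs-streaming distinction concrete, here's a minimal sketch of what I mean, assuming PySpark and a hypothetical `/data/events` JSON source (both path and column names are made up for illustration). The same aggregation is expressed once as a one-shot batch job and once as an unbounded streaming query; with storage-class memory the "files" backing the source could just stay resident:

```python
# Minimal sketch, not a benchmark: same aggregation as batch vs. streaming.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scm-sketch").getOrCreate()

# Batch style (Hadoop-like): read everything, aggregate, write, job ends.
batch_df = spark.read.json("/data/events")          # hypothetical path
batch_df.groupBy("user_id").count() \
        .write.mode("overwrite").parquet("/data/counts")

# Streaming style: treat the same directory as an unbounded source and
# keep the query running; new data is folded in as it appears.
stream_df = spark.readStream.schema(batch_df.schema).json("/data/events")
query = (stream_df.groupBy("user_id").count()
         .writeStream.outputMode("complete")
         .format("memory").queryName("counts")      # in-memory sink for demo
         .start())
```

The point being: if the working set never has to leave (storage-class) memory, the second form stops being a special case and the batch job starts looking like an artifact of slow storage.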