"...since it doesn't have to wait 1/60th"
But no modern sensor does.
Frame timings like 1/60, 1/120, 1/128,000 etc. are only for output/creation compliance; they aren't directly tied to the limits of the sensor. If only computation is desired, the limit is the sensor or the CPU driving it, as seen in how electronic viewfinders and "pixel peeping" work. *IF* these cameras detect light changes without polling of some kind, then they'd have to be more analog than digital, which I'm not sure is the case (but that is obviously possible).
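To make "polling of some kind" concrete, here's a minimal sketch of what a digital, poll-driven change detector looks like: read the sensor as fast as the hardware allows and emit per-pixel events whenever brightness moves past a threshold. The function name, the frame representation (nested lists of brightness values), and the threshold value are all my own assumptions for illustration, not anything from a real sensor API.

```python
def events_from_frames(frames, threshold=0.15):
    """Yield (x, y, frame_index, polarity) events by comparing each
    polled frame against the last value emitted for that pixel.

    The polling rate is bounded by the sensor readout and the CPU,
    not by any 1/60 s output timing -- which is the point above.
    """
    it = iter(frames)
    last = [row[:] for row in next(it)]  # baseline brightness per pixel
    for t, frame in enumerate(it, start=1):
        for y, row in enumerate(frame):
            for x, val in enumerate(row):
                delta = val - last[y][x]
                if abs(delta) >= threshold:
                    # +1 for a brightness increase, -1 for a decrease
                    yield (x, y, t, 1 if delta > 0 else -1)
                    last[y][x] = val

# Tiny usage example: a single pixel that brightens, holds, then dims.
events = list(events_from_frames([[[0.0]], [[0.2]], [[0.2]], [[0.0]]]))
# One "on" event at poll 1, one "off" event at poll 3, nothing in between.
```

The output format (per-pixel, timestamped, polarity-signed events) matches what event cameras advertise; the open question in the comment is whether the detection itself happens in analog circuitry per pixel or via a digital poll loop like this one.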
"Event based cameras don't simply compare frames for differences - the data change is detected at the pixel level and the output can be per-pixel asynchronous"
That's simply how any digital camera works; in fact, it might describe the chemical process in film too (not sure, but it has to be close). CCTV sensors mix it up a bit by adding a co-processor for the low-light requirements (like Sony "Starvis" or whatever), but they still do the same thing in their own routine, under their own direction.
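For contrast, the frame-comparison approach being dismissed as ordinary is the kind of thing CCTV motion detection does in software: diff two consecutive grayscale frames and flag motion when enough pixels changed. This is a minimal sketch under my own assumptions; the function name and both thresholds are hypothetical, not from any real CCTV firmware.

```python
def motion_detected(prev, curr, pixel_thresh=25, area_thresh=0.01):
    """Return True when enough pixels differ between two grayscale
    frames (nested lists of 0-255 values of identical dimensions).

    pixel_thresh: per-pixel brightness delta that counts as "changed"
    area_thresh:  fraction of changed pixels needed to flag motion
    """
    total = 0
    changed = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            total += 1
            if abs(p - c) > pixel_thresh:
                changed += 1
    return changed / total >= area_thresh

# Usage: a 10x10 black frame vs. the same frame with one bright pixel.
prev = [[0] * 10 for _ in range(10)]
curr = [row[:] for row in prev]
curr[0][0] = 200  # 1% of pixels changed -> crosses area_thresh
```

Note this operates on whole frames after readout; the claimed difference with event cameras is that the comparison happens per pixel, before any frame is assembled at all.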
The real problem is that every question about how this DARPA camera is actually different runs into the same wall: it's "classified".