NATS crashed.
The description given in public makes it pretty clear that it simply crashed on unexpected and/or bad data.
And then the backup went ahead and crashed as well, as expected. Same code, same assumptions, same data, same crash.
Worse, it clearly didn't produce a useful log (or even a core dump?).
If it had, the staff would have been able to figure out which flight plan crashed the system, remove it from the automated queue, and try again before the four-hour "major disruption" deadline.
Or at least which small block of 10-100 flight plans contained the problem: drop that block out and continue.
Then they could manually process the funky flight plan(s), and finally set someone to work on figuring out why that flight plan crashed the software, without a nasty deadline hanging over them.
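A minimal sketch of that recovery strategy, assuming the plans arrive as an ordered queue and that reprocessing a plan is harmless (or is done against a dry-run copy). Exceptions stand in for the crash here; in practice each block would run in a separate worker process so a hard crash only kills that attempt. All the names (process_flight_plan, block_size) are hypothetical, not anything NATS actually runs:

    def quarantine_bad_blocks(plans, process_flight_plan, block_size=50):
        """Process flight plans in small blocks; set aside any block that
        crashes the processor and carry on with the rest of the queue."""
        quarantined = []
        for start in range(0, len(plans), block_size):
            block = plans[start:start + block_size]
            try:
                for plan in block:
                    process_flight_plan(plan)
            except Exception:
                # The poison plan is somewhere in this block: park the whole
                # block for later instead of stalling the entire queue.
                quarantined.extend(block)
        return quarantined

    def isolate_poison_plans(block, process_flight_plan):
        """Bisect a quarantined block down to the individual plan(s) that
        fail, so a human gets a handful of suspects rather than thousands.
        Assumes retrying the good plans in the block has no side effects."""
        if len(block) == 1:
            return block
        mid = len(block) // 2
        bad = []
        for half in (block[:mid], block[mid:]):
            try:
                for plan in half:
                    process_flight_plan(plan)
            except Exception:
                bad.extend(isolate_poison_plans(half, process_flight_plan))
        return bad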
Asking someone to manually process ten flight plans in the knowledge that one of them made the automation fall over is also a very effective way of finding the flaw. Handing them 10,000 is a very effective way of making sure they ... don't.
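The logging half of that is cheap, too: if the processor writes the ID of the plan it is about to handle to a durable file before touching it, the last line of that file names the culprit after a crash. A rough sketch, again with made-up names (PROGRESS_LOG, process_flight_plan, a plan["id"] field), purely to show the idea:

    import json
    from pathlib import Path

    PROGRESS_LOG = Path("fpl_progress.log")

    def process_queue(plans, process_flight_plan):
        for plan in plans:
            # Write and close the breadcrumb *before* processing, so the
            # record survives even if processing takes the process down.
            with PROGRESS_LOG.open("a") as log:
                log.write(json.dumps({"processing": plan["id"]}) + "\n")
            process_flight_plan(plan)

    def last_attempted_plan_id():
        """After a restart, the final breadcrumb names the plan that was
        in flight when the crash happened."""
        lines = PROGRESS_LOG.read_text().splitlines()
        return json.loads(lines[-1])["processing"] if lines else None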