Re: I have trouble understanding this.
Let's say you have a server with reliable weekly backups. The server has been infected with ransomware and its data cannot be decrypted. The last four weeks of backups were encrypted as well: the operators watched you, noticed that you test a backup tape once a month, waited for you to run that test, corrupted the backups for the following month, and then carried out the full attack. You can't restore any of those, but you can restore the backup from five weeks ago.

However, if you just hit the big restore button, as you would if the disks had failed, you get the server image from that time, which still has their malware on it. So instead you may have to build a new server and carefully copy only the data back onto it. Then you have to recover last month's data somehow, which could mean using incremental backups you happen to have, recreating it from other sources, or accepting that some of it is gone. Deciding which of those to do, and actually doing it, requires someone familiar with the system and someone familiar with the data, likely not the same person, plus time for each of them to evaluate the situation, pick the best method of recovery, and carry it out. Carrying it out may pull in still more people. You also have to make sure the malware can't reinfect the new server once it's running, so you'll need to make some changes. I imagine all of these actions make sense so far.
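To make the "rebuild and copy only the data back" step concrete, here's a minimal sketch. Everything in it is my own assumption, not something from the scenario above: the five-week-old backup is mounted read-only at /mnt/clean_restore, the rebuilt server keeps its data on /srv/data, the data owner has named which directories count as pure data, and anything executable-looking is refused rather than copied.

```python
# Hypothetical sketch of copying data (and only data) from an old, clean
# backup onto a freshly rebuilt server, instead of restoring the full image.
import shutil
from pathlib import Path

RESTORE_ROOT = Path("/mnt/clean_restore")   # read-only mount of the old backup (assumed path)
TARGET_ROOT = Path("/srv/data")             # data volume on the rebuilt server (assumed path)

# Directories the data owner has signed off on as "data only" (made-up examples).
DATA_DIRS = ["var/lib/app/exports", "home/reports"]

# File types we refuse to copy even inside data directories.
BLOCKED_SUFFIXES = {".exe", ".dll", ".ps1", ".bat", ".sh", ".js"}


def copy_data_only(relative_dir: str) -> None:
    """Copy one approved data directory, skipping anything executable-looking."""
    src_dir = RESTORE_ROOT / relative_dir
    for src in src_dir.rglob("*"):
        if not src.is_file() or src.suffix.lower() in BLOCKED_SUFFIXES:
            continue
        dest = TARGET_ROOT / src.relative_to(RESTORE_ROOT)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)


if __name__ == "__main__":
    for d in DATA_DIRS:
        copy_data_only(d)
```

Even this toy version shows where the people come in: someone who knows the system has to decide what a safe target server looks like, and someone who knows the data has to decide what goes in DATA_DIRS.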
Now you have ten thousand servers, and they're not all the same. Many of them aren't standalone servers at all but various kinds of infrastructure, from networking equipment to workloads that get their resources provisioned automatically by your datacenter's VM management software or your cloud provider. Most of them don't do anything useful on their own; they work as parts of larger clusters. The data across all of these covers everything your company relies on, so you need many more data experts to work out how to recover it. The recovery effort doesn't scale linearly with the number of machines. Fortunately, your team is probably bigger too, but that only goes so far.
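One reason the effort isn't linear is that those interdependent systems can't all come back at once: nothing useful runs until the things it depends on are healthy. A tiny illustration, with entirely made-up service names and dependencies, of how recovery naturally falls into ordered waves:

```python
# Illustrative only: recovery proceeds in dependency order, in "waves".
# Each wave can be rebuilt in parallel, but only after the previous wave is up.
from graphlib import TopologicalSorter

# service -> services it depends on (hypothetical examples)
DEPENDENCIES = {
    "network-core": [],
    "identity": ["network-core"],
    "vm-management": ["network-core", "identity"],
    "database-cluster": ["vm-management"],
    "app-frontend": ["database-cluster", "identity"],
}

if __name__ == "__main__":
    sorter = TopologicalSorter(DEPENDENCIES)
    sorter.prepare()
    wave = 1
    while sorter.is_active():
        ready = list(sorter.get_ready())
        print(f"wave {wave}: {', '.join(sorted(ready))}")
        sorter.done(*ready)
        wave += 1
```

With ten thousand real resources the graph is much deeper and nobody has it written down completely, which is part of why the whole thing takes weeks rather than days.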
In such a situation, it often comes down to luck. Something may have escaped the attack because it was better secured, because it didn't work the way the attackers expected, or for any number of other reasons. The rebuild can also be a great opportunity to change the systems: I usually have a long list of changes that would probably be worth making, but we never make them because everything is running right now and big changes could break something. Now that everything is broken and we're rebuilding from scratch anyway, it makes some sense to fold in improvements so the new version is better. That adds delays as well.