Re: Cost Savings
> who missed the optimization potential at the beginning
Everyone. And rightly so.
The order is: make it work; make it work correctly; make it work fast.
If you *think* you see an opportunity for optimisation in the first two steps, just make a note of it and carry on. As soon as it is working "correctly enough" to be functional, run it through the profiler. Then, and only then, locate the hot spots - both time and memory consumption, often intertwined - and improve those. Re-profile to find the next hot spots.
Otherwise, without data from the profiling, you will inevitably waste time on premature optimisation: the "slow code" you think you spotted may only run once a month - or it may even be excised completely by an algorithm fix whilst "making it work correctly"!
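To make that loop concrete, here is a minimal sketch (Python's built-in cProfile, purely as an illustration - the same workflow applies with whatever profiler your toolchain provides; `process_batch` is a hypothetical stand-in for the real entry point):

```python
# Minimal sketch of the profile-first workflow: run a representative workload
# under the profiler, report the hot spots, fix one, then re-profile.
import cProfile
import pstats
import io

def process_batch(records):
    # Stand-in for the real work: whatever "make it work correctly" produced.
    return [r * 2 for r in records]

profiler = cProfile.Profile()
profiler.enable()
process_batch(list(range(1_000_000)))   # a realistic, representative load
profiler.disable()

# Top hot spots by cumulative time - these, not anyone's intuition,
# decide what gets optimised next.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
print(out.getvalue())
```

The point is that the candidate hot spots come out of the report, not out of anyone's guess about which code "looks slow".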
> (and who suggested/justified/ordered the profiling?)
You should always have profiling in your plan as an expectation (although I am horrified how often I've worked with coders who are "expert" with their IDEs yet do not even know whether they *have* a profiler, nor are worried when the answer is "no". It was shocking when Microsoft stopped providing a profiler and nobody seemed to care!). Only drop - or postpone - that stage once you can see the program working and somebody properly signs off that "it is demonstrated to be running well enough".
So the question should be: "Who *stopped* the profiling? What was their justification and was it valid at the time?"
Lots of the time it is really easy to see that profiling and optimisation are not necessary - because (numerically) much of the time you are working on only a short program, a one-off tool, probably flung together, run once and forgotten about. Great. But even then, good practice is to write down "I did not do this stage, it wasn't necessary/useful, signed Burt".
> how much pressure were they under to "get it done, not get it done well"?
That is what causes the profiling to be dropped - with that fact duly signed off by the PM and the manager, of course (ha, ha).
BUT even after finding this saving of x million pounds and k servers no longer needed, saying "Why was this allowed to happen? Why didn't we find this out earlier? Aha, it was Fred's fault, he told them not to profile; sack him now" is very possibly an over-reaction:
How much growth has there been in the use of the code since it was written? Is it now processing (k * 10) or more transactions compared to when it went into production? If so, then (being very, very crude) where you can now shut down k whole servers, at the beginning you could only have saved 1/10th of a server. Less easy to shut that down!
In other words, as time goes by, IF you experience growth, the balance of costs to fix versus savings made will alter.
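To put that very crude arithmetic into a sketch (the capacity and saving figures below are invented for illustration, not taken from anyone's real system):

```python
# Back-of-the-envelope sketch, with made-up numbers, of how the same
# optimisation is worth more after growth than it was at launch.
saving_fraction = 0.10        # hypothetical: the fix removes 10% of the CPU cost
server_capacity = 1_000       # hypothetical transactions/sec one server can handle

for load in (1_000, 10_000):  # launch load vs. the 10x-grown load (tx/sec)
    servers_needed = load / server_capacity
    servers_after_fix = load * (1 - saving_fraction) / server_capacity
    freed = servers_needed - servers_after_fix
    print(f"load={load:6d} tx/s  servers freed = {freed:.1f}")

# At launch the fix frees 0.1 of a server (nothing you can actually switch
# off); after 10x growth the identical fix frees a whole server, and at
# k times that load it frees k servers.
```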
As to "Can the same person pull off the same trick again? Was he the only one who could?" - highly likely those are pointless questions, because if your system has reached the point where one (lowly) person could ever be in the position to randomly say "I wonder what happens if I profile this?" then what you have found is a management and planning failure: you've grown but haven't bothered to think about what growth does to a system. You haven't bothered to go back and check how many profile runs were just not done etc. If you pay a bonus to the engineer - and you should - you should take it out of the manager' bonuses (yeah, I know, that'll never happen).
Reminder: the above is different from the situation that probably more of us have been in, where everyone knows full well the program needs a speed-up - we even have all the customer complaints that it isn't keeping up - but nobody can figure out how to improve it, until Jim has his brilliant moment.