Once I was hired as an "extra hand" to help a real estate company develop their new website, where all of the content was backed by a database connected to their internal systems. The project was very late and slightly over budget, but mostly complete, so it was put into production anyway. Then a persistent performance problem appeared: for many users the site would "hang" for about 15 to 30 seconds, then work normally for a few more requests, then hang again. If more than half a dozen users were browsing the site at the same time, it was almost guaranteed that everyone would experience this behaviour.
While the target audience for this company was quite small (high-end luxury real estate), this was a serious problem, as the user experience had to be as flawless as early-2000s technologies allowed.
The "senior" developers all were blaming database performance, server performance, low memory, etc. because profiling the production servers showed 100% CPU both on the server and the database, but very low memory usage. But even after moving everything to a more powerful server (quad processor! yay!) the problems remained.
The problem never happened even once using the test database that was fully loaded with all sorts of fake data.
Knowing almost nothing about the project at that time, I started looking into the code, and after one very long night I noticed a section where, as the last step of building the web page, it would choose three random internal "ads" for other real estate from the database, based on the current search criteria or the estate being viewed, and then finish sending the partial web page to the client.
The problem was that the routine for selecting the random items was extremely flawed: select one random ad, then select the next one; if it was the same as the previous one, select another. Select the next, check it against the already-selected list, and choose another if it was already in the list. Repeat until the desired quantity has been selected. Why it was done this way I cannot fathom. You know, "senior", "enterprise" ways of doing things?
Now imagine that I want 3 ads and the database only has 2 for the search criteria... the routine would keep trying to select new ads to add to the list, but they would always be the same ones. It would "hang" the processor thread at 100% CPU utilization, and the database as well, because it would receive thousands of useless (cached) queries per second.
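To make the failure mode concrete, here is a minimal sketch (in Python, purely for illustration; the original language and names are not known) of that kind of retry loop and why it never terminates when there are fewer distinct ads than requested:

```python
import random

def pick_ads_flawed(candidate_ads, wanted=3):
    # Sketch of the retry loop described above: draw a random ad and
    # retry whenever the draw is already in the selected list.
    selected = []
    while len(selected) < wanted:
        # In the real site each iteration presumably ran a query against
        # the database; here it is just a random pick from a list.
        ad = random.choice(candidate_ads)
        if ad in selected:
            continue  # duplicate -> try again
        selected.append(ad)
    return selected

# With only two distinct candidates and wanted=3 the loop can never
# finish: every further draw is a duplicate, so it spins at 100% CPU.
# pick_ads_flawed(["ad_a", "ad_b"])  # would never return
```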
The client would receive the web page almost complete and then "hang" until the server timeout was reached and the connection was dropped, allowing some more browsing until the search criteria for the ads again produced too few items to fill the list, hanging the page once more.
The problem never happened with the test database because it was loaded with thousands of fake "ads" carefully distributed across all of the possible categories, so the poor selection routine was never starved of records to pick from. In production, the "ad" distribution was far less even, leaving several of the categories with few or no items.
For the pleasure of finding that the fix would involve rewriting one function of one class to use a single, slightly more complex SQL statement, I got a lot of ugly looks from the "senior" developers and "senior" DBAs who were proposing yet another server upgrade to "fix" the problem. And with that my work there was complete; I got paid for one day and that was it.
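For comparison, a sketch of that single-statement approach might look something like the following; the table and column names, the placeholder style, and the RAND() ordering are all assumptions, since the actual schema and SQL dialect were never part of the story:

```python
def pick_ads_fixed(cursor, category, wanted=3):
    # Let the database itself return up to `wanted` distinct random rows,
    # so having fewer matches than requested simply yields a shorter list
    # instead of an endless retry loop.
    cursor.execute(
        """
        SELECT ad_id, title
        FROM ads
        WHERE category = %s
        ORDER BY RAND()   -- dialect-specific: RANDOM() or NEWID() elsewhere
        LIMIT %s
        """,
        (category, wanted),
    )
    return cursor.fetchall()
```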
I think that bringing in a fresh pair (or more) of eyes that has never worked on a project experiencing persistent problems is indeed a valid approach in many cases.