The simple solution
The basic problem is that IT staff tend to monitor only what happens inside the datacentre, not the user experience. That might have been sufficient 20 years ago, but nowadays the only reasonable approach is to start with the users' experience and work backwards from that.
The reason we have that situation is that the tools that come with most IT systems are really designed only for monitoring servers, looking at a small number of parameters: CPU, memory, disk I/O and network traffic. Any dashboard is usually simply plopped on top of these metrics.
We tend to value what we measure, rather than measuring what we value.
In fact it's quite easy to do the right thing, and even better, it can be done for free. Using packages such as AutoIt3 for Windows or Tcl/Expect for Linux/Unix, it's quite feasible to measure the response times that users actually experience, or how long their queries take to run. We've been doing that for a long, long time, and it usually provides exactly the information needed, quickly and accurately.
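The core of the technique is simply to script a synthetic user transaction and time it from the outside, the way a real user would see it. As a language-neutral sketch (the article uses AutoIt3 or Tcl/Expect; here Python stands in, and `fake_query` is a hypothetical stand-in for a real scripted transaction):

```python
import time

def time_transaction(probe):
    """Run a synthetic user transaction and return (result, elapsed_ms).

    The probe is whatever scripted action a real user would perform:
    logging in, opening a screen, running a report, and so on.
    """
    start = time.perf_counter()
    result = probe()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Hypothetical stand-in "query" that simulates ~50 ms of server work.
def fake_query():
    time.sleep(0.05)
    return "42 rows"

result, ms = time_transaction(fake_query)
print(f"query returned {result!r} in {ms:.1f} ms")
```

Run from cron or a scheduler every few minutes, logging the elapsed times gives you a continuous record of what users are actually experiencing, independent of any server-side metric.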
With the proper analysis, it's possible to see quite small deviations from normal response times. Users are generally prepared to put up with quite a lot of pain before they'll pick up the phone to the Hell Desk and report anything, so with these techniques it's perfectly practical to know that a problem is looming before anyone reports it.
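One simple way to spot those small deviations is to keep a rolling baseline of recent response times and flag any sample that sits well outside it. A minimal sketch, assuming a mean-plus-standard-deviations rule (the article doesn't specify the analysis method; the window size and threshold here are illustrative):

```python
import statistics
from collections import deque

class ResponseBaseline:
    """Track recent response times and flag deviations from normal."""

    def __init__(self, window=100, threshold_sigmas=3.0):
        self.samples = deque(maxlen=window)  # rolling window of recent timings
        self.threshold_sigmas = threshold_sigmas

    def observe(self, elapsed_ms):
        """Record one sample; return True if it deviates from the baseline."""
        alarming = False
        if len(self.samples) >= 10:  # need some history before judging
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples)
            if stdev > 0 and (elapsed_ms - mean) > self.threshold_sigmas * stdev:
                alarming = True
        self.samples.append(elapsed_ms)
        return alarming

baseline = ResponseBaseline()
for ms in [50, 52, 48, 51, 49, 50, 53, 47, 50, 51, 52]:
    baseline.observe(ms)          # normal timings build the baseline
print(baseline.observe(120))      # well above normal -> True
```

Feed this the elapsed times from the synthetic probes and it will raise the flag long before users reach for the phone.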