"If you have 1,000 vulnerabilities, focus on the 200 that are actually loaded to memory"
Finding out whether vulnerable code is loaded into memory is likely to turn into a major research project given the complexity and scale of modern software. There could also be a lot of false negatives given the dynamic nature of library-oriented code (unless, of course, we're going to reverse engineer applications as a matter of course, in contravention of the EULA).
However, an adequate approximation can be provided by the CVSS environmental score (see section 4).
The majority of vendor-published CVSS scores are raw CVSS base metrics, although independently discovered vulnerabilities may be assigned a temporal score. Consequently, depending on their origin, CVSS scores may not be entirely comparable, but they're still a lot better than a collection of independent, incompatible (typically verbal) vendor scores. That said, I've never encountered an organisation that calculates the third-stage CVSS environmental score as a matter of course. It's a calculation that has to be done by the organisation itself, but it's essential: that's where the filtering takes place that reduces the vast list of potential threats to something more manageable.
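To make the filtering step concrete, here's a minimal sketch of environmentally adjusted triage. This is a deliberately simplified stand-in, not the full CVSS v3.1 environmental equation: the asset names, the per-asset multipliers, and the 7.0 threshold are all hypothetical values chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    base_score: float  # vendor-published CVSS base score, 0.0-10.0
    asset: str         # host/application the finding applies to

# Hypothetical per-asset environmental multipliers: internet-facing or
# business-critical assets amplify the score, isolated ones dampen it.
ASSET_MODIFIER = {
    "public-web": 1.2,
    "internal-db": 1.0,
    "air-gapped-lab": 0.4,
}

def environmental_score(v: Vuln) -> float:
    """Scale the base score by the asset's modifier, capped at 10.0."""
    return min(10.0, v.base_score * ASSET_MODIFIER.get(v.asset, 1.0))

def triage(vulns, threshold=7.0):
    """Keep only findings whose adjusted score meets the threshold,
    highest first -- the filtering step described above."""
    kept = [v for v in vulns if environmental_score(v) >= threshold]
    return sorted(kept, key=environmental_score, reverse=True)

findings = [
    Vuln("CVE-2024-0001", 9.8, "air-gapped-lab"),  # adjusted 3.92: dropped
    Vuln("CVE-2024-0002", 6.5, "public-web"),      # adjusted 7.8: kept
    Vuln("CVE-2024-0003", 7.5, "internal-db"),     # adjusted 7.5: kept
]

for v in triage(findings):
    print(v.cve_id, round(environmental_score(v), 2))
```

Note how the critical-looking 9.8 on the air-gapped asset drops out while a mid-range 6.5 on the public web server rises above it — exactly the reordering an environmental score exists to produce.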
So there's no real need to re-invent the wheel, provided the necessary processes and effort are applied to the problem using extant tools. I have to wonder whether the authors of this study have developed (or are developing) a new tool that does what they're advocating, but it seems to me that "loaded to memory" could be a transient phenomenon and thus get missed quite often. Presence of a high-score vulnerability and its accessibility to an adversary should be the most important metrics.
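Those two metrics can be combined very simply. The sketch below multiplies severity by an accessibility factor, on the reasoning that a vulnerability no adversary can reach matters far less than a moderately severe one exposed to the internet; the accessibility scale and the CVE entries are illustrative assumptions, not a standard.

```python
# Hypothetical accessibility scale: 1.0 = internet-reachable,
# 0.5 = internal network only, 0.1 = requires local/physical access.
findings = [
    ("CVE-2024-1111", 9.8, 0.1),   # severe but hard to reach
    ("CVE-2024-2222", 7.2, 1.0),   # moderate but internet-facing
    ("CVE-2024-3333", 8.1, 0.5),   # high, internal network only
]

def priority(score: float, accessibility: float) -> float:
    # Multiplying (rather than adding) lets a near-zero accessibility
    # suppress even a very high severity score.
    return score * accessibility

ranked = sorted(findings, key=lambda f: priority(f[1], f[2]), reverse=True)
for cve, score, acc in ranked:
    print(cve, round(priority(score, acc), 2))
```

Under these assumed weights the internet-facing 7.2 outranks both higher-severity findings, which is the point: severity alone is a poor proxy for risk.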