Re: Explanation please?
Yes. Most of the images don't come from Wikipedia itself, for example, but from Wikimedia Commons (another domain, another lookup). How many pictures does that one page contain compared to other pages on the 'Pedia? What distribution of 'Pedia/Commons requests does it make?
Put simply, because of all these side requests, just one page can create a fingerprint, and fingerprints from multiple pages can be combined into a distinct trail. And unlike what the article says, many of us have longer-term IP allocations (otherwise, home servers wouldn't work so well). The worst part is that this sniffing is done entirely via basic Internet protocols; masking it would require changing the protocols themselves, which may not be efficient or even possible.
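To make that concrete, here's a toy sketch (Python, with made-up request logs; the hosts and byte counts are illustrative, not real measurements) of the kind of per-page fingerprint an observer could build just from which hosts get contacted and how often, without ever seeing page content:

```python
from collections import Counter

def fingerprint(requests):
    """Summarize a page load as sorted (host, count) pairs -- a crude
    fingerprint an eavesdropper could assemble from DNS lookups and
    connection metadata alone, no decryption needed."""
    return tuple(sorted(Counter(host for host, _ in requests).items()))

# Hypothetical request logs for two article loads: (host, approx. bytes).
# An image-heavy article hits upload.wikimedia.org (Commons) repeatedly.
page_a = [("en.wikipedia.org", 54000),
          ("upload.wikimedia.org", 31000),
          ("upload.wikimedia.org", 18000),
          ("upload.wikimedia.org", 7200)]
# A mostly-text article makes far fewer Commons requests.
page_b = [("en.wikipedia.org", 61000),
          ("upload.wikimedia.org", 12000)]

# Different 'Pedia/Commons mixes yield distinct fingerprints.
print(fingerprint(page_a))
print(fingerprint(page_b))
```

A real attack would also fold in response sizes and timing, which only sharpens the distinction between pages.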