> but the human ear can't deal with more than CD quality - that's why it was defined.
To be historically accurate, the CD standard's 44.1 kHz was simply the best they could do with late-70s technology: the original Sony PCM digital systems recorded to video tape, and 44.1 kHz was the highest sample rate they could accommodate.
[As an aside: early CD players were a truly remarkable feat of engineering! Processing data from the CD (at a rate of over 4 Mbit/s) with 1980-era LSI chips was quite something...]
The main problem with the 44.1 kHz sample rate is that it requires *very* steep reconstruction filters: the passband must stay flat out to 20 kHz while the stopband starts at the 22.05 kHz Nyquist frequency, leaving a transition band only about 2 kHz wide. It is basically impossible to design such a filter without significant passband ripples and/or time-domain problems. Bumping the sample rate up (to at least 48 kHz) allows for better-behaved filters :)
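To put rough numbers on it, here's a back-of-envelope sketch using the standard Kaiser-window tap estimate for an FIR filter (the 100 dB stopband figure and the 20 kHz passband edge are my assumptions, not from the post above):

```python
import math

def kaiser_tap_estimate(atten_db, trans_hz, fs_hz):
    # Kaiser's rule of thumb: N ~ (A - 7.95) / (2.285 * d_omega),
    # where d_omega is the transition width in rad/sample.
    d_omega = 2 * math.pi * trans_hz / fs_hz
    return math.ceil((atten_db - 7.95) / (2.285 * d_omega))

# Passband flat to 20 kHz, stopband at Nyquist, 100 dB attenuation.
taps_441 = kaiser_tap_estimate(100, 22050 - 20000, 44100)  # ~2 kHz transition
taps_48  = kaiser_tap_estimate(100, 24000 - 20000, 48000)  # ~4 kHz transition
print(taps_441, taps_48)  # 138 77
```

Roughly half the filter length at 48 kHz for the same specs, which is why the modest bump helps so much.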
The second problem is that the sensitivity of human ears varies by frequency, and in the region of greatest sensitivity (3-4 kHz) the apparent noise floor of a (non-noise-shaped) dithered 16-bit signal *can* be heard. Using a noise-shaping dither can improve things, but with a 44.1 kHz sample rate there really isn't enough frequency headroom available to push the noise into...
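For the curious, the basic idea is simple: dither the quantizer with TPDF noise, then feed the quantization error back so the noise spectrum gets a first-order high-pass tilt, away from the ear's sensitive region. This is only an illustrative sketch (the function and its parameters are mine, and real noise shapers use higher-order, psychoacoustically weighted filters):

```python
import random

def quantize_16bit_shaped(samples, seed=0):
    """Quantize floats in [-1, 1) to 16-bit steps with TPDF dither and
    first-order error feedback (noise transfer function 1 - z^-1)."""
    rng = random.Random(seed)
    q = 1.0 / 32768.0          # one 16-bit LSB
    err = 0.0                  # previous quantization error
    out = []
    for x in samples:
        shaped = x - err                        # subtract last error (shaping)
        dith = (rng.random() - rng.random()) * q  # TPDF dither, +/-1 LSB peak
        y = round((shaped + dith) / q) * q        # quantize to the 16-bit grid
        err = y - shaped                          # error to feed back next sample
        out.append(max(-1.0, min(1.0 - q, y)))    # clamp to 16-bit range
    return out
```

The error feedback doesn't reduce the total noise power; it just moves it up in frequency, where (at 96 kHz there's plenty of ultrasonic room, at 44.1 kHz much less) it bothers the ear far less.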
Of course, 24/192 is complete overkill!
In fact, a properly engineered 16/96 system, using pre-emphasis and a noise-shaped dither, should easily exceed any realistic requirements.