I have a concern with the volume leveling. As far as I understand it, a certain multiplicative factor is determined for each track. The volume multiplication is then performed per sample in 64 bits, and is thus extremely accurate, but the result must then be fit into the source resolution (say 16 bits for CD), and how good that fit is differs from sample to sample, which should cause some distortion. My question is why the multiplication is not done within the source resolution. The leveling would not be as exact as it is now, but it would be consistent throughout a track, not causing any distortion. I am not really an expert in this, so I may have missed something, but it would be nice if someone could clarify this.
That’s not the case. The result is not fit into the source resolution, but into the highest resolution supported by the DAC. Since most DACs support at least 24 bits, the result is converted back to 24 or 32 bits, and the quantization error is negligible.
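To put rough numbers on that, here is a minimal Python sketch (not any actual player's code; the gain and sample values are made up) comparing the rounding error when the leveled sample is fit back into 16 bits versus into a 24-bit DAC word:

```python
# Illustrative sketch: apply a fractional leveling gain to a 16-bit sample
# and quantize the result at different output bit depths, to compare the
# size of the rounding error in each case.

def apply_gain(sample_16bit, gain, out_bits):
    # Scale the 16-bit sample up to the output bit depth, multiply by the
    # fractional gain (floating point here as a stand-in for the wide
    # integer math), then round to the nearest representable output value.
    shift = out_bits - 16
    exact = sample_16bit * (1 << shift) * gain
    quantized = round(exact)
    # Express the rounding error in units of the original 16-bit grid.
    error_in_16bit_lsbs = (quantized - exact) / (1 << shift)
    return quantized, error_in_16bit_lsbs

gain = 0.708   # roughly -3 dB of leveling (made-up value)
sample = 12345

_, err16 = apply_gain(sample, gain, 16)  # fit back into 16 bits
_, err24 = apply_gain(sample, gain, 24)  # fit into a 24-bit DAC word

# At 24 bits the worst-case rounding error is 256x smaller than at
# 16 bits, which is why the quantization error is called negligible.
print(abs(err16), abs(err24))
```

The worst-case error is half a step of whichever grid the result is rounded onto, so every extra output bit halves it; that is the sense in which writing out at the DAC's 24 or 32 bits makes the error negligible.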
I’m not sure what you mean by this. The multiplication factor is usually fractional, and multiplying a 16-bit integer by a fractional number would give a fractional result, which would have to be fit back into an integer all the same.
Ok, the bit depth conversion is done to the DAC resolution, not the source resolution. That was something I missed. Thank you!