Many of the HQP filters are ‘sinc’ filters. For longer than I would care to admit, I thought that referred to some kind of synchronisation. In fact, it’s a mathematical function used in DSP:
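sinc(x) = sin(πx) / (πx)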
To be more exact, that’s the “normalized” sinc function, where the argument is scaled by pi.
According to the sampling theorem, it’s the only filter that exactly reconstructs the analog signal from its digital form, and the only one that fills in the correct missing samples during up-sampling, i.e. the ones you’d get if you had sampled the original analog signal at the higher rate in the first place. If you use any other filter, even a minimum phase one, you won’t get back the original, so why would anyone?
Of course, practical sinc filters need to be windowed, as the theoretical function is infinite in time.
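As a minimal sketch of what that looks like in practice (tap count, cutoff, and window choice here are all illustrative, not anyone’s production filter):

```python
import numpy as np

# Truncate the ideal (infinite) sinc to a finite number of taps, then
# apply a window to tame the ripple caused by the truncation.
numtaps = 129                       # illustrative filter length
fc = 0.25                           # cutoff as a fraction of the sample rate
m = np.arange(numtaps) - (numtaps - 1) / 2
h = 2 * fc * np.sinc(2 * fc * m)    # ideal lowpass impulse response, truncated
h *= np.blackman(numtaps)           # Blackman window, one choice of many
h /= h.sum()                        # normalize for unity gain at DC
```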
It’s short for “sine cardinal”. The sinc function is at the core of the Whittaker-Shannon interpolation formula, which is used to perfectly reconstruct a function (with some constraints) from its samples. In effect, you add up a bunch of sinc functions, each scaled by a sample value, to reconstruct the function.
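To make that concrete, here is a small sketch (signal and sample rate made up for illustration) where the reconstruction is literally a sum of shifted, sample-scaled sincs:

```python
import numpy as np

# Whittaker-Shannon: x(t) = sum_n x[n] * sinc((t - n*T) / T), with T = 1/fs.
fs = 100.0                              # sample rate, Hz (illustrative)
n = np.arange(64)                       # sample indices
x = np.sin(2 * np.pi * 5 * n / fs)      # a band-limited test signal

T = 1.0 / fs
t = np.linspace(0, (len(n) - 1) * T, 2000)   # dense time grid
# np.sinc is the normalized sinc, sin(pi*u)/(pi*u), as in the formula above.
x_rec = np.sum(x[None, :] * np.sinc((t[:, None] - n[None, :] * T) / T), axis=1)
```

Away from the edges of the block, x_rec matches the original sine very closely; the error near the edges is exactly the truncation problem that windowing, mentioned above, deals with.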
Whittaker is the mathematician who found what we now call the Nyquist limit for sampling. He did it about 10 years before Nyquist wrote his paper on telegraphy that hints at this limit. Whenever we talk about sampling functions and reconstructing something from them, it is Whittaker’s paper, published in 1915, that laid the mathematical basis for what we do now with AtoD and DtoA converters.
An infinitely long sinc will also have infinitely poor time-domain performance for time-limited signals, just like an infinitely long FFT would. So, for example, when computing a spectrogram you need to choose a transform length that gives you the time and frequency accuracy you want, since the two are inversely (1/x) related.
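A couple of lines of arithmetic make the 1/x relation concrete (the sample rate is just an example):

```python
fs = 48000                     # sample rate, Hz (illustrative)
for nfft in (256, 4096):
    df = fs / nfft             # frequency resolution: Hz per bin
    dt = nfft / fs             # time covered by one transform, seconds
    print(f"N={nfft}: df={df:.2f} Hz, dt={dt*1e3:.2f} ms, df*dt={df*dt:.0f}")
```

Halving the bin width doubles the time window; the product df·dt stays pinned at 1.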
Then, when it comes to the ADC and DAC cases: for PCM sources where the oversampled ADC data has been decimated to a lower sampling rate, the ADCs are far from perfect. Thus the resulting data contains a lot of errors compared to the analog signal that entered the ADC. Those errors can later be corrected to some extent, using apodizing oversampling filters when bringing the sampling rate back up again for D/A conversion.
For example, a popular AKM ADC chip producing 44.1k output only reaches its full alias attenuation of 85 dB at around 18 kHz. Thus the 18 - 22.05 kHz band has a rising slope of aliases. When running at 48 kHz instead, the aliasing band begins at 20 kHz.
But since there’s no way to know what kind of errors were introduced by the ADC process, if any, the best bet is to take the digital signal as the truth and use the oversampling filter that most closely approximates the theoretical one, i.e. a windowed sinc.
For that purpose I have analysis running on the input content. Running an apodizing filter on content that doesn’t need correction does no harm, while running it on content that needs fixing helps correct the issues. Reproducing all the errors uncorrected is not preferable (at least to me, but it is up to you to choose).
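To sketch the idea (a generic apodizing upsampler, not HQPlayer’s actual filter; all numbers are illustrative): the reconstruction filter’s cutoff is placed below the source Nyquist, so the damaged near-Nyquist band from the ADC example above is attenuated rather than reproduced.

```python
import numpy as np
from scipy.signal import firwin, upfirdn

fs_in, up = 44100, 4                  # 44.1k source, upsampled 4x
fs_out = fs_in * up

# A plain reconstruction filter would cut off at fs_in/2 = 22.05 kHz.
# An apodizing one cuts off earlier, e.g. at 20 kHz, discarding the band
# where the ADC's decimation filter let aliases through.
h = firwin(8191, 20000, window=('kaiser', 14.0), fs=fs_out)
h *= up                               # restore gain lost to zero-stuffing

x = np.random.randn(fs_in)            # stand-in for one second of audio
y = upfirdn(h, x, up=up)              # zero-stuff by `up`, then filter
```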
There is an almost unlimited number of ways to do a “windowed sinc”; the point is to choose the best one. And I prefer one that best preserves both the time and frequency domains simultaneously.
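As one generic illustration of that tradeoff (not a survey of HQPlayer’s filters), just sweeping the Kaiser window’s beta over the same truncated sinc already spans a whole family, from short but leaky to long-ringing but well-attenuated:

```python
import numpy as np
from scipy.signal import firwin, freqz

for beta in (4.0, 9.0, 14.0):
    h = firwin(255, 0.25, window=('kaiser', beta))    # cutoff in Nyquist units
    w, H = freqz(h, worN=8192, fs=2.0)                # w in Nyquist units too
    leak = 20 * np.log10(np.abs(H[w >= 0.35]).max())  # worst stopband leakage
    print(f"beta={beta}: stopband floor about {leak:.0f} dB")
# Higher beta: deeper stopband (better frequency domain), but a wider main
# lobe and longer impulse-response ringing (worse time domain).
```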
At the moment I have around 70 different oversampling filter choices. (I think ESS is now somewhere around 7, with just one apodizing)