Relative computational complexity of filters?

Is there a chart which shows relative computational complexity of the various filters? Or at least within filter groups?

I’m working through the filters to see where a new machine falls over, and I’d like to be able to reference documented expectations.

For example, if I knew poly-sinc-gauss-xla is 4x the complexity of poly-sinc-gauss, I could predict whether -xla will work based on the utilization I see with poly-sinc-gauss.
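To make the kind of check I mean concrete (the 4x factor and the 35% baseline below are invented, purely to show the idea):

```python
# Hypothetical example: neither the 4x factor nor the 35% baseline are real
# measurements; they only illustrate the kind of prediction I'd like to make.
def predicted_utilization(baseline_util, relative_complexity):
    """Assume CPU load scales roughly linearly with filter complexity."""
    return baseline_util * relative_complexity

gauss_util = 0.35                                  # hypothetical poly-sinc-gauss load
xla_est = predicted_utilization(gauss_util, 4.0)   # hypothetical documented 4x factor
print(f"Estimated poly-sinc-gauss-xla load: {xla_est:.0%}")
# Anything well over 100% would suggest the machine can't keep up in real time.
```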

Are there filters that can make better use of GPU offload than others?

Thanks! This is a lot of fun… until the music skips :slight_smile:


I don’t have such a chart. And the end result depends a lot on things like the conversion ratio and the hardware in question.

Not better as such, but the GPU is more efficient at certain types of operations than others. The ones where the GPU helps most are the RAM hogs that benefit from its fast parallel memory access: closed-form and sinc-S/M/L.
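To give a feel for the scale involved (the tap counts below are placeholders, not the actual coefficient lengths of any HQPlayer filter), a rough sketch of why the working set of a very long filter quickly outgrows CPU caches:

```python
# Hypothetical tap counts, only to show how quickly coefficient storage plus
# a matching state buffer grows for long filters.
def filter_footprint_mib(taps, bytes_per_coeff=8):
    """Approximate working set: coefficient table plus a delay line of
    comparable length, in MiB."""
    return 2 * taps * bytes_per_coeff / 2**20

for name, taps in [("short poly-sinc-ish", 2**16),
                   ("hypothetical sinc-M-ish", 2**22),
                   ("hypothetical sinc-L-ish", 2**24)]:
    print(f"{name:>24}: {filter_footprint_mib(taps):8.1f} MiB working set")
```

Once the working set no longer fits in cache, fast parallel memory access matters more than raw arithmetic throughput, which is where the GPU has the advantage.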

I would think a relative comparison wouldn’t have such a large dependency on the hardware. But, like most people, I’m probably oversimplifying what’s going on.

When I wrote this inquiry it was because I was seeing wild jumps in utilization between filters that didn’t seem to make sense. Since then, I’ve realized this was not a result of differences between the filters, but of the source material forcing conversions across base rate families (44.1k to DSD256x48, 48k to DSD256), which gave unexpectedly high utilization. I’ve since enabled auto rate family and things are much more predictable.
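For anyone else hitting the same thing, the awkwardness of a cross-family conversion shows up directly in the rational resampling ratio. This is only a sketch of the arithmetic; HQPlayer’s internals may do it differently:

```python
from fractions import Fraction

DSD256_44 = 44_100 * 256   # 11,289,600 Hz (44.1k rate family)
DSD256_48 = 48_000 * 256   # 12,288,000 Hz (48k rate family)

def ratio(src_hz, dst_hz):
    """Reduced L/M conversion ratio; larger numbers mean a more awkward
    (and usually more expensive) rational rate conversion."""
    r = Fraction(dst_hz, src_hz)
    return r.numerator, r.denominator

print(ratio(44_100, DSD256_44))  # same family:  (256, 1)
print(ratio(48_000, DSD256_48))  # same family:  (256, 1)
print(ratio(44_100, DSD256_48))  # cross family: (40960, 147)
print(ratio(48_000, DSD256_44))  # cross family: (1176, 5)
```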

However, a chart would still be interesting to me. I’m going from memory of my testing last night, so treat these numbers as purely illustrative of the kind of data I’m after, taken from the highest utilizations I’ve seen across source material…
poly-sinc-ext2 is averaging ~30% utilization across the cores in use.
poly-sinc-ext3 is averaging ~80% utilization across the cores in use.
These are based on a consistent clock frequency.

Is it really expected that ext3 needs ~2.65x the CPU of ext2, or, put another way, that it has ~2.65x the complexity?

Again, I’m probably oversimplifying things here. Maybe we just need something like an HQP benchmark database, where the software collects machine info, runs some kind of stress test, scores it, and publishes a ranking?
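Purely a sketch of the idea, not anything HQP actually provides: gather basic machine info and time an FFT-convolution loop as a crude stand-in for upsampling load, then report a score that could be collected centrally.

```python
import json
import platform
import time

import numpy as np

def stress_score(fft_size=2**20, blocks=64):
    """Samples/second pushed through repeated forward/inverse FFT multiplies,
    a rough proxy for heavy filter work (not HQPlayer's actual math)."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(fft_size)
    h_spec = np.fft.rfft(rng.standard_normal(fft_size))
    t0 = time.perf_counter()
    for _ in range(blocks):
        np.fft.irfft(np.fft.rfft(x) * h_spec)
    return blocks * fft_size / (time.perf_counter() - t0)

report = {
    "machine": platform.machine(),
    "processor": platform.processor(),
    "python": platform.python_version(),
    "score_samples_per_sec": round(stress_score()),
}
print(json.dumps(report, indent=2))
```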

I’m probably just getting way too excited about HQP and my new build. Have a great day.

Theoretically about 4x, but in practice it is not so straightforward. For example, the amount, layout, and speed of the CPU caches matter, as do RAM speed, the number of cores, and the core structure (especially on AMD with its CCXs). The CPU vendor (Intel vs AMD) and the CPU generation matter as well.
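One quick way to see that cache/RAM dependence for yourself is to stream over working sets of increasing size and watch read throughput drop once the data no longer fits in cache. This is only a crude illustration; sizes and numbers are entirely machine dependent:

```python
import time
import numpy as np

def read_gbps(mib, repeats=20):
    """Rough read throughput when summing over a ~mib MiB float64 buffer."""
    a = np.ones(mib * 2**20 // 8)
    t0 = time.perf_counter()
    for _ in range(repeats):
        a.sum()  # read the whole buffer each pass
    return repeats * a.nbytes / (time.perf_counter() - t0) / 1e9

for mib in (1, 4, 16, 64, 256):
    print(f"{mib:4d} MiB working set: {read_gbps(mib):6.1f} GB/s")
```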
