What do most people use HQ player for?


For some reason my iFi nano won’t play at all using the DSD5v2 256 filter.

It plays fine with all the others but not the 256 ones.

Any ideas?


You can’t trust that display with 3.13b3. Jussi just changed how it works when he added the Auto selectors, and exposed a new way to get the actual output rate/type that reflects what is really going on.

I switched Roon to use the new mechanism last night, but that change hasn’t made it into a build (even on alpha) yet. It’ll go out whenever we make the next release.

OK, cheers Brian.

Sorted the DSD256 problem.

My iFi nano needed updating to the latest 5.0a Strawberry firmware in order to play DSD256 on Mac.

Playing fine now.

I am not hearing any difference between Roon alone and Roon with HQPlayer. My La Scala Aqua DAC is a NOS (non-oversampling) design, so maybe that makes a difference. I will experiment some more, but I think my trial period must be running out shortly.

Just to explain some major differences between how typical DAC chips work vs how HQPlayer processes signal…

When RedBook content is played, typical DAC chip processing path looks like this:
[2x 63-tap FIR] -> [2x 31-tap FIR] -> [2x 15-tap FIR] -> [16x sample&hold] -> [3rd order sigma-delta modulator]

So after the first stage the rate is 88.2k, after the second stage 176.4k, and after the third stage 352.8k. After that point, every sample is repeated 16 times (sample&hold, aka zero-order hold), reaching the 5.6 MHz rate before entering the modulator. Samples are either 24-bit integers in older/inexpensive chips or 32-bit integers in modern, more expensive chips. This processing means there are some notable rounding errors involved. There are also mirror images of the signal around every multiple of the 352.8k rate. Each FIR stage has about half the taps of the previous one, because it has half the number of master clock cycles to spend per sample period. All processing runs synchronously to the sampling rate.
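A quick sanity check of those rates, as plain arithmetic (just an illustration, not code from any DAC chip or from HQPlayer):

```python
# Rate cascade for RedBook (44.1 kHz) input through the typical
# DAC-chip path above: three 2x FIR stages, then 16x sample-and-hold.
rate = 44100
for factor in (2, 2, 2, 16):
    rate *= factor
print(rate)  # 5644800, i.e. the ~5.6 MHz rate entering the modulator
```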

If you input at the “2x” rate of 88.2/96, the first FIR stage is dropped; if you input at the “4x” rate of 176.4/192, the first and second FIR stages are dropped. If you can input at the “8x” rate of 352.8/384, all the FIR sections are dropped and only the remaining S&H and SDM are in use.

What you can do with HQPlayer is alternatively:
[128x polyphase or closed-form] -> [7th order sigma-delta modulator]
for DSD output. Or alternatively:
[8x polyphase or closed-form]
for PCM output straight into S&H stage or a ladder DAC (like Metrum for example).
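To give non-DSP readers a feel for what the sigma-delta modulator at the end of both chains does, here is a deliberately minimal first-order toy (an illustrative sketch only; HQPlayer's modulators are 5th to 7th order and far more sophisticated). It converts multi-bit samples into a one-bit ±1 stream whose running average tracks the input:

```python
def sdm_first_order(samples):
    """Toy first-order sigma-delta modulator: integrate the input,
    quantize the accumulator to one bit, and feed that bit back."""
    acc = 0.0
    bits = []
    for s in samples:
        acc += s                        # integrator
        bit = 1.0 if acc >= 0 else -1.0  # 1-bit quantizer
        acc -= bit                      # feedback of the quantized output
        bits.append(bit)
    return bits

# a constant input of 0.25 yields a bitstream whose mean is ~0.25
bits = sdm_first_order([0.25] * 1000)
print(sum(bits) / len(bits))
```

The quantization error is pushed to high frequencies (noise shaping), which is why the stream must run at MHz rates: the audio band stays clean while the noise lands far above it.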

Samples are 64-bit floating point (with 80-bit used where necessary). There are significantly more clock cycles available per sample due to the higher clock speed: a 4 GHz CPU clock vs a 25 or 50 MHz DAC clock. Processing is also asynchronous, so when necessary it can decide to go back in time and recalculate things if it thinks some adjustments need to be made; the availability of large amounts of RAM makes this possible in practice.


Thank you for the contribution Jussi. :relaxed:

Is there anyone that can translate this for the average Joe? That may as well be Greek to me. Or maybe this is content that loses meaning if translated… like you can’t simplify the Krebs Cycle very much. Generally you have the knowledge to be able to get it, or you don’t, and you’ll never understand it.

scolley -

Whatever you “feed” a DAC (standard 16/44.1 or “Redbook” / CD, 24/96, 24/192, etc.), most will convert it to a much higher sample rate once it’s inside the DAC. Before the output to analog (i.e., a sound you can hear), they do one last conversion from PCM > SDM (the same format as found on SA-CDs).

So there’s a lot of processing that goes on inside your DAC that may as well be a “black box” to you, since you have (basically) no control over it (NOTE: some DACs have a toggle or two that let you choose between some pre-baked filters, including minimum phase vs. linear phase, etc.). Hardware / software resources inside a DAC are limited, and most never get firmware upgrades to make the conversion better over time - so you’re stuck with whatever your DAC is doing inside.

If you’re happy with the way your DAC sounds, and never want potentially to improve the output, there is no need for HQPlayer. If, however, you want to try to make it sound better by doing the various conversions outside your DAC, with potentially better software and certainly more processing hardware, HQPlayer lets you do all the processing on your computer and send the final result to the DAC, more or less completely bypassing the internal processing of the DAC.

For those of us who also are into digital photography, this is like using an external processor to convert raw files for final output instead of just accepting the pre-baked JPEG the camera spits out.

There’s certainly a lot more to it (e.g., which filters to choose, which modulator to choose, etc.), but I like the idea I can choose my own settings (among the generous selection Jussi provides, of course) and get better sound out of my DACs. From personal experience, the sound is much better on both my LH Labs Pulse X Infinity and my iFi Micro iDSD.


Thank you John. :relaxed:

In my limited understanding, whether or not this is the case is very much dependent on the DAC?

I don’t fully understand the technicalities of DACs and (up)sampling in Jussi’s explanation, but I have to say I’m quite surprised that there’s a ‘typical’ processing path, bearing in mind the huge range of costs and brands of DACs. As an end user, I guess I just assumed they all had some sort of ‘unique selling point’ or revolutionary design!

hifi_swlon -

As I understand it, most DACs are actually exactly the same in terms of processing “path” - the only real difference is the filters used along the way and the analog components the designers choose.

That said, there are some DACs (like the Chord Hugo / Mojo, etc.) that do things differently (the Chords use programmable hardware instead of traditional DAC “chips”), and your mileage may vary with these. Jussi suggests just sending the highest-rate PCM these will accept instead of sending DSD.

OK - that got my attention… a key tidbit excluded from the prior “translation” it sounds like.

So John, you’re saying that Jussi said that in his post yesterday? If so, that would mean NOT sending things as DSD, as recommended in the following (quoting andybob, post 9, topic 7393):

“…Kick-Start Guide by Geoffrey Armstrong…”

If so, that’s a pretty important detail, and quite a departure from what appears to have been recommended here thus far, if I’m following things correctly; particularly the recommendations by Brian.

Not trying to put you on the spot, but just to make sure that’s what Jussi was saying, as it appears to be a significant departure from prior recommendations here.

scolley -

The usual recommendation is to send DSD, to almost all DACs.

Only in very special cases (e.g., the Chords) does he recommend PCM instead - see this quote from a post at Computer Audiophile:



Thank you for the clarification. :relaxed:

If the DAC uses some off-the-shelf DAC chip, the approach they take is almost surprisingly systematic across the board, with only small variations. There are of course differences in particular filter designs, and even more so in the actual D/A-stage implementation, but the high-level architecture is very common and similar.

For those DACs that don’t use DAC chips from the typical sources such as Texas Instruments (Burr-Brown), Analog Devices, Cirrus Logic, Wolfson Micro, Asahi-Kasei or ESS, but instead use some custom implementation, the optimal case can be determined on a DAC-by-DAC basis.


Thanks for posting about this.

[2x 63-tap FIR] -> [2x 31-tap FIR] -> [2x 15-tap FIR] -> [16x sample&hold] -> [3rd order sigma-delta modulator]

When I first learned that this is what it looked like in prosumer level DACs, I was unhappy. Especially the widespread use of S&H.

[128x polyphase or closed-form] -> [7th order sigma-delta modulator]

Is there any benefit to polyphase resampling over “traditional” zero-stuffing + FIR interpolation when the sample rates are an integer ratio?

(I think the answer is no; that polyphase is just an implementation technique for performing rational rate conversions efficiently, and that it’s mathematically equivalent to the above in the integer cases. But I’ve seen enough people talking about it as if it’s “better” to make me wonder if I’ve missed something.)
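For what it's worth, the equivalence assumed in the question can be checked numerically. A sketch assuming numpy, with made-up function names: the polyphase version splits the filter into L short subfilters and skips every multiplication by a stuffed zero, but produces the same samples.

```python
import numpy as np

def upsample_zero_stuff(x, h, L):
    """Textbook interpolation: insert L-1 zeros between input samples,
    then low-pass filter with FIR h."""
    up = np.zeros(len(x) * L)
    up[::L] = x
    return np.convolve(up, h)[:len(x) * L]

def upsample_polyphase(x, h, L):
    """Same mathematics, reorganized: split h into L short subfilters
    h[k::L], run each on x directly, and interleave the branch outputs.
    Roughly L times cheaper, because no stuffed zero is ever multiplied."""
    y = np.zeros(len(x) * L)
    for k in range(L):
        y[k::L] = np.convolve(x, h[k::L])[:len(x)]
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = np.hanning(32)         # any 32-tap FIR; a real design is a windowed sinc
a = upsample_zero_stuff(x, h, 4)
b = upsample_polyphase(x, h, 4)
print(np.allclose(a, b))   # the two outputs agree to float rounding
```

So for integer ratios the benefit is purely computational; the fractional-ratio case is where polyphase enables things plain zero-stuffing cannot do efficiently.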

Processing is also asynchronous, so when necessary, the processing can decide to go back in time and recalculate things if it thinks that some adjustments need to be made, availability of large RAM enables this in practice.

Which of the DSP settings in HQPlayer take advantage of large amounts of history like this? Is it the adaptive SDMs?

(I know that SDM implementations incorporating feed-forward/feed-back and IIR filtering both take into account “infinite” previous state if you ignore the numerical limitations–but I don’t think that’s what you’re talking about, since neither require a lot of RAM to do so).


I don’t understand the specifics but it seems convoluted regardless.

What’s the advantage of this rather than doing it in a single step?

Is there a well-explained ‘beginner’s guide’ to sampling on the web that anyone can point to? I looked up Nyquist and the sampling theorem in a few places, but not surprisingly things get heavily mathematical really quickly, without really explaining the process in broad strokes. I’ve got a physics degree, but that was a loooong time ago and my maths hasn’t been practiced in a while.

To be totally honest, I’m still completely confused as to why turning digital data into an analogue audio signal is so complex and contains so much error that it requires complex filtering. Surely the numbers represent part of a frequency/amplitude wave and can be directly extracted? At least that’s how it seems in my mind.

What am I missing?

PS there are some clever people here :wink:
I’d like to think I know a lot about my field, but in this one I feel like a four year old.

It is faster and can reach quite impressive speeds while at the same time allowing fractional ratios. In certain cases there are differences, but for generic use cases the practical functionality is similar.

Most cases utilize the history/lookahead and rework possibilities to some extent, but adaptive SDMs are the most flexible.


Especially in DAC chips, where the precision is limited, rounding errors accumulate across generations as the signal passes through multiple filters. The DSP pipeline cannot keep accumulating bits endlessly, so it has certain truncation points; usually there is at least one between filter stages.

In other words, the source information used for recursive steps is not first-generation information, but the (already truncated) results of earlier steps.
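The generational-loss effect is easy to demonstrate numerically. A toy sketch (an assumed 8-tap averaging filter and a hypothetical `truncate()` helper, not any real DAC's pipeline): truncating to 24 bits between three cascaded stages leaves more error than truncating once at the end.

```python
import numpy as np

def truncate(x, bits):
    """Hypothetical stand-in for the truncation point between a DAC
    chip's filter stages: keep only `bits` bits of precision."""
    scale = 2.0 ** (bits - 1)
    return np.floor(x * scale) / scale

rng = np.random.default_rng(1)
x = rng.uniform(-0.5, 0.5, 4096)
h = np.ones(8) / 8             # toy 8-tap averaging FIR, not a real DAC filter

ref = x                        # float64 end to end: first-generation result
chip = x                       # truncated to 24 bits after every stage
for _ in range(3):
    ref = np.convolve(ref, h, mode="same")
    chip = truncate(np.convolve(chip, h, mode="same"), 24)

once = truncate(ref, 24)       # a single truncation at the very end
err_chip = np.max(np.abs(chip - ref))
err_once = np.max(np.abs(once - ref))
print(err_once <= err_chip)    # True: per-stage truncation accumulates error
```

This mirrors the HQPlayer argument above: doing the whole cascade in 64-bit float and quantizing once avoids handing each stage a second-generation input.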

It would be much easier if someone hadn’t chosen to optimize data storage to the maximum extent. RedBook (CD) was defined to be very tightly bounded to the specified limits of human hearing in order to save as much storage space as possible. The downside is that it is quite challenging to accurately reproduce the original information from such an extremely bounded package. It is a sort of challenge of how close to the theoretical limits one can get in a practical implementation.

So in a way, hi-res is a much easier case to deal with. Perfecting RedBook playback has taken a very long time, and it is still an ongoing effort.

When the theoretical mathematical formulas contain infinity symbols, you know that you are going to face certain challenges in a practical (real-world) implementation… :wink:
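One concrete example of those infinity symbols: the Whittaker-Shannon reconstruction formula sums a sinc term for every sample, out to infinity, so any practical filter has to truncate the sum. A toy numpy sketch (illustration only) showing the truncation error shrink as more neighbouring samples are included:

```python
import numpy as np

fs = 44100.0
n = np.arange(512)
x = np.sin(2 * np.pi * 1000.0 * n / fs)   # a sampled 1 kHz tone

def reconstruct(x, t, fs, taps):
    """Whittaker-Shannon interpolation truncated to `taps` samples on
    each side of time t; the exact formula needs infinitely many."""
    k = int(np.floor(t * fs))
    idx = np.arange(k - taps, k + taps + 1)
    return float(np.sum(x[idx] * np.sinc(t * fs - idx)))

def mean_error(taps):
    """Average |error| at points halfway between samples."""
    errs = [abs(reconstruct(x, (k + 0.5) / fs, fs, taps)
                - np.sin(2 * np.pi * 1000.0 * (k + 0.5) / fs))
            for k in range(200, 312)]
    return sum(errs) / len(errs)

print(mean_error(8) > mean_error(128))    # True: more taps, smaller error
```

Getting that residual error below audibility, in real time and within a finite tap budget, is exactly the hard part of the ongoing effort described above.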
