Why do audiophiles like HQ Player?

The WM8524 is a good target for upsampling because its internal digital filter is performance-limited, having only -50 dB stop-band attenuation and 0.1 dB pass-band ripple.
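
For a sense of scale, here is a minimal sketch (scipy, with illustrative tap counts and band edges of my own choosing, not HQPlayer's actual designs) of how far past those -50 dB / 0.1 dB figures a software upsampling filter can go:

```python
import numpy as np
from scipy import signal

fs = 44100                # input rate
L = 8                     # upsample 44.1 kHz -> 352.8 kHz
taps = 4095               # illustrative; software can afford long filters

# Kaiser-windowed low-pass prototype for an interpolate-by-8 filter
h = signal.firwin(taps, 21000, window=("kaiser", 14), fs=L * fs)
# (for actual interpolation the coefficients would be scaled by L)

w, H = signal.freqz(h, worN=1 << 16, fs=L * fs)
mag = 20 * np.log10(np.abs(H) + 1e-300)
pb = mag[w <= 20000]                     # pass-band: up to 20 kHz
sb = mag[w >= 22050]                     # stop-band: above old Nyquist
print("pass-band ripple      : %.5f dB" % (pb.max() - pb.min()))
print("stop-band attenuation : %.1f dB" % -sb.max())
```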

In most cases, the poly-sinc family is a good starting point. Apart from poly-sinc-hb, all the poly-sinc filters are apodizing and can thus deal with the ringing introduced by typical decimation filters during the production phase of the recording, giving more consistent recording-to-recording performance.
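
A rough sketch of the apodizing idea (illustrative band edges and stock scipy filters, standing in for both the production-side filter and the apodizer): the ringing of a sharp production-side filter is concentrated just below fs/2, so a filter whose stop-band starts below fs/2 can absorb it.

```python
import numpy as np
from scipy import signal

fs = 44100
# Stand-in for a production-side filter: cutoff right at Nyquist,
# so its impulse response rings at ~22 kHz.
prod = signal.firwin(255, 22000, fs=fs)
# Apodizing stand-in: stop-band already in force below 22 kHz.
apod = signal.firwin(511, 20500, width=1500, fs=fs)

cascade = np.convolve(prod, apod)   # production filter seen through apodizer

# How much ringing-frequency content survives near the old Nyquist:
for name, h in (("production filter alone", prod),
                ("after apodizing filter ", cascade)):
    w, H = signal.freqz(h, worN=[22000.0], fs=fs)
    print("%s: %7.1f dB left at 22 kHz" % (name, 20 * np.log10(abs(H[0]))))
```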

Minimum-phase variants are usually good for multi-tracked studio pop/rock recordings that have a lot of fast transients like drums and percussion, while linear-phase variants are usually good for classical music or other content recorded in natural acoustics with minimal miking techniques.
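
A minimal sketch of that trade-off (stock scipy filters, not HQPlayer's): a linear-phase low-pass rings symmetrically before and after a transient, while its minimum-phase counterpart pushes essentially all ringing after it. Note that scipy's minimum_phase returns a roughly half-length approximation, so this is qualitative only.

```python
import numpy as np
from scipy import signal

h_lin = signal.firwin(255, 0.45)         # linear-phase prototype
h_min = signal.minimum_phase(h_lin)      # minimum-phase counterpart

for name, h in (("linear phase ", h_lin), ("minimum phase", h_min)):
    peak = int(np.argmax(np.abs(h)))
    pre = np.sum(h[:peak] ** 2)          # "pre-ringing" energy before peak
    post = np.sum(h[peak + 1:] ** 2)     # ringing energy after the peak
    print("%s: pre = %.2e, post = %.2e" % (name, pre, post))
```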

You can also try the closed-form filter, which is quite special and non-apodizing by definition, in some ways similar to poly-sinc-hb. As a result it may sound good on good recordings while not improving not-so-good ones as much.

Which variant of the filter is best for you depends on your personal preferences and on what properties of the sound your hearing puts most emphasis on.

For those Meridian aficionados that I know are lurking out here (takes one to know one I guess), this HQPlayer thing is very interesting:

  • Meridian applies an apodizing filter to incoming 44.1k signals and converts it up to 96k.
  • If you feed it 96k it will bypass the apodizing filter (in the display the k becomes K).
  • HQPlayer basically allows you to choose your apodizing filter, to your liking.
  • You can easily compare whether you like Meridian’s apodizing better than the best-sounding (for you) filter in HQPlayer.

So, to the ‘Why do audiophiles like HQPlayer?’ question, my translation is: the digital world is not perfect (until the advent of MQA perhaps :angel: ), and to take the ‘edges’ (i.e. the ringing mentioned above) off, filtering is needed.

I find it very rewarding to play with (and that on a simple Raspberry Pi!) and would recommend just trying it. For most, the cost of this is insignificant compared to the rest of the set-up, with, in my case, lots to gain. YMMV of course.

@rovinggecko
Thanks for the insight. Slight correction… your Meridian gear upsamples 2x, so your 44.1 becomes 88.2, 48 becomes 96, etc.

I think some audiophiles like HQP because some audiophiles like to tweak.

Nothing wrong with that. Given that the focus is now more in the digital realm, it is no surprise that tweaks become more digital and DIY.

MQA is just another delivery format (codec); it doesn’t change anything in the ADC or DAC…

From 96k there is still a long way to go to analog, with a lot of DSP on the way.

HQPlayer focuses on optimizing the D/A conversion process (implementation) for whatever the source or delivery format may be. In the big picture, my take is that DAC hardware should be doing just that, converting digital to analog, and not doing any DSP at all, because it will always be resource-constrained for DSP. It may have been a good idea in the ’80s and early ’90s when computers were not used as the primary source and didn’t have as powerful processing capabilities as they do now. When you are upsampling to DSD, HQPlayer is doing billions of calculations per second. DAC chips don’t have such processing capabilities, so they cut corners in many ways:

  • Using a low oversampling factor for the digital filters, reaching only a 352.8/384k digital filter output rate
  • Due to resource and precision constraints, low stop-band attenuation and high pass-band ripple
  • Using sample-and-hold oversampling to go up from there, causing image frequencies (distortion) around multiples of 352.8/384k (a quick sketch after this list puts numbers on this)
  • Limited-precision fixed-point arithmetic DSP pipelines
  • Using low-order, simplified delta-sigma modulators
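
To put rough numbers on the sample-and-hold point (numpy only, with the rates assumed above): repeating each 352.8 kHz sample 16x to reach 5.6448 MHz leaves images of the signal around every multiple of 352.8 kHz instead of removing them.

```python
import numpy as np

fs_in = 352800                       # DAC chip's digital filter output rate
hold = 16                            # sample-and-hold factor to 5.6448 MHz
fs_out = fs_in * hold

t = np.arange(4096) / fs_in
x = np.sin(2 * np.pi * 10000 * t)    # a 10 kHz tone
y = np.repeat(x, hold)               # zero-order hold: just copy samples

spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
f = np.fft.rfftfreq(len(y), 1 / fs_out)
for target in (10e3, fs_in - 10e3, fs_in + 10e3):   # tone and its images
    i = int(np.argmin(np.abs(f - target)))
    print("%9.0f Hz: %6.1f dB" % (f[i], 20 * np.log10(spec[i] / spec.max())))
```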

To complement that, I published the DSC1 open hardware DAC design, which doesn’t have any DSP and is a pure discrete DSD DAC (no off-the-shelf DAC chip).

So for a modern DAC there are two major DSP parts in play: the digital filters and the modulator. HQPlayer aims to replace both with a solution that is not limited by DSP resources. For DAC chips, DSD mode cuts the shortest path through.
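
The filter half has been sketched above; here is a correspondingly minimal sketch of the modulator half. This is a plain first-order loop to show the principle; real modulators (and HQPlayer's, per the text above) are higher-order and dithered.

```python
import numpy as np

def dsm1(x):
    """First-order delta-sigma: 1-bit output; the feedback integrator
    pushes quantization noise away from the audio band."""
    y = np.empty(len(x))
    integ = 0.0
    prev = 0.0
    for n in range(len(x)):
        integ += x[n] - prev             # integrate error vs. fed-back bit
        prev = 1.0 if integ >= 0 else -1.0
        y[n] = prev
    return y

fs = 44100 * 64                          # DSD64 bit-stream rate
t = np.arange(1 << 16) / fs
bits = dsm1(0.5 * np.sin(2 * np.pi * 1000 * t))
print("levels in the stream:", np.unique(bits))       # just -1 and +1
# Even a crude moving-average low-pass recovers the 1 kHz tone:
rec = np.convolve(bits, np.ones(256) / 256, mode="same")
print("recovered peak level: %.2f (tone is 0.50 plus residual noise)"
      % np.abs(rec).max())
```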

@jussi_laako, thanks for elaborating. One thing that I’m curious about is the potential for the high CPU and RAM activity of a tool such as HQPlayer processing the audio stream to introduce electrical noise into the stream. How does that reconcile with the views of those (with and without commercial interests) who insist the computer should in effect run in limp mode, with all non-essentials turned off, real-time kernels, etc.?

That effect largely depends on how well the DAC is isolated and how the computer itself is implemented. One can also use various USB isolator products (iFi, AudioQuest, UpTone).

But there is a solution to address precisely that, and it is called Network Audio Adapter, or NAA for short. It is a small software module with an asynchronous FIFO buffer between HQPlayer and the DAC. Copper ethernet is transformer-isolated by specification, and for maximum isolation one can utilize optical ethernet. It also allows running HQPlayer server in a different room than where the NAA + DAC is used. (Combine that with Roon remote control and you have a great distributed system!)
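
This is not the actual NAA protocol (which isn't publicly documented here), just a toy sketch of the FIFO-decoupling idea it describes: a network receiver whose playback side drains an asynchronous buffer at its own pace, so DAC timing is decoupled from network arrival times. The port, chunk size and placeholder sink are all illustrative.

```python
import queue
import socket
import threading

FIFO = queue.Queue(maxsize=64)          # asynchronous buffer between sides
CHUNK = 4096                            # bytes per network read (illustrative)

def receiver(port=9999):
    """Network side: accept audio data and push it into the FIFO."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        FIFO.put(data)                  # blocks if the DAC side falls behind

def dac_writer(play):
    """DAC side: pull from the FIFO at its own pace. `play` stands in for
    whatever actually writes to the audio device (e.g. an ALSA binding)."""
    while True:
        play(FIFO.get())

threading.Thread(target=receiver, daemon=True).start()
dac_writer(lambda chunk: None)          # placeholder sink for the sketch
```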

NAA can run on a low-power PC (Atom or similar CPU) with Linux (preferable) or Windows, or on a Mac. But going further, it can run on various small, very low-power ARM devices like the CuBox-i, BeagleBone Black or Raspberry Pi (or many, many other similar ones). Usually these run a very stripped-down Linux with only the NAA service running and can be powered from a high-quality linear power supply.

Well, if you are talking about the compression part of MQA which will be used by Tidal et al., then I agree. However, if you are talking about newly captured & mastered audio utilizing the MQA time-correction process, then it is my understanding that part of the process is to include ADC information so that time correction can be done as part of the MQA DAC process (like the SAM configuration in a Devialet for your speakers, when it corrects the time issues in the crossover). In this case any additional processing will potentially negate the advertised benefits of MQA Mastering.

What do I need if I want to measure this MQA stuff myself to verify that it works from ADC to DAC? So far I have not seen any proper technical description of MQA, just advertising babble. But I know at least one DAC that claims to support it, and I know it uses a stock ESS Sabre DAC chip, so I know exactly what the DAC chip is going to do with the data.

What is the input of the MQA codec? Modern ADCs are delta-sigma designs, so the native output from the A/D stage is most typically a 5.6 MHz bit-stream that is converted to PCM using a digital decimation filter. The apodizing filters in HQPlayer are designed to correct the problems from these filters.
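
A small sketch of that A/D signal path, with a stock scipy decimator and a crudely dithered 1-bit stand-in for the modulator (not any real ADC's filter chain): a ~5.6 MHz bit-stream is low-pass filtered and decimated down to 44.1 kHz PCM.

```python
import numpy as np
from scipy import signal

fs_bit = 5644800                         # modulator bit-stream rate
decim = 128                              # 5.6448 MHz -> 44.1 kHz PCM

t = np.arange(1 << 16) / fs_bit
analog = 0.5 * np.sin(2 * np.pi * 1000 * t)          # "input" to the ADC
dither = np.random.uniform(-1, 1, len(t))
bits = np.where(analog + dither >= 0, 1.0, -1.0)     # crude 1-bit stream
# (see the delta-sigma sketch earlier in the thread for a proper modulator)

pcm = signal.resample_poly(bits, up=1, down=decim)   # decimation filter
print("in : %d 1-bit samples at %d Hz" % (len(bits), fs_bit))
print("out: %d PCM samples at %d Hz" % (len(pcm), fs_bit // decim))
```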

What is the output format of the MQA codec? If it is, for example, 96 kHz PCM, then certainly there will be further processing before the signal is converted to analog.

All modern DAC chips are delta-sigma designs, meaning that PCM input needs to be converted to a high-speed bit-stream. As said earlier, DAC chips typically convert PCM input first to 352.8 or 384 kHz PCM using a digital filter of varying quality and then use stupid sample-copying (sample-and-hold/zero-order-hold) to take the rate up to the typical 5.6/6.1 MHz speed for the delta-sigma modulator to produce the bit-stream for the actual D/A conversion process.

What HQPlayer is doing is not adding any additional processing to the chain, but replacing the processing performed by the DAC chip with a better implementation done in software. So HQPlayer performs high-quality digital filters taking the sampling frequency straight to 5.6/6.1 MHz, or even higher to 11.3/12.2 or 22.6/24.6 MHz, without any quality-compromising sample-and-hold stages, and then converts it to a bit-stream for the actual D/A conversion using a high-quality dithered delta-sigma modulator.
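
Putting the earlier sketches together, this is the shape of that software chain, with stock scipy parts standing in for HQPlayer's own filters and modulator: one clean polyphase upsample straight to the bit-stream rate, then delta-sigma modulation, and no sample-and-hold stage anywhere.

```python
import numpy as np
from scipy import signal

fs_in = 44100
ratio = 128                                  # 44.1 kHz -> 5.6448 MHz (DSD128)

t = np.arange(4096) / fs_in
x = 0.5 * np.sin(2 * np.pi * 1000 * t)

hi = signal.resample_poly(x, up=ratio, down=1)   # filtered interpolation
# bits = dsm1(hi)   # the modulator sketched a few posts up would go here
print("PCM in: %d Hz  ->  modulator in: %d Hz" % (fs_in, fs_in * ratio))
```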

So if any further processing would negate the benefits of MQA Mastering, then certainly that is going to happen in all modern DACs. If you really want to get rid of the filtering effects and want a straight digital path from the actual A/D process to the D/A process, you need to use native DSD recordings. Any PCM will have processing at both the ADC and DAC side, and that is unavoidable.


Very interesting Jussi, and consistent with Daniel’s prediction that we will need to be sold further versions of recordings in order to achieve the 10ms advertised benefit of MQA.

If Tidal/Roon implement a software MQA decoder then I expect we might still benefit from feeding HQP a 192kHz input.

Now this intrigues me. Is there a turnkey solution for this with a BeagleBone Black?

Oh, and how does this implementation differ from what RoonSpeakers will do?

There is a ready-to-use microSD image for the CuBox-i. A similar one is coming for the BeagleBone Black (and maybe RasPi too) when I find time to make a build…

Thanks. Keep us posted.

Hi @fritzg,

Are you asking about the differences between RAAT and an NAA? Here are some similarities and differences as I understand them:

Operationally, the NAA is seen only by HQP and a RAAT device only by Roon.

Because HQP does multi-channel (non-Roon input) and room EQ with a convolution engine, an NAA can output multi-channel and convolved streams. Roon doesn’t have those features (yet).

Similarly Roon can send a volume normalised and cross-faded stream to a RAAT device, which are not features within HQP.

A Roon Core can send multiple audio streams to different RAAT devices and group devices together to send a synchronised stream. I understand HQP outputs only a single audio stream to a single NAA.

Both are relatively thin clients, but I believe the NAA will be thinner than RoonSpeakers.

I suppose it’s fair to say NAA has been available for some time (even if not obvious/straightforward to set up on all platforms), and RAAT hasn’t been released yet, so at this point in time NAA is your only option for such a network endpoint? Today at least.

I’m hoping RAAT will be aimed at the less technical user - i.e. download something that can be put onto an SD card and shoved in the Pi and you’re done.

Hopefully it won’t be too long before Roon has these features :wink: I would love to be able to do room correction and stay within the Roon Ecosystem. But I know resources are limited.

As I understand MQA, it is not really a codec. Sure there is a codec involved, but the unique MQA folding and unfolding does not involve filtering. You start with higher-resolution digital data (which has been created in an ADC involving a filter), then it does the folding which does not involve filtering, giving you a 44 or 48k data stream. At the other end, you do the reverse process, unfolding (without a filter) to recreate the high-res data, then put that through a DAC with its filter.

The distinction is not just nitpicking. I think it means that any processing of the data stream prior to unfolding would destroy its MQA-ness. This would include upsampling (with filtering) in HQPlayer. It would also include processing that is often done in processors, such as bass management and room correction.

(I have been wondering how Meridian will combine its room correction with MQA processing: it seems to me they have to MQA-unfold first, in the processor, and then do room correction and other stuff on the high-res data stream, and I don’t think current Meridian 8xx processors have the processing power to do room correction on 352k data. Curious.)

So if I am right in my understanding, daisy-chaining Roon-HQPlayer-DAC would be a problem with MQA data if the DAC does the MQA unfolding. If Roon does the MQA unfolding, it would work, although the need for really sensitive filtering is less critical on high-res data than on CD-res.

We all hope we will get more info about MQA at CES next week.

First, I am not using HQPlayer and have no opinion on how well it works.

Just an observation on the idea of using a general-purpose computer with greater compute power than the hardware in a DAC: this is not necessarily consistent with the current evolution of compute architectures. We see a lot of special-purpose processors in modern systems, from mobiles to servers. In mobiles the goal is better battery life, and we see it for image, audio and video processing. And both PCs and game consoles have GPUs.

We also see a lot of it emerging in servers in cloud datacenters: the main purpose is just performance, but power consumption matters in a datacenter as well. For example, there is a lot of interest in hardware acceleration for crypto; a general-purpose x86 chip has crypto functions, but even so, all the crypto that we want to do in the post-Snowden era would be a big load on the general purpose processors, and would add a lot of power consumption. Add 10% on top of a 30 MW datacenter, that’s a lot of extra wiring and backup generator capacity…

Intel has long been married to the single general-purpose processor, while the ARM ecosystem is focused on lots of specialized processors. But Intel just spent $16.7 billion on their biggest-ever acquisition, of a company specializing in custom processors–that’s a powerful statement about the centrality of asymmetrical multi-processing, of custom hardware.

Obviously HQPlayer provides value rooted in Jussi’s intellectual property, and doing it in a computer gives more flexibility and agility than doing it in a DAC. But I’m not sure the power argument holds up.

To me it is very similar to HDCD encoding, which I would call a codec… I could also implement my own variant of a similar thing, but I’m not too worried about bandwidth usage and have no problem streaming hi-res DSD or FLAC over the internet, so I don’t bother. 4K Netflix is not an issue, so I don’t see how audio would be either.

I get a minimum of 50 Mbps over 4G/LTE here. For local playback there are terabytes of space and gigabit ethernet. As with HQPlayer’s processing, Moore’s law will fix space/bandwidth limitations if any exist.
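
A quick back-of-the-envelope check of that claim: raw (uncompressed) stereo DSD rates against the 50 Mbps link mentioned above.

```python
# bits per second per channel for each DSD rate
rates = {
    "DSD64":  44100 * 64,
    "DSD128": 44100 * 128,
    "DSD256": 44100 * 256,
}
for name, bps in rates.items():
    stereo_mbps = 2 * bps / 1e6
    print("%-7s stereo: %6.2f Mbps (fits in 50 Mbps: %s)"
          % (name, stereo_mbps, stereo_mbps < 50))
```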

Now HQPlayer is intended to replace the DAC’s filter, so any decoding should happen before HQPlayer.

Why do you think upsampling (oversampling) in HQPlayer would destroy something vs. the upsampling (oversampling) in the DAC?

The only DAC I ever run in PCM mode is the Metrum Musette, since it doesn’t support anything else. All other DACs run fixed in DSD mode at the highest possible rate.

In the current setup I believe it would be Roon doing the decoding part.

Yeah, we agree: the MQA unfolding happens before the DAC.

  • Inside Meridian equipment (or any other licensee), that is taken care of by the hardware design.
  • If you’re running Roon or some other player straight to a DAC, there are two options: with an MQA-capable DAC, Roon doesn’t have to be aware of the MQA stream, as long as it is bit-perfect; to support MQA on non-MQA DACs, Roon would have to do the MQA unfolding and the DAC has to support the high data rate.
  • For daisy-chaining like Roon-HQP-DAC, Roon has to do the unfolding.

I don’t really know that MQA-ignorant upsampling in HQP would destroy the MQA data stream, but it seems inevitable based on the descriptions they have published.

As much as MQA-ignorant upsampling in any of the DAC chips on the market today would. I know of only one DAC so far that is promising to have MQA decoding between the input and the ESS Sabre DAC chip.

That is practically no different from the HDCD-decoding DACs of the past. The biggest difference I see is that HDCD decoding was available as a chip solution, while MQA is licensed IP that needs to be purchased as a software module by each DAC manufacturer. I’m not aware of any DAC chip that would have MQA decoding.

So far I’ve heard only about packing 88.2/96k PCM content in a semi-lossy way into a 44.1/48k 24-bit container. But how about 192/24 content, or 352.8/24 content, or DSD64/DSD128/DSD256 content?

Can you elaborate on this? I was under the impression many DACs, including mine, do not run in DSD mode.