MQA first unfold in Roon? [Delivered in 1.5]

What happens when you use Roon DSP and render as DSD? When I first heard MQA on Tidal I thought it sounded great, and I was all for having it in Roon, since I can get it if I turn off all DSP in Roon. So to me there is a conflict between Roon DSP and MQA. Then I tried replicating it with DSP: upsampling PCM to 192kHz, or playing everything as DSD128.

To me - and I appreciate cognitive bias is a factor - I can’t tell the difference between MQA and the Roon upsampling. I am sure there is a difference - the point is that I am so happy with the upsampled audio that I don’t miss MQA. My kit is still capable of playing it - I just choose not to. MQA is like a toy for me - don’t have it and you want it; have it and you get bored of it.

So my question for the smarter folks (and to Chris’s point about DSD content): what happens when I use Roon to play non-DSD content as DSD or as upsampled PCM? Because it sounds great to me :slight_smile:

Modern DSP hardware is capable of processing 1-bit DSD natively, without converting to PCM (DXD).

http://www.jamminpower.com/pdf/dsd%20editing%20system.pdf

In the early days of SACD, Philips and Sony used ‘8-bit-wide’ DSD sampling at 64fs, but all of this can now be done natively in a modern DSP chip.
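To illustrate the idea (a rough sketch, not Philips’ or Sony’s actual pipeline): you process in a multi-bit intermediate at the 64fs DSD rate, then requantise back to 1 bit with a delta-sigma modulator. A first-order version in Python:

```python
def remodulate_1bit(samples):
    """First-order delta-sigma requantiser: turn multi-bit samples
    (roughly in [-1, 1]) at the 64fs rate back into a +/-1 bitstream."""
    bits = []
    acc = 0.0  # running quantisation-error accumulator
    for s in samples:
        acc += s
        bit = 1.0 if acc >= 0.0 else -1.0
        bits.append(bit)
        acc -= bit  # feed the quantisation error back into the loop
    return bits

# Hypothetical DSP step done in the multi-bit domain: a simple gain,
# producing a constant 0.25 here just for demonstration.
processed = [0.25] * 1000
stream = remodulate_1bit(processed)
# The 1-bit stream's average tracks the multi-bit value (~0.25).
```

A real DSD pipeline would use a higher-order modulator with proper noise shaping; this only shows that ‘wide’ intermediate processing plus remodulation keeps everything at the DSD rate, with no drop to PCM/DXD rates.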

And this has been widely adopted?

I’m not sure about commercial applications, but I’m stating that it is possible to process 1-bit DSD natively.

Guys, I apologise for going off track - please stick to the subject. Thanks!

That white paper was from Sony and there is no date. My concern is whether or not there are sonic consequences of doing what they say.

Or DSD transferred from an analogue master, with the mastering performed in the analogue domain.

Good use case, but not one that is going to have any significance in the near future, I think. Which is why figuring out how best to handle MQA is important, because love it or loathe it, that’s where the studios now are.


Post the first unfold, it is just upsampling - Bob Stuart said so in one of his video explanations. However, this upsampling is done with MQA choosing one of a set of possible upsampling filters (32 filters total? I don’t recall exactly). This filter choice is the one added piece.

Bear in mind that any unfolding to original data would be accomplished precisely using a DSP filter (as per the paper that I’ve posted far too often). Whether you choose to believe that there is actual unfolding beyond the MQA Core is another matter.

The first unfold truly delivers more info - and is the same across the board.

The next (rendering) stage uses pre-existing upsampling filters in DACs, possibly with specific parameter choices (think of the iZotope parameters in Audirvana, for example). It is possible some DACs (e.g. Meridian’s) have added specific tuning in these filters, but in most cases this is not so.

For example, in the Dragonfly case, MQA is implemented at the controller level: the controller chip interprets the MQA filter selection and maps it to the closest possible filter in the ESS DAC. There is NO change in the ESS DAC chip itself that is MQA-specific.

What I’m trying to say (and believe) is that these pre-existing up-sampling filters also do the subsequent unfolding to get back to the original samples. The filters may, of course, differ from product to product, because they are probably a convolution of a number of filters, some of which are hardware/implementation-specific.
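The “convolution of a number of filters” point is standard linear-systems math: a cascade of FIR filters behaves identically to a single filter whose taps are the convolution of the individual tap sets. A quick numpy check (the taps and signal here are random placeholders, not real MQA or DAC coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)        # stand-in for the decoded stream
h_generic = rng.standard_normal(8)  # stand-in: generic upsampling filter
h_tuning = rng.standard_normal(8)   # stand-in: hardware-specific tuning

# Running the two filters in series...
cascaded = np.convolve(np.convolve(x, h_generic), h_tuning)
# ...is the same as running one pre-combined filter once.
combined = np.convolve(x, np.convolve(h_generic, h_tuning))
print(np.allclose(cascaded, combined))  # True
```

So from the listener’s side it makes no difference whether a product ships one merged filter or chains several; the output is the same.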

Ok, got it, understand what you say. But there’s no way to algorithmically recover information that is not there at all… :slight_smile:

There is if it’s been encoded with reversible multi-sample encapsulation during the down-sampling process.

Jeezus, that’s a lot of jargon. No, there isn’t any real information. Think about it this way: any content above 48kHz - the Nyquist frequency of a 96kHz stream - is completely lost. Imagine two tracks whose PCM data are identical below 48kHz but whose original high-rate masters differ above it. The upsampling filter can only see the data below 48kHz, so it will reconstruct the same exact upsampled output for both.
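You can verify this directly: FFT-based 2x upsampling of a 96kHz stream puts exactly nothing above 48kHz, because the low-rate data contains nothing above 48kHz to begin with. A small numpy sketch (tone frequency and lengths are arbitrary choices for illustration):

```python
import numpy as np

fs, n = 96_000, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 12_000 * t)  # a 12 kHz tone sampled at 96 kHz

# 2x FFT upsampling: copy the spectrum, leave everything above the
# old Nyquist (48 kHz) as zeros, and transform back at 192 kHz.
X = np.fft.rfft(x)
Y = np.zeros(n + 1, dtype=complex)  # rfft length for 2n samples
Y[: X.size] = X
y = np.fft.irfft(Y, 2 * n) * 2      # rescale for the doubled length

spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(2 * n, d=1 / (2 * fs))
ultrasonic = spec[freqs > 48_000].sum()
audible = spec[freqs <= 48_000].sum()
# ultrasonic is numerically zero: upsampling created no new content
```

Any two masters that decimate to the same 96kHz data come out of this identically, so content above 48kHz cannot be “recovered” without side information carried some other way.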

MQA has made it clear that the first unfold recovers all music content.

‘The first unfold recovers all direct music-related information. Output is 88.2kHz or 96kHz.’

Bob Stuart even said there’s virtually no music information beyond the 88.2/96kHz sampling rate.
As I understand it, the rendering is tied to DAC-specific impulse-filter over-sampling (a.k.a. up-sampling) and optimisation. There’s no real ‘magic’ at work here.

Nope. Doesn’t fit. Nyquist disagrees, and MQA can do nothing to bend the math. They have said all “psychoacoustically relevant” content is preserved. That’s a different claim, not a mathematical statement.

Whatever. I agree 100% that there is no music content above the first unfold. Ask yourself then why MQA is pretending that subsequent unfolds exist and occur.


Basically these are marketing claims (BS), not engineering mathematics.

They want to make more money by selling renderers like the Dragonfly.