Zero. In fact, if Tidal does what I fear it might – stop offering unadulterated 16/44.1 music in favor of ‘MQA’ed’ DRM versions – I would cancel my Tidal subscription, and Roon without a lossless streaming offering like Tidal becomes much less desirable as well.
I use a Musical Paradise MP-D2 DAC and feed it DSD256 using HQPlayer on a dedicated PC, but have really liked what I heard from a low-dollar mini-PC feeding the same DAC using Roon DSD conversion/upscaling – it comes very close to HQP. It took me years to settle on this digital front end, which is now to the point where I enjoy digital to the same level as vinyl – I love what Roon, HQPlayer, Tidal and a good DSD DAC can do. I do NOT want to give up this DAC (or the option of many others), or the option of DSD upconversion for ALL source material, and won’t compromise any part of that to get to ‘MQA’, which IMO is a solution to a problem that doesn’t exist and nothing more than DRM by stealth.
I know ‘first unfold’ in Roon might be a workable compromise providing everything Roon does after that is still usable – like HQPlayer integration and/or Roon DSD upscaling – but the fact that Roon would seem to be sliding into bed with MQA even at that level is troubling to me. I love what Roon has done to date, but would like to see MQA fade into oblivion.
"iOS devices’ audio interface (over the lightning connector or Bluetooth) is actually limited to 48 kHz/24-bit resolution. You cannot get any higher via the lightning audio interface. So having the first unfold, or full decode for that matter, would be pointless.
The only way to get past this limitation is to connect a DAC over the lightning USB interface. This is how a Dragonfly can get higher rates. But this is a pretty niche market, so I doubt TIDAL would ever bother to add the first unfold to its app."
If you plug a pair of headphones directly into the lightning connector, it uses the lightning audio interface.
If you plug a device into the same lightning connector - in your case the dragonfly - it uses the lightning usb interface.
Interface is hardware and software inside the phone. Not the connector. So while it is the same physical connection the way it is treated is quite different.
Music on your phone is digital. Your headphones are analog. So we have to have a DAC. Plug headphones in and it uses the DAC built into your phone. Plug a DAC in and it uses the external DAC.
To further clarify, the “lightning audio module” is provided by Apple to third party manufacturers as part of the MFI (Made for iPhone/iPad/iPod) certification program for headphones, and is indeed limited to 48k. The ambiguity of the terms “lightning audio” and “lightning usb” here, and the fact the device is clearly capable of outputting hi-rez is cause for some confusion in this back and forth. Cheers!
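The routing described above can be sketched as a small decision function. This is purely illustrative: the function and connection names are hypothetical, and the only facts taken from the posts are the 48 kHz/24-bit cap on the lightning audio module and the fact that an external USB DAC negotiates its own limits.

```python
# Illustrative model of the routing described in the posts above.
# Only the 48 kHz / 24-bit cap on the lightning audio module is taken
# from the thread; all names here are hypothetical.

def negotiate_output(connection: str, source_rate: int, source_bits: int):
    """Return the (rate, bits) actually delivered to the DAC in use."""
    if connection == "headphones":
        # Lightning audio module (MFi): uses the phone's internal DAC,
        # capped at 48 kHz / 24 bit regardless of the source material
        return min(source_rate, 48_000), min(source_bits, 24)
    elif connection == "usb_dac":
        # Lightning USB interface: the external DAC sets its own limits,
        # so hi-rez passes through (this is how a Dragonfly gets higher rates)
        return source_rate, source_bits
    raise ValueError(f"unknown connection: {connection}")
```

For example, a 96 kHz source played through headphones would come out at 48 kHz, while the same source through a USB DAC passes at full rate.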
And as customers who’ve ponied up the Lifetime fee, we expect our voices to be heard.
Until you solve the overarching multivariate, multidimensional problem years from now, please give us at least the same capability as Audirvana+. You must have that capability already.
Put all the caveats you want around it wrt DSP. Your customers aren’t stupid. Stop treating them as such.
I’m sure Roon has no technical issue about implementing MQA playback, whether partially or full decoding.
It’s all about licensing; as mentioned somewhere, it also needs to take multi-zone endpoints and DSP into consideration. Roon could consider offering MQA as a plugin, and users could then pay accordingly. I think this might bring MQA to users more quickly. I don’t know whether this is a good idea?
I think the issue is DSP; the MQA guys seem terrified that DSP can be used with MQA (don’t ask me why). For me personally, MQA is at most 1 step forward (arguable), but disabling room correction (DSP) is many steps back.
To get both full MQA decoding and DSP in Roon it is required that the full decoding happens in software in Roon (obviously). This means that:
1- The hardware manufacturer does not pay a licensing fee
2- MQA rendering is happening without tailoring to the particular DAC - you would have to use “generic” upsampling choices (a)
3- MQA rendering device cannot have a blue light meaning “as the studio intended to be heard” because now you’ve DSP’ed the signal
Having said all this, if MQA ltd does not relax their asks here, it is my opinion that it will be to their slow, boring demise.
(a) This is a legitimate claim, however in practice I think it is effectively nonsense, as in most DAC cases you have limited choices of upsampling parameters (say ESS DACs), so tailoring a “generic” but “very precise” upsampling algorithm in Roon might actually, on the whole, be a better thing. In the case of the dCS Rossini, my understanding is MQA and dCS worked on a careful implementation for the dCS Ring DAC, so that’s possibly a little better, not sure.
Full disclosure: I am interpolating here from pieces of info, I have no insider knowledge and might well be completely wrong.
From what I have understood, it’s the first MQA unfold that is the important one; the second one is basically upsampling. So if Roon could do the first unfold, and then allow DSP on that, you could combine the best of MQA with room correction or other DSP. The end signal going to the DAC will be 96 kHz PCM, but that’s OK.
This is the only scenario that I will use MQA, at least for now (who knows what the future will hold).
But in this scenario MQA on DACs becomes useless, and I have a feeling the MQA guys get a lot of their money from DAC licensing, so it’s probably not going to happen.
EDIT: it’s possible to do a similar scenario today with Tidal, by using for example Dirac, which provides an audio endpoint.
Personally I am totally opposed to MQA in any form, especially if my money that I pay to Roon ends up in the hands of MQA due to licensing etc.
But for me it seems that a good intermediate solution would be the SoX upsampling trick reported to sound good, i.e. similar to the one in Auralic’s products: add an MQA upsampling option in the Roon DSP and let the upsampling method be set differently when the content is MQA.
For example if you have a good DAC you probably want:
MQA --> MQA upsampling
Everything else --> bit-perfect to DAC
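The two-branch routing above amounts to a one-line rule. A minimal sketch, with hypothetical names (nothing here is a real Roon setting):

```python
# Sketch of the per-content routing suggested above: MQA content goes
# through an MQA-style upsampling filter, everything else is passed
# bit-perfect to the DAC. All names are hypothetical.

def choose_path(is_mqa: bool) -> str:
    return "mqa_upsampling" if is_mqa else "bit_perfect"
```

So an MQA track would be routed to the upsampler while a plain 16/44.1 file goes straight through untouched.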
I too think the majority of MQA decoding is done generically in the first unfold. This is done in Audirvana and TIDAL desktop apps, and Audirvana itself will let you upsample that unfolded signal if you so wish, or apply any number of AudioUnits filters to it, and of course Dirac room correction if you have that set up.
The first unfold doubles the original sample rate and is the same across the board; it is followed by upsampling with MQA-determined filters. The latter is the “rendering” stage and is tailored to the DAC in the sense that the MQA filter definitions map to specific DAC settings that are DAC-chip dependent.
The Dragonfly interprets the MQA filter specification in the PCM stream from the first unfold and translates that into the appropriate ESS upsampling parameters. There is no change whatsoever done to the ESS DAC filters in the MQA firmware update for the Dragonfly.
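The rendering step described above is essentially a lookup: a filter identifier carried in the unfolded PCM stream selects one of the DAC chip's existing upsampling settings. A hedged sketch — the table contents and names below are invented; the post only states that such a mapping exists and that the ESS filters themselves are unchanged by the MQA firmware:

```python
# Illustrative model of MQA "rendering" on a device like the Dragonfly:
# map a filter id embedded in the stream to one of the DAC chip's
# pre-existing upsampling settings. The ids and setting names here are
# hypothetical, not the real MQA or ESS identifiers.

ESS_FILTER_MAP = {
    0: "fast_rolloff",    # hypothetical setting name
    1: "slow_rolloff",    # hypothetical setting name
    2: "minimum_phase",   # hypothetical setting name
}

def render(mqa_filter_id: int) -> str:
    """Select the DAC's upsampling setting; fall back to a default."""
    return ESS_FILTER_MAP.get(mqa_filter_id, "fast_rolloff")
```

The key design point is that the renderer only *selects among* the DAC's built-in filters rather than installing new ones, which matches the claim that the ESS filters are untouched.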