That there is no full-decode solution in conjunction with arbitrary DSP, such as room correction. This could, of course, occur in an MQA-certified DAC, but no such product exists.
“Full software decode is not possible because the DAC must be known and characterized. MQA is an analog-to-analog process.”
My understanding is that Roon is different in that it recognizes (and certifies) the DAC. That would allow Roon to do a software decode tailored to the endpoint DAC, as long as that DAC is certified. The DAC would not need decoding hardware of its own to preserve the full, certified analogue-to-analogue chain.
Using the RAAT setup, I would guess Roon provides a sufficiently “closed” system.
Time will tell. I hope Roon takes all it needs to maximize our experience :).
I’m sure they’ll do their very best, but remember they’re still a small company and have a lot of “urgent” “top priorities” we also demand fixes for.
If you can add DSP after the first, software unfold (which requires a lossless, bit-perfect input and gets you to an 88.2 or 96 kHz sampling rate) but before handoff to the DAC for the supposed rendering to higher rates (e.g., 176.4, 192, 352.8, 384 kHz), only the content up to 48 kHz would be processed, and the original content would be completely gone by the time it reaches the DAC (i.e., it has been upsampled, filtered, etc. - NOT a lossless process). All that is left is the MQA “signature”; i.e., some code that says, “this was originally an MQA stream”, to turn on the “authenticated” light.
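As a sketch of the rate chain being described here (the specific multiples are my reading of this thread, not any MQA specification):

```python
# Sketch of the sample-rate chain discussed above; the multiples are
# my reading of this thread, not an MQA specification.
def first_unfold(base_rate):
    """Software (first) unfold: doubles the base rate, e.g. 48k -> 96k."""
    return base_rate * 2

def render_rates(base_rate):
    """Rates a hardware renderer might upsample to (4x and 8x the base)."""
    return [base_rate * m for m in (4, 8)]

print(first_unfold(48_000))   # 96000
print(render_rates(44_100))   # [176400, 352800]
```

Any DSP slotted between `first_unfold` and the renderer therefore operates on the 88.2/96 kHz stream only.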
Questions that pop into my head:
How does the content below the noise floor that the final MQA DAC stage requires survive intermediate processing (e.g., DSP, upsampling, etc.)?
If it somehow survives, does the DAC just “tack on” the above-48 kHz content because it has not been DSPed? It can’t: the software stage only extracts up to 88.2/96 kHz. So you would have DSPed content up to 48 kHz, and untouched content above 48 kHz.
It makes a lot more sense to me if the original software unfold actually contains “music” up to 48 kHz, which can be upsampled, filtered, etc., and the DAC stage is literally just additional upsampling/filtering applied to the results of the DSP, not the addition of any musical content above 48 kHz.
Just my thinking and, as you say, I may be “incorrect”.
MQA does incorporate at least one type of in-band coded signature/watermark. This has been documented. Some are concerned that it could be used for draconian DRM purposes. At the very least, though, it appears to be used for MQA identification. And that signature/watermark must be quite robust if it can pass through intermediate DSP such that an MQA renderer can still identify MQA audio after the first unfold.
This is just my supposition, but I would bet on some sort of spread-spectrum scheme. As long as the signature/watermark were very low bit rate, spreading it across the 2.3 Mbps of typical 24-bit/48 kHz two-channel MQA should make it inaudible and resilient against many forms of intermediate DSP. The signature/watermark would not have sufficient bandwidth to carry actual audio content, but it could carry signaling information that would allow for informed upsampling. The MQA literature screenshot posted in this thread earlier today suggests as much with its parenthetical reference to “containing buried information on how to proceed.” That sounds like a set of instructions for intelligent processing/synthesis, not additional latent audio content waiting to be decoded.
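Purely to illustrate the supposition (this is not MQA’s actual scheme; the sequence length, amplitude, and filter here are invented for the demo), a very low-rate spread-spectrum mark can survive mild DSP because detection integrates over many samples:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 480_000                    # chips per payload bit (10 s of 48 kHz audio)
pn = rng.choice([-1.0, 1.0], size=N)  # pseudo-noise spreading sequence (shared key)
amp = 1e-3                     # roughly -60 dBFS, far below programme level

audio = 0.05 * rng.standard_normal(N)  # stand-in for the audio signal
bit = 1                                # payload bit to embed (+1 or -1)

marked = audio + bit * amp * pn        # embed: spread the bit across all N samples

# naive intermediate DSP: a gain change plus a 3-tap smoothing filter
processed = np.convolve(0.9 * marked, [0.25, 0.5, 0.25], mode="same")

# detect: correlate with the known PN sequence; the audio averages out as noise
score = np.dot(processed, pn) / N
recovered = 1 if score > 0 else -1
print(recovered)  # 1 -- the mark survives the gain change and filtering
```

The point is only that a signature carrying a handful of bits per second can be made statistically robust against processing that would destroy any buried high-rate audio content.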
The bit rate of MQA is actually capped around ~1.5 Mbps, not the 2.3 Mbps of conventional uncompressed 24/48 kHz PCM (24 bits × 48 kHz × 2 channels). I remember posting this graph some time ago but was not able to get the answer I’m looking for. Maybe you can help me.
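For reference, the arithmetic behind the uncompressed PCM figures being compared:

```python
def pcm_bitrate_bps(bit_depth, sample_rate_hz, channels=2):
    """Raw (uncompressed) PCM bit rate in bits per second."""
    return bit_depth * sample_rate_hz * channels

# 24-bit, two-channel PCM at common rates, in Mbps
for rate in (44_100, 48_000, 96_000, 192_000):
    print(f"{rate} Hz: {pcm_bitrate_bps(24, rate) / 1e6:.3f} Mbps")
```

At 48 kHz this gives 2.304 Mbps, matching the ~2.3 Mbps figure; at 192 kHz raw PCM needs 9.216 Mbps, which is why a constant ~1.5 Mbps line on the graph looks so striking.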
Looking at the graph, one can tell immediately that the MQA bit rate is capped across all sampling frequencies, whereas the conventional PCM bit rate increases proportionally with sampling frequency.
Now my question is: why is the MQA bit rate capped at a constant value across sampling frequencies rather than increasing? Obviously, if the sampling frequency increases, there is more information to carry, so the bit rate has to increase!
This brings me to one conclusion: anyone who looks at the graph will tell you straight away that “compression” is going on, and that a lot of information is being “thrown away” because there isn’t enough bit-rate bandwidth to carry it in the first place, especially at the higher sampling rates.
If anyone here can indeed offer me a good explanation, I will be very happy. Case closed!
MQA content is delivered in a 24/48 or 24/44.1 FLAC container. Regardless of the level of unfolding, the amount of data delivered is the same FLAC file (which, assuming reasonable compression, would have a data rate of about 1.5 Mb/s).
When the FLAC file is decompressed, the data rate is commensurate with the 24/48 or 24/44.1 container.
The graph you are referencing has nothing to do with the data rate after unfolding. Its intention is to show the efficiency of delivering high-res content using MQA packaging versus just shipping the native high-res data. In other words, MQA is good for streaming because it’s efficient.
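To make that concrete (the 0.65 compression ratio below is just an assumed typical FLAC figure, not a measured one): whatever the master’s original rate, the delivered stream is always a 24/48-family FLAC, so its data rate is roughly constant:

```python
# Delivered MQA stream: always a 24/48 (or 24/44.1) stereo FLAC container
PCM_24_48_STEREO_BPS = 24 * 48_000 * 2   # 2,304,000 bps uncompressed
FLAC_RATIO = 0.65                        # assumed typical FLAC compression ratio

delivered_bps = PCM_24_48_STEREO_BPS * FLAC_RATIO
print(f"{delivered_bps / 1e6:.2f} Mbps")  # ~1.5 Mbps regardless of master rate
```

This is why the graph shows a flat line: it plots the delivery rate of the container, not the data rate of the unfolded output.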
@AMP I still don’t get it: if the bit rate starts out low at high sampling rates, where is the additional information needed to unfold back to exactly match conventional PCM?
Is there some “magic” buried inside the noise floor that can reconstruct all the information to match conventional PCM? You need sufficient information to reconstruct the original, and in this case there isn’t enough. It is like “magic” to me.
No, it’s just a very sophisticated compression algorithm. Not truly lossless, but it sacrifices the part of the bitstream that simply cannot be resolved by any typical DAC.
If the system doesn’t behave in a “truly lossless” manner, then it makes sense to me. I’m not trying to fault anyone or the system; I just need to understand what is going on here.
It’s a bit of lossless compression applied so the bit rate is lower when transferring the encoded MQA file. MQA is not digitally lossless (meaning you won’t get perfect reconstruction of the source bits)… but that’s why they emphasize it’s an analog-to-analog process. The compromise is losing a bit of digital data for the sake of better analog reproduction.
MQA is wrapped in a FLAC container, and the FLAC container is lossless. The degree of lossless compression FLAC (the container) can achieve is very limited, though: MQA itself is already so compressed that there’s not much FLAC can do losslessly. I see this as an easy way to deliver MQA to the masses.
I like @rovinggecko’s take on all of this, and @AndersVinberg is doing a fine job of reminding us of relevant references in the published material.
It seems technically feasible for Roon to perform DSP between the first and the second/third unfolding. The exact nature of the second/third unfolding is sort of beside the point. It will remain whatever it is. What everyone (I think) would like, in the best of all possible worlds, is the ability to convolve for Room EQ in Roon while enjoying full software decoding of MQA.
If full software decoding in Roon is not possible, which would seem to be a commercial decision by MQA rather than a technical one, then some capacity to do DSP after the first unfolding would seem desirable. That may mean that MQA lights do not go on, so be it.
There are doubtless reasons why the devs cannot tell us more. I suspect that the negotiations are sensitive and ongoing. They certainly seem to be the highest priority for Roon.
If it turns out that users must choose between MQA and DSP then I think that would be a bad outcome for MQA.
But that process still has some relevance in identifying and authenticating a “real” MQA file rather than something created on a laptop in a bedroom somewhere. So while it is not DRM in that sense, it still needs to be there for authentication. That is not a bad thing.