Berkeley DAC will not recognize MQA Tidal Master files if I use Roon [Answered]

My setup: MacBook Pro to Berkeley Alpha USB to Berkeley Reference DAC. Tidal settings: HiFi/Master, Sound Output: Alpha USB, Use Exclusive Mode enabled.
If I play a Tidal Master file through the Tidal app, my DAC recognizes the MQA stream, but if I play the same file through Roon, MQA is not recognized. I have tried various Roon settings, but MQA is still not recognized through Roon.
I also use HQPlayer and a Sonore SE, but I won't add those back into the chain until MQA is recognized through Roon.
Thanks.

The Berkeley DAC acts as an MQA renderer, similar to the AudioQuest DragonFly Red. A renderer first needs to receive the first MQA decode (the "Core" decode) to 88.2/96 kHz; at the moment the Tidal app performs that decode, but Roon does not and just passes the stream through. After receiving the Core-decoded signal, the Berkeley DAC will 'render' (upsample) it to the original sampling rate with the appropriate digital impulse filters.
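As a rough illustration of the two-stage chain described above (a hypothetical sketch of the rate logic only, not MQA's actual internals; the function names are my own), the Core decode always lands on 88.2 or 96 kHz depending on the rate family of the original file, and the renderer's job is the final upsampling step back to the original rate:

```python
def mqa_core_rate(original_rate_hz: int) -> int:
    """First MQA unfold ("Core" decode) outputs 88.2 or 96 kHz,
    depending on whether the file is in the 44.1 kHz or 48 kHz family."""
    return 88200 if original_rate_hz % 44100 == 0 else 96000

def renderer_output_rate(original_rate_hz: int) -> int:
    """The renderer (here, the Berkeley DAC) upsamples the Core-decoded
    signal back to the original studio sampling rate."""
    return original_rate_hz

# A 352.8 kHz master: the software (Tidal app, or a Core decoder) must
# deliver 88.2 kHz to the DAC, which then renders back up to 352.8 kHz.
print(mqa_core_rate(352800), renderer_output_rate(352800))  # 88200 352800
print(mqa_core_rate(192000))                                # 96000
```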


Short answer: You need to wait for Roon to provide MQA Core decoding in the future. We don’t know when this will happen.

Tidal desktop app works for your setup because Tidal desktop app provides MQA Core decoding.

Thanks Peter. Roon’s presentation is totally misleading if it is showing Tidal Master options that cannot be accessed via Roon.

Hi Merek,

Which presentation are you referring to?

They can be accessed, but they cannot be fully decoded unless your DAC is a full decoder rather than a renderer (assuming there is no other issue).

If you select Tidal on the main page of the Roon app, the next page has an option to select "Masters", but what is the purpose of that if MQA decoding is not available in Roon?

I can play MQA files from the Tidal app but not from Roon.

You can still get full decoding if you use the Tidal app directly with your Berkeley DAC.

In case you cannot wait for MQA Core decoding from Roon, you may also use Lumin U1, which is now MQA-certified, to act as a Roon endpoint, perform the MQA Core decoding and send the required MQA Core signal (via AES or USB) to Berkeley Alpha Reference 2 MQA Renderer to fully enable the MQA feature.

By the way, Berkeley has commented on MQA in this article:

(edited/combined from multiple posts:)

It’s my assumption that everyone posting opinions on MQA here is doing so in good faith. Unfortunately, few individuals or organizations have the level of knowledge and methods necessary to make an objective assessment of MQA’s potential. We are fortunate to have that capability as a result of the extensive R&D effort into analyzing human perception and audio quality undertaken by our previous company, Pacific Microsonics, Inc., developer of the HDCD process.
We agree that “time-smear” reducing apodizing filters can have mixed effects, but when applied to files created by a typical A/D with awful transient pre-ringing the trade-off of much better spatial information vs. some timbral grunge is usually worthwhile. However, those kinds of “fix-up” tools that are optional parts of MQA aren’t what really interests us.
Our most important due diligence was thoroughly analyzing the entire analog to analog MQA chain using proprietary in-house methods and tools. The result was that MQA came within spitting distance of what 192kHz, 24-bit PCM is capable of using optimum A/D and D/A conversion filtering.
That level of quality, by the way, hardly exists in the wild and few have heard it. Some RR HRx releases that were never edited can get close assuming you use the right D/A. What more people have heard that sounds the closest is a live microphone feed.
All in all, we felt it was a very impressive result and made us decide to support MQA. Without a standard like MQA keeping the “windows” clean, conversion filtering at both ends will be all over the map with very few combinations ever approaching an optimum.
While I don’t expect to change the opinion of those who are convinced MQA is of no value, I felt it was my responsibility to honestly report our finding that MQA is of great enough value to both support and incorporate in our products.
Sincerely, Michael Ritter, Berkeley Audio Design, LLC

The most common types of low-pass filters used in A/D converters have a characteristic called pre-ringing. Pre-ringing generates artificial sounds in a recording ahead of (before) natural transient events in the signal that excite the pre-ringing. Please understand that by “transient event” we don’t just mean a sharp transient like a drum hit or plucked string. All of the natural sounds we hear in life contain micro-transients, the amplitude and timing of which convey both timbre and spatial information to the cochlea and brain. When these natural micro-transients are passed through a filter that has pre-ringing, sounds are generated that don’t occur in nature and the cochlea and brain don’t know what to make of them. The subjective result is that spatial information is diminished or lost and subtle timbre information is obscured or altered. This type of time-domain distortion is probably the single greatest weakness of typical PCM digital recordings.
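To make pre-ringing concrete, here is a minimal sketch (my own illustration, not Berkeley's analysis): a textbook linear-phase windowed-sinc lowpass has a perfectly symmetric impulse response, so roughly half of its ringing occurs *before* the main peak. A click filtered by such a filter therefore produces output ahead of the click itself.

```python
import math

def windowed_sinc_lowpass(cutoff_hz, fs_hz, n_taps):
    """Linear-phase (symmetric) FIR lowpass via windowed-sinc design."""
    fc = cutoff_hz / fs_hz  # normalized cutoff
    m = n_taps - 1
    taps = []
    for n in range(n_taps):
        x = n - m / 2.0
        # ideal sinc lowpass, centered at the middle tap
        h = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        # Hamming window to control side lobes
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)
        taps.append(h * w)
    return taps

taps = windowed_sinc_lowpass(cutoff_hz=20000, fs_hz=44100, n_taps=101)
peak = max(range(len(taps)), key=lambda i: abs(taps[i]))
# Energy before the peak IS the pre-ringing: nonzero output occurs
# before the filter's main response to a transient.
pre_energy = sum(t * t for t in taps[:peak])
print(peak, pre_energy > 0)
```

By symmetry the peak sits at the center tap, and the pre-peak energy equals the post-peak energy; a minimum-phase (or apodizing) design trades away that symmetry to push the ringing after the transient instead.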

In theory, 192 kHz, 24-bit PCM is all that is needed IF both the A/D and D/A conversion filters form an optimum conjugate system. If optimum PCM conversion filtering requirements were widely enough understood, and if the AES were a powerful enough organization, I could envision an AES standard that A/D and D/A conversion products would need to meet to be accepted. Unfortunately, that isn’t remotely the case in today’s world. Unless there is an identifiable standard that gets the physics right, and that all the stakeholders (from artists to recording engineers to distribution entities to equipment manufacturers to end users) think will benefit them (including making money for some), PCM conversion filtering is going to remain a grab bag of random A/D and D/A pairings, most of which are pretty bad. MQA is the only such standard we are aware of anywhere on the horizon. So we support it.

By the way, probably a major reason high-resolution audio “isn’t making it in the marketplace” is that almost all of it is sub-optimum and squanders much of the inherent potential of 4X, 24-bit PCM. Have you ever heard an entire record/replay chain that was essentially indistinguishable from an excellent live microphone feed of a huge orchestral choir work? I have, and it’s both astonishing and entirely doable with the knowledge we have today. But it’s never going to be available to most listeners without a marketplace-driven standard that gets both the A/D and D/A conversion filtering right.


They talk a lot about time smearing, but there is a downside to using 'leaky' filters (short, slow-roll-off designs chosen for better impulse response): aliasing gets reflected back into the audio band. When ultrasonic noise is folded back, it can intermodulate with the audio signal and produce distortion. This is exactly what the steep digital filters used in modern A/D and D/A converters are designed to avoid, in order to keep the audio signal free of these artifacts.

More testing should be done in this area.