MQA disappointing

Yes it is a fantastic setup. I’m not using any manual filters. Roon does the first unfold and the Berkeley does the rendering. When comparing MQA and hi-res or redbook I simply switch between Qobuz and Tidal with the same recording. When I started the Qobuz beta I did a fair bit of comparing, but you know what? That got in the way of enjoying the music, so I rarely do it now. And I’ll make that some unsolicited advice for people on both sides of the MQA debate.


I mean the filtering in Roon (if you upsample) and the filtering in your DAC when listening to non-MQA. Do you use linear phase or minimum phase?

I did much the same as you. Initially I went all MQA in Tidal after a few quick tests. Only with time, and on certain tracks, did I start to notice a consistent character with MQA. I retested, and then I realized that points 1 and 2 above were consistent characteristics of MQA (which is fixed to minimum phase) vs non-MQA (linear phase). I found Roon upsampling with a minimum phase filter sounded much closer to MQA than with a linear phase filter (which is hardly surprising, I guess).
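If you want to recreate that comparison outside Roon, here’s a minimal Python/SciPy sketch of the idea. To be clear, this is not Roon’s actual pipeline; the 2x ratio, filter length, and 20 kHz cutoff are illustrative assumptions.

```python
# Sketch: 2x upsample with a linear phase low-pass vs a minimum phase
# filter of the same magnitude response. Illustrative settings only,
# not Roon's actual filters.
import numpy as np
from scipy import signal

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)               # one second test tone

# Anti-imaging low-pass designed at the 2x output rate. Odd length and
# symmetric (linear phase), so a minimum phase version can be derived.
h_lin = signal.firwin(255, cutoff=20000, fs=2 * fs)
h_min = signal.minimum_phase(h_lin)           # same magnitude, minimum phase

# Zero-stuff to 88.2 kHz, then filter with each candidate.
x_up = np.zeros(2 * len(x))
x_up[::2] = x
y_lin = 2 * signal.lfilter(h_lin, 1.0, x_up)  # factor of 2 restores level
y_min = 2 * signal.lfilter(h_min, 1.0, x_up)
```

Render both to files and you can A/B the two filter characters with everything else held constant.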


The Berkeley uses a process they call interpolation, which is a type of upsampling to as much as 176.4 kHz, I believe. Other than unfolding MQA in Roon, that is the only upsampling I’m doing. I’ve just never heard that “hole” in the soundstage.

I thought Berkeley had both minimum phase and linear phase filters that were selectable.

If you upsample in Roon then you must also be using a filter, either the default or whatever was last selected.

If you aren’t aware of filters, the various types, and what they do, then it is hard to know what you actually compared. It can be confusing for sure, and not worth the effort to understand for most people.


The Berkeley may; I’ll do a little research on that because now I’m curious. But my personal bottom line is that I can’t hear any soundstage problems, and knowing full well it’s the quality of the master that makes or breaks a recording, I’m OK with most MQA. I’m a little concerned about the DRM potential, but nowhere near the level of some of the more evangelical opponents here.

I’ve got a dedicated listening room planned and built by the same team of acoustical engineers that also built my recording studio. My main system (in my private listening room): Chord Dave & a pair of Cabasse L’Ocean (see here). No holes in the soundstage (neither with MQA nor with other formats)…

Jeremy, try moving your speakers away from the wall and farther away from each other. There’s a good chance your “hole” will disappear. Your speakers are really great. I know what they’re capable of (if they’re given “space to breathe”) because I use them in my recording studio. :wink:


I don’t think you could actually A/B with your system. Does Meridian give you an option of filters?

No filters, only room position compensation and EBA. MHR is switchable. You would ideally need two identical feeds to A/B test.

Thanks Alan. Awesome gear! Love Cabasse. Are you in France? I don’t see Cabasse often. I agree fully about pulling speakers out into the room and listening more in a near-field position. I do that for serious listening sessions.

Baffle size is very important for soundstage too. Baffles greater than 9 inches and less than two feet wide can degrade imaging. The solution is a narrow speaker, a very wide one, or soffit-mounting the speakers into the wall.

By “hole” in the soundstage I mean relatively speaking, in an A/B comparison of MQA to non-MQA. The precision of imaging with MQA is typically about 1 to 2 feet, whereas a good original non-MQA file will image down to an accuracy of about 1 inch across the horizontal soundstage. I fully agree that room and speaker setup greatly influence soundstage and imaging.
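As a rough sanity check on what inch-level imaging implies for timing, here’s a back-of-the-envelope sketch. It uses the crude approximation ITD ≈ d·sin(θ)/c; the ear spacing, listening distance, and the commonly cited ~10 µs audibility threshold are all assumptions, not measurements of any particular setup.

```python
# Back-of-the-envelope: interaural time difference (ITD) for a phantom
# image shifted sideways at a typical listening distance. Crude model
# ITD = d * sin(theta) / c; all numbers are illustrative assumptions.
import math

c = 343.0    # speed of sound, m/s
d = 0.18     # effective ear spacing, m (assumption)
dist = 2.5   # listening distance, m (assumption)

for inches in (1, 12, 24):
    theta = math.atan(inches * 0.0254 / dist)
    itd_us = d * math.sin(theta) / c * 1e6
    print(f"{inches:>2} in lateral shift -> ITD ~ {itd_us:5.1f} us")
# Roughly 5 us for 1 inch, ~60 us for a foot, ~120 us for two feet --
# so inch-level imaging sits right around the often-quoted ~10 us
# limit of ITD audibility.
```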

I am surprised you don’t hear the loss of precision in the soundstage. You should, because the Chord Dave uses an accurate linear phase filter, and the inaccurate minimum phase filter used in the MQA first unfold is the primary culprit. If it isn’t apparent on the Cabasse setup, then you should easily pick it up on your studio ATCs.

The source of error is most likely the delay of high frequencies relative to low frequencies with a minimum phase filter. The high frequencies are delayed, and as a result the brain is less able to pinpoint the location of a vocalist. We pinpoint normally sung notes by the difference in arrival time at each ear, but the brain also compares the loudness in each ear of the high frequencies (overtones, sibilance, consonants and articulation). Delaying the high frequencies dissociates in time these two sources of positional information and makes the soundstage far less convincing and less accurate. At least it does for me, and I can only hear up to 14.5 kHz. It is from around 6 kHz up that the loudness level in each ear becomes crucial to pinpointing the location of a source: these highest frequencies simply don’t get past or around our head, so huge SPL differences occur between the ears for sounds as little as 30 degrees off axis, with one ear hearing so much more than the other.
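To put that frequency-dependent delay into numbers, here’s a minimal SciPy sketch comparing the group delay of a generic linear phase low-pass against a minimum phase filter with the same magnitude response. These are textbook firwin filters, not MQA’s actual filters.

```python
# Group delay of a linear phase low-pass vs a minimum phase filter
# with the same magnitude response. Generic textbook filters.
import numpy as np
from scipy import signal

fs = 44100
h_lin = signal.firwin(255, cutoff=20000, fs=fs)  # linear phase low-pass
h_min = signal.minimum_phase(h_lin)              # minimum phase version

w, gd_lin = signal.group_delay((h_lin, [1.0]), w=4096, fs=fs)
_, gd_min = signal.group_delay((h_min, [1.0]), w=4096, fs=fs)

for f in (100, 1000, 6000, 14000, 19000):
    i = int(np.argmin(np.abs(w - f)))
    print(f"{f:>5} Hz: linear {gd_lin[i] / fs * 1e6:7.1f} us, "
          f"minimum {gd_min[i] / fs * 1e6:7.1f} us")
# The linear phase filter delays every frequency by the same ~2.9 ms
# (half its length); the minimum phase one delays the treble near the
# cutoff far more than the bass -- the dissociation described above.
```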

Since you have a studio, you will probably know the rule of thumb for filters: linear phase is the best choice if you want to preserve image and soundstage. Minimum phase is sometimes needed on individual tracks to clean things up (like low frequency problems or a notch) without causing pre-ringing, but it is best avoided overall.


One thing you will want to confirm is not only which filters your DAC uses when you play MQA and which it uses when you play PCM, but exactly what happens when you play an MQA track and then switch to a PCM track. Unfortunately, some DACs leave the MQA filtering scheme (which is a very weak filter with high intermodulation) engaged and applied to everything thereafter once you play an MQA track. This would be a “happy accident” in any A/B in MQA’s favor…


Yes, as Miska has pointed out, “lots of stuff is done” in the compression scheme: the decimation of the file down to 88/96 with the application of a minimum phase filter before encoding, etc. etc…

What was it Bob S said? “We use minimum phase filters because everything in real life is minimum phase”…

Is that so…How so?

No sound occurs before the event, I am told…

Which event is that, and how is it related to minimum phase, which is (in layman’s terms) a way of describing when and how various frequencies are produced in relation to each other (i.e. the “time domain”)?

That’s it, one thing after the other…


What things are those?

This is the problem with minimum phase: high frequencies are delayed with respect to low frequencies, causing serious phase distortion.

Linear phase preserves the arrival time of everything. Nothing arrives earlier or later than it should. It is the only form of filter that preserves the waveform perfectly, with all frequencies arriving correctly in time.

The pre-ringing boogeyman is a false construct by manipulative marketers. The scare relies on the response of an anti-aliasing or anti-imaging filter to an impulse. An impulse is not a musical sound and should never make it into recorded music, and the ringing sits at the complete extreme of the spectrum, outside the audible range (22.05 kHz in the case of CD). Pre-ringing is a complete and total non-issue for D-to-A playback because impulses do not exist in recorded music (except in error). It exists only in a lab, and since you can’t hear it, pre-ringing exists only visually, in a lab measurement or a Stereophile impulse response plot.
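If you want to see what that lab measurement actually amounts to, here’s a minimal SciPy sketch. It derives a minimum phase version of a generic linear phase low-pass and measures how much of each filter’s energy arrives before its main peak; again, illustrative textbook filters, not any DAC’s actual coefficients.

```python
# Impulse response shapes: a linear phase low-pass rings symmetrically
# before and after its centered peak; the minimum phase version of the
# same magnitude response has essentially no energy before its peak.
import numpy as np
from scipy import signal

h_lin = signal.firwin(255, cutoff=20000, fs=44100)  # linear phase
h_min = signal.minimum_phase(h_lin)                 # no pre-ringing

for name, h in (("linear ", h_lin), ("minimum", h_min)):
    peak = int(np.argmax(np.abs(h)))
    pre = np.sum(h[:peak] ** 2) / np.sum(h ** 2)
    print(f"{name}: {100 * pre:5.1f}% of energy before the main peak")
# The "pre-ringing" is that pre-peak energy -- and it only shows up
# when you feed the filter a test impulse of the kind that never
# occurs in actual recordings.
```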

That said, minimum phase filters are sometimes necessary to clean up individual audio tracks in the mixing stage. This is because the filters used on individual tracks prior to mixing operate deeply inside the audible range and a sharp filter within the audible range can create audible ringing.

This has been known and understood in high-end audio for at least 50 years. Bob discovered nothing except a clever way to market a solution to a problem that doesn’t exist. Sadly, that solution creates phase distortion and degrades soundstage imaging, so the cure for a non-existent problem is worse sound. However, Bob came up with a solution to that too: simply tell everyone that the worse sound is better and authenticated!


And my ears tell me things sound great. My speakers have EBA, which time-aligns the frequencies, something that can only be done in DSP. I’m happy with the sound.

In the back and forth on the Audiophile Style “MQA is vaporware” thread, John Atkinson never could explain how out-of-band “ringing”, which in addition is at a very, very low level (in dB), and is the result of a Dirac-like “impulse response” (that is, not part of real-world recordings; microphones can’t record such phenomena), is relevant in any way. He just assumed that because Bob S (and there have been others) says it is, it must be so. The assertion that it is related to “transient behavior” and thus really impacts sound quality is just that, an assertion, one that is never explained. “Ringing” is part of eccentric audiophiledom…