Linear Phase v Minimum Phase?

I don’t have any processing turned on in Roon. Absolutely everything off except MQA first unfolding. I prefer the original audio file to go to the DAC with no sample rate conversion. My DAC does everything I need done.

Linear phase preserves the relative phase between frequencies, and therefore preserves timbre, which matters musically. Minimum phase does not preserve it.

Our perception of instrumental timbre is strongly influenced by the specific attack and decay of each instrument. Attacks should be as clean as possible, so signal precursors are unwanted, and in band-limited digital filters they show up as a limit on ultimate quality. Linear phase is certainly desirable within, or across, the system, but conceptually the system itself must be causal end to end: the effect should not come ahead of the cause.
Then my observation is pragmatic. Minimum phase sounds better on good recordings of all musical genres, timbre included, perhaps for the above reasons. Is it a matter of implementation, or a more general issue? Since we are considering the chain end to end, the phase of the analog filter also plays a role.

Sorry, but we will have to just disagree. We don’t hear transients or the shape of waveforms in the way you would like to think. We are far more sensitive to, and better at evaluating, the frequencies of sounds (oscillations), which is why frequency content and relative phase are so much more important. What you hear is most likely the way the filter changes the relative frequency response, i.e. the timbre.

How could Nature not have evolved our hearing to make the best use of transients in animal sounds, be it for hunting or protection? Before making music, man had to find food for his own survival, and our genome evolved over thousands of generations under environmental selection pressure.

The way our brain is wired is certainly extremely different from a spectrometer or frequency analyser, and reducing transients to relative phase variations between harmonic components stems from a purely linear analysis, which is a good starting point but certainly not the end of the story.

Convincing evidence for the most interesting part of the story regarding human audition of transients can be found in a publication I came across while reading MQA background material. Beating the uncertainty principle by a factor of 10 cannot be explained easily at all. I have been familiar with inverse problem theory for the last 30 years and can confirm that one does not gain a factor of 10 in accuracy simply by adding prior information, unless this prior information massively dominates the input measurements. So all of this calls for a non-linear analysis, one that, if we could understand and simulate it, would have widespread applications well beyond audio.
Look for a Physical Review Letters publication entitled “Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle”, from the Laboratory of Mathematical Physics, Rockefeller University, New York.
Careful older experiments have been pointing to this since the 1960s, but the protocol here is modern and well quantified. Also, with recent achievements in artificial neural networks we are starting to get hints about how our brain may actually work, by mimicking some of its processes. And it does not work if one restricts it to linear operators.
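For anyone who wants to see the bound the paper tests against, here is a quick numerical sketch (my own construction, not from the paper): the product of a pulse's RMS duration and RMS bandwidth is bounded below by 1/(4π), with a Gaussian achieving equality. Beating this by a factor of 10, as the subjects reportedly do, is what rules out a purely linear time-frequency analysis.

```python
import numpy as np

# Toy numerical check of the Gabor/Fourier uncertainty bound:
# RMS duration x RMS bandwidth >= 1/(4*pi), with equality for a Gaussian.
t = np.linspace(-5.0, 5.0, 20001)
dt = t[1] - t[0]
g = np.exp(-t**2)                      # Gaussian test pulse

p_t = g**2 / np.sum(g**2)              # normalized energy density in time
sigma_t = np.sqrt(np.sum(p_t * t**2))  # RMS duration

f = np.fft.fftfreq(len(t), dt)
P = np.abs(np.fft.fft(g))**2
p_f = P / np.sum(P)                    # normalized energy density in frequency
sigma_f = np.sqrt(np.sum(p_f * f**2))  # RMS bandwidth

print(sigma_t * sigma_f, 1.0 / (4.0 * np.pi))   # both come out near 0.0796
```

Swapping the Gaussian for any other pulse shape only makes the product larger, never smaller.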
Sorry for the digression away from the filters but the topic is deeper than it seems. Hope this helps…


URLs got smushed together: - free to read, the other link requires login.


A short transient like a Dirac delta function contains all frequencies. It is not very interesting to the ear, apart from helping detect the direction of a sound using both ears and the distortion around the pinna. Musical instruments and most natural and animal sounds are best defined by their frequency content and timbre. Our vocal cords vibrate like a guitar string.


Diracs are a useful mathematical abstraction; they correspond to an infinite spectrum and, in the end, to instantaneous power. Not physical, especially in the realm of audio. The Dirac is only one of the fundamental tools of the linear Fourier analysis that you keep referring to exclusively, as if nothing else were at play here. I encourage you to read the article mentioned above, focusing on the context of real, physical perception, which shows a gross inconsistency with a principle established through linear analysis.

Conceptually, linear phase is a delayed version of zero phase, right? And zero phase has non-causal precursors, which is a priori “artificial”. Minimum phase is designed to be causal: no precursors.
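To make the precursor point concrete, here is a small Python sketch (the tap count and cutoff are arbitrary choices of mine) comparing a symmetric linear-phase FIR low-pass with a minimum-phase filter derived from it via SciPy. The linear-phase filter puts a measurable fraction of its energy before its main tap, the minimum-phase one essentially none.

```python
import numpy as np
from scipy.signal import firwin, minimum_phase

# Arbitrary illustrative parameters: a 127-tap symmetric (hence
# linear-phase) low-pass, and a minimum-phase filter derived from it.
# SciPy's default homomorphic method only approximates the square root
# of the magnitude response, but causality is the point here.
fs = 44100
lin = firwin(127, 20000, fs=fs)   # symmetric taps -> linear phase
minp = minimum_phase(lin)         # causal, energy pushed to the front

def pre_ring_energy(h):
    """Fraction of the filter's energy arriving before its main tap."""
    peak = np.argmax(np.abs(h))
    return np.sum(h[:peak]**2) / np.sum(h**2)

print(f"linear phase : {pre_ring_energy(lin):.1%} of energy before the peak")
print(f"minimum phase: {pre_ring_energy(minp):.1%} of energy before the peak")
```

The "energy before the peak" is exactly the precursor (pre-ringing) being discussed: by symmetry, a linear-phase filter always has as much of it as post-ringing, while a minimum-phase filter concentrates everything at and after its onset.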

Having said that, phase control from the acoustic event to its acoustic reproduction is a complex issue; different microphones can show a fair amount of phase deviation from the ideal flat phase response, so it may be difficult to make a universal choice. And to do a perfect mix…

I have fairly accurate imaging on my system, and I pragmatically use the minimum phase versions of filters that, on my system, improve the accuracy of instrument location. On recordings with realistic imaging, the zero phase versions tend, comparatively, to give a bit of extra space, but it is a somewhat “blurred” space that I find artificial. On my system. Others may reach different conclusions on different systems, and that is fine. No pretension to hold the truth, just trying to help.


Hi Jeremy (@Rhythmatist),

I read with interest some comments you made in other topics where I understood you to be saying that pre-ringing, when using linear phase low pass filters in digital to analog conversion, is relatively inaudible. I believe you were referring to the variation of such pre-ringing by frequency.

In this article there is an interesting demonstration of (deliberately exaggerated) pre-ringing on a snare drum.

In the example the lower frequencies demonstrate greater pre-ringing than the higher ones. Is that because the author is using EQ to cut only the lower frequencies, or is it an example of what you were saying?

On the topic of pre-ringing… check out this video from 6:10, where Rob Watts discusses issues with removing pre-ringing and refers to an experiment with bass guitar, trumpet and piano.

Here are the relevant slides…


Christian, I quite agree with you. If the speakers are well time-aligned and their phase response matches minimum phase, I get better results with minimum phase EQing. If the speakers are not well time-aligned (I would say ill-conceived, but unfortunately this is frequent), then the phase has to be corrected.

For active crossovers, my experience is that above 250-300 Hz, and with fewer than 1000 taps at 44.1 kHz, linear phase filters perform very well and allow very good control of the system phase (impulse and step response). In the lower range, minimum phase filters are necessary: the pre-ringing there is audible and detrimental.
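As a rough sanity check on those numbers (my own back-of-envelope rule of thumb, not a measurement): an N-tap linear-phase FIR at sample rate fs has a fixed latency of (N - 1)/2 samples and a frequency resolution on the order of fs/N, which is why around 1000 taps at 44.1 kHz runs out of control somewhere below a few hundred Hz.

```python
# Back-of-envelope tradeoff for linear-phase FIR crossover filters:
# latency grows and frequency resolution sharpens as the tap count rises.
fs = 44100
for taps in (1000, 4000, 16000):
    latency_ms = (taps - 1) / 2 / fs * 1000   # constant group delay
    resolution_hz = fs / taps                  # order-of-magnitude resolution
    print(f"{taps:6d} taps: ~{latency_ms:5.1f} ms latency, "
          f"~{resolution_hz:5.1f} Hz resolution")
```

At 1000 taps the resolution is roughly 44 Hz, so shaping a crossover well below 250-300 Hz needs many more taps, and the latency grows with them.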


When I played around with upsampling I much preferred the sound of minimum phase to linear phase. Not much in it, but it felt smoother to me.

Yes. Exactly.

The pre-ringing in the case of a sharp high-pass filter, with its transition band placed within the audible range, is much more likely to be audible. Therefore a mix engineer should stick to minimum phase for any sharp (high-Q) filter within the audible range. Often this kind of high-Q filter would only be used on a particular track of a multi-track recording (to fix one bad instrument/microphone recording but leave everything else unadulterated). In general, for mastering and any work involving the entire final mix, or any work with low-Q filters, there is a tendency to prefer linear phase because it preserves the original time domain relationship of the frequencies: imaging, amplitudes and timbre.

If we look at digital playback, the accuracy of the original signal is best replicated by a linear phase filter with a transition band that is entirely above the audible range, with the lowest passband ripple and the highest possible image foldback suppression: this preserves time domain information within the audible range perfectly, and any pre-ringing will be at inaudible frequencies.
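As an illustration of that claim (the filter length, rate and cutoff below are my own arbitrary choices, not any particular DAC's design): a linear-phase low-pass whose transition band sits entirely above 20 kHz passes an in-band tone essentially unchanged, apart from a constant delay.

```python
import numpy as np
from scipy.signal import firwin, lfilter

# Sketch with assumed parameters: a symmetric (linear-phase) low-pass
# at an 88.2 kHz oversampled rate with its cutoff at 22.05 kHz, so the
# whole transition band sits above the audible 20 kHz limit.
fs = 88200
h = firwin(401, 22050, fs=fs)
delay = (len(h) - 1) // 2            # constant group delay: 200 samples

# A 1 kHz tone, safely inside the passband, emerges essentially
# unchanged apart from that constant delay.
t = np.arange(3000) / fs
x = np.sin(2 * np.pi * 1000 * t)
y = lfilter(h, [1.0], x)
err = np.max(np.abs(y[600:2000] - x[600 - delay:2000 - delay]))
print(f"max in-band deviation: {err:.1e}")
```

Any ringing the filter does produce is concentrated around the transition band, i.e. above 20 kHz, which is the whole argument for placing it there.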

Rob Watts is correct about a few things. But the idea that you need such high sample rates (768 kHz) to temporally define transients within the audible range to better than 1 µs is unfortunately an obvious big blunder. Ordinary CD already defines transients to well under 1 µs.
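A quick way to convince yourself of this (a toy construction of mine, not anyone's published test): delay a band-limited pulse by a fraction of a 44.1 kHz sample and recover the delay from the samples alone. The timing information is encoded in the phase, so it comes back far finer than the 22.7 µs sample period.

```python
import numpy as np

# A band-limited pulse is delayed by 0.3 of a 44.1 kHz sample (~6.8 us),
# then the delay is recovered from the samples via the phase slope of
# the cross spectrum, demonstrating sub-sample timing at CD rates.
fs, n = 44100, 1024
f = np.fft.rfftfreq(n, 1 / fs)
spec = (f < 20000).astype(float)           # flat spectrum up to 20 kHz
x = np.fft.irfft(spec, n)                  # band-limited pulse
tau = 0.3 / fs                             # true delay in seconds
y = np.fft.irfft(spec * np.exp(-2j * np.pi * f * tau), n)

cross = np.fft.rfft(y) * np.conj(np.fft.rfft(x))
band = (f > 0) & (f < 20000)               # ignore the empty stopband
phase = np.unwrap(np.angle(cross[band]))
slope = np.polyfit(f[band], phase, 1)[0]   # phase = -2*pi*f*tau
est = -slope / (2 * np.pi)
print(f"true delay {tau * 1e6:.3f} us, estimated {est * 1e6:.3f} us")
```

This is an idealized, noise-free case, but it shows the principle: sample rate limits bandwidth, not timing precision, which in practice is set by signal-to-noise ratio.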

The approach of using a good FPGA with a high-tap-count linear phase filter, to get better filtering than what is on widely available DAC chips, is good design. Whether 1 million taps is necessary is a bit like asking how many angels can dance on the head of a pin. The fact that Rob designed his filter partly through science and partly through listening tests suggests the sound is tailored somewhat to his taste. Still, the approach looks more rigorous than MQA. MQA appears best suited to poor quality D-to-A converters: it might make run-of-the-mill converters with cheap DAC chips and poor filter design sound better, but it can only make a well designed DAC with proper FPGA filtering sound worse.

Hey Alec,

First, it is good to see that some guys are into serious audio with Roon. It is equally good to see a younger generation getting enthusiastic about small active digital monitors, etc. They need to build their own experience, and go to acoustic concerts to listen to live acoustic music in the best possible conditions to train their ears.

Active crossover systems with multi-amping allow a much larger number of parameters to be tested and optimized: the spatial and temporal alignment of the transducers, the choice of filters, cutoffs and slopes, then fine tuning of cables, amps, etc. And finally, if one can apply digital processing of a different nature to each channel… How do you do that? Does Roon/HQPlayer allow this?

In any case it is hard to make a global statement unless the individual multi-amping elements have been characterized and the overall response checked. Maybe a good check would be the step-function response of the speaker system measured from the listening position?
Single amplification eliminates quite a few variables; yet tweeter placement, and even orientation, becomes extremely sensitive once other caveats are eliminated, which is not a surprise.

It is hard to fully preserve timbre if one does not preserve the attack of a note, which is so discriminating for our ears. Transient response is very important. However it arises, pre-ringing is NON-CAUSAL, and unless it is completely outside our perceived bandwidth it should, for that very reason, sound artificial. It corresponds to sounds that do not exist in nature, or in analog systems, unless you twist them to produce that effect after some delay, right?
Then the listening results count. I would generally not find multi-track recordings to be the best in terms of imaging accuracy and naturalness on a serious system. My gut feeling is that such recordings are often mixed to provide “some” instrument presence on average systems with limited dynamics and high levels of intermodulation distortion. Multi-tracking is also about recording everything and doing the mix later, with tolerance for errors in the recording itself, as opposed to, say, using an artificial head, which can produce marvelous results when placed in exactly the right location but can be less convincing if too close (too much direct field) or too far (too much reverberated field). That is a real issue when deciding what to use to record a unique concert… But it takes us away from this discussion of which digital filter to use. For that purpose, I do my selection on natural sounds (breaking waves, water noises, rain, applause, etc.), so as to refer to familiar sounds that must really sound familiar when reproduced.
I prefer minimum phase on my system, but again I do not pretend to hold an absolute truth on this; whether such a truth even exists in such a diverse context is an honest question…


Agreed. This is why linear phase filters are the ONLY choice. Minimum phase destroys the original waveform, including the transients, and redistributes frequencies in the time domain with varying delays according to frequency. Preservation of transient response is exactly why linear phase filters are the ONLY choice.

Music is supposed to be band limited to below Nyquist before digitization (a fundamental of digital recording). Therefore there will not be any ringing at all, provided the filter transition is above the audible range (20 kHz). This is why Sony and Philips chose a Nyquist of 22.05 kHz: there is just enough room to place a filter, since you have until about 24 kHz to reach full stopband attenuation before any image can fold back into the audible range.
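The arithmetic behind that "just enough room" claim, for anyone who wants it spelled out (standard sampling theory, nothing more): a component at frequency f above Nyquist aliases down to fs - f, so the lowest frequency that can land inside a 20 kHz audible band is fs - 20 kHz.

```python
# With fs = 44.1 kHz, the first frequency whose alias (fs - f) can fold
# into a 20 kHz audible band is fs - 20 kHz = 24.1 kHz, leaving the
# filter 4.1 kHz of transition room above the top of the passband.
fs = 44100
audible_top = 20000
first_offender = fs - audible_top            # 24100 Hz
transition_room = first_offender - audible_top
print(f"stopband must be reached by {first_offender} Hz "
      f"({transition_room} Hz of transition room above 20 kHz)")
```

Anything between 22.05 kHz and 24.1 kHz does alias, but only back into the inaudible 20-22.05 kHz region, which is why the filter is allowed to still be rolling off there.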

Maybe for you. But I choose to listen to different filters and pick the one(s) that sound best to me, on my system, with the music I listen to. Right now, that just happens to be an MP filter. The LP version of that filter sounds great too, but not as good as the MP version.


Linear phase filters don’t come out too well in this review of the Mytek Manhattan II and all its available filters.

That assumes a lot: that Herb Reichert’s listening preferences supersede technical fidelity, and that Herb could actually tell audible differences between the digital filters rather than subjectively imagining them. Probably other assumptions, too.