That assumes I am magnanimous enough to give you the convolution file, which is why I used that example rather than EQ.
If you're talking about data integrity, bit perfect is necessary, but it only applies where you need the exact information for data analysis. In audio, even if a file is not bit perfect, musical information is not necessarily lost. It all depends on the type of process you are talking about. For example, if you have a 16/44.1 file and resample it to 16/48, you don't lose musical information just because it is no longer bit perfect. That's my argument.
To me the discussion is closed, thanks to you. In the sheet (see: red area) the company is honest but confusing for our discussion: they literally say "Lossy decompression + touch-up to Lossless". Only the Lossless explanation still leaves it unclear whether the moved information is permanently gone.
The 'lossy' decompression that you indicated in red is for the lower 8 bits, which carry information above the audio range. As far as those 8 bits are concerned, any touch-up from 'lossy' to 'lossless' depends on how much information exists. If more information exists than 8 bits can encode, it reverts to 'lossy'; otherwise it is 'lossless' within the 8-bit code.
Lossless for 8 bits is easy to achieve: you only need 256 levels, compared to 65,536 levels for 16 bits. That is a very coarse approximation!
You’re pretty much repeating BS’s argument, and that’s fine. It isn’t worth my time arguing it further. Your 16/44 => 16/48 example is specious and diversionary, inasmuch as I’ll assume you know as well as I do that it depends on how it’s done.
I would consider you an audiophile purist; you demand unchanged data, a bit-perfect and lossless way to enjoy your music, and that's perfectly fine. I ditched Tidal MQA in favour of Qobuz Hi-Res, not because it is lossy or partially lossless or lossless, but because I don't like the sound signature of MQA.
It’s more about language than it is about audiophilia TBH: lossy has a meaning, so does lossless. That put aside, I’m a huge believer in DSP.
There’s also a question of permanence: I strongly believe that using computational power to enhance sound quality is the future, especially at the speaker level, and that the possibilities of enhancement will evolve with time, in part thanks to better algos, and in part because we’ll be able to throw more processor power at the problem. This requires that the original files are as untouched as possible.
What I hear when listening to MQA is lossy distortion. It is subtle, but an experienced, trained ear can hear it quite easily when comparing to the original lossless file.
DSP is great because a bad room can’t always be fixed. And a terrible speaker can sound a bit better with some DSP.
That said, there is nothing that replaces a high-quality speaker that gives a pristine, low-distortion, flat, uncoloured, evenly dispersed response at all SPL levels.
That said, there is nothing that replaces a properly designed and acoustically treated room!
DSP is an excellent band-aid but it isn’t perfect.
Things like this (whether it's demonstrably psychosomatic or not, eh) are why the situation's more ominous than just ditching Tidal for Qobuz.
I’m not a computer scientist, but I simply can’t see anything that MQA does that couldn’t technically be done with existing free and open tools, and that includes authentication: auth on a hash in a metadata field, add a SoX tag or two for the filtering, etc, etc.
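As a minimal sketch of what "auth on a hash in a metadata field" could mean — and emphatically not how MQA's actual authentication works — one could hash the decoded PCM and store the digest in a tag. The `MASTER_HASH` tag name here is made up for illustration:

```python
import hashlib

def make_auth_tag(pcm_bytes):
    """Hash the decoded PCM so a player can verify it later.
    (Illustrative only; not MQA's actual scheme.)"""
    return hashlib.sha256(pcm_bytes).hexdigest()

def verify(pcm_bytes, metadata):
    # Compare a fresh hash against the stored tag.
    return make_auth_tag(pcm_bytes) == metadata.get("MASTER_HASH")

pcm = bytes(range(256)) * 4                       # stand-in for decoded audio
metadata = {"MASTER_HASH": make_auth_tag(pcm)}    # hypothetical tag name

print(verify(pcm, metadata))                  # untouched file verifies
print(verify(pcm[:-1] + b"\x00", metadata))   # a single changed byte fails
```

A real deployment would sign the hash so it can't simply be recomputed after tampering, but the point stands: the building blocks are free and open.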
Well, if folks can't hear a difference at all with MQA, then what is the point of it? I definitely can. I hear some subtle distortion (phase errors, poor imaging, and loudness/dynamic compression) while others hear the angels singing (that kind of hyperbole is definitely psychosomatic). The interpretation of distortion as a pleasing effect is well documented: many prefer certain high-end tubes over a more accurate sound from solid state (myself included).
Thanks, MusicFidelity and all others who participated in this topic.
Completely agreed - I’m not necessarily thinking “DSP as a way to make a crappy speaker acceptable” or “a small speaker big”, but more “make a great speaker even better”, more, to take a practical example, on the lines of what Audeze did with Roon than what Dirac did with the Apple earbuds. My hunch is what we’re seeing from Kii, D&D and B&O is just the beginning.
Yes. And it’s something that needs to be split from mysticism. I don’t see why it’d be heretical to front Kiis with a tube pre, for example, it’s just that I don’t think the “have you ever heard a SET amp” / “SET amps are the best, solid state sucks and you’re deaf and dumb if you don’t like SET amps” type gatekeeping is of any interest.
We talk a lot about lossy and lossless, so here's a piece about streaming (Tidal, Spotify, Apple, YouTube) and lossy vs. lossless. No Qobuz… Which STREAMING SERVICE SOUNDS the BEST?
I’ve been listening to Apple Music a lot lately. I like it.
I must be a true lossy lover – as opposed to the fake stuff from MQA. Let’s call that glossy from now on.
Yes, the audibility of MQA's distortion is similar to that of Apple's compression at 256 kbps: not readily apparent without careful examination and knowing what to listen for. For casual listening, AAC 256, MP3 320, and MQA are perfectly acceptable formats.
However, Apple and MP3 working groups never stated that their product was “master authenticated” or better than high resolution. Therein lies the problem with MQA…inflated marketing claims. Money for old rope…
This is what Bob Stuart wrote:
Convention Paper 9178
Presented at the 137th Convention
2014 October 9–12 Los Angeles, USA
Figure 8. Examples of background noise in 192 kHz 24-bit commercial releases. Also shown is TPDF dither noise for 192 kHz 16- and 20-bit quantization. Curves plotted as noise spectral density in 1 Hz bandwidth.
Above we see measurements of noise in recordings, chosen to range from reissues of 60-year-old unprocessed analogue tape to modern digital recordings. Obviously these analyses embody the microphone and room noise of the original venue, and, in some, analogue tape-recorder noise. Even the best recorder's noise floor is above that of an ideal 16-bit channel. It is worth noticing that a 20-bit PCM channel is more than adequate to contain these recordings and that consequently 32-bit precision offers no clear benefit.
3.3. Environment and Microphones
Fellgett derived the fundamental limit for microphones, based on detection of thermal noise, shown for an omnidirectional microphone at 300°K in Figure 9. Cohen and Fielder included useful surveys of the self-noise for several microphones. Inherent noise is less important if the microphone is close to the instrument and mixing techniques are used, but for recordings made from a normal listening position the microphone is a limiting factor on dynamic range, more so if several microphones are mixed. Their data showed one microphone with a noise floor 5 dB below the human hearing threshold, but other commonly used microphones show mid-band noise 10 dB higher in level than just-detectable noise. This further suggests that those recordings can be entirely distributed in channels using 18–20 bits.
3.4. Properties of Music
Content of interest to human listeners has temporal and frequency structure and never fills a coding space specified with independent 'rectangular' limits for frequency and amplitude ranges. As we noted in Section 2.2, environmental sounds show a 1/f spectral tendency. Ensembles of animal vocalizations and speech have self-similarity which leads to spectra that decline steadily with frequency. Music is similar, but the levels decline at a progressively increasing rate.
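A quick sanity check on the bit-depth numbers in the quoted excerpt, using the textbook SQNR approximation for n-bit PCM (roughly 6.02n + 1.76 dB for a full-scale sine against quantization noise) — a hedged aside of mine, not a figure from the paper:

```python
# Textbook dynamic-range approximation for n-bit PCM:
# a full-scale sine sits about 6.02*n + 1.76 dB above quantization noise.
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

for bits in (16, 20, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.1f} dB")
```

So 16 bits already gives roughly 98 dB and 20 bits about 122 dB, which is why, if real recordings' noise floors sit above an ideal 16-bit channel's, the paper can argue that 18–20 bit channels are more than adequate.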
And your point is?
An officially released, 24-bit, 192 kHz, MQA-authenticated recording of Like a Virgin? Your example was an old track on tape, and Bob Stuart thought about that. Just info.
Then the relevant link was his own post on that specific recording, which I linked to yesterday.
Whether it's a blog post or an AES paper doesn't change anything about Archimago's findings.