Well… There is a measurable difference between an mp3 decoded with LAME and one decoded with Fraunhofer, just to mention two. Analyse with an oscilloscope if you wish.
Then, as to compressed data, lossless and lossy. Lossy means that in compression, parts of the original are left out. Lossless compression should provide enough information to restore what is removed in compression. The key word being “should”.
Both kinds of compressed data will need to be decompressed. Just as something can go wrong in compressing the data, something can go wrong in decompressing the data.
Only with an uncompressed WAV file can one be sure that the source is intact, provided the file is not corrupt. If anyone wants to believe that Fourier transformation software cannot fail, please go ahead. Mr. Murphy and I will be sitting in the corner chuckling.
In terms of converting source to sound regardless of content density, I absolutely do not need to distinguish between lossy and lossless. The conversion process is identical, except for the codec used. In terms of content density I do, but that is not what I was saying.
The point I was trying to get across is that we do not live in analog times anymore. Reproducing sound is no longer a matter of elementary physics, where movement was converted to an electrical current and then back again.
Now there are countless intermediary data transformations between source and sound. Each step is a possible cause of error. MQA seems to add a few intermediary steps.
For some reason, many contributors seem to disregard the simple fact that all that data manipulation is a vulnerable chain of processes that can easily be disturbed.
As for mendaciousness, the same can be said for the countless and superfluous remasters.
Since when is LAME a decoder? I thought LAME Ain’t an MP3 Encoder…
About this you’re wrong. Sorry. It doesn’t should. Lossless codecs demonstrably reproduce exactly the data that was compressed. Every time. Thousands and thousands of times over if you want them to. A lossless codec can’t be bargained with. It can’t be reasoned with. It shows no signs of pity, or remorse, or fear, and it will absolutely not stop, ever, until it spits back your data, perfectly unmangled. That’s what lossless means, and that’s what the difference between lossy and lossless is.
I know it sounds like magic, but imagine what’d happen if, say, your bank compressed its data (which banks do), and all of a sudden, your balance had one less zero on it. Same thing.
If you have examples of input => lossless codec => output triads that demonstrably differ in any other way than metadata, please, do share. I’m sure there’s a whole bunch of people that’ll be fascinated by your results.
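If anyone wants to watch that happen rather than take my word for it, here’s a minimal sketch using Python’s built-in zlib as a stand-in for any lossless codec (FLAC behaves the same way in this respect); the “audio” is just a synthesised sine wave:

```python
import math
import struct
import zlib

# Synthesise one second of a 440 Hz sine at 16-bit / 44.1 kHz as stand-in audio.
samples = [int(20000 * math.sin(2 * math.pi * 440 * n / 44100)) for n in range(44100)]
original = struct.pack(f"<{len(samples)}h", *samples)

# Lossless round trip: compress, then decompress.
compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

# A lossless codec must hand back the input bit for bit, every single time.
assert restored == original
print(f"{len(original)} bytes in, {len(compressed)} bytes stored, "
      f"bit-identical after decode: {restored == original}")
```

Run it as many times as you like; the assert never fires.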
I really don’t understand your focus on errors; it has nothing to do with FLAC or lossy compression schemes. The decoders will very clearly fail if there is corruption or errors.
MQA is destructive right from the original downsampling through the folding and finally via the upsampling to restore the original sample rate. It isn’t done in error but by design.
I agree the master is by far most important and 16/44.1 will deliver a good master wonderfully.
My oh my, the audiosphere is populated by truly amazing creatures.
Primo: codec stands for cod(er)-dec(oder). This is a piece of software that contains the algorithms both for encoding and decoding a compressed file. Basic IT.
Secundo: the world is rife with examples of software errors. I do not feel compelled to provide examples of audio software errors. Corrupt files are a fact of life. Corruption need not specifically occur in the compression or decompression stage, it can also occur in the transfer stage or during disk maintenance, or somewhere in the digital to analog conversion process, or (fill in blanks as desired)… Compression and decompression are two weak points in the chain. Granted, compression usually only happens once, so failure can only happen once. Decompression on the other hand happens every time the file is processed, so it can (this is not the same as will) fail. There are very few absolute certainties in life. IT is by no means a field where absolute certainty rules.
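To make that concrete, here is a small sketch (the file paths are purely hypothetical) of the kind of end-to-end checksum comparison that catches a flipped bit wherever it creeps in, be it transfer, disk maintenance or anything else:

```python
import hashlib

def file_sha256(path: str) -> str:
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: hash the file at the source, copy or move it however
# you like, then hash the copy. Any corruption along the way shows up here.
before = file_sha256("album/track01.wav")
after = file_sha256("/mnt/nas/album/track01.wav")
print("intact" if before == after else "corrupted somewhere along the chain")
```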
Tertio: for those who seem to be intent on picking a fight where there is none to be had: I am not advocating MQA, nor am I campaigning against it. I am just pointing out that in every complex system, opportunities for failure are cumulative, and that since MQA seems to be an even more complex intermediary process than the processing of “standard” audio files (lossy or lossless), the possibility of errors increases. Furthermore, I state that there appear to be different, manufacturer-specific implementations of MQA (hence the hardware-software reference in my earlier posts). It follows that MQA is something of a black-box experience for lack of certainty (if only by dissemination, advertent or inadvertent).
Quarto: I am not getting into a discussion about whether or not MQA is destructive. I haven’t seen the math behind MQA, so I don’t feel qualified to express an opinion on this subject.
There are a lot of smart, experienced people on these forums. I may not always agree, but I respect all those who express their valid opinions. I feel I learn a lot more here than on, say, Computeraudiophile (same few people all the time). I feel these forums are more open-minded.
This is right. This of course is intentional and is part of the whole project of MQA and its DRM intentions. A proprietary black box is exactly what MQA and part of the industry want, as they feel it will fix problems such as piracy.
If MQA had been honest from the beginning and just admitted that what they have is a kind of super MP3, then the consumer reaction would have been… different.
Hey! MQA if it’s not all $voodoo$, show us the math!
It does stand for that. Thing is, LAME is not a codec. LAME is an encoder. Maybe you meant that an .mp3 encoded by LAME sounds different from an .mp3 encoded by Fraunhofer’s own encoder, and we might very well agree on that, but that is not what you said. Of course, you might have referred to one of the two pieces of software I could find that seem to use LAME as a decoder (mpg123 or the MAD decoder), but that’s pretty far from a common use case. There’s also no need to be haughty or rude about it. I often make mistakes, and so, certainly, do you; it’s fine, and there’s certainly much more we agree on than we disagree on.
Indeed. One uncorrected bit read error in every 10^14 bits read, according to Toshiba’s specs for consumer drives. For the more technical, that’s one bit every 11 TiB (12.5 TB) or so, and for the less technical, assuming you were to duplicate a complete human body, that’s a one-cell mistake over the entire copy. Yeah, in practice, there’s certainly more. But you know what? It won’t make an audible difference. Even if you feel like being super-pedantic about it.
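For anyone who wants to check the arithmetic, the unit conversion goes like this (the only input is the quoted spec; the album size assumes 60 minutes of 16/44.1 stereo):

```python
# Back-of-the-envelope check of the 1-in-10^14 figure.
bits_per_error = 1e14
bytes_per_error = bits_per_error / 8            # 1.25e13 bytes
tb_per_error = bytes_per_error / 1e12           # ~12.5 TB (decimal)
tib_per_error = bytes_per_error / 2**40         # ~11.4 TiB (binary)

# 60 minutes of 16-bit / 44.1 kHz stereo, uncompressed.
album_bytes = 44100 * 2 * 2 * 60 * 60
albums_per_error = bytes_per_error / album_bytes
print(f"one uncorrected bit per ~{tb_per_error:.1f} TB ({tib_per_error:.1f} TiB), "
      f"i.e. roughly one per {albums_per_error:,.0f} uncompressed albums read")
```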
If MQA is disengaged, the PCM filters show up and are selectable. I always did that whenever I played back PCM. If you don’t disengage MQA, the default is ‘minimum phase’ and you can’t select the PCM filters. If you accidentally play back PCM in this way, it uses the ‘minimum phase’ filter (I feel sorry for those who didn’t read the manual). The reason Mytek does this is to avoid noise clicks during the changeover between digital filters when switching from PCM to MQA or vice versa.
I know this sucks; it is not a seamless switchover but a manual change. Some manufacturers don’t even bother and default everything to the MQA filter. Alas, why bother with MQA in the first place!
I have no intention of being haughty, nor of being rude. However, when I want to state something in neutral language, I use a higher register than usual. Probably a side effect of writing a lot of business reports. Higher grammatical registers tend toward formalism.
Hold on here. You are arguing that uncompressed music files are less likely to be corrupt than compressed files. You are just plain wrong on this point. As @Xekomi pointed out, uncorrected bit read errors certainly do happen. But, write errors that are not noticed and corrected or reported and aborted are not at all common.
Also, compressed files are less likely to experience an uncorrected bit read error than uncompressed files, since there is less data to read from the hard drive. The idea that once the file is read bits will flip, or that decompression code will have errors, is ludicrous. If that were the case, computers would be so unreliable as to be unusable.
So no, compression and decompression are most certainly not “weak points” in the chain. To suggest otherwise shows a complete lack of knowledge on the subject and is laughable.
No I’m not. I’m arguing that every time a file is decompressed, the decompression algorithm has to run to generate a PCM stream. This is an extra step in the signal path compared to generating a PCM stream from an uncompressed file, which is just a basic I/O operation.
See above: the bitstream read from a compressed file has to be decoded (an inverse transform for lossy formats such as MP3, linear-prediction reconstruction for FLAC) to restore the binary representation of the uncompressed waveform before it is passed along the chain as a PCM stream.
You seem to believe that a compressed file undergoes no transformation before being converted to analog. I have no idea where you picked up this, to borrow from your vocabulary, ludicrous idea.
Read/write errors are more common than you might think. This is why there are checksums and redundant data clusters in data files. Yes, also in audio files.
For some reason you seem to have imagined that I said that the decompression algorithms can be faulty. That is not so. I said that the decompression process is a weak point.
After all, most hardware in the chain handles this process with processors (DAC chips) that don’t even come close to the processing power of your average PC. AFAIK there are no elaborate error-trapping and retry routines baked into DAC chips. I don’t even know if Burr-Brown et al. do firmware updates. Is that even possible for DAC chips?
Frank, you are embarrassing yourself. You should stop before you make yourself look worse.
The FLAC decompression step is simple and reliable and does not take much CPU time.
Compressed FLAC files are decompressed in a bit perfect lossless process. You seem to think this is some kind of magical imperfect transformation process. It’s not. Nor is the decompression process a weak point no matter how many times you say it is.
Data files do not have redundant data clusters. FLAC and WAV certainly do not. FLAC files do carry a checksum (an MD5 of the decoded audio) so file integrity can be checked with the appropriate tool, but the files cannot be fixed. WAV does not have this feature. Some file systems, such as ZFS, do detect silent data corruption and will attempt to fix it using redundant storage. Most file systems used by consumers do not offer any of these features.
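For what it’s worth, anyone can run that integrity check themselves with the reference flac tool’s test mode, which decodes the file and compares the result against the MD5 of the original audio stored in the header. A minimal sketch, assuming the flac command-line tool is installed and with a placeholder file name:

```python
import subprocess

# "flac -t" decodes the file without writing output and verifies it against
# the MD5 signature stored in the STREAMINFO block.
result = subprocess.run(
    ["flac", "-t", "track01.flac"],   # placeholder file name
    capture_output=True, text=True,
)
print("integrity OK" if result.returncode == 0 else "decode/verify failed:")
print(result.stderr.strip())
```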
DAC chips perform real time computing. In other words, they are as powerful as they need to be to perform their function in real time. They do not need to be more powerful than they are.
From what I’ve read, the reason they do it is that it requires a very expensive, unique software solution to be built into the DACs, so almost no one has implemented one. Both dCS and iFi say their solutions for seamlessly switching between MQA and non-MQA files/filtering required thousands of hours of programming time.
The filter issue exists because MQA has built certain slow, smooth and leaky minimum-phase anti-alias filters into its format design. It is rather a stupid approach. The anti-alias filters don’t need to be part of a format; they should be selectable independently, as they are in every other case, where the filter is simply a user-selectable conversion parameter.
Generally you have the proven (50 years of audio engineering) low-distortion, sharp linear-phase filter, and more recently a plethora of silly, leaky, poor-performing filter options like minimum-phase slow roll-off. Each of the optional filters is a big step backwards in audio performance. MQA even chose one of the worst-performing filters… go figure.
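To be concrete about the one difference nobody disputes: a linear-phase filter delays every frequency by the same amount, a minimum-phase filter does not. A quick scipy sketch (tap count and cutoff are purely illustrative, not MQA’s actual filters):

```python
import numpy as np
from scipy import signal

fs = 44100.0

# A conventional sharp linear-phase anti-alias filter (127 taps, cutoff just
# above 20 kHz). Symmetric taps, so the group delay is constant by construction.
linear = signal.firwin(127, cutoff=20500, fs=fs)

# A minimum-phase filter derived from it, as an illustrative stand-in for the
# minimum-phase family of filters.
minimum = signal.minimum_phase(linear, method="homomorphic")

freqs = np.linspace(0, 20000, 512)  # audio band only, in Hz
for name, taps in (("linear phase", linear), ("minimum phase", minimum)):
    _, gd = signal.group_delay((taps, [1.0]), w=freqs, fs=fs)
    print(f"{name:14s} group delay across the audio band: "
          f"{gd.min():6.2f} to {gd.max():6.2f} samples")
```

The linear-phase version comes out at a constant delay (half the filter length); the minimum-phase one varies across the band, and that phase behaviour is exactly the trade-off being argued about.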
You state this with absolute certainty. Is this your own personal analysis of the filters using your own expertise and engineering background, or is this a point of view put forward by a third party and that you are simply propagating?
It’s the consensus. Leaky (always relative to the actual half band you have to work with), slow roll-off (and thus by design going to let artifacts into the audio band), even so-called “minimum phase” out-of-phase filtering schemes have always been a minority opinion. In the eccentric world of Audiophiledom these minority concerns are, what’s the word, “popular”, but nowhere else.
Bob S of course knew how to play the Audiophile fiddle…
Perhaps, even if a minority opinion, “minimum phase” filters need not be inferior in general. E.g. Ayre Acoustics has used such filters for years in its excellent DACs, which have always been reviewed as among the best-sounding of their time, or at least in their price range. Not many DACs are able to reproduce such a “natural” sound. Perfect measurements are not necessarily equivalent to perfect sound, or to what our hearing perceives as “natural”.
As you know for sure, Charlie Hansen of Ayre was, before he passed away much too early, one of the most severe and knowledgeable critics of MQA and of Bob Stuart’s credibility.