All files have a checksum; the level of correction depends on how much data has been lost. If the loss is too great, it is virtually impossible to fix.
No transmission or storage medium is perfect; they all suffer reliability issues. The best part is that digital data has the ability to do error correction, so most of the time some loss of data can be fully corrected.
The British Library recently made an assessment of the suitability of FLAC as an archival format and concluded:
“The evidence discussed above presents very few risks associated with the format and continues to demonstrate a growth in terms of commercial availability, but with relatively little adoption amongst the archival and broadcasting communities. Much of this may be due to the de facto predominance of the WAV format, meaning that there is little desire to change or convert, even if FLAC were to offer benefits for organisations, such as those with storage capacity concerns.”
The whole report can be found here.
Really? Are you sure about that? Most file systems do not checksum files, and a checksum means nothing in regard to error correction.
FLAC files are created with a checksum, so the file’s integrity can be checked at any time. WAV files do not have this feature. The only way to checksum a WAV is to use an external utility to generate a checksum with something like MD5. Then, at some later point, you can use that utility to verify the checksum is the same. But that is a manual process.
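The manual WAV workflow described above can be sketched in a few lines of Python using the standard `hashlib` module. The filename and placeholder bytes are just illustrative stand-ins, not real audio:

```python
import hashlib

def md5_of_file(path, chunk_size=65536):
    """Compute the MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Store the checksum at archive time...
with open("track01.wav", "wb") as f:
    f.write(b"RIFF....WAVEfmt ")  # placeholder bytes standing in for real audio

original = md5_of_file("track01.wav")

# ...and re-run the utility later. A match means the bits are unchanged;
# a mismatch only tells you *that* the file is corrupt, not where or how to fix it.
assert md5_of_file("track01.wav") == original
```

The point stands: every verification run is something you have to remember to do yourself, whereas a FLAC decoder checks its embedded checksums as a side effect of decoding.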
There is no way to reliably fix a corrupt file with just a checksum. Robust schemes use RAID/parity to retrieve good data. In other words, a checksum has nothing to do with error recovery.
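To illustrate the difference: a checksum only detects damage, while parity carries enough redundant information to repair it. Here is a toy sketch of the RAID-4/5 parity idea, where the parity block is the XOR of the data blocks and any single lost block can be rebuilt from the survivors:

```python
# Simplified RAID-style parity: the parity block is the XOR of the data blocks.
data_blocks = [b"FLAC", b"data", b"rock"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*data_blocks))

# Simulate losing one block; XOR of the surviving blocks and the parity
# reconstructs the missing one exactly.
lost_index = 1
survivors = [blk for i, blk in enumerate(data_blocks) if i != lost_index]
rebuilt = bytes(a ^ b ^ p for a, b, p in zip(*survivors, parity))
assert rebuilt == data_blocks[lost_index]
```

A bare checksum gives you nothing like `rebuilt` here; it can only tell you that the data no longer matches.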
Of course not. But, they are extremely close to perfect.
Digital data does not have the innate ability to do error correction. It needs to be built into the operating system, and that is not the case with consumer desktop operating systems.
Also, let me clarify. Some error correction schemes do use checksums. But, the presence of a checksum does not mean there is error correction or that error correction is possible.
To clarify my earlier explanation: a checksum is used to check whether the file has been modified in some way. If there are errors, the system will attempt some form of error correction.
I believe the recording industry will never want to compress their ‘crown jewels’ for the purpose of archiving. There are certain risk factors they simply will not undertake.
That does seem to be the case. Interestingly, the main risk the British Library identified with lossless compression was not corruption but provenance.
MQA files via Tidal/Roon come in a Flac wrapper according to my signal path. If the Flac file were not bit perfect surely the MQA signal would be compromised and not authenticated? This is clearly not the case.
One of the arguments for MQA. The British Library do not mention it at all in their assessment of FLAC. Instead, they seem resigned to highly manual methods for validating the provenance of losslessly compressed source material.
“FLAC files received from third-party sources shouldn’t be assumed to contain high-quality audio. Such files could be checked, where necessary, to ascertain whether their level of quality is consistent with migration from a high-quality audio source, or a lossy format.”
Given the controversy surrounding MQA and the conservatism of the archivists towards even lossless compression it seems unlikely that MQA will be making much traction in that industry.
Of course, a Flac file could contain anything but that is not the issue.
The file in the Flac wrapper is of equal quality to the WAV file that was losslessly compressed within it. That’s as I understand it, and I have no evidence from my listening experience to disagree.
What system? Tell me, what system do you have that does error correction?
Say - just out of curiosity, is it possible to make an uncompressed file from an existing FLAC?
Sure. A FLAC file can be converted to a WAV or even an uncompressed FLAC.
and importantly, that WAV file (or uncompressed FLAC file) will contain bit-identical audio content to a WAV file that was ripped from the CD (assuming the original FLAC file was not itself corrupted).* Lots of tools can confirm this. That’s of course why all these files are referred to as “lossless”.
*and as pointed out earlier, one of the most important benefits of a FLAC file is that it has self-contained CRCs. If one has copied many thousands of files onto a new backup HDD, you can easily run a utility to do a batch check as to whether any of the FLAC files report corruption. This is not possible with WAV files (or ALAC files for that matter). For this reason alone, FLAC wins for me.
That’s analog lossy instead of digital lossy
While there are CRCs, there is no actual redundant error correction data (AFAIK). FLAC’s framing, however, means that only a frame is lost due to corruption, not the remainder of the stream. This is where it differs from the zip analogy (when zipping a single file).
FLAC is more like taking a raw audio file and first chopping it up into separate files (frames) of often 4K or 8K in length and then zipping all these into an archive (such that each frame follows the previous in time sequence if streamed).
A loss of a frame I think results in a drop-out of about 1/50th-1/20th of a second (@ 44.1K/16), depending on frame size.
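That estimate is easy to sanity-check. Assuming the 4K/8K frame sizes refer to bytes of decoded 16-bit stereo audio at 44.1 kHz (an assumption on my part; FLAC actually sizes frames in samples), the arithmetic works out as:

```python
# Back-of-envelope check of the drop-out estimate, assuming frame sizes
# are bytes of decoded audio: 16-bit (2 bytes) stereo (2 channels) at 44.1 kHz.
bytes_per_second = 44_100 * 2 * 2  # sample rate * channels * bytes per sample

for frame_bytes in (4096, 8192):
    seconds = frame_bytes / bytes_per_second
    print(f"{frame_bytes}-byte frame = {seconds:.3f} s (about 1/{1 / seconds:.0f} s)")
```

Under that assumption, a 4K frame is roughly 1/43 s and an 8K frame roughly 1/22 s, which is in the same ballpark as the 1/50th-1/20th figure above.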
For archival purposes, the underlying storage technology will most likely have its own redundant error correction coding, and since many such technologies are block-based anyway, in the worst case of being unable to recover a block of a file, the impact on either a WAV or a FLAC may be similar. The higher data density of FLAC means a storage block loss could potentially affect more of the original source material, as it may impact several frames.
Re extra decoding (CPU load) causing digital noise - possible on a poor digital system with poor separation between digital and analog components of a DAC, but with many decent modern designs intended for music listening (rather than just gaming), I don’t find this is an issue.
FLAC is “losslessly” compressed, hence the smaller file size. The analogy to PKZip is apt, though FLAC uses specific characteristics of audio files, so in general you would not achieve the same level of compression by PKZipping a WAV file as by FLAC compressing it. But assuming there are no errors in the storage or transmission of bits, either one is fully reversible. You can always recover an exact unique WAV file from the FLAC file. Whether the processor load in decoding the FLAC file on the fly for listening affects sound quality is another issue and is the subject of religious wars. But if you ever require exact WAV files from your FLAC files you can always get them, and probably even convert your entire collection with one simple batch command. There is no conceivable way a WAV file converted to FLAC and then back again could sound any different from an original WAV file, assuming there has been no data corruption. Meta data is a whole other issue though.
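The PKZip analogy in the paragraph above can be demonstrated directly with Python’s `zlib` (a general-purpose compressor standing in for FLAC here; as noted, it won’t compress real audio as well, but the reversibility property is the same):

```python
import zlib

# Stand-in for a WAV payload: lossless means *any* byte string round-trips
# exactly, not just audio data.
wav_bytes = bytes(range(256)) * 100

compressed = zlib.compress(wav_bytes)
restored = zlib.decompress(compressed)

assert restored == wav_bytes  # bit-identical after the round trip
print(len(wav_bytes), "->", len(compressed), "bytes")
```

This is the sense in which FLAC-to-WAV conversion can never change the audio: the decompressed output is the exact original byte sequence, so there is nothing left over to sound different.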
In theory (at least according to some theories) you can obtain better sound quality by ripping the CD, since playing the CD on the fly in real time is subject to read errors, jitter, etc.
Here’s how to ease your soul: arrange a listening comparison, level matched, same DAC, preferably switching on the fly, preferably by someone else, and most preferable of all, “double blind”, so that neither you nor the switcher knows which is ripped and which is original. (In my case I have an advantage in that my OPPO 205 disk player is also my Roon endpoint, though switching them rapidly would be a PITA.) If after extended tries you don’t hear a difference under any circumstances, never worry about it again. That’s the approach I have taken to auditioning CD tweaks, boutique cables, etc., and it has served me well.