What file format sounds best?

My library has been ripped over a (long) period of time. I started with Apple formats such as ALAC and AIFF. For a while, I used MP3 and then FLAC to save disk space. But when storage got so cheap that it no longer mattered, I noticed that some files sounded better when converted to .wav.

For files that were lossless / Red Book standard (44.1 kHz / 16-bit), the WAV conversion was straightforward. But now I’m running into some high-sample-rate / high-bit-depth FLAC files that I’d also like to convert to WAV. What are the limitations of the WAV format, and will converters (such as dBpoweramp) automatically downconvert to a usable WAV sample rate?

There is no universal standard for tagging WAV files, and I don’t know whether Roon can read tags in WAVs.

FLAC is lossless, and after decoding it is bit-identical to WAV. You can convert WAV to FLAC and back to WAV a million times and the audio data is still identical to the original WAV you started with.
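Losslessness in miniature: a compress/decompress round trip returns the exact original bits. FLAC itself needs an external encoder, so this Python sketch uses zlib (also a lossless codec) on raw 16-bit PCM purely to illustrate the principle; the same bit-identity holds for WAV-to-FLAC-to-WAV.

```python
# Sketch: lossless compression means a round trip is bit-identical.
# zlib stands in for FLAC here; the PCM is a generated stand-in "WAV".
import math
import struct
import zlib

# One second of a 440 Hz sine at 44.1 kHz / 16-bit.
pcm = b"".join(
    struct.pack("<h", int(32767 * math.sin(2 * math.pi * 440 * n / 44100)))
    for n in range(44100)
)

data = pcm
for _ in range(10):  # ten round trips rather than a million, but same idea
    data = zlib.decompress(zlib.compress(data))

print(data == pcm)  # True -- still bit-identical to the original
```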

With Roon, the decoding of FLAC files happens on the core. Hence, the minimal CPU load from decoding the FLAC has no effect whatsoever on the endpoint across the network.

After decoding on the core, the RAAT protocol sends the resulting PCM data to the streamer. Whether this PCM data came from a WAV file or from the same WAV encoded to FLAC is likewise indistinguishable.

If you previously heard a difference, it had absolutely nothing to do with the WAV or FLAC file format as such. However, there is a slight possibility that the CPU load of FLAC decoding on underpowered endpoints might affect the endpoint’s analog stages, if the endpoint does the decoding itself, e.g., when it gets the FLAC directly from a streaming service. This, however, is not how Roon works.

If decoding was underpowered, it would cause dropouts, not “worse” quality.

It would cause dropouts if the digital work did not happen fast enough. If you get (caution, bad word) EMI in the analog stages because the CPU runs full tilt, then you would not get dropouts.

It’s that bogeyman of noise that’s bandied about in some sectors as the result of the extra CPU cycles required. Most modern devices, mind, have more than enough to cope.

I just found this article, which I found highly amusing.

Yeah, right, LOL. I read that claim somewhere else as well: black cover art sounds better than white.
:man_facepalming:

It’s worth noting that transcoding is never going to restore fidelity that has been lost to compression (for example, transcoding MP3 to WAV just gives you a lossless copy of the compression artefacts already baked into the MP3).

There are edge cases where the processing on the endpoint works better with some transmission formats than others. For example, first-generation Naim streamers sounded better when fed WAV rather than FLAC, despite the data being bit-equivalent, because of the additional processing required to unpack FLAC; the solution was to transcode FLAC to WAV on the fly on the UPnP server, rather than convert the files. Second-generation streamers designed the problem out.

If you do convert the files, you are baking in whatever characteristics are present in the transcoding software. Again, as an example, not all MP3 decoders are the same, with later versions often performing better; the way they deal with inter-sample peaks is one issue.

TL;DR…

If you are using Roon, I’d suggest you don’t convert your files, but instead use Roon’s conversion DSP to feed your endpoint with whatever gives the best result - which is likely different depending on endpoint and preference!

Whether you can actually hear these differences is dependent on many different factors - not least whether you expect to be able to… or not… sometimes people can genuinely hear differences, but not necessarily for the reasons they think.

Edit - I type too slowly…

I measured one of my DACs on a laptop, both idle and in a game, and I didn’t see any blip in the THD+N. I don’t believe in EMI influencing the D/A stage of a good DAC in a typical residence.

Once compressed, there is only one way to decompress, so there should be no difference between decoders.

I didn’t say that this actually happens. Some people say it makes a difference on certain weak machines that are maxed out by decoding, whether for instance a UPnP server on a NAS transcodes FLAC to WAV or serves the FLAC. This may well be mass psychosis, but I allowed for the slight possibility because I didn’t prove otherwise, and your laptop example isn’t a directly comparable case.

Obviously, but I thought the topic was about lossless formats and WAV/FLAC/ALAC/AIFF equivalence.

Without going down this rathole, I think we can agree that Roon does all the decoding on the core, so it shouldn’t matter whether you’re playing a compressed file or a file that was decompressed beforehand. The OP is not going to see any benefit whatsoever.

Not as obvious as you might hope - and I did start typing this before your post - but not fast enough!

Absolutely :slight_smile:

Um. I don’t think so, but happy to be proven wrong. Can you link a reference?

I don’t think you need a reference. Use two different pieces of software to decode any mp3 to WAV and compare the bits.

The psychoacoustic models are in the MP3 encoder, and encoders most definitely can have an effect on SQ. However, regarding decoders, Wikipedia says:

Decoding, on the other hand, is carefully defined in the standard. Most decoders are “bitstream compliant”, which means that the decompressed output that they produce from a given MP3 file will be the same, within a specified degree of rounding tolerance, as the output specified mathematically in the ISO/IEC high standard document (ISO/IEC 11172-3). Therefore, comparison of decoders is usually based on how computationally efficient they are (i.e., how much memory or CPU time they use in the decoding process). Over time this concern has become less of an issue as CPU clock rates transitioned from MHz to GHz.

Exactly in principle, but even if there are differences due to the rounding tolerance, the SQ effect would be negligible (though a simple checksum comparison would show different files). In this case, loading both resulting WAV files into something like Audacity and subtracting one from the other would show a minimal difference.
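For anyone curious what that subtraction looks like without firing up Audacity, here is a rough Python sketch of the null test. The two decoder outputs are simulated (a copy of a test signal with sub-LSB rounding jitter stands in for a second bitstream-compliant decoder), so the WAV-file-reading part is omitted.

```python
# Null test sketch: subtract one signal from the other, measure the residual.
import math
import random

random.seed(0)
a = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
b = [s + random.uniform(-1.0, 1.0) / 32768 for s in a]  # within one 16-bit LSB

residual = [x - y for x, y in zip(a, b)]
peak = max(abs(r) for r in residual)
print(f"peak residual: {20 * math.log10(peak):.1f} dBFS")  # around -90 dBFS
```

A residual down near -90 dBFS is the scale of difference a rounding tolerance could produce at 16 bits, which is why it is inaudible even though a checksum would differ.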

(To avoid confusion, this only applies to lossy formats. There would be absolutely no difference between an original WAV and the file resulting from the same WAV-to-FLAC-to-WAV conversion.)

Thanks, folks - I DO realize that if I start with a lossy file, I can’t recreate quality that has already been discarded. I also realize that Roon converts formats on the fly on the core so that endpoints don’t have to.

But I’m wondering if the Roon core may actually be more heavily loaded than one might suspect. For simple playback, and even when doing format conversion, I’d think that the Roon core (pretty much regardless of platform) would have clock cycles to burn. HOWEVER - if one is using the DSP features of Roon (which also have to be applied “on the fly” during playback), those spare processor cycles can evaporate like water on a hot plate.

With some speakers that I’ve used before in my room, I’ll have as many as four or five DSP filters working simultaneously. Each has a different center frequency, a different Q, and a different amount of gain or cut. With the DSP calling for this much processor power, I’ve got to wonder whether asking the core to ALSO do file-type conversion isn’t taxing the system a bit much.
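For a sense of scale, each of those filters is typically a peaking biquad (the form in Robert Bristow-Johnson’s “Audio EQ Cookbook”), parameterized by exactly those three numbers: center frequency, Q, and gain. This rough Python sketch (the 120 Hz / Q 2 / -6 dB values are made up for illustration, not taken from anyone’s setup) shows that a band costs about five multiplies and four adds per sample, which is tiny next to what a NUC-class i7 can do.

```python
import math

def peaking_biquad(fs, f0, q, gain_db):
    """Normalized (b0, b1, b2, a1, a2) for an RBJ-cookbook peaking EQ band."""
    a_lin = 10 ** (gain_db / 40)          # square root of the linear center gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin
    a0, a1, a2 = 1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def run(coeffs, samples):
    """Direct-form-I biquad: five multiplies and four adds per sample."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# Hypothetical band: a -6 dB cut at 120 Hz, Q = 2, applied to a 120 Hz tone.
c = peaking_biquad(44100, 120, 2.0, -6.0)
out = run(c, [math.sin(2 * math.pi * 120 * n / 44100) for n in range(44100)])
peak = max(abs(y) for y in out[22050:])   # steady state, after settling
print(f"{20 * math.log10(peak):.1f} dB")  # approx -6.0 dB: the programmed cut
```

Even five such bands at 44.1 kHz come to a few million operations per second, a rounding error for any modern CPU; the heavy DSP loads in Roon come from things like upsampling and convolution, not parametric EQ.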

Not having any test equipment, this isn’t something I could actually measure. My ears have told me, though, that sticking to plain WAV format usually produces the cleanest sound. To date, I’ve noticed nothing but positive effects from the application of DSP, but I don’t think I’ve actually tried DSP plus simultaneous format conversion.

My previous DSP experiments were with an older Mac mini. With my current i7 NUC, I’m thinking that I probably have a more generous headroom in terms of processor cycles to spare?

No, it wouldn’t. On my laptop, with a CPU similar to a regular NUC’s (8th-gen i7), decoding a 35 MB FLAC file containing a 5-minute song takes 0.5 seconds. This uses the time command to time the decoding; the result of time is at the end. The real value is the elapsed wall-clock time, while user and sys are the CPU time spent in user space and in the kernel.

time flac --decode "d1t01 - 2raumwohnung - in wirklich - da sind wir.flac"

flac 1.3.4
Copyright (C) 2000-2009  Josh Coalson, 2011-2016  Xiph.Org Foundation
flac comes with ABSOLUTELY NO WARRANTY.  This is free software, and you are
welcome to redistribute it under certain conditions.  Type 'flac' for details.

d1t01 - 2raumwohnung - in wirklich - da sind wir.flac: done         

real	0m0,532s
user	0m0,427s
sys	0m0,095s

If you are using DSP to change the sound, the DSP itself surely has orders of magnitude more effect than any CPU-load side effects ever would. (And with Roon, such a side effect would be zero, because the decoding and DSP happen on the core and are 100% isolated from the endpoint, at least when using RAAT.)

DSP uses a lot more CPU than decoding, so it’s very unlikely that adding decoding on top will push the core over the edge. You can of course apply your DSP and then look at the processing speed factor during playback.

It looks like you’ve made up your mind already.