Many others and I have begun using AudioLinux with great success. It’s not expensive, has great support, and isn’t difficult to install or update. Most importantly, the music sounds great using this low-latency OS.
OK, I’ll bite: what exactly does ‘low-latency’ for an OS have to do with SQ?
I thought it was important for music production, for synchronization of multi-track recordings and so on, but benefits for music playback? I can’t see it myself…
Men were sent to the moon using what amounted to an Atari 8-bit gaming console.
Now, using 21st-century hardware, one needs a low-latency OS to get the best SQ when converting one’s music files?
BTW - Devices become I/O bound before they become CPU bound.
Throughput and latency (and CPU utilisation) may be important when shifting lots of data across a network or for real-time broadcasting but I don’t see how this will impact “sound quality” in a Roon ecosystem. Most music listening does not need low-latency; we’re listening to recordings not real-time performances.
As I understand it, the lower the latency, the less distortion. Many people are reporting that lower-latency hardware and software result in better sound quality. There are influences that are more important to reduce or consider before, or while, attempting to reduce latency, such as noise from sources like the power supply. Low power, low impedance and low latency are the biggest considerations for hardware.
The answer to your question is that it helps improve SQ. If you want to go to the moon with an Atari when the technology and knowledge have advanced beyond that, it’s your choice.
Perhaps what Slim was wondering was, what’s the causal hypothesis about how a low latency OS would result in lower distortion?
I think there is some confusion between latency and jitter.
The former is simply a delay in time. If a few milliseconds pass between your pressing Play and being able to hear any sound, that is an example of latency (perhaps caused by something as simple as drive access time).
Jitter is inconsistency in the timing of the reproduced samples. For CDs, each sample (which describes the amplitude of the signal at that point in time) should last exactly 1/44,100th of a second. If this timing varies a bit from one sample to the next, it distorts the resulting waveform. Jitter is quite audible on a decent system. No golden ears required.
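To make that concrete, here’s a quick back-of-envelope simulation (plain Python; the 1 kHz test tone and the deliberately huge ±2 µs jitter figure are my own illustrative choices) showing how timing errors alone corrupt the sample values of a perfectly clean sine:

```python
import math
import random

RATE = 44_100     # CD sample rate (samples per second)
FREQ = 1_000.0    # test tone frequency in Hz (illustrative)
JITTER = 2e-6     # +/- 2 microseconds of timing error per sample (illustrative)
N = 44_100        # one second of samples

random.seed(0)

def sample(t):
    """Amplitude of the test sine at time t."""
    return math.sin(2 * math.pi * FREQ * t)

# Compare ideal samples against samples taken at slightly wrong instants.
err_sq = 0.0
for n in range(N):
    t_ideal = n / RATE
    t_jittered = t_ideal + random.uniform(-JITTER, JITTER)
    err_sq += (sample(t_jittered) - sample(t_ideal)) ** 2

rms_error = math.sqrt(err_sq / N)
print(f"RMS error from ±{JITTER * 1e6:.0f} µs of jitter: {rms_error:.5f}")
```

The error scales with the signal’s slew rate, which is why a given amount of jitter hurts high frequencies more than low ones.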
IMO, one of the many brilliant decisions the Roon team made was to send only the data, without embedding a clock signal, to Roon Ready devices when using the network connection. This allows us (for example) to use our own expensive and highly accurate clocks when reproducing the music rather than having to be dependent on an external, incoming word clock that may not be nearly as good. In this way, I can have okay sound in my kitchen while cooking without having to spend a lot of money; but I can have amazing sound in my living room when listening to my music system.
There is no relationship between latency and ‘distortion’ for digital audio.
Well I didn’t correlate latency and distortion until after he commented. Doesn’t matter really. As for a causal hypothesis, the theory is based on experience of several people testing and communications from some streamer/endpoint manufacturers.
This post is the beginning of a discussion that continued for a couple weeks. I suggest reading the posts from @romaz specifically.
I could be wrong and it may not actually be distortion as I was repeating what someone else said, but I would like to know why you don’t believe there’s a relationship.
no confusion between the two
Is that an oxymoron? Surely a decent system would eliminate jitter? But the truth is that jitter is only audible when the timing errors are significant, and by significant I mean jitter of maybe 1 or 2 µs (millionths of a second). The aforementioned Allo DigiOne, by contrast, is specified at 0.4 ps of jitter (trillionths of a second).
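For a sense of scale, a standard approximation for the SNR limit that RMS jitter t_j imposes on a full-scale sine of frequency f is SNR ≈ −20·log10(2π·f·t_j). A quick sketch comparing the two figures above (the 10 kHz test tone is my choice, not from the discussion):

```python
import math

def jitter_snr_db(freq_hz, jitter_s):
    """Approximate SNR limit in dB for a full-scale sine at freq_hz,
    sampled with RMS timing jitter of jitter_s seconds:
    SNR ≈ -20 * log10(2 * pi * f * t_j)."""
    return -20 * math.log10(2 * math.pi * freq_hz * jitter_s)

# 2 µs of jitter vs. 0.4 ps, evaluated at a 10 kHz tone.
snr_2us = jitter_snr_db(10_000, 2e-6)      # on the order of 18 dB
snr_04ps = jitter_snr_db(10_000, 0.4e-12)  # on the order of 150 dB
print(f"2 µs: {snr_2us:.0f} dB   0.4 ps: {snr_04ps:.0f} dB")
```

Roughly 18 dB of SNR would be grossly audible; roughly 150 dB is far below anything a DAC, amplifier or ear can resolve.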
Anyway, I’m reliably informed that tapping your feet while listening to music means there’s no jitter.
Thanks, I think I understand this better. I think the “latency” referred to is the tendency of a busy operating system to wait a millisecond or two before switching a new task to ‘running’, because its cores are all busy doing something else. Reducing the number of things the OS is doing should reduce that tendency. So, sort of a commercial alternative to ROCK, as I understand it. It would be interesting to hear more.
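If anyone wants to poke at this themselves, a rough proxy for scheduling latency is to measure how much later than requested a sleep actually returns. A minimal sketch (the 1 ms request and trial count are arbitrary; results vary enormously with OS, kernel configuration and load):

```python
import time

def sleep_overshoot_ms(requested_s=0.001, trials=50):
    """Ask the OS to sleep for requested_s and measure how much later
    than requested we actually get scheduled back in, in milliseconds."""
    overshoots = []
    for _ in range(trials):
        start = time.perf_counter()
        time.sleep(requested_s)
        elapsed = time.perf_counter() - start
        overshoots.append((elapsed - requested_s) * 1000.0)
    return max(overshoots), sum(overshoots) / trials

worst, mean = sleep_overshoot_ms()
print(f"mean overshoot {mean:.3f} ms, worst {worst:.3f} ms")
```

On a loaded general-purpose OS the worst case can be many times the mean, which is exactly the variability a stripped-down, real-time-tuned system tries to reduce.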
And the “distortion” resulting from too much “latency” would be non-linear snaps, crackles, and pops, I assume.
Most of the threads are suspended most of the time. A super-duper OS that supposedly eliminates threads, and thereby improves sonics, is unnecessary. Any modern CPU has more than enough juice to process audio files without resorting to eliminating threads that aren’t going to be executed most of the time anyway.
Oh yeah, the time penalty wouldn’t be milliseconds, but nanoseconds. How much would depend on the clock, since that is what’s timing the fetch and execution of instructions.
One would certainly think so. However, that rather busy AudioLinux web page has lots of measurements and graphs which may demonstrate otherwise. Can’t say that I saw that, in my admittedly superficial scan of the page. Don’t seem to be any head-to-head tests against ROCK, for instance. And the fact that they bundle a desktop into it makes me wonder.
One thing to remember is that lots of people in the world are not using modern CPUs. My Roon Core was recently running on a Core 2 Duo. So perhaps AudioLinux is aimed at not-so-modern CPUs.
This is all immaterial with RAAT. Take a look at this:
Of course, you wouldn’t want to pipe TV Audio through a buffering scheme like this. But for music listening, streaming latency doesn’t matter because we are free to pre-fetch data out of your files–and we can solve user experience latency while keeping our large buffer sizes by buffering faster than real time, so even though we have 10 seconds of buffer in the chain, playback still starts in a few hundred milliseconds or less.
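The arithmetic behind that last sentence is easy to sketch. With made-up figures (the fill rate and start threshold below are my assumptions, not Roon’s actual numbers), a buffer that fills faster than real time gives a quick start despite holding many seconds of audio:

```python
# Toy arithmetic for the buffering scheme described above: a large buffer
# need not mean a slow start if it fills faster than real time.
# All figures are illustrative assumptions, not Roon's actual parameters.

FILL_RATE = 50.0         # buffer fills at 50x real time (assumption)
TARGET_BUFFER_S = 10.0   # total seconds of audio buffered in the chain
START_THRESHOLD_S = 0.2  # audio needed before playback begins (assumption)

# Wall-clock time until playback can start.
start_delay_s = START_THRESHOLD_S / FILL_RATE

# While playing, the buffer grows at (FILL_RATE - 1) seconds of audio per
# second of wall clock, so the remaining audio fills in:
time_to_full_s = (TARGET_BUFFER_S - START_THRESHOLD_S) / (FILL_RATE - 1)

print(f"playback starts after {start_delay_s * 1000:.0f} ms")
print(f"buffer fully primed {time_to_full_s:.2f} s into playback")
```

Under these assumptions playback starts in a few milliseconds, and the full 10 s of protection is in place a fraction of a second later.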
I think you described it well and better than I. I’d love to read a comparison between AudioLinux and ROCK and may have to try that myself. One of the benefits of AudioLinux is that you can run HQPlayer on it. This is not possible with ROCK.
You can strip down AudioLinux even further, and run it without the desktop as command line, and headless.
If you are going to redefine the meaning of latency and distortion to suit your theory then anything is possible in your world. Those words have very specific meanings, as does the word buffer. As Martin posted above, use of appropriate transmission protocols removes any issues with ‘latency’, however you want to define it.
Not my theory and I didn’t redefine the meaning of anything. I’m well aware of the meanings of these words.
The comment I made was that one thing caused another. I also said I could be wrong, as I was restating what someone else said. You may be surprised by what you don’t know. How do you know for certain that latency doesn’t impact sound quality?
In discussions like this it is important to keep the apples and the oranges distinct.
There are two general ways for a computer to send audio information in Roon:
A direct connection, usually by USB or SPDIF; and
A network connection, usually via Ethernet. Roon supports a number of particular network protocols, but let’s talk about RAAT for current purposes.
A direct connection carries with it the possibility of the computer affecting SQ. There are discussions about the effect of the protocol, the power supply and other sources of interference, including CPU activity. You can spend as much as you want tweaking and improving your computer to avoid or minimise the effect a directly connected computer has on SQ.
A network connection using RAAT transmits packets using TCP/IP. These packets are then buffered in the Output device (in my case a microRendu) which has a direct connection to the DAC.
The use of TCP/IP as the network protocol completely isolates the Output device from any timing issues in the Core computer. No one worries about the effect of latency of TIDAL’s servers on SQ because the file is streamed asynchronously using TCP/IP. You could, in theory, use carrier pigeons and smoke signals to transmit the packets. Sometimes I think TIDAL does .
So for a network connection using RAAT I would discount Core computer latency as a factor that could affect SQ.
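A toy simulation makes the isolation point concrete. Below, packets arrive with large, random network delays, but the endpoint plays each sample on its own fixed clock as long as the buffer stays ahead; all the delay figures are invented for illustration:

```python
import random

random.seed(1)

SAMPLE_PERIOD = 1 / 44_100  # endpoint's own clock period (seconds)
N = 1_000

# Packets arrive over TCP with wildly varying network delay (illustrative):
# each inter-arrival gap is between 0.5x and 3x a sample period.
arrival_times = []
t = 0.0
for _ in range(N):
    t += SAMPLE_PERIOD + random.uniform(-0.5, 2.0) * SAMPLE_PERIOD
    arrival_times.append(t)

# ...but the endpoint plays each sample on its own regular clock tick,
# starting half a second (the buffer delay) after the first packet.
BUFFER_DELAY = 0.5
play_times = [arrival_times[0] + BUFFER_DELAY + n * SAMPLE_PERIOD
              for n in range(N)]

# An underrun would occur only if a sample arrived after its play time.
underruns = sum(1 for a, p in zip(arrival_times, play_times) if a > p)

# Output timing is set purely by the endpoint clock, not by arrival times.
max_dev = max(abs((b - a) - SAMPLE_PERIOD)
              for a, b in zip(play_times, play_times[1:]))
print(f"underruns: {underruns}, worst output-interval deviation: {max_dev:.2e} s")
```

However ragged the arrivals, the output intervals stay locked to the endpoint’s clock; the network jitter is entirely absorbed by the buffer.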
What about a Roon Bridge device? Well, it is a computer and it is directly connected, so the possibility of influence exists. That is why people use minimal hardware and software in such devices. RAAT helps here by enabling the best clock in the system to control the timing. Using a minimal OS for such devices is conventional wisdom.