Do router and ethernet cables affect sound quality?

Does anyone know where the Tidal servers are located?
Would I get a better signal if I lived closer?
If they are in the States, how many routers, optical links, more routers and a last mile of thin cheap wire has the signal actually been through before it gets to my house in the UK?

How could I influence those bytes to improve sound quality over the last 10 feet?

I don’t think you can unless you have a really poor home network.

2 Likes

Although I haven’t added it up, I’d bet home network problems are the #1 trouble issue for Roon.

4 Likes

Thank you for this thread it is an absolute joy! :laughing:

One important clarification I feel should be made…

If you believe that ethernet cables or switches can affect sound quality then you are in the anti-science, anti-vax, flat earth camp.

Science/physics, information technology, electrical and electronic engineering say you are wrong.

So, own it. :yum:

4 Likes

I have shown several measurements that show how much RFI can be picked up by twisted pair and sent from a switch, so you can believe the earth is flat for all I care.

1 Like

I will leave the flat earth biz to you, you have it covered admirably.

1 Like

Time to permanently shut this one down.

1 Like

Why?? It’s gold!

1 Like

I agree @Jim_F. The beauty of “muted” in forums…

1 Like

Those “measurements” are not relevant to this topic since they concern interference on a wide-band receiver: the Ethernet cable isn’t receiving (picking up) the RFI, it is emitting it!

As I tried to explain earlier in this thread, it is normal for Ethernet to radiate over a range of frequencies (analogous to mains hum). However, this isn’t in the audio frequency range. Rather, the digital signal has a very broad frequency range, e.g. 1 MHz through 100 MHz for Cat 5e and up to 250 MHz for Cat 6. Whilst 50-60 Hz is audible, Ethernet causes out-of-band noise that is above the range of human hearing and does not cause audible interference. Moreover, such noise can be filtered without infringing on the audio frequency band.

By all means experiment and buy whatever takes your fancy. However, please don’t mislead those seeking advice with inaccurate–or just plain wrong–information.

9 Likes

@Phil_Ryan, no, your proximity to a streaming service is not important, other than it’s better if it’s somewhat consistently far away in the time domain and adequate real bandwidth exists. You may be able to improve performance by choosing a different streaming application, as time and bandwidth do change with Internet services, and that can be addressed in the application.

In the case of Roon, I expect Roon Core buffers the bits from Tidal first, and then sends them over RAAT to RoonBridge/Output. If so, Tidal could be on the moon, it really wouldn’t matter - as long as the moon stayed about the same distance timewise away and Roon Core was designed to handle that distance - RAAT already handles the LAN, so the last 10 feet of digital is already pretty optimized.

But even if time and bandwidth do change constantly, the nice thing about streaming non-live music/video etc. is that you can bulk-fetch the next bits and buffer them locally to remove that problem completely. Those bits are sitting on storage somewhere at the service, so it’s a simple file transfer in data terms. Most media playback systems today use algorithms to dynamically adjust the buffer size to suit the requirement, and fetch the bits well before they’re needed, but not too much before, to save bandwidth and use the buffers efficiently.

You can see this happen on YouTube. When you first press play, there is a brief pause while the timeline at the bottom starts to turn grey: this is the YouTube player filling its initial buffer. Then video starts to play and the timeline turns red up to where you are in the playback. While that’s happening, the timeline continues to grow beyond the current playback point, often in big leaps, which is the YouTube player fetching bits in anticipation of you wanting to watch the rest, but more importantly to ensure the buffer is sufficiently full that you won’t experience playback issues. All the clocking etc. is done by the player/PC/Mac/phone; the network is irrelevant, as long as there’s enough of it.

This is a well-proven approach, and is taken to extremes for video playback over the Internet on services like Netflix, due to the unknown and changing bandwidth between the service and the viewer. It’s why the video can be blocky for a short time, then clean up: the player is using ABR (adaptive bit rate) to choose the “best” video quality based on the network throughput it measures. Network throughput has only a tenuous connection to link speed for traffic from the Internet. ISPs oversubscribe links (200:1 was common at one point) and packets are lost, causing back-off algorithms in the network protocols to kick in, so the actual throughput received may be considerably lower than the link speed purchased.
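The ABR idea described above can be sketched in a few lines: measure throughput, then pick the highest rendition that fits comfortably inside it. This is a hypothetical illustration, not any real player’s implementation; the rendition ladder and safety factor are made-up values.

```python
# Hypothetical ABR (adaptive bit rate) selection sketch. The rendition
# bitrates and the 0.8 safety factor are illustrative assumptions, not
# values from YouTube, Netflix or any real player.

# Available video renditions, in kilobits per second.
RENDITIONS_KBPS = [250, 750, 1500, 3000, 6000]

def choose_rendition(measured_throughput_kbps: float,
                     safety_factor: float = 0.8) -> int:
    """Return the highest bitrate that fits within a safety margin
    of the throughput the player has actually measured."""
    budget = measured_throughput_kbps * safety_factor
    fitting = [r for r in RENDITIONS_KBPS if r <= budget]
    # Fall back to the lowest rendition if even that doesn't fit.
    return max(fitting) if fitting else RENDITIONS_KBPS[0]

# Throughput collapses, the player drops quality ("blocky"), then recovers.
print(choose_rendition(8000))   # 6000
print(choose_rendition(1000))   # 750
print(choose_rendition(100))    # 250
```

The “blocky then clean” effect is exactly this function being re-evaluated as the measured throughput recovers.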

In your scenario, if you are timewise close to Tidal, and the actual bandwidth available is large compared to the streaming requirement, small buffers are needed; this is the case for LANs. By contrast, if you are far away, or the bandwidth available is not many multiples of the bandwidth needed, then large buffers are used, which is likely to be the case when fetching over a WAN/Internet. Roon Core probably fetches Tidal traffic using relatively big buffers (my Qobuz player on my phones behaves like YouTube, so I am sure Tidal is similar and Roon copies this behavior), and then RoonBridge probably does something similar as well, albeit using smaller buffers, because LANs are significantly more predictable, closer timewise, have oodles of unused bandwidth and very low loss, at least compared to the Internet. What I don’t know about RAAT is whether Roon Core pushes the bits to RoonBridge, or RoonBridge fetches them.
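A back-of-envelope way to see the LAN-versus-WAN buffer difference: size the buffer to ride out the worst interruption you expect. The stream bitrate and interruption times below are made-up examples, not Roon’s or Tidal’s actual figures.

```python
# Illustrative buffer sizing: enough buffered audio to keep playing
# through an expected worst-case interruption. All numbers are
# assumptions for the sake of the example.

def buffer_bytes(stream_kbps: float, ride_out_seconds: float) -> int:
    """Bytes of buffered audio needed to keep playing through an
    interruption of ride_out_seconds."""
    return int(stream_kbps * 1000 / 8 * ride_out_seconds)

# Roughly CD-quality FLAC at ~1000 kbps.
lan_buffer = buffer_bytes(1000, 0.1)   # LAN: ride out ~100 ms of hiccup
wan_buffer = buffer_bytes(1000, 10.0)  # Internet: ride out ~10 s of hiccup
print(lan_buffer, wan_buffer)          # 12500 1250000
```

The hundredfold difference in buffer size falls straight out of the hundredfold difference in how long an interruption you plan for, which is why a WAN fetch wants megabytes where a LAN fetch gets by on kilobytes.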

It’s only at the DAC that clocks need to be precise, and this is where you do care about “signal”: it’s so that the linear operation of the actual D-to-A conversion is fed with bits at the right time. If that clock is poor, it will be audible. A good (but not perfect) analogy is a leaking bucket. The rate the water drips from the bucket is constant, which is like bits clocked out of the buffer and fed to the D-to-A process inside the DAC, as long as there is always enough water in the bucket. But you can add water to the bucket in large amounts and pretty irregularly, keeping it from ever being empty while ensuring it never overflows.
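The leaking-bucket analogy can be simulated in a few lines: the DAC drains the buffer at a perfectly steady rate while the network refills it in irregular bursts, and as long as the buffer never empties, the upstream irregularity is invisible at the DAC. All the numbers below are illustrative.

```python
# Tiny simulation of the leaking-bucket analogy. Units, capacity and
# the refill pattern are made-up for illustration.

CAPACITY = 100          # buffer capacity in arbitrary "bits"
DRAIN_PER_TICK = 5      # steady consumption by the DAC clock

# Irregular network deliveries: big bursts, then nothing for a while.
refills = [60, 0, 0, 0, 0, 0, 40, 0, 0, 0, 50, 0, 0, 0, 0]

level = 50              # start half full (initial buffering)
underruns = 0
for arriving in refills:
    level = min(level + arriving, CAPACITY)   # refill, clip at the brim
    if level < DRAIN_PER_TICK:
        underruns += 1                        # would be an audible dropout
    else:
        level -= DRAIN_PER_TICK               # steady drip to the DAC
print("underruns:", underruns)                # underruns: 0
```

Despite the network delivering nothing at all for runs of several ticks, the DAC side never starves; that is the whole argument for why the last 10 feet of digital don’t need “influencing”.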

Back to audio: with synchronously connected systems (SPDIF/TOS, AES/EBU etc.) the clock needs to be accurate on the streaming player, because the DAC will sync its clock to the received bitstream using a PLL. But if the DAC is connected asynchronously by USB, the clock only needs to be very accurate on the DAC. BUT, asynchronous USB is not as asynchronous as Ethernet, and there is some expectation of when frames should be available; it’s a non-trivial protocol and there is more variability in implementation, or so it seems.

But for well-implemented, asynchronously connected DACs, all the preceding equipment (switches, routers, cables, servers, services etc.) need only be fast enough and reliable enough to ensure that the DAC is never starved of bits to perform optimally, and that’s “pretty easy” to do: the streaming player/application just needs to use an appropriately sized buffer and an algorithm that can adapt to the expected variations. Of course, once the bits are converted to analogue, all bets are off; we’re back to pure HiFi at that point.

In terms of fixing a given home network: a brand-name switch (you don’t need Cisco: Linksys, Netgear, TP-Link, Ubiquiti etc. would be perfect), pre-terminated cables and a drill (to put holes in your walls) are all you need, plus a willingness to drill those holes, pull cable etc.

2 Likes

The improvements we heard were more air/ space around instruments and voices, greater clarity and more detail. The end result to us was that the music seemed more present, more live.

2 Likes

Geoff, possibly a “Clever Hans” test and not a true blind test, but I did what I reasonably could not to influence the results. I did not listen to the Wireworld cables in advance, so I had no preference (though one could argue that since I knew the price of each cable I subconsciously had certain expectations, although I was skeptical there would be a difference). I simply told my wife and friend “this is cable 1, this is cable 2” etc (and mixed up the order in which they were used for subsequent listening tests) and didn’t let them see the cables. If you think any subtle cues I subconsciously gave my wife influenced what her ears were telling her, you don’t know my wife.

1 Like

Peter, the point is we don’t need to know your wife to know that neither you nor your wife (or anyone else for that matter) can be conscious of “subconscious cues” or “subconscious expectations”. The adjective “subconscious” is commonly defined as “operating or existing outside of consciousness”…

10 Likes

Also, the comparison to flat earthers is unfair, because they are at least doing experiments trying to prove their claim.

1 Like

That’s circular reasoning: I know I can hear the difference. I know I can trust my ears. So the difference must be measurable. They just don’t know how to measure it yet… :thinking:

4 Likes

Well written!

About measurement: some people have way too much faith in measurements when it comes to determining sound quality. Measurements should be seen as a baseline, not as proof that something sounds good or bad.

Let’s give an example: the Raspberry Pi 4 (which I have and use). According to measurements, it’s perfect for sound, no visible noise etc. And yet, when one listens to it, it has a sharp high end and a somewhat digital sound. I run mine with a USB tweak after it, and power it with a battery, and both made very clear improvements. And yet I am pretty sure there are streamers out there that sound much better.

And this is in front of a DAC that’s known to be very good at cleaning up USB signals: the RME ADI-2 DAC.

1 Like

If you read up the thread there have been plenty of people trotting out the flat-earther claim etc the other way around. I was just redressing the balance for those of us on the side of the science, and the objective.

2 Likes

Measurement is not proof? How do you think science and technological progress happens? It happens by measurement, changing something and then measuring again. It doesn’t happen by people wanting or believing it to be so.

2 Likes

Read what I said: I said measurements are not proof of sound quality. They are, however, proof of what they measure (like jitter, noise floor etc). In my opinion, those measurements are a baseline or an indication of how it will sound, but very far from the whole story.

Exactly so, @Magnus. Conversely neither is hearing a subjective change proof that the science or engineering is somehow wrong or limited in its scope/understanding of digital audio (not that I think you’re saying that).