The joy of networking cables

There are many support and other threads on the forum that devolve into “fix your network,” to mixed responses, especially from the original poster who is having a mysterious problem with their Roon setup. I thought I was immune to this, as I’ve put some effort into building robust networks in both my Roon locations. Until…

Both my headphone setups use a Roon > HQPlayer > NAA endpoint > DAC > amp configuration. Roon and HQPlayer run on separate Ubuntu Server systems; the NAA endpoint is a UP Gateway mini PC running the Signalyst NAA image. This worked well for quite a while. But recently I started having problems where playback would get stuck or hiccupy, and eventually the only way to revive the setup was to reboot the HQPlayer server.

Since I had recently upgraded HQPlayer to version 5, suspicion immediately fell on it. There was a configuration issue around IPv6 that didn’t seem to matter in version 4; fixing it seemed to improve things, but didn’t fully solve the problem. So I started digging into the Roon and HQPlayer logs. The Roon logs did not show anything glaringly obvious, but the HQPlayer logs showed that the NAA endpoint would sometimes disappear, after which it was impossible to continue playback. I wondered if this was some new HQPlayer bug, but the HQPlayer developer suggested that there might be a networking problem. Strange, since I hadn’t changed the network since 2022. But I went digging into the Linux system logs on the HQPlayer server just in case, and guess what: Ethernet would momentarily drop out at exactly the times when playback failed.

What could that be? A problem with the server or the Ethernet switch? Much simpler: I replaced the Ethernet patch cable between the server and the switch with a new good-quality one from Blue Jeans, and that was it!
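For anyone who wants to look for the same symptom: on a Linux server, NIC link-state changes are logged by the kernel, so a flaky cable leaves a visible trail. The hostname, interface name (enp1s0), and driver (igb) in the sample below are made up for illustration; on a real system you would pipe `journalctl -k` or `dmesg -T` into the same grep.

```shell
# Hypothetical sample of what a flaky link looks like in the kernel log;
# in practice, run:  journalctl -k | grep -i 'enp1s0'
cat <<'EOF' > /tmp/kern-sample.log
Jan 10 21:14:02 hqp kernel: igb 0000:01:00.0 enp1s0: Link is Down
Jan 10 21:14:05 hqp kernel: igb 0000:01:00.0 enp1s0: Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
Jan 10 21:14:05 hqp kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0: link becomes ready
EOF

# Count link drops; anything above zero during playback is suspect
grep -c 'Link is Down' /tmp/kern-sample.log
```

If the timestamps of the `Link is Down` lines match the moments playback failed, the problem is below Roon and HQPlayer entirely: cable, connector, NIC, or switch port.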

How many frustrations on this forum may be due to bad Ethernet patch cables? We all have a box somewhere with random Ethernet patch cables from old devices, and it’s just too easy to grab one when installing new gear. The problem in this case is that the old cable was not totally bad, which would have been obvious because the server would be unreachable. Instead, it probably had a frayed or loose wire that caused the connection to drop momentarily. One giveaway was that when the connection came back up, it would be at 100Mbps, not 1Gbps.
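That 100Mbps fallback is easy to check for. On Linux, `ethtool` reports the negotiated link speed; since gigabit Ethernet needs all four wire pairs but 100BASE-TX only needs two, a cable with one damaged pair will often still link, just at the lower speed. The interface name and captured output below are hypothetical; on a live system you would feed `ethtool enp1s0` straight into the awk filter.

```shell
# Sample `ethtool` output from a link that renegotiated at 100 Mb/s
# (hypothetical interface name and values)
cat <<'EOF' > /tmp/ethtool-sample.out
Settings for enp1s0:
	Speed: 100Mb/s
	Duplex: Full
	Link detected: yes
EOF

# Warn if the link negotiated below gigabit;
# live version:  ethtool enp1s0 | awk -F': ' '/Speed/ { ... }'
awk -F': ' '/Speed/ { if ($2 != "1000Mb/s") print "WARN: link at " $2 }' /tmp/ethtool-sample.out
```

A link that shows `Speed: 100Mb/s` on hardware you know is gigabit-capable is a strong hint that the cable, not the software, is the problem.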

How would I have figured this out if I didn’t have the skills to make sense of logs?


@Fernando_Pereira - Thanks for sharing your experience.

I don’t have the knowledge to read logs, but the past has taught me that it is always a good idea to keep a couple of completely new, certified/tested Ethernet patch cables and fiber optic cables on hand.

Torben


Changing cables is perhaps the easiest thing to try when the network demons come to visit. After that is replacing wireless or EoP connectivity with a standard Cat 5/6 cable. Folks just need to acknowledge that advice to troubleshoot is just that: it isn’t criticism of the fancy cable or audiophile switch you are pulling out. You are just going back to a baseline that will help you determine where your problem might be. I personally wouldn’t dig into logs until after this simple stuff has been tried!


One factor is that female RJ45 connectors usually have pretty low mating-cycle specifications, so it happens more often than not that some contacts intermittently and very briefly lose connection, degrading signal quality.

The other problem is manufacturing tolerances, for both female and male connectors.

Background story:
At a former job, designing and manufacturing fire-detection electronics, we had problems with a batch in which the connector housings of the devices’ internal unshielded patch cables were on the slim side of the spec while the PC boards’ female receptacles were at the other end of their tolerances, causing brand-new controllers to intermittently lose communications.

This problem did not show up during the 14-day manufacturing run-in testing, only at a customer’s location - a real big pain, since that was overseas!
It was later shown that signal quality was just good enough not to fail with the shorter connections and less electrically noisy environment of our test setup, and that it failed only in the customer’s harsher environment.

With digital transmission, noise isn’t a problem until you cross a rather high threshold of tolerability, where the link suddenly fails completely, causing drop-outs.


I have a load of these Blue Jeans cables; I think each of mine arrived with a test-result print-out. The only downside is that they can be difficult to install, as they are very stiff.


This. I’ve seen this over and over in the past 30 years. Oxidation, I think, but very slow, until suddenly things fail completely. I’ve finally taken to making my own Ethernet cables (and testing them).


RJ45 connectors are ‘contact wiping’ by design, so all it may take is unmating and remating the connector to remove the oxidation layer, if any. Personally I have never seen any oxidation on old Ethernet cables, though admittedly they all would have had a very low number of matings, so the thin gold layer would be intact (and shiny).
Inside the connector it may be a different story, but those are usually insulation-displacement connections, which should be gas-tight and not prone to oxidation.