Honestly I facepalm every time I hear this nonsense about USB degrading the audio. It’s a gross misunderstanding of the USB specification and how digital audio works. In fact, ethernet would suffer from similar issues. For context, I’m a software engineer w/ around 8 years exp in high-performance, low-latency computing, so I’m not just making stuff up here… I’m intricately familiar with what’s involved.
With USB, a form of error detection called a CRC (cyclic redundancy check) is used to catch corrupt packets; data packets carry a 16-bit CRC. Note that it detects errors, it doesn’t correct them. USB audio is sent over isochronous transfers, which have no retransmission, so if electrical noise or an act of god (eg; cosmic radiation) corrupts a packet in transit, it simply gets dropped. You won’t hear a somehow “lower fidelity” audio stream. It will just cut out. If you keep losing packets (eg; a faulty connection, too long a run of wire, etc.) you’ll hear constant stuttering and dropouts. USB DACs typically keep only a tiny internal buffer because they want to output audio at the lowest possible latency. So they’ll just cut off like that.
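To make the CRC point concrete, here’s a toy bit-by-bit sketch of USB’s data CRC-16 (polynomial 0x8005, shown here in its reflected form 0xA001). This is an illustration, not how the hardware actually computes it, and the “packet” is made up:

```python
def crc16_usb(data: bytes) -> int:
    """Bit-by-bit CRC-16/USB: poly 0x8005 (reflected 0xA001),
    init 0xFFFF, final XOR 0xFFFF. Detects corruption; can't fix it."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

packet = bytes(range(32))                        # pretend audio payload
good = crc16_usb(packet)

corrupt = bytearray(packet)
corrupt[7] ^= 0x10                               # one bit flipped in transit
assert crc16_usb(bytes(corrupt)) != good         # mismatch -> packet dropped
```

The point: a flipped bit doesn’t make the audio “sound worse”, it makes the CRC mismatch and the whole packet gets discarded.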
With ethernet, there are additional layers of checksums (both at the link layer and at the transport layer), but the behavior can still be the same. Either the corrupted (or missing) packet is dropped (if you’re using UDP), or it’s retransmitted (TCP), which in practice still means a dropout unless you have a sufficiently large buffer… and then you’ve given up the ability to play audio at low latency, eg; for movies or games.
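The buffer-vs-latency tradeoff above is just arithmetic. A sketch, with an assumed sample rate and round-trip time (numbers are mine, purely illustrative):

```python
def buffer_latency_ms(frames: int, sample_rate: int = 48_000) -> float:
    """Latency a playback buffer adds: frames buffered / frames per second."""
    return frames / sample_rate * 1000

# To survive one TCP retransmission you must buffer at least one round trip.
rtt_ms = 30                                   # assumed network round trip
frames_needed = int(48_000 * rtt_ms / 1000)   # 1440 frames at 48 kHz
print(buffer_latency_ms(frames_needed))       # 30.0 ms of added latency
```

So every millisecond of network misbehavior you want to paper over is a millisecond of latency you pay for, which is exactly why low-latency use cases can’t hide behind big buffers.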
Thing is, ethernet can absolutely be noisy! For context, even on a trading system processing 10 Gb/sec of data on absolute top-of-the-line networking hardware with custom ASICs for error correction, you’d still encounter a corrupt packet every few weeks. On a home network with consumer-grade gear? 100x that. Where ethernet shines is long runs of cable. And obviously host-to-host communication.
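To put those numbers in perspective: even the “100x worse” consumer error rate is vanishingly rare at audio bitrates. Rough back-of-envelope math, riffing on the figures above (all assumed, order-of-magnitude only):

```python
# One corrupt packet per ~2 weeks at 10 Gb/sec on top-tier gear (figure from
# above, treated crudely as a per-bit error rate); consumer gear 100x worse.
bits_per_error = 10e9 * 86_400 * 14        # bits between errors, pro gear
ber_consumer = 100 / bits_per_error        # rough error rate, consumer gear

audio_bps = 48_000 * 24 * 2                # 48 kHz / 24-bit / stereo PCM
seconds_per_error = 1 / (audio_bps * ber_consumer)
print(seconds_per_error / 86_400)          # days between glitches (hundreds)
```

Which is the whole point: when an error finally does land, you get one dropped packet, not weeks of subtly “degraded” sound.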
What about jitter/errors in the clock domain? Excellent question. Over ethernet, you can send the data in large chunks that are then played out using the output device’s own clock. Over USB, audio is typically streamed as PCM in real time, which depends on having a stable clock. The solution? Almost every modern “audiophile grade” DAC runs asynchronously and reclocks the data using its own internal clock. So if you’re using a poorly designed USB DAC that slaves its timing to the host, you could definitely have a degradation in sound quality this way. If you’re using a streamer/USB DAC with its own internal clock, then no, it’s not possible.
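Here’s a toy model of what that reclocking buys you. In asynchronous USB audio, the DAC consumes samples on its own stable clock and feeds back to the host to send slightly more or fewer samples, keeping a small FIFO near half full. Every number here (rates, FIFO size, feedback gain) is made up for illustration:

```python
def simulate(host_rate=48_010, dac_rate=48_000, seconds=10, fifo_size=1024):
    """Toy async-USB loop: host clock is 10 ppm-ish off, DAC clock is truth.
    Feedback nudges the host's delivery rate to hold the FIFO at half full."""
    fifo = fifo_size / 2           # start half full
    rate = host_rate               # host's (slightly wrong) delivery rate
    lo = hi = fifo
    for _ in range(seconds * 100):           # 10 ms control steps
        fifo += rate / 100                   # host delivers samples
        fifo -= dac_rate / 100               # DAC consumes on its own clock
        rate -= 0.05 * (fifo - fifo_size / 2)  # feedback toward half full
        lo, hi = min(lo, fifo), max(hi, fifo)
    return lo, hi

lo, hi = simulate()
assert 256 < lo < hi < 768       # FIFO never underruns or overflows
```

The output timing comes entirely from the DAC’s clock; the host’s jittery, slightly-off clock only affects how full the FIFO is, and the feedback keeps that bounded. That’s why host-side “noise” can’t show up as jitter at the analog output of a well-designed async DAC.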
The biggest gains you’ll get from a streamer are if it has a better internal DAC. Conversion to the analog domain can genuinely introduce distortion and noise, even in the best-designed systems. Well, that and the convenience features streamers offer.