I am a committed RoPieee user and supporter, and I don’t mind the Raspberry Pi assembly process at all (now that I have a collection of brass spacers and whatnot). But I have been wondering whether there is any technical advantage to using a commercial product [EDIT: meaning one with no DIY component] as an end point. Some assumptions:
RAAT from ROCK core, connected over ethernet
Digital S/PDIF output to a capable DAC, via AES/EBU for example
Other features of a network streamer not used, e.g. Roon only
Display information not essential (use RoPieee + Pi display for track / album viewing purposes)
Are there technical features or improvements that commercial products offer, given these assumptions?
I assume you’re using some form of HAT given S/PDIF output?
Anyway, in my Darko review of the Pro-ject S2 Stream Ultra network streamer I referenced this article (or rather post) in which the designer of the product explained the technical limitations of a standard Pi and how these were overcome in the Stream Ultra (which is based on a Pi, hence the post title).
The Stream Ultra is USB output only, hence my first question, but I think the technical explanation is useful.
While we’re clearly beyond cable snake oil in terms of engineering effort, I see a rather interesting word salad about design process, with no actual demonstration that any of this has any impact at all.
As long as the DAC’s USB input is reasonably well designed, has any streamer manufacturer been able to reliably demonstrate a difference at the DAC’s output?
I own the USBridge as well (and had heard about the Pi’s USB output issues).
I’m sincerely curious about these things, and not playing mean objectivist here. I just genuinely can’t, for the life of me, understand what differences these things actually make. I’m ready to gamble a little bit to get a USBridge over a Pi, but outside of the (sometimes truly) superlative casework, I just don’t get what it is the higher-end stuff brings to the table. “It sounds better to me” (and the accompanying “not everything is measurable”) is so easy to achieve with a bit of suggestion that it’s unsatisfactory to me as an explanation of what’s going on here. I completely believe you when you say it; I just wish the manufacturers would take the time to give us an explanation that can’t easily be dismissed with a wave of the PSYCH 101 hand.
I’d be interested in your thoughts about what the downstream gear would most “need” in terms of a digital signal that end points would vary in providing, given RAAT and Roon Ready end points. In other words, from an engineering perspective, what do some commercial network streamers change / add / create, and why?
In my case, one of my downstream devices is a Bel Canto DAC 2.7 via AES/EBU, if that sets context.
I think the question should be: is there any technical advantage of using a retail product as an end point? HiFiBerry, Allo et al are businesses too.
We both use S/PDIF, and some would argue that this interface is archaic. Yet when paired with a quality DAC, performance is on par with the best USB streamers.
So, does the Pro-Ject Stream Box S2 Ultra, for example, perform better than the AURALiC Aries, AURALiC Aries Mini, Metrum Ambre, Metrum Baby Ambre, HiFiBerry Digi+ Pro, Allo DigiOne, Allo DigiOne Signature etc.?
The only way to find out is with measurement or auditioning–not so easy–but I would hazard a guess that the differences between them are not going to be earth-shattering when paired with a good DAC. @RBM sums things up nicely in this post:
Most of the streamers listed use the RPi (or RPi Compute Module), and all of their designers seem to understand the shortcomings of these computers and address them–in their own ways–with their own designs. That usually means improvements to timing, noise and power supply. Since the DAC doesn’t control the clock of an S/PDIF connection, I think a lot of the claims some manufacturers make about their devices are moot, because the age-old issues of jitter and noise are often addressed by the DAC’s input circuits, e.g. clock recovery, RF filtering and reference power supplies.
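The jitter concern can be put into rough numbers. Here is a back-of-the-envelope sketch (my own illustration, not from the post above): the worst-case amplitude error caused by a sampling-time error Δt on a full-scale sine of frequency f is about 2π·f·Δt of full scale, since that is the maximum slope of the waveform. So 1 ns of clock jitter on a 10 kHz tone produces errors around −84 dBFS, which is the scale of effect a DAC’s clock-recovery circuit is trying to push down further.

```python
import math

def jitter_error_dbfs(freq_hz: float, jitter_s: float) -> float:
    """Worst-case amplitude error, in dBFS, from a sampling-time error
    `jitter_s` on a full-scale sine at `freq_hz`.

    Derivation: error ~ slope * dt; the max slope of sin(2*pi*f*t)
    is 2*pi*f (as a fraction of full scale per second).
    """
    err = 2 * math.pi * freq_hz * jitter_s  # fraction of full scale
    return 20 * math.log10(err)

# 1 ns of jitter on a 10 kHz full-scale tone:
print(round(jitter_error_dbfs(10_000, 1e-9)))  # -84 (dBFS)
```

Note the frequency dependence: the same 1 ns of jitter on a 1 kHz tone lands 20 dB lower, which is one reason jitter artifacts are hardest to keep inaudible on high-frequency content.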
I’d like to try the Baby Ambre, but I think it is hard to justify the price for a potentially small gain. So my money (for now) will go toward new speakers.
I absolutely don’t feel qualified to opine, but I don’t see why anything that’s known to properly reject jitter over S/PDIF, or that isn’t known to be absolutely terrible over USB (a certain ironically named brand is known to have issues there, if I’m not mistaken), would sound any different, in a measurable way, driven by a $10k ethernet-to-USB bridge versus a $100 one. I simply have not seen direct proof that there’s any difference, so I don’t see any reason to even worry about it. I’d be surprised if there were any measurable differences significant enough to be audible.
I can also see how going USB and relying on the really nice crystals inside higher-end DACs would make intuitive sense, given they’d likely be better than those on a $30 S/PDIF HAT (or at least, I’d hope they’d be). But beyond that, within reason (i.e., no completely broken PSUs on the DIY side), and as long as the discussion is about bridges and not units with really nice integrated DACs…
Electrical connections (USB, S/PDIF coax, AES, I2S) that carry encodings of digital signals also carry some electrical noise. DACs can be sensitive to that electrical noise leaking into the analog signal reconstruction. Some otherwise good DACs are less able to filter that noise on some inputs than on others. To avoid obvious listener bias, I’ve tested various sources with various DACs with an independent listener who is not informed of the nature of the different setups they are listening to. They were able to consistently differentiate between sources in some cases, which led me to adopt certain digital sources for certain DACs rather than other sources. For the DACs I own or have owned, S/PDIF from a decent streamer dominated USB (meaning that it was always as good or better). For a couple of DACs that take I2S, I2S from a purpose-designed source worked best. I also own DACs where USB is as good as S/PDIF or AES, even with a relatively economical custom USB source (such as Allo USBridge or Pi 2 Designs 502DAC, with LPS). But I’ve given up USB direct from the Pi long ago given the above experiments. YMMV.
The technical advantages are not always consistent or easy to identify, as they can vary between streamers and connected DACs. The practical advantages of a properly engineered streamer (or in my case, networked DAC) are easy to identify. A single mains-powered box. Ethernet in, balanced audio out. DSD up to 128 handled (512 via USB), MQA handled. Roon endpoint handled. It can do pre-amp duties, has a good headphone amp and, last but not least, it doesn’t look like a dog’s dinner of cables, PSUs and little black boxes. Obviously the downside is you have to pay for the trouble they have gone to to make it all work well.
Solid standard measurements are table stakes. Devices that fail them should be suspect. But assuming that those measurements give a full characterization of a device’s effect on what is heard is foolish. Much is still unknown about human sound perception. Traditional linear signals and systems theory is highly simplifying even relative to what is already known, such as psychophysics experiments that show hearing temporal resolutions that in theory should not be possible.
The problems with measurements are simple. Not all measurements have relevance to sound, and not everyone is well versed in how to interpret them. Nothing ever ‘fails’ measurements: there is no predetermined threshold below which it is all good and above which it is all bad. For stuff like an SBC, measurements are virtually useless because results vary so much in use. One person struggles with 24/96 while another streams DSD256 without issue. Try to remember, measurements are one tool in a box of mixed measures, both objective and subjective.