Of course in networking we have UDP as an unreliable alternative to the reliable TCP, because of cost and performance. And of course reading an audio CD is unreliable, because 1982. But I have never heard of an unreliable option in USB, and Wikipedia doesn’t seem to mention one.
But in any case, my points were that the issue of separation is more about noise than errors, and jitter is a distortion problem unrelated to bit errors.
But I’m pretty sure. And I suspect you are reading USB data standards vs. audio standards. But that is not definitive, is it? So if someone INTIMATELY knowledgeable of USB streaming audio wants to chime in, please do.
UDP is subject to packet ordering (or loss, or duplication) issues, but not corruption: it still carries a checksum that can be checked for corrupted bytes – slightly different from SPDIF’s transmission, where actual bits can be read incorrectly.
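To make the "checksum catches corrupted bytes" point concrete, here is a minimal sketch of the 16-bit ones'-complement checksum that UDP uses (the RFC 1071 style computation); the payload bytes are just an illustrative placeholder:

```python
def inet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum, as used by UDP/IP (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return (~total) & 0xFFFF

packet = b"some audio payload"
good = inet_checksum(packet)
corrupted = bytes([packet[0] ^ 0x01]) + packet[1:]  # flip one bit
print(good != inet_checksum(corrupted))  # → True: the bit flip is detected
```

A single flipped bit always changes a ones'-complement sum, which is why the receiver can discard the datagram rather than hand corrupted bytes to the application.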
USB has a similar thing: Isochronous and Bulk transfers are both CRC-checked. Bulk transfers additionally provide a TCP-like delivery guarantee. Audio goes over the Isochronous transfer mode.
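For the curious, the CRC that USB applies to data packet payloads is a CRC-16 over the polynomial x^16 + x^15 + x^2 + 1 (0x8005), bit-reflected, with the result complemented. A sketch of that check, with a made-up payload:

```python
def usb_crc16(data: bytes) -> int:
    """CRC-16 as used on USB data packets: polynomial x^16 + x^15 + x^2 + 1
    (0x8005), bit-reflected, initial value 0xFFFF, result complemented."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001  # 0xA001 is 0x8005 bit-reversed
            else:
                crc >>= 1
    return crc ^ 0xFFFF

payload = bytes(range(16))
crc = usb_crc16(payload)
corrupted = payload[:3] + bytes([payload[3] ^ 0x40]) + payload[4:]
print(usb_crc16(corrupted) != crc)  # → True: the flipped bit is caught
```

So the receiving end knows a packet arrived damaged; what differs between Isochronous and Bulk is what happens next (discard vs. retransmit).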
That is correct. It’s not USB Audio, however; it’s USB Isochronous transfers. USB Audio happens to use Isochronous transfers. This is important to note because it would theoretically be possible to build a USB audio protocol (it would require a special driver) sitting on top of Bulk transfers (which do not suffer packet loss), or to have USB Audio itself build a reliability mechanism with retransmits. Both of these solutions, however, are bad for the real-time nature of audio, so accepting loss is actually the best solution. This is why VoIP protocols usually use UDP instead of TCP: guaranteeing reliable delivery has bad consequences in a real-time system.
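A back-of-the-envelope illustration of why retransmits fight real-time playback (the sample rate and recovery delay here are assumed figures, not from any spec):

```python
sample_rate = 48_000          # samples per second (assumed stream rate)
frame_ms = 1                  # full-speed USB delivers one frame per millisecond
samples_per_frame = sample_rate * frame_ms // 1000

# If a lost frame were only recovered after, say, 4 ms of retransmit
# round-trips, the DAC would run dry unless it buffered at least that
# much audio up front -- i.e. reliability is paid for in added latency.
retransmit_delay_ms = 4
extra_buffer_samples = sample_rate * retransmit_delay_ms // 1000

print(samples_per_frame)      # → 48: samples riding in each 1 ms frame
print(extra_buffer_samples)   # → 192: samples of latency just to hide one loss
```

With loss-tolerant isochronous delivery, a dropped frame costs 1 ms of audio (usually inaudible); with guaranteed delivery, every listener pays the worst-case retransmit latency all the time.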
It should also be noted that USB packet loss is very rare, and requires either a faulty cable or a badly behaving chip/driver. Some noise isn’t going to cause packet loss, and $5 Monoprice cables are more than adequate at providing shielding.
If your computer has an analog output, the signal sent from it is an analog signal. If your output is digital (HDMI, SPDIF, USB, network, etc.), then the signal sent from it is digital.
The FLAC holds digitized audio data, and that data at some point must get turned into an analog signal to drive the magnets in your speakers. That is the job of a DAC (Digital to Analog Converter) chip. Somewhere in the path from the FLAC to your magnets, you must have a DAC. (Microphones need the other side: an ADC, or Analog to Digital Converter, chip.)
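The DAC's core job is just a mapping from sample values to voltages. A sketch of that arithmetic for signed 16-bit PCM, assuming a hypothetical 1 V reference (real DAC chips do this in hardware, with far more engineering around it):

```python
def sample_to_volts(sample: int, vref: float = 1.0) -> float:
    """Map a signed 16-bit PCM sample to an output voltage in [-vref, +vref).
    This is only the idealized arithmetic a DAC chip implements in hardware."""
    assert -32768 <= sample <= 32767, "out of 16-bit range"
    return sample / 32768 * vref

print(sample_to_volts(0))       # → 0.0: digital silence sits at mid-scale
print(sample_to_volts(-32768))  # → -1.0: full negative swing
```

Everything downstream of this mapping (filtering, amplification, your speaker magnets) is analog, which is why noise and clocking at the DAC matter even though the bits arrived intact.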
Danny - thanks. Part of my background is realtime market data delivery protocols for trading floors, which generally do not have packet-level delivery guarantees. As you indicate, a delivery guarantee (at the packet level anyway) would degrade performance such that it would not be fair to call it realtime. But I digress.
So it sounds like the way a DAC gets digital audio has no delivery guarantees, but does have CRCs. OK. So the DAC knows if something is amiss with a packet. And while it does not know if a packet was missed, that’s apparently rare. Awesome.
So if packets rarely get missed, and the DAC can at least detect bad packets (via a failed CRC check), then where does USB-based jitter arise from? Those bad packets? Noise? Both?
This thing about USB really being an analog signal is a red herring, created to sell $500 USB cables. It is correct in a trivial sense, in that electricity is an analog phenomenon, voltage levels going up and down. But USB and Ethernet solely represent digital values, zeros and ones, even though those zeros and ones are represented by millivolt levels. Ignore that, not relevant to this discussion.
The point is, USB and all the other protocols we talk about have an extremely small likelihood of packet loss, and essentially zero risk of undetected bit errors. There is certainly the potential that the cable drags noise into the DAC, which is a delicate mix of digital and analog electronics; a good DAC should not be sensitive to that, but it is possible that budgets and small form factors interfere. And jitter introduces audio distortion if the DAC takes its timing from the signal; the solution is not to do that: use asynchronous protocols or reclocking.
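To put a number on how much jitter matters, there is a standard data-converter result: the best SNR a full-scale sine wave can achieve when sampled with an RMS clock jitter of t_j is SNR = -20·log10(2π·f·t_j). A quick sketch (the 10 kHz tone and 1 ns jitter figures are illustrative assumptions):

```python
import math

def jitter_snr_db(freq_hz: float, rms_jitter_s: float) -> float:
    """Worst-case SNR limit imposed by sampling-clock jitter on a
    full-scale sine wave: SNR = -20 * log10(2*pi*f*t_j)."""
    return -20 * math.log10(2 * math.pi * freq_hz * rms_jitter_s)

# A 10 kHz tone through a clock with 1 ns of RMS jitter:
print(round(jitter_snr_db(10_000, 1e-9), 1))  # → 84.0 dB
```

84 dB is worse than 16-bit audio's theoretical ~96 dB floor, which is why asynchronous USB or reclocking (letting the DAC run from its own low-jitter crystal instead of recovering timing from the incoming signal) is the right fix rather than exotic cables.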
So overall, a storm in a teacup. Sells cables.
As for me, I’ve spent some moderate money on striped cables (USB and network and power) because belts and suspenders.
I bought a Mac mini today and just installed Roon. Holy crap I am impressed. The sound is great and the user experience is second to none. All of a sudden, all our other Macs can control the music everywhere.
There are those who swear that even on an isochronous USB data transfer, the SQ from the DAC can be improved by making the source data into the DAC ‘better’. Better seems to come from: galvanic isolation, re-clocking, re-packetising, using lower-powered computers, and other unknowns – plus adding ultra-clean linear power supplies to any kit involved in these processes, and expensive cables linking it all together.
This all fascinates me and I like to try to understand things as best I can, yet the more I read and ask questions, the more it seems to be an area of utter confusion - at least for me personally! I can see why - on one hand digital is digital, so if there’s sound coming out the speaker end then it got delivered OK, and there is no ‘quality’ level on 1s and 0s in that respect. But I’ve read a few articles now, including those by John Swenson, where this confusion is discussed at enough length and with enough authority that it seems completely plausible that things aren’t so simple and that noise is ultimately what it’s all about. Not everyone seems to agree though.

Plenty of people (cable & streamer manufacturers etc.) seem happy to keep us all uninformed (or misinformed) if it’s of benefit to their sales. For example, is it really conceivable that something should cost ten, twenty, or thirty thousand pounds when its sole mission is to get a digital data stream across a wire in isochronous mode without noise or jitter or whatever? Probably not. But these ones always seem to be ‘the best’ or ‘most natural sounding’.
Enough people hear a difference using high-end streamers that I can’t convince myself it’s all psychoacoustics or bias or whatever, yet it’s pretty hard to find out the exact processes involved. I sometimes hear an improvement after I reboot my Mac mini. Did I really hear it, or is it just an expectation? I’ve no idea, and there’s probably no way to know, which I think is part of the problem in understanding.
It’s good this topic gets discussed in an open way that encourages learning and understanding, rather than devolving into those who say it’s impossible vs. those who say they hear the difference, as so often ends up the case throughout the web. Hopefully one day soon we’ll all be able to deal with the issue easily and cheaply, so that it isn’t an issue, or DACs will just do it for us. I’ve got high hopes for the Pi for some (probably unfounded) reason!
A fellow forum member pointed me to the John Swenson links below. For completeness it’s worth noting that he is involved in the design of a product which aims to address these issues. I bought one, and personally found it made no perceivable difference in my setup, so I removed it, didn’t miss it, and sold it. Also, many of the commenters (including Gordon Rankin) seem to disagree with John’s assumptions. I’m still open-minded though.
It’s an extremely contentious area but personally I’m very sceptical about megabucks streamers and the whole travelling circus around computer audio.
There are significant numbers who apply analogue paradigms to digital and expect the effects to be the same.
All the objective evidence from programs like Audio DiffMaker (which has really made proper comparative tests viable without the need for complicated ABX protocols) is that cables, operating systems (once correctly set up), disk types, magic stones, and hi-res vs. 16-bit (from the same master) make no difference.
Agreed. Luckily I’m at the point in my life when my ears are nowhere near as good as they were, so there seems little point in spending megabucks anyway. It’s also a bit ironic in that it’s taken this long to arrive at the point where we seem to be financially comfortable.