You can learn to recognise jitter if you wish. It’s an academic exercise for the professional, but in the real world, with good-quality hi-fi, jitter is understood and largely eliminated. Unless, of course, it isn’t…
That’s the $$$ question. Are we all assuming, or are we interested in testing these things out ourselves? Forgetting the asynchronous or network portion of the rig, I wonder how many people have tested for jitter or differences in the audio portion of the stream. For example, if you have a network box and a DAC in two different boxes, you might use an SPDIF or AES cable.
Now, people commonly say they hear differences between SPDIF and AES cables. Let’s look at how they do those tests (which IMO are flawed). They’ll buy 1.0 meter of brand A, 1.5 meters of brand B with a different connector, and 1.5 meters of brand C. They’ll say brand A is better than B, but not better than C: one is better in one area but not in another.
This is very flawed. No one controls for length or for the electrical properties of the wire. This is partially how I first learned of differences between cables.
For my SPDIF and AES cables, I ordered 3 SPDIF and 3 AES cables from the same place (Pro Audio LA). All SPDIF cables used the same connector and wire (Mogami 2964) and varied only in length. Similarly, the AES cables were made from the same wire (Mogami 3173) and the same Neutrik connector, varying only in length. Lengths were 1.5 feet, 3 feet, 6 feet, 10 feet and 18 feet.
I initially ordered different lengths as I wasn’t sure of the location or how close or far the equipment would end up from each other as I didn’t have a rack in that room at the time or the space.
For months, I used the 1.5-foot SPDIF cable, as the gear was stacked on top of each other. One day I had to move one piece of gear far away and was forced to use a 6-footer. Immediately I noticed a difference in sound. My first reaction was that this difference had to do with distance and separation; I didn’t even consider the cable as a factor, since it was playing without any dropouts. I went back and forth for a day or two, until I finally thought: why not put the gear back in the same place and just swap out the cable? Guess what, the difference still remained.
So I put it up on the forums to ask what was happening, and that’s when I learned about reflections, return loss, impedance mismatch, etc. Before that, I had no idea that digital cables could even make a difference. I had zero knowledge about any of this. It came purely from an accident.
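For anyone curious about those terms, here is a back-of-the-envelope sketch in Python. It's illustrative only: the 75-ohm SPDIF characteristic impedance is the nominal spec, but the 80-ohm load and the 0.66 velocity factor are invented example numbers, not measurements of any real gear. The point is just that a mismatch reflects a small fraction of the signal, and cable length sets when that reflection arrives back.

```python
# Back-of-the-envelope sketch of the reflection/impedance-mismatch idea.
# The 75-ohm SPDIF impedance is nominal; the 80-ohm load and 0.66
# velocity factor are invented examples, not measured values.

def reflection_coefficient(z_load, z_source=75.0):
    """Fraction of the incident wave reflected at an impedance step."""
    return (z_load - z_source) / (z_load + z_source)

def round_trip_ns(length_m, velocity_factor=0.66):
    """Time for a reflection to travel down the cable and back, in ns."""
    c = 299_792_458.0  # speed of light in vacuum, m/s
    return 2 * length_m / (velocity_factor * c) * 1e9

print(f"reflected fraction at 80 ohm: {reflection_coefficient(80.0):.3f}")
for feet in (1.5, 3, 6, 10, 18):
    print(f"{feet:>4} ft: reflection returns after "
          f"{round_trip_ns(feet * 0.3048):.1f} ns")
```

Different lengths put the reflection at different points relative to the signal edge, which is one hedged way to see why length alone could matter.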
I wonder how many here have done this kind of test. Take any well-measured cable from a reputable company like Mogami, use the same connectors, and vary only the length of the AES or SPDIF cables. Do all these cables sound the same? Does one sound better than another? Which one do you think sounds more correct?
I bet most people just picked a cable and a length that worked for them and forgot about it. Does it work perfectly for them? Sure. Does it sound good? Sure. However, is their system immune to jitter that may be caused by differences in cables and/or the design of the two pieces of equipment? Are they assuming it’s immune, or have they tested for this? I personally like testing out these assumptions.
You do these kinds of tests long enough and they just become part of your normal listening. I would’ve likely failed all these tests 10 years ago. Listening skills are definitely not set in stone!
FWIW, I recently had a chat with the guys at Grimm about AES cables and length in ‘general/average cases’ and they agree with me on this.
Personally, I watch for listener fatigue. If I don’t get any, my system is doing great. Otherwise, the quest is never-ending.
Individually we must decide when enough is enough and stop chasing the dragon. I also appreciate that chasing the dragon can be so addictive.
It actually can, hence why packets at times are considered lost and will be resent.
However, once up in the TCP layer or above, it has no effect on the end result so it’s pointless in the context of audio streaming over TCP. With UDP, it could theoretically affect the end result but only with a really poor application layer implementation combined with a really shitty network and it still wouldn’t present itself as jitter in your digital audio within your DAC.
So yes, you may have jitter in your network, but that jitter is not the same as jitter in the digital audio domain.
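To make that distinction concrete, here is a toy Python sketch. It is not real TCP, just the reassembly idea: chunks of data can arrive with any timing and in any order, yet once reassembled by sequence number the payload is byte-identical to the original, so arrival timing never touches the samples themselves.

```python
# Toy illustration (not real TCP): chunks of "audio" arrive in any order
# and with any timing, but reassembly by sequence number yields data that
# is byte-identical to the original.
import hashlib
import random

audio = bytes(range(256)) * 64           # stand-in for a block of PCM data
chunks = [(i, audio[i * 128:(i + 1) * 128])
          for i in range(len(audio) // 128)]

random.shuffle(chunks)                   # scrambled "arrival" order
reassembled = b"".join(data for _, data in sorted(chunks))

print(hashlib.sha256(audio).hexdigest() ==
      hashlib.sha256(reassembled).hexdigest())
```

Whatever timing jitter the chunks "arrived" with, the bytes handed to the playback layer are identical.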
It’s not an assumption however.
Oh boy. And how do you think you are seeing pictures on a screen with your eyeballs? Are they digital photons?
I shot a video where I’m using JRiver and playing back 24/192 over my network. While the music is playing and I’m narrating, I remove the Ethernet cable, but the music still plays.
Where’s the jitter? That’s what subjectivists who erroneously throw the jitter argument around need to understand.
I welcome a blind evaluation by you.
Good luck Anders. You are going to be dealing with people that are going to be unable to ever figure out the logic being presented here.
No, but AE67 said all the pixels were correct. That could only be correct if the data arrived at the laptop perfectly. The display could be displaying colors slightly differently and the viewer would never know. In fact, this happens all the time as the display color table determines what colors the viewer actually sees. Color matching is a big deal to a photographer.
Since you just quoted me partially and then describe what happens when playing from a buffer:
Are you saying that network jitter cannot exist and packets are never being resent in network layers 4 or below or that it doesn’t affect the audio quality?
If the latter, wasn’t that exactly what I said here? “However, once up in the TCP layer or above, it has no effect on the end result so it’s pointless in the context of audio streaming over TCP. With UDP, it could theoretically affect the end result but only with a really poor application layer implementation combined with a really shitty network and it still wouldn’t present itself as jitter in your digital audio within your DAC.”
It is correct that the behavior you describe does not present itself as jitter of an audio signal, which means it is irrelevant in this discussion.
That’s what I mean when I say there can be no jitter of the signal on the network.
If you introduce the concept of jitter on the physical signal, it is conceivable that this could exist but it is at a meaningless layer in the stack. It does not present itself at the IP level, because the IP signal is not presumed periodic, and therefore the exact shape and timeliness of the physical electrical signals on the wire are irrelevant. They are as irrelevant as it would be to argue that the signal is granular and noisy because it is carried by electrons — correct but irrelevant.
And a request for a packet retransmit does not exhibit as jitter, because at the packet level the signal is not presumed periodic and can therefore not have periodicity errors.
I’m not convinced. I think the logic is pretty clear: if a signal does not claim to be time accurate, you can’t accuse it of being inaccurate. And just in case, let’s not confuse people by saying things that are flagrantly wrong. There are things in audio on which people reasonably disagree. But there is no disagreement here, nobody claims that the network can introduce jitter to the audio signal.
Among others, IETF RFCs 3366, 3393, 5920 & 6349 touch on jitter in TCP/IP networking.
If someone still thinks jitter at the network layer causes audio degradation in Roon and similar network-audio applications, I suggest you read them.
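For reference, "jitter" in that context means something like this hedged sketch of RFC 3393's IP packet delay variation (IPDV). The one-way delay numbers below are invented for illustration; the point is that IPDV is a statistic about packet delivery times, not a property of the sample clock inside a DAC.

```python
# A minimal sketch of packet delay variation in the RFC 3393 sense:
# the delay difference between consecutive packets. The one-way delays
# below are invented example numbers.

def ipdv(delays_ms):
    """Delay difference between each consecutive packet pair (RFC 3393)."""
    return [b - a for a, b in zip(delays_ms, delays_ms[1:])]

one_way_delays = [12.1, 12.4, 11.9, 13.0, 12.2]      # ms, invented
print([round(d, 1) for d in ipdv(one_way_delays)])   # [0.3, -0.5, 1.1, -0.8]
```

None of these numbers say anything about the timing of samples clocked out of a buffered DAC.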
In a stack with multiple copies and many FIFO buffers (clock-domain boundaries), it doesn’t even make sense to bring it up, except as a smoke screen.
Tidal on Windows and OS X caches the entire track. Why would any Tidal user care that, in the first 15 seconds of an 11-minute track, TCP detected an error and a packet had to be re-transmitted?
I am saying it doesn’t affect audio quality at all. I did a wireshark capture of 240GB of music streaming and 0 lost packets.
My Roon ROCK NUC buffers the entire song from Tidal within the first few seconds of play; I can shut down my fiber ISP and it plays the whole song just fine. On top of that, my SOtM sMS-200ultra buffers 3–4 seconds during playback via RAAT, so even there it is possible to pull the LAN cable and it will still play.
(This is just an interesting observation about jitter in general, not network jitter which doesn’t exist. It’s really an interesting illustration about audiophoolery.)
I saw a writer in an audiophile journal say, “once jitter has been introduced, it can’t be removed.” This sounds reasonable; it’s the way we used to think in analog audio. That’s why you need to set up your cartridge and arm correctly: you can’t fix it if it’s already distorted.
But wrt the claim about the persistence of jitter, I refute it thus:
Hook up your disc player to a DAC that has a buffer. A big buffer, 2 GB (affordable: $10 at Amazon). Play the entire disc and store it in the buffer. Disconnect the player, put it in a burlap sack with two bricks, row out, and sink it in the middle of the lake. Go back and play the music out of the buffer. I argue that the jitter of the original player’s data stream is no longer evident.
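The burlap-sack experiment can be caricatured in a few lines of Python. Everything here is invented (the ~48 kHz sample period, the ±5 µs arrival jitter), but it shows the mechanism: whatever timing the samples arrived with, playout from the buffer is paced by the local clock alone.

```python
# A caricature of the burlap-sack experiment. All numbers are invented:
# ~48 kHz sample period, +/-5 us of arrival jitter. Playout from the
# buffer is timed by the local clock, not by the arrivals.
import random

random.seed(0)
period_us = 20.83                                   # assumed sample period
arrivals = [i * period_us + random.uniform(-5, 5) for i in range(1000)]

# Arrival intervals are all over the place (incoming-stream jitter).
arrival_iv = {round(b - a, 2) for a, b in zip(arrivals, arrivals[1:])}

# Playout reads the buffered samples on the DAC's own steady clock.
playout = [i * period_us for i in range(len(arrivals))]
playout_iv = {round(b - a, 2) for a, b in zip(playout, playout[1:])}

print(len(arrival_iv), "distinct arrival intervals")   # jittery input
print(playout_iv)                                      # one steady interval
```

The input timing can be as sloppy as you like; once the data sits in the buffer, only the playout clock determines the output timing.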
Digital is not like analog.
So we agree.
It seems right, but it is wrong, because werewolves do not exist.
If they did exist, would they affect child mortality in developed countries?
I don’t know.
Now how about them super network cables?
In addition to losing a nice disc player, you would be cited for improper electrical dumping by the Bureau of Land Management or other government regulator.
That said, I’m not sure exactly how much jitter we can hear anyway, unless it were severe. I use good quality Blue Jeans Cable ethernet cables. Affordable and pre-tested prior to shipment.