Core Machine (Operating system/System info/Roon build number)
Apple Mac Mini (late 2012), OS X El Capitan 10.11.6, Roon 1.6 build 416
approx 47,000 tracks in library on separate (wired) network share
Network Details (Including networking gear model/manufacturer and if on WiFi/Ethernet)
Wired network with Meraki router
Audio Devices (Specify what device you’re using and its connection type - USB/HDMI/etc.)
Various endpoints, mostly Squeezebox
Description Of Issue
Over the last month or so, I’ve noted the core uploading approx. 2 GB of data daily to identifier.roonlabs.net, which I don’t recall happening previously. What is this dataflow doing, and is it expected behaviour?
(Perhaps coincidentally, Roon can often take 30–60 sec to start playing an album when selected; this is more obvious when using an iPad/iPhone remote.)
We have a theory of what is going on here, and it’s not exactly what I would call “expected” behavior, but it’s also sort of normal. Let me attempt to explain:
identifier.roonlabs.net is the part of our web infrastructure responsible for identifying your albums, based on some combination of file lengths, tags, and possibly file/directory names (I’m not certain of the details; identification isn’t my part of the app). To do that, the Roon app sends all of that information to identifier.roonlabs.net.
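To make that concrete, the request presumably bundles per-track and per-album metadata into something like the following. This is purely an illustrative sketch — the real wire format isn’t public, and every field name here is hypothetical:

```python
# Hypothetical shape of an album-identification request. Field names and
# structure are illustrative only, not Roon's actual protocol.
album_request = {
    "track_durations_ms": [214000, 187500, 305200],  # ordered track lengths
    "tags": {"album": "Example Album", "artist": "Example Artist"},
    "paths": ["Example Artist/Example Album/01 Example.mp3"],
}

# Normally this is tiny -- a few hundred bytes per album. The problem below
# arises when the "tags" portion balloons to megabytes.
print(len(album_request["track_durations_ms"]))  # 3 tracks in this sketch
```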
Separately, there is (or was) what I believe is a bug in iTunes that generates ID3v2 tags which claim to be v2.3 but actually have v2.2 frame headers. This results in tags that claim to contain, for example, a 4 MB album title. I don’t know for certain how these files are created, or whether it’s really an iTunes bug, but I’ve seen MP3 files with this problem.
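Here is a sketch of how that version mismatch produces absurd sizes. An ID3v2.2 frame header is a 3-byte ID plus a 3-byte big-endian size; a v2.3 header is a 4-byte ID, a 4-byte size, and two flag bytes. If a tag labelled v2.3 actually contains v2.2 frames, a v2.3 parser reads its 4-byte size field straddling the real size and the frame body, yielding garbage. This is my own reconstruction of the failure mode, not Roon’s actual tag parser:

```python
# A minimal ID3v2.2 text frame: 3-byte ID "TT2" (title), 3-byte big-endian
# size (3), then the body (encoding byte + "Hi").
v22_frame = b"TT2" + b"\x00\x00\x03" + b"\x00Hi"

# Parsed with v2.2 rules, the size field is bytes 3..6 and comes out right.
v22_size = int.from_bytes(v22_frame[3:6], "big")
print(v22_size)  # 3

# Parsed (wrongly) with v2.3 rules, the ID is assumed to be 4 bytes, so the
# 4-byte size field straddles the real size and the frame body.
v23_size = int.from_bytes(v22_frame[4:8], "big")
print(v23_size)  # 196680 -- a ~192 KB claim for a 3-byte title
```

With less convenient bytes in the body, the misread size can easily reach megabytes, which matches the “4 MB album title” symptom.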
A month or so ago we had a weekend in which a single 20 MB request, sent repeatedly, was seriously slowing down all of Roon’s web infrastructure, so we had to start returning a bad-request error for anything larger than (I think) 4 MB.
Combining these: Roon reads out a very long tag, sends it to identifier.roonlabs.net, the identifier returns an error, and Roon retries. That adds up to a lot of uploaded data.
We have what we believe is a set of fixes to mitigate this sort of problem in the app, including better parsing of partly broken file tags and sane limits on how much we actually read, but I don’t think they’ve been released yet. If you don’t mind, I’d like to enable extra diagnostics on your account to try to confirm this theory and, if it’s correct, figure out which file is causing the issue. I’d then really appreciate a copy of the file or files, so that I can test our mitigation.
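The “sane limits” part of the mitigation can be as simple as clamping a frame’s declared size to both a hard cap and the bytes actually present. A sketch under my own assumptions (the cap value and function are hypothetical, not Roon’s code):

```python
MAX_TEXT_FRAME = 64 * 1024  # hypothetical cap on a single text frame

def read_frame_body(data, declared_size, cap=MAX_TEXT_FRAME):
    """Clamp a frame's declared size to the cap and to the bytes available."""
    size = min(declared_size, cap, len(data))
    return data[:size]

# A frame whose header claims ~192 KB but whose body is 3 bytes:
body = read_frame_body(b"\x00Hi", 196680)
print(len(body))  # 3 -- the bogus size claim can't drag in megabytes
```

With a clamp like this, even a corrupt header can only ever send the capped amount upstream, so the identifier never sees an over-limit request in the first place.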