Will Metadata Source issues ever get fixed?

I can appreciate the frustration here, and can assure everyone it’s not falling on deaf ears. We launched Roon roughly 14 months ago, and have worked day in and day out since to grow and improve the product. I won’t go into all the visible changes and improvements, but I will mention that nearly all of this has been done by a team of 10 people, including business development and support staff.

This isn’t to give ourselves a pat on the back, but to make the point that anything which hasn’t been completed over the last 14 months isn’t for lack of trying – for fun, I just took a quick look in our bug tracker, and we’ve closed over 2000 issues since our launch, including hundreds of metadata fixes. :tada:

Passing on errors to our metadata providers is absolutely part of the gig and, on some of these issues, I think @DrTone’s frustration is justified – we can do better. But they’re not all that clear-cut, and I’d ask everyone to keep in mind that when we fix metadata issues in our automated systems, a single change doesn’t just fix hundreds of small errors – the results are often transparent. We make a small change in our code, and the next day hundreds or thousands of albums get fixed, improved, or linked with additional data.

These small changes can pay huge dividends, so decisions about where to devote effort tilt towards fixes that impact large swaths of content. This is NOT to say we’re not interested in or willing to report isolated data errors back to our metadata providers – we absolutely are. The point is that our number one priority is always improving our product and technology at its core, since that maximizes the return on our efforts.

We have spent some time in the past couple of months growing the team, since that is the only way for us to pick up the pace – this is always a trade-off, as ramping up new employees takes a ton of time and resources from everyone else. One of our recent hires is a developer who is fully dedicated to Roon’s metadata experience and data services – this is a rare luxury on a team where everyone wears many hats.

Immediately after 1.2 shipped, we began laying the groundwork for major improvements to our metadata system. We’ve built out new development, operational, and deployment infrastructure, including, most importantly, a system for putting sweeping metadata changes through extended testing. Much of this work was an exercise in paying off technical debt, but it’s a crucial step, and one that’s required for some of the larger-scale work that’s needed.

Many bugs were fixed as part of that work, and metadata rollouts are now happening in an automated fashion, on a reliable schedule that matches the frequency at which we receive data from our sources.

We have several more fixes in the pipeline. These changes relate to album-level equivalence: improving our automated equivalence determinations to link up tens of thousands of albums with higher quality metadata, and fixing issues related to incorrect or missing album reviews. Careful validation and testing are currently underway, as over 100,000 albums will be positively impacted once these changes roll out.

One of the most frustrating parts of working on metadata is that when it’s working properly, it’s invisible. New users never see old problems, and existing users benefit from transparent background metadata updates. People notice when servers are unstable or slow, not when things are running smoothly. We rarely announce fixes because they roll out to our users on a staggered basis, so there isn’t a good discrete point in time to make an announcement. That doesn’t mean the work isn’t happening.

You’re probably saying: great, but none of that addresses @DrTone’s specific issues. I can understand these are pretty easy to notice, especially when they appear to have gone unresolved for so long (and since there’s no way to resolve them in-app at the moment). While improving our metadata system as a whole is a higher priority than addressing isolated reports manually, you’re absolutely right that we should be able to report errors back to our metadata providers and get the fixes done in a timely manner.

I spent some time investigating the three issues referenced above, and here’s what I found:

This issue is still open in our bug tracker because it’s something that we know how to fix, but which requires some delicate architectural changes to ensure we get it right. This is a unique case in that two entries are marked equivalent in our system, but both are credited on the same album. We have some related work planned for the coming months, and all that work will happen together.

This issue was tracked in our equivalence fixes project, and a fix was checked in a few months back. I downloaded a copy of this album today, and I’m not seeing this issue, so it’s possible some bad data is cached here. Have you tried re-identifying the album? If that doesn’t resolve it, let me know, and we’ll have a look at what’s going on in your library.

Finally, I have to offer a mea culpa on this one, as there seems to have been a miscommunication. The error was passed to a developer on our team when it was originally reported but unfortunately, it was passed back to me for a clarification and I missed it – no easy way to explain that other than to apologize for the oversight.

Our metadata providers do have a good record of getting these fixes implemented quickly, so I made sure this was reported to them today, and my hope is that we get corrected data in the near future.

So, to be clear – please continue to report the errors you see. Sometimes they’re our errors – equivalence issues we can fix now and use to improve Roon down the line, or small bugs that can be fixed quickly. Sometimes they are aggregated into larger issues that will be resolved when we have enough information to make a broad, sweeping fix. And sometimes they are errors in the data we get from our providers. We actually had a meeting earlier this week to ensure our new support and dev staff have a clear flow for this process, even before @DrTone’s post.

We are committed to improving this process, and we’re going to continue to try and resolve each report we get as quickly and transparently as possible. Thanks all!

fin
