Roon Search Overhauled [2022-12] Update

Hi All,

It’s been just about a year since I last wrote about search, so I figured it was a good time to provide an update on what’s been going on behind the scenes.

The Roon team has been working on improving the search engine throughout 2022, and last week, we rolled out a new architecture for search. If you are running Roon 2.0, you have been using the new stuff for several days.

Previously, the cloud performed a search of TIDAL/Qobuz libraries and the Core performed a search of your personal library, then merged the results.

Now, the Core gathers potential matches for the search query and submits them to a cloud service. The cloud service then searches the TIDAL/Qobuz libraries if needed, ranks and merges the results, and returns the final list to the app.
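The new flow can be sketched roughly like this. This is a toy Python illustration under invented names, data, and scoring; Roon's actual services and API are not public:

```python
# Hypothetical sketch of the new search flow described above. All names
# and scores are illustrative: the Core gathers local candidates, and a
# "cloud" step searches the streaming catalog, merges, and ranks
# everything with a single scoring function.

def core_local_candidates(query, library):
    """Core side: gather library items that loosely match the query."""
    q = query.lower()
    return [item for item in library if q in item["title"].lower()]

def cloud_rank_and_merge(query, local_candidates, streaming_catalog):
    """Cloud side: search the streaming catalog, merge with the Core's
    candidates, and rank everything in one pass."""
    q = query.lower()
    streaming = [i for i in streaming_catalog if q in i["title"].lower()]
    merged = local_candidates + streaming
    # One unified score instead of merging two separately ranked lists.
    return sorted(merged, key=lambda i: (-i["popularity"], i["title"]))

library = [{"title": "Fiddler on the Roof", "popularity": 10}]
catalog = [{"title": "Fiddler on the Roof (Original Cast)", "popularity": 55}]

results = cloud_rank_and_merge(
    "fiddler", core_local_candidates("fiddler", library), catalog)
```

The key point is the last step: because all candidates reach one ranking function, there is no longer a need to reconcile two independently scored lists.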

This change will allow us to deliver better results and improve the search engine more quickly. For more information, keep on reading.

Automated Testing Tools

One of the challenges of improving a search engine is understanding the effects of each change that we make. A change that improves one search might accidentally make other searches worse. In order to make progress, we need to be able to test our search system and understand the intended and unintended consequences of each change.

We introduced an automated testing system in 2022 to help us with that. This system allows us to test thousands of searches in the cloud without using the Roon Core. Our test infrastructure will help us make sure we’re not making things worse as we continue to improve the search engine.

Shorter Cycle Time for Improvements

By moving the search engine to the cloud, we can release improvements without shipping new versions of the Roon Core and apps. This reduces the time it takes to deliver improvements to our users, and allows us to iterate quickly on specific issues when they are reported.


Objective Quality Metrics

While our primary goal is to make our users happy, it’s also helpful to have objective signals that show we are making things better. The new search system includes an array of metrics that allow us to monitor search results quality on an ongoing basis. This helps us see if our changes are helping, and also helps us catch any accidental issues that might degrade the quality of search results for our users.

As an example, the “Average Click Position” metric measures the average position of the result a user clicks when they select an instant search result; a result of ‘1’ means that the user selected the first item in the list.
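The metric itself is simple to state. A minimal illustration (my own sketch, not Roon's telemetry code):

```python
# Illustrative computation of "Average Click Position": the mean 1-based
# position of the result a user clicks, averaged over searches.
# Lower is better; 1.0 would mean every user clicked the top result.

def average_click_position(click_positions):
    """click_positions: 1-based positions of clicked results, one per search."""
    return sum(click_positions) / len(click_positions)

# Three searches where users clicked results 1, 3, and 2:
print(average_click_position([1, 3, 2]))  # → 2.0
```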

These are the results from last week’s rollout. You can see that the number dropped quickly around the time of our rollout, which shows that our changes were an improvement.

Smarter Ranking

One of the challenges with the old search system was merging lists that had been ranked separately in the Core and in the cloud. It was difficult to compare scores from different ranking systems, so we relied on heuristics that tried to balance the ranks, popularity information, and text match accuracy.

This often led to less-than-ideal results, and troubleshooting issues was time-consuming because we had to replicate each user’s library in-house in order to see the issues. Since search ranking is now performed in the cloud in a unified way, this class of tricky issues has disappeared.

More Powerful Algorithms

Today, state-of-the-art search systems use Machine Learning to deliver results. It was difficult to use these techniques in the old Roon Core, so one of the main goals of the re-architecture was to enable this.

In the future, we will be able to tailor results to individual users. For example, a Beatles fan typing “John” into the search box probably expects “John Lennon” to be the top result, while a Jazz listener might expect “John Coltrane”.

This will also enable the implementation of modern search techniques like spelling corrections, search suggestions, semantic search, and gradient boosting, none of which are practical within the Roon Core.

Instant Search Improvements

After deploying the new architecture internally in September, we started the process of tuning it to perform better than the old system. We focused on improving the instant search dropdown, making sure that fewer characters are required to reach the desired result, and that the results continue to feel sensible as more characters are entered.

In addition to automated testing, we have a weekly review process where the product and search teams come together to examine a set of 50 representative searches and discuss how the results changed because of the previous week’s work. This helps us understand the tradeoffs and make decisions that prioritize the user experience.

Dozens of Smaller Improvements

As part of this work, we made many smaller improvements to the search system. Shorter queries like “john” or “pink” should now return more coherent results. The system is not directly auto-correcting misspellings, but it is more tolerant of misspelled terms in multi-word searches. It is also better at prioritizing exact matches, deduplicating similar results, and choosing the best version of an album or track when there are multiple versions available.
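The "tolerant of misspelled terms in multi-word searches" idea can be sketched with per-word fuzzy matching. This is only an illustration of the concept using Python's standard library; Roon's actual matcher is not public, and the threshold here is invented:

```python
# Minimal sketch of misspelling tolerance in multi-word searches.
# Each query word is matched fuzzily against the title's words, so one
# misspelled term doesn't sink an otherwise good multi-word match.

from difflib import SequenceMatcher

def word_score(query_word, title_words):
    """Best fuzzy similarity of one query word against any title word."""
    return max(SequenceMatcher(None, query_word, w).ratio() for w in title_words)

def fuzzy_match(query, title, threshold=0.75):
    title_words = title.lower().split()
    scores = [word_score(w, title_words) for w in query.lower().split()]
    # Average the per-word scores: a single near-miss is tolerated.
    return sum(scores) / len(scores) >= threshold

# "dvais" is misspelled, but the other four words match exactly:
print(fuzzy_match("miles dvais kind of blue", "Miles Davis Kind of Blue"))
```

Averaging per-word scores, rather than requiring every word to match, is what lets a single typo through while still rejecting queries that are wrong overall.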

While things have definitely gotten better in the past year, 2022 was mostly about laying the groundwork. We plan to make faster and more visible progress in 2023 and beyond.

I want to thank everyone for their patience as we work on improving the search engine, and for all the feedback in last year’s thread. It has been very helpful in understanding our users’ perspectives, and we hope you will continue to use our product and provide feedback in the future.


Infrastructure is destiny, especially so in the world of search. That “average search position” graph is the most optimism-inducing thing I’ve seen in Roon-world in months. Not because it’s good enough (it never is), but because you are able to watch it, are watching it, and are making changes to improve that (and other metrics) week on week, user type by user type and use case by use case. Thank you @brian @zenit and team. LFG.

I also don’t know how to communicate to folks who haven’t led or worked on this kind of work just how important this kind of structural investment is. This is why we have to “pay the price” of having search only work when we have an active internet connection. And it’s 100% worth it in my mind, though clearly not for every person’s use case or needs. But man, a better world lies in this direction.


I search for genuinely obscure band names, and the results usually give lots of less obscure bands, but not the actual name I searched for. That makes no sense. I just searched for “The The” and got a ton of bands that start with “The”, but none with the name I actually searched for. How can that be right?


I’m having the same issue with exact matches not being found. I guess I don’t understand how things work, but it seems getting exact matches right should be one of the most basic things to get right.

I don’t mean any disrespect to Roon and their efforts, but we keep hearing about all this fancy stuff going on in the background which supposedly greatly improves search, and yet the most basic searches still fail too often.


Thank you for sharing how you approach this, and for the transparency about how it impacts the user experience.

You have an amazing product and I appreciate the continuous improvement.



Exact matches can be difficult because there are many tradeoffs to consider. Even if it seems basic, it can be challenging to get it right.

The music catalog we are searching is very large, with hundreds of millions of items. There’s a very good chance that when you’re partway through typing something, your query is already an exact match for a lot of irrelevant results as well.

A simple example: there are 100+ artist entries for “John” in our database. But when you type “John”, you should get a list of notable Johns, not a list of people, albums, and tracks called exactly “John”.
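One way to picture the tradeoff being described: exact-match status can contribute to the score without dominating it. This is a toy ranking with invented names and popularity numbers, not Roon's actual scoring:

```python
# Toy example of why "notable" beats "exact": popularity carries more
# weight than exact-match status, so a famous partial match outranks an
# obscure exact one. All names and scores here are invented.

candidates = [
    {"name": "John", "popularity": 2},         # exact match, but obscure
    {"name": "John Lennon", "popularity": 98},
    {"name": "John Coltrane", "popularity": 91},
]

def rank(query, candidates):
    def score(c):
        # Small bonus for an exact match, but popularity dominates.
        exact_bonus = 10 if c["name"].lower() == query.lower() else 0
        return c["popularity"] + exact_bonus
    return sorted(candidates, key=score, reverse=True)

ranked = [c["name"] for c in rank("john", candidates)]
print(ranked)
```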

[Screenshot: instant search results for “John”, 2022-12-12]

There are also several people with the exact name “Miles Davis”, but you shouldn’t have to wade through the less relevant ones to get to Miles’ content. A result like this is a lot better for 99.999% of people searching for Miles than if we simply prioritized exact matches:

We are constantly balancing the value of exact matches against the problem of managing the “noise” in the music catalog. I’m sure there are cases where we’re not there yet, and we’d love to hear about them. I just wanted to explain a little about why this problem is not as simple as it might look at first glance.


Maybe allow quotes like Google to find the exact match?


Thanks for the info. While I understand very generic search terms (i.e. “John”) are a challenge, more specific search terms are also sometimes missed. For example, my most recent search fail: “Fiddler on the Roof”.

Shouldn’t that search string be specific enough? :thinking:

I’m not sure exactly what you’re seeing. My results for that search look reasonable to me, but maybe you are expecting something specific that is not coming up the way you think that it should.

Anyway, to your question. First, let’s put this in context: each day, we receive approximately 10 searches for “fiddler on the roof”, thousands of searches for “miles davis”, and tens of thousands of searches for “john”.

There is a lot of potential harm in turning the “exact match” knob if it means making 10,000 searches worse in order to improve 10, and that’s how this would end up working.

The best way to solve this has nothing to do with the fact that it is an exact match. I mentioned gradient boosting above. This is a machine learning approach that analyzes where people navigate after performing certain searches. It then learns to rank search results to prioritize the ideal results according to our user base as a whole.

This is a far more elegant solution than simply turning up exact match sensitivity: it scales gracefully to both common and uncommon searches, and it almost never does harm.

The new architecture that we have deployed is a prerequisite to using that technique, and this is something we intend to explore in the future.
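The click-driven signal that a gradient-boosted ranker would learn from can be illustrated very simply. A real system would train a model such as LambdaMART over many features; this toy sketch (invented log data, not Roon's) just shows how aggregated post-search navigation produces a ranking preference:

```python
# Toy illustration of the signal behind click-driven ranking: aggregate
# where users navigate after a given query, then prefer the destinations
# they actually chose. A production system feeds this signal (and many
# other features) into a trained ranking model rather than using it raw.

from collections import Counter

click_log = [
    ("fiddler on the roof", "Fiddler on the Roof (Original Broadway Cast)"),
    ("fiddler on the roof", "Fiddler on the Roof (Original Broadway Cast)"),
    ("fiddler on the roof", "Fiddler on the Roof (Film Soundtrack)"),
]

def rank_by_clicks(query, candidates, log):
    counts = Counter(dest for q, dest in log if q == query)
    # Most-clicked destinations first; unseen candidates keep their order.
    return sorted(candidates, key=lambda c: -counts[c])

candidates = ["Fiddler on the Roof (Film Soundtrack)",
              "Fiddler on the Roof (Original Broadway Cast)"]
ranked = rank_by_clicks("fiddler on the roof", candidates, click_log)
```

Note that this works the same way for a 10-searches-a-day query as for a 10,000-searches-a-day one, which is why it scales to both common and uncommon searches.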


Please see the screenshots I posted in my support post.

To summarize, I have two versions of Fiddler on the Roof in my local library. Search only finds one (using the Filter box instead finds both, btw).


We are pretty aggressive about not presenting search results that look too similar to each other textually, because it’s easy to end up in a situation where the screen is overwhelmed by slight variations of the same thing.

I believe that we also won’t show two albums that are grouped as alternate versions side by side in search results, but I’m not 100% sure about that.

I think one of those mechanisms is preventing you from seeing both, but it would take more investigation to know exactly how. From what I’ve seen so far, I don’t get the feeling that the search engine itself is the problem.
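The near-duplicate suppression being described might look something like this. This is a hypothetical sketch using a simple similarity threshold; the actual rule and threshold are my own invention:

```python
# Hypothetical sketch of near-duplicate suppression in search results:
# a result whose title is too textually similar to an already-shown
# result is dropped, so the screen isn't overwhelmed by slight
# variations of the same thing.

from difflib import SequenceMatcher

def dedupe(results, threshold=0.9):
    shown = []
    for title in results:
        if all(SequenceMatcher(None, title.lower(), s.lower()).ratio() < threshold
               for s in shown):
            shown.append(title)
    return shown

results = ["Fiddler on the Roof",
           "Fiddler on the Roof ",            # near-identical variant: dropped
           "Fiddler on the Roof: In Yiddish"] # different enough: kept
deduped = dedupe(results)
```

A side effect of any such rule is exactly the symptom reported here: two legitimately distinct versions with near-identical titles can collapse into one result.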



It’s my perception that things have slowed down. I have a NUC 10i7 running ROCK with 32 GB RAM, and a Windows 10 i7 machine as a Remote.

I choose Mozart 225 (a 200-CD set) and filter for “Oboe”.

Each letter no doubt starts a new search, but when I try “oboe”, Roon “hangs” before I can get past “ob”, takes around 30 seconds to show the full 4 letters, and another 2 minutes to filter the list.

I go another route

Composer > Mozart > Compositions, filter “Clarinet Conc” - almost instant, with ONE result, i.e. the correct one.

Now I hit the hyperlink to see the recordings, and Roon takes 25 seconds to return them!! Do it again after restarting Roon and it’s almost immediate. Is this simply caching?

Revert to using “Oboe” and it’s almost instant - caching again?

A previous thread highlighted search issues with BIG box sets, i.e. a nominal 100 discs or more. Mozart 225 will be involved in my search as it’s in my local library.

This seems to have deteriorated even from yesterday.

The quick jump menu is much slower than it was before this change.

On the first go, it can’t find any match for something in my library using just 3 words; you can see me filtering it in Tracks to highlight it.

Adding one more word, it found it.

Then it subsequently found it with the first 3 words, so this is inconsistent on first searches.

Any improvements are welcome, thanks for the overview.
Roon search still seems to have difficulty with incorrect spelling.
It would be ideal if Roon was a little more flexible.

Searching for Green Day, I entered “Greenday”:




No doubt John Lennon is in that database, and arguably he is the most “notable” (musician) John. But he isn’t in that drop-down. What criteria are being used to decide which Johns are privileged in the drop-down? Implicitly you have already excluded even more notable (non-musician) Johns like John F. Kennedy.

Also, Johnny Mathis doesn’t appear in my drop-down of notable “Johnny”s. I couldn’t really find his version of his famous Christmas hit “When a Child is Born” via search either. I eventually got there through the artist browser, but I would hope for something easier.


These might be naïve questions:
Does search use AI? Is it learning and improving results based on user selection?

This is relevant, as posted above.


Not trying to be negative, just providing a data point.

I have all the Beatles albums and nearly all of John Lennon’s. I have 5 Coltrane albums.