These assumptions don’t resemble the truth, so allow me to clarify.
First: the machine learning models and radio algorithm are 100% ours, built on our data assets, our IP, our development work, and our expertise. We’re not using someone else’s APIs to implement the radio stream, and we’re not trying to sell streaming services.
The choice to make Roon Radio require a streaming service was a product/technical decision, not a business decision. There were no business guys in the room, and business was not a motivating factor. I know this because I am the person who made the decision.
The only reason I care if you have a streaming service subscription is so that you can get more out of Roon, because having access to all of the music on earth makes Roon a more compelling experience. There is nothing else in it for us. It’s not like we are getting a cut of your TIDAL subscription.
Having everyone pick tracks against the same “all the music on earth” library was a huge enabler for making the radio experience good. All of a sudden, we could have clear discussions with people about what was going wrong, replicate the problems ourselves, and fix them. That is impossible when everyone’s individual library is forcing them into a slightly (or wildly) different result.
We fought through a lot of this while building the old in-library radio feature two years ago. It’s a real struggle to make good picks when you have to filter them through the contents of a library. Stuff that worked well in house failed in the field all the time. The old in-library radio feature works best within a certain band of library sizes/compositions, and performance falls off outside of that. It’s almost inevitable: if there isn’t enough variety of good picks to be made in a small library without just playing the same 1-2 albums that match the seed, the algorithm is going to have to start making crappy picks, or it’s gonna have to repeat itself a lot!
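To make the repetition problem concrete, here’s a minimal toy sketch (not Roon’s actual code; all names and numbers are made up for illustration) of what happens when good candidate picks get filtered through a small local library: the usable pool collapses, and once every owned track has been played recently, the only option left is to repeat.

```python
# Hypothetical sketch: filtering radio candidates through a library.
import random

def pick_next(candidates, library, recent):
    # Keep only candidates the user actually owns.
    in_library = [t for t in candidates if t in library]
    # Prefer tracks not played recently; fall back to repeats if none remain.
    fresh = [t for t in in_library if t not in recent]
    pool = fresh or in_library
    return random.choice(pool) if pool else None

# Suppose the algorithm proposes 50 good matches for a seed...
candidates = [f"track{i}" for i in range(50)]

small_library = set(candidates[:3])  # the user owns only 3 of them
big_library = set(candidates)        # "all the music on earth"

def simulate(library, n_plays=20):
    history = []
    for _ in range(n_plays):
        history.append(pick_next(candidates, library, history[-10:]))
    return len(set(history))  # distinct tracks actually played

# A 3-track library can never produce more than 3 distinct picks over
# 20 plays, while the full catalog keeps producing fresh ones.
print(simulate(small_library), simulate(big_library))
```

The point of the toy: no matter how good the candidate list is, the library filter caps variety at whatever the library happens to contain near the seed, which is why small or narrow libraries push the algorithm toward repeats.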
We did briefly try using the new algorithm for in-library radio. I don’t think it made it past internal testing into alpha/beta, even, because it was so bad. After filtering down the picks from the new algorithm through all but the largest few libraries (1% of people), the experience was garbage. It turned out to be better to just leave the old implementation for that case, since it was already doing a decent job, and reserve the new stuff for the situations where it performed the best. So that’s what we did.
Hope that clears things up.