HQPlayer / iZotope upsampling + Roon metadata = Killer combo [VST/AU Plugins]

Thanks @brian for the clarity and the time you took to respond. I now understand the complexity a bit better - it’s hard to detach yourself from the paradigm of one player in your computer and that’s that… :neutral_face:

A common problem with these is that the "DSP plugin" environment tries to hide information about both the source format details and the DAC details. This goes precisely against what HQPlayer is built on: it wants to know as much as possible about the source and as much as possible about the DAC…

The typical primary driver for selecting between linear- and minimum-phase filters, based on my research and testing, is the genre of the source content. To some extent this can be handled automatically using adaptive algorithms, but the best selection criterion would be something like "multi-tracked studio rock" vs. "minimal-mic natural-acoustics classical". This is the kind of information Roon could provide to HQPlayer in the future…
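A toy sketch of the metadata-driven selection being described, in Python. The genre lists and the genre-to-filter mapping are my own illustrative assumptions (the post does not say which filter family suits which genre), not anything HQPlayer or Roon actually implements:

```python
# Hypothetical genre-to-filter mapping. Which family suits which genre
# is an assumption made for illustration only.
MINIMUM_PHASE_GENRES = {"rock", "pop", "electronic"}    # dense multi-tracked studio material
LINEAR_PHASE_GENRES = {"classical", "jazz", "acoustic"} # minimally miked natural acoustics

def pick_filter(genre):
    """Return a reconstruction-filter family for a genre tag."""
    g = genre.lower()
    if g in MINIMUM_PHASE_GENRES:
        return "minimum-phase"
    if g in LINEAR_PHASE_GENRES:
        return "linear-phase"
    return "linear-phase"  # conservative default for unknown tags

print(pick_filter("Rock"))       # minimum-phase
print(pick_filter("Classical"))  # linear-phase
```

The interesting part isn't the mapping itself but where the genre tag comes from: Roon's metadata layer already knows it, which is what makes the pairing attractive.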

1 Like

Picking this up and paring it down… Forget about an open plugin…

It is pretty clear that upsampling improves the sound of Red Book files; with both HQPlayer and iZotope in Audirvana you can clearly hear this. I understand the diversity of scenarios you are bringing up, but having some form of upsampling, either in RoonBridge or RoonCore, would be a huge upside. If that upsampling is content-aware via metadata and render-aware via knowledge of the output device, it would be even more powerful.
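As a sketch of what "render-aware via a knowledge of the output device" could mean in practice, here is a minimal target-rate picker that stays in the source's rate family (44.1 kHz vs. 48 kHz, so the ratio stays an integer) and caps at the DAC's maximum rate. This function and its policy are assumptions for illustration, not anything Roon implements:

```python
def target_rate(source_rate, dac_max_rate):
    """Pick the highest power-of-two multiple of the source rate
    that the DAC accepts, staying in the same rate family."""
    rate = source_rate
    while rate * 2 <= dac_max_rate:
        rate *= 2
    return rate

print(target_rate(44100, 384000))  # 352800 (44.1k family, capped by a 384k DAC)
print(target_rate(48000, 192000))  # 192000
```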

Yeah, I agree.

I brought up the diversity to give you an idea of all of the things we consider before making product decisions about this stuff, not to suggest that we couldn’t do anything until we did everything.

Supporting third-party plugins definitely complicates things more than supporting a few things that we manage more closely. And upsampling is at the top of that list. It's a standard feature in all of the players we compete with, and it should be for us too.

2 Likes

Thx @brian. Good to know upsampling is a priority - the only reason I sometimes switch to Audirvana to be honest.

This actually made me realize you face a similar issue with where and how to implement software MQA decoding… It would have to be done on the bridge side, since you need to know the maximum sample rate the output device can use; either that, or relay that info back to the core.

Indeed @brian, it is very good to read that upsampling is a top priority for Roon.

Actually, out of curiosity, where is transcoding currently handled? E.g. when you play a DSF file to a RoonBridge whose output device doesn't handle DSD?

All processing is in the core/server right now, with the exception of software volume adjustment (only when that is enabled, of course), which is done as close to the output device as possible to reduce user-interface latency.

Ok, understood. So RoonBridge speaks back to RoonCore and tells it what the renderer is capable of. That sounds like all you'd need.

Yeah, in fact one of the most important behind-the-scenes reasons to use RAAT for everything is that it makes the device level information fully available to the core.
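Purely as illustration (the real RAAT protocol is not public, and every field name here is invented), "device-level information fully available to the core" might look like a small capability record the core consults before deciding whether to transcode:

```python
from dataclasses import dataclass

# Hypothetical capability record an endpoint might report back to the
# core. Field names and shapes are assumptions, not the RAAT protocol.
@dataclass
class EndpointCapabilities:
    max_pcm_rate: int
    bit_depths: tuple = (16, 24)
    supports_dsd: bool = False

def needs_transcoding(caps, source_format):
    """Core-side decision: convert DSD to PCM for endpoints that can't render it."""
    return source_format == "dsd" and not caps.supports_dsd

bridge = EndpointCapabilities(max_pcm_rate=192000, supports_dsd=False)
print(needs_transcoding(bridge, "dsd"))  # True -> core transcodes the DSF to PCM
```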

But then doesn’t this clear the path for upsampling or even VST/AU plugins available on the core? I understand you might not want to go with the most universal implementation but it’s clear that at least some basic DSP can be done on the core side.

But then doesn’t this clear the path for upsampling

That path was always clear: the Core is the best place for the DSP to live whenever it's practical. If/when Roon provides built-in upsampling capabilities, that DSP will happen in the core.

or even VST/AU plugins available on the core?

VST/AU are a special case because of their platform dependence and user interface requirements–knowing about audio device capabilities doesn’t do anything to help with this.

I think I see the point of confusion, so let me make something clear: The VST/AU plugin can run on any RoonBridge, not necessarily the one that has the relationship with the audio device.

Audio being handled in a VST/AU would travel like this: Core -> RoonBridge+VST/AU Plugin -> Core -> Audio Device

Audio using Core-Based DSP would travel like this: Core -> Audio Device

Each arrow could represent communication internal to your computer OR communication via your local network. So, basically…you can set it up however you want.

Examples:

  • If you want to run the plugin and the core and the audio device on the same hardware (Audirvana’s architecture), you can do that. Fire up RoonBridge right next to Roon and use the plugin there.

  • If you have a core on a Windows PC, and a Mac plugged into a USB headphone amp in your office, you should be able to run your favorite crossfeed AudioUnit on the Mac and then drive the headphone amp. This architecture allows for that, too.

  • If you have a RoonServer appliance, room-correction DSP in a Windows-only VST, and a SonicOrbiter SE in the living room, you can set up a Windows PC to run the VST, set it on the shelf next to the RoonServer appliance, and stream audio through the VST to the SonicOrbiter SE.

  • Replace “SonicOrbiter SE” with “Squeezebox Transporter” in the above example…and it still works.

This is a lot of architecture algebra. When you build in the right points of flexibility, the simple cases are easy, the complex cases are possible, and legacy concerns (like supporting old plugin systems that were built for a smaller world) are compartmentalized. Not simple stuff to get right–but I think this solution works.
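The two signal paths described above can be sketched as hop lists (an illustrative model only, not how Roon represents routes internally):

```python
def signal_path(core, endpoint, plugin_bridge=None):
    """Build the hop list for a stream. With a plugin bridge, the audio
    makes a round trip through it before reaching the endpoint:
    Core -> RoonBridge+VST/AU -> Core -> Audio Device."""
    if plugin_bridge is None:
        return [core, endpoint]                   # core-based DSP path
    return [core, plugin_bridge, core, endpoint]  # plugin round-trip path

print(signal_path("core", "dac"))                # ['core', 'dac']
print(signal_path("core", "dac", "bridge+vst"))  # ['core', 'bridge+vst', 'core', 'dac']
```

Because each hop can be local or networked, every example in the list above is just a different assignment of hops to machines.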

5 Likes

So does this mean that one day soon I’ll be able to use my Dirac Live VST/AU plugin?

1 Like

I thought I was following along just fine, then it got a little convoluted with all of the upsampling and such. So, I'm going to necro this thread.

Can you walk me through this?

I have a Windows 10 PC. I have a SonicOrbiter i5. I have a mRendu going from my router to my DAC via USB. Lastly, I have a room correction VST.

So, on what device do I install which software to run the VST, and what is the path? I mean, for instance, the Dirac VST would have to be run through some host software, wouldn't it, to even output a signal?

I’m sorry, but I got a little mixed up between the talk of RoonServer and RoonBridge.

Thank you kindly for your time.

Best,
Nick

You walked into a theoretical discussion about how we might implement VST support, not a description of existing functionality. Roon does not actually support loading VSTs.

The architecture behind VSTs is obsolete: it assumes that a single app running in one place does everything from the user interface to the DSP, basically how things worked in the '90s when Steinberg invented the standard. Roon is a distributed system, lots of little pieces of software running in different places and working together. VSTs do not scale well to this model, hence the complex discussion above about how to fit them in. Other reasons VSTs are obsolete: they don't support DSD or 64-bit sample widths.

The best path forward for accomplishing Room correction in Roon today is to generate/load a convolution filter.
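For readers unfamiliar with the term, a convolution filter is just an impulse response convolved with the audio stream. Real room-correction engines use FFT-based (often partitioned) convolution with measured impulse responses; this direct-form sketch only shows the underlying operation:

```python
def convolve(signal, impulse_response):
    """Direct-form convolution: out[n] = sum over k of ir[k] * signal[n-k]."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for n, x in enumerate(signal):
        for k, h in enumerate(impulse_response):
            out[n + k] += x * h
    return out

# A trivial 2-tap "filter": attenuate and add a 1-sample echo.
print(convolve([1.0, 0.0, 0.0], [0.5, 0.25]))  # [0.5, 0.25, 0.0, 0.0]
```

The filter a tool like Dirac would produce is just a much longer impulse response derived from room measurements; loading it into a convolution engine achieves the correction without any plugin host.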

(Note: I’m not saying that we will, or won’t, implement VST support in the future. The theoretical approach outlined above is no less valid than when I wrote that post last year, and while VST has serious shortcomings, there are probably some plugins that will never exist in another form.)

1 Like

Thank you, Brian. No wonder it was confusing. Makes sense now.