Automated AI Based EQ

I am by no means an expert, :wink: but hear me out for a sec… I am a big fan of Roon’s DSP Parametric EQ section. However, I do listen to quite diverse music styles, ranging from 70’s prog rock to modern jazz, fusion and world/classical music. I found that, in order to enjoy these diverse recordings, I need to tweak the EQ quite a lot. I was wondering if it would be possible to develop an automated EQ that is tuned to your personal preferences. How would it work?

  1. When you start up the (existing) DSP Parametric EQ, Roon suggests (randomly) 10 tracks from your library. (Ideally these tracks would be as sonically diverse as possible.)

  2. For each of those tracks, you tweak the parametric EQ to your personal preferences.

  3. The EQ records these settings, linking them to the frequency bands in the 10 songs.

  4. This info creates the algorithm for an automated EQ based on your preferences.

I am wondering if this would be possible to develop in practice…
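The four steps above can be sketched as a toy model. This is purely illustrative, not anything Roon offers: per frequency band, a simple least-squares line is fitted from each training track’s band energy to the gain the user dialed in, and a new track’s suggested gain then comes from its own band energy.

```python
# Toy per-band preference model: ordinary least squares mapping a track's
# band energy (dB) to the EQ gain (dB) the user chose for that band.
def fit_band(energies, gains):
    n = len(energies)
    mx = sum(energies) / n
    my = sum(gains) / n
    sxx = sum((x - mx) ** 2 for x in energies)
    sxy = sum((x - mx) * (y - my) for x, y in zip(energies, gains))
    slope = sxy / sxx
    return slope, my - slope * mx  # (slope, intercept)

def predict(model, energy):
    slope, intercept = model
    return slope * energy + intercept

# Toy training data for the low band: this user boosts bass-light tracks
# and cuts bass-heavy ones.
low_energy = [-28, -25, -22, -20, -18, -15, -12, -10, -8, -5]        # dB per track
user_gain  = [6.5, 5.0, 3.5, 2.5, 1.5, 0.0, -1.5, -2.5, -3.5, -5.0]  # dB chosen

model = fit_band(low_energy, user_gain)
print(round(predict(model, -25.0), 1))  # → 5.0 (suggested bass gain for a new track)
```

A real version would need many more tracks and bands, as discussed below, but the principle is the same: the user’s own adjustments are the training data.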

That would be excellent, even if we would still be a prisoner and slave to Roon for as long as we intend to enjoy music. I also have to EQ all the time just to get through a song. Heaven knows I’m at a severe disadvantage with Naim and PMC, two audio brands that do not make music…

You might want to check this out.
@miguelito has had this on his wish list for a while.

To be honest, this is too unstructured to be useful in my opinion.

I would however (as I did in the linked thread) advocate for track-based DSP.

To accommodate your ask, you could add rules that enable an EQ setting per track by selecting on metadata such as year, label, artist, genre, etc. I have an “80’s” parametric EQ saved and it works pretty well with most 80’s pop tracks.

So you’d have the option to assign an EQ per track/album or to autoselect.
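A minimal sketch of such metadata rules, assuming nothing about Roon’s actual data model (the preset names and track fields below are invented):

```python
# Map named EQ presets to metadata rules; per-track overrides win,
# otherwise the first matching rule picks the preset.
PRESETS = {
    "80s boost": {"rule": lambda t: 1980 <= t["year"] < 1990},
    "classical flat": {"rule": lambda t: t["genre"] == "Classical"},
}
DEFAULT = "no EQ"

def pick_preset(track):
    if "preset_override" in track:        # explicit per-track/album assignment
        return track["preset_override"]
    for name, preset in PRESETS.items():  # first matching metadata rule
        if preset["rule"](track):
            return name
    return DEFAULT

print(pick_preset({"year": 1984, "genre": "Pop"}))        # 80s boost
print(pick_preset({"year": 2019, "genre": "Classical"}))  # classical flat
print(pick_preset({"year": 2001, "genre": "Jazz"}))       # no EQ
```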

Now to be honest, since the Roon team places user experience above anything else (and I don’t disagree), I would expect that even track-based DSP is unlikely unfortunately.

So long as changing DSP requires rebuilding and refilling the audio pipeline (resulting in a gap), then automated DSP changes seem to me to be contrary to an enjoyable uninterrupted listening experience.

As I understand it, Roon already does levelling at the endpoint in order to avoid refilling the audio pipeline; however, pushing DSP to the endpoint would only be practical for more powerful (CPU/DSP-wise) endpoints. Perhaps with the R-Pi4 this may be more practical, but it would probably mean a significant architecture change to the overall audio pipeline.

Personally I would welcome being able to push some DSP to the endpoint, if only to allow real-time change of parameters without interrupting audio. I would be quite happy for only powerful enough endpoints to support such behaviour (R-Pi3 probably not, R-Pi4 maybe for some DSP, etc.), but this also becomes much more complex for Roon to support and test.

Couple of things:

1- Agree that if someone is applying different auto-DSP settings to consecutive tracks, then it would likely break gapless playback. But my need is really on a per-album basis, in which case this is a moot point.

2- The biggest hurdle, I think, is that by design DSP is applied in the core post endpoint routing (no extra endpoint work). I don’t think leveling is applied at the endpoint other than possibly receiving a message to change volume from the core. By design, DSP is linked to the endpoint, so it is likely past the endpoint routing in the code. Changing / adding DSP pre-routing is a bit of a project: it would require both audio routing and GUI changes.

3- It might be possible to add an auto-DSP by endpoint, but it would be less than a great user experience, as the DSP would generally be meant to correct the source rather than the endpoint per se. Practically this means you would have to add this auto-DSP stage to each endpoint, which is annoying.

So I think the Roon team is caught between a proper rewrite, adding global DSP, and the straightforward but less than great user experience of having to add the global auto-DSP to each endpoint.

I don’t expect the Roon team to do any of this frankly.

Per-album makes more sense.

Re DSP: DSP could be applied anywhere that has sufficient processing power and runs on a platform that is practical to support. Real-time adjustment is best performed when the DSP sits close to the DAC, since the further from the DAC in the digital chain, the more buffering there is likely to be.

Adjustment is simply a parameter passed to an API, either via a direct call or as a message over a network that is received and then passed to an API. This has been a reality in DAWs, synthesizers, professional signal processors, etc. for many years.

Of course, this requires that the DSP implementation is able to accept parameters in real time. Many library implementations cannot, which may be a significant problem requiring a new library to be found/implemented, or at least a re-implementation of the wrapper around the DSP processing. Having turned an implementation of open-source DSP code in C++ into one suitable for a DAW plugin with real-time control in the dim and distant past, I know first-hand what a pain that can be :slight_smile:
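To illustrate the point about accepting parameters in real time, here is a minimal sketch (class name and ramp scheme invented for the example): a gain stage whose target can be changed from a control thread or network message handler at any time, while the audio path ramps toward it block by block instead of jumping, avoiding both a pipeline rebuild and zipper noise.

```python
# A DSP stage with a real-time-safe parameter: the control side writes a
# target, and each audio block moves the applied gain a fraction of the
# remaining distance toward it.
class SmoothedGain:
    def __init__(self, gain=1.0, ramp=0.2):
        self.current = gain   # gain actually applied to audio
        self.target = gain    # gain requested by the control side
        self.ramp = ramp      # fraction of the remaining distance per block

    def set_gain(self, gain):
        # Called from a control thread or message handler at any time;
        # the audio path only reads self.target, so no rebuild is needed.
        self.target = gain

    def process(self, block):
        self.current += (self.target - self.current) * self.ramp
        return [sample * self.current for sample in block]

g = SmoothedGain()
g.set_gain(0.5)               # parameter change arrives mid-stream
for _ in range(40):           # gain converges over the following blocks
    out = g.process([1.0])
print(round(out[0], 3))       # → 0.5
```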

Miguelito, I think you are missing my point. I am not so interested in “track-based” EQ, but rather in “user-based” EQ. I think this makes more sense, since EQ settings are very subjective to the listener. I wouldn’t want Roon people to decide how much extra low end a track would need, based on some kind of reference point. My point is: there is no objective reference point here. It is completely subjective, depending on the user. By learning the preferences of the listener (with some initial input from the listener), a smart DSP could adjust the settings per track, to the user’s preferences. Ideally, what the automatic DSP should know is: “with a song with this little low end in this frequency range, Andre usually wants to add 5 dB around the 150 Hz band”.

Currently DSP is applied in the core. Changing that to move to the endpoint at this point would be insane - imagine all the embedded code you’d have to change.

Yeah but I don’t think that the machine learning process would work with 10 tracks. That is my point. And since you’d have to tell the training algo what you like per track, that is just unworkable.

You might have a point: it might need more tracks to train. I’d be happy to do 100 tracks to get a smart, well-trained EQ :slight_smile:. As for “since you’d have to tell the training algo what you like per track, that is just unworkable”: well, you can keep it simple by adjusting each track on only 3 or 4 frequency ranges. I am not saying that is perfect, but it would get reasonable results. At least it would be subjective, closer to my preferences than any other external reference point.

I don’t think it’s viable. Can you imagine the frustrating user experience when all of a sudden you get a completely wrong eq? There’s no way this works.

To put it in perspective: for a particular genre, and style within the genre, you would probably want 100 tracks at minimum. If you consider, say, 5 styles per genre and 5 genres, that would be 2,500 tracks. And after all that work, you would likely get a terrible user experience every time the algo sets something that is completely off. Not worthwhile.

The automated EQ getting it wrong sometimes? Well, maybe. But isn’t that a hell of a lot better than getting it wrong most of the time, as in my case with almost every track recorded before 1990, with very limited low end? Remember, this is about personal preference. “Wrong” is only “wrong” if I don’t like it. Simply correct the mistakes manually and the AI learns from that as well.

You are trying to complicate things with genre and style, but that is really not necessary. Just pick the 100 tracks you like best, across all styles. Then style is not so much an issue.

What I think would be interesting is a DSP-sharing method where I could load, say, the Jackson 5 Motown curve that someone with a good ear has devised, or for example a voted top EQ for INXS’s “Kick” album, which definitely needs some help.
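A shareable preset could be as simple as a JSON document listing parametric bands. This is a hypothetical format for illustration; Roon has no public preset schema, and the field names below are made up.

```python
# Hypothetical shareable EQ preset: a plain JSON document that anyone's
# player could export, post on a forum, and re-import.
import json

preset = {
    "name": "Kick (community curve)",
    "bands": [
        {"type": "peak", "freq_hz": 60, "gain_db": 3.0, "q": 0.7},
        {"type": "peak", "freq_hz": 3000, "gain_db": -1.5, "q": 1.4},
        {"type": "high_shelf", "freq_hz": 8000, "gain_db": 1.0, "q": 0.7},
    ],
}

text = json.dumps(preset, indent=2)   # export to share
loaded = json.loads(text)             # import on another system
print(len(loaded["bands"]), loaded["bands"][0]["freq_hz"])  # 3 60
```

Voting and distribution would sit on top of this, but the point is that a parametric curve is just a handful of numbers and travels easily.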

Right. And that’s why I have an 80’s curve to fix the low end. This curve works with most tracks that need it; there seems to have been a period when everyone was rolling off the bass by about the same amount. I call it “Mastered for Walkman”.

Getting the DSP setting wrong sometimes is worse than no DSP in my opinion. My contention is that given realistic sizes of training sets, it would get it wrong most of the time.

Funny enough, I’ve got exactly the same solution as you have :grinning: an “80’s boost” preset.
I’ve got several presets like that. Training sets might be an issue, but considering the amount of time I spend tweaking curves now, I might as well put the effort into training an algorithm.

Something I keep getting asked for with Deep Harmony is a means of assigning EQ presets to the remote so that users can switch presets. Unfortunately this is not currently possible.

But making it possible might be a half-way-house solution for some people.

I read a bit about Deep Harmony.

Would that work with Logitech endpoints only?

Because I don’t want to pollute this thread with Deep Harmony stuff, I will refer you to this thread:
Roon Extension: Deep Harmony - rich feature set for Logitech Harmony - Tinkering - Roon Labs Community

The short answer: Deep Harmony is an extension that integrates the Logitech Harmony Hub remote control, allowing Harmony to control Roon (transport, volume, some favourite and playlist shortcuts) and allowing Roon to control your hi-fi devices (volume and source integration).
