I frequently have to correct artist credits for tracks and albums because Roon is doing a very poor job of differentiating artists with the same name. I thought I would give Valence a shot to report a bad band image and this fundamental issue persists over there, too:
There are 5 different artists named Andromeda on this list. How am I supposed to guess which is the right one?
If needed and appropriate, I append the same number that Discogs uses to the artist name. After that I clean up and merge artists wherever possible. It's painful, but there seems to be no alternative.
I’ve raised this more times than I care to remember; good luck getting anyone to care enough to do something about it.
I’ve been adding [I] [II] [III] to artist names to differentiate them, but matching Discogs numbers is a good idea. Those local edits don’t seem to carry over to Valence, though.
On the album side you can get Roon to differentiate by adding the artist and albums to MusicBrainz, then giving it a few days for Roon to integrate the data into its own metadata store. At the very least, at that point you’ll be able to assign the album to the correct namesake.
As for sorting namesakes in Valence’s art director - that’s the part where I wish you luck. But it’s fun trying to figure out which is which when there are no clues, isn’t it?
Roon used to boast about their object model database that happily dealt with all this ‘same name’ confusion but things seem to have deteriorated somewhat in recent years.
I have over 25,000 edits on MusicBrainz.
It would be really helpful if Roon implemented the Disambiguation field from the MusicBrainz artist entry. That would make namesakes easy to distinguish and give users a way to submit or correct data when needed.
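To illustrate the idea: MusicBrainz artist records really do carry `name` and `disambiguation` fields in their web-service responses, so a client like Roon could show them side by side. A minimal sketch (the records below are made up for illustration, not real MusicBrainz data):

```python
# Build display names from MusicBrainz-style artist records.
# "name" and "disambiguation" are real fields in the MusicBrainz API;
# the candidate records here are hypothetical examples.

def display_name(artist: dict) -> str:
    """Append the disambiguation comment, when present, to the artist name."""
    disambig = artist.get("disambiguation", "").strip()
    return f'{artist["name"]} ({disambig})' if disambig else artist["name"]

candidates = [
    {"name": "Andromeda", "disambiguation": "Swedish progressive metal band"},
    {"name": "Andromeda", "disambiguation": ""},
]

for a in candidates:
    print(display_name(a))
# e.g. "Andromeda (Swedish progressive metal band)" vs. plain "Andromeda"
```

Even that much would turn a list of five identical "Andromeda" entries into something a user can actually choose from.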
That’s a good point. There are often ample hints within albums that should also assist with assigning an album to the correct namesake: composition matches across albums, genres, common performers, and other credits should provide strong signals of which artist an album belongs to, but to date I’ve not seen any evidence that this type of data has been leveraged to improve matches.
There’s also ample opportunity for confidence scoring and prompting the user to make the final determination when confidence falls below a threshold, but again there’s no evidence that’s being considered. I’d think coupling this to machine learning would deliver good results. As with all things commercial, though, I guess they’re asking whether the juice is worth the squeeze.
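For what it’s worth, the scoring-plus-threshold idea is simple to sketch. Everything below is hypothetical (the field names, the weights, the threshold); Roon’s actual matching pipeline isn’t public. It just scores each namesake by overlap with the album’s own genres and credited performers, and bails out to a user prompt when nothing scores high enough:

```python
# Hypothetical sketch: pick the namesake whose known genres/performers best
# overlap the album's, or return None (meaning "ask the user") when the best
# score is below the threshold. Names, records, and weights are invented.

def confidence(album: dict, candidate: dict) -> float:
    """Fraction of the album's signals the candidate artist already shares."""
    album_signals = set(album["genres"]) | set(album["performers"])
    artist_signals = set(candidate["genres"]) | set(candidate["performers"])
    if not album_signals:
        return 0.0
    return len(album_signals & artist_signals) / len(album_signals)

def pick_artist(album: dict, candidates: list, threshold: float = 0.5):
    best = max(candidates, key=lambda c: confidence(album, c))
    if confidence(album, best) >= threshold:
        return best   # confident enough to auto-assign
    return None       # below threshold: prompt the user instead

album = {"genres": {"prog metal"}, "performers": {"J. Example"}}
a1 = {"name": "Andromeda [I]", "genres": {"prog metal"}, "performers": {"J. Example"}}
a2 = {"name": "Andromeda [II]", "genres": {"synthpop"}, "performers": set()}

print(pick_artist(album, [a1, a2])["name"])  # prints "Andromeda [I]"
```

A real system would weight the signals differently (a shared composition is a much stronger hint than a shared genre), but even a crude overlap score beats guessing from an unlabelled list.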
Always a good idea to tag @metadata_support, as they don’t always get to read every post. I’ve been waiting for a fix for this for a long time as well.