Pause Roon database if storage offline

Yesterday I switched Roon Core to my Ryzen-based desktop, with the underlying music located on a server on my network, accessed via NFS. This morning I fired up my desktop PC without the NFS shares being online and Roon effectively dropped the entire library; bringing up the machine hosting the underlying music triggered a complete rescan and reimport of the library.

Unless I’m mistaken the current behaviour means that if a user has a music drive fail they automatically lose their Roon database too.

Would it not be possible to pause the DB under such circumstances and have the user confirm “trash and start over” only if necessary vs bringing the storage online and then bringing the database online too?

Hi @evand,

When the underlying files in a Roon watched folder are not available on the file system, the albums are no longer displayed in the GUI; however, nothing is removed from the DB…

… unless the user goes to:

Settings --> Library --> Library Management --> Clean Up Library …

And selects “Clean Up Library”


My files are on a QNAP. On the odd occasion I’ve fired up the Roon Core without the NAS being online, I’ve seen this … but after switching on the NAS Roon repopulates the GUI, there are no files to clean up, and all is well.

I’m wondering if somehow Roon now thinks these files are on a different path and has thus reimported them (as new music) rather than reinstating them.

One for @support, so I’ve moved your topic over to the support category of the forum.

Yip, that’s exactly how I’ve understood it, and I could confirm the same visually. The rescan and reimport started as soon as I ran:

mount -a

causing the NFS shares to be mounted to the same mountpoints they were mounted to the previous night. So whilst the DB was intact, Roon was intent on redoing the work it had taken the previous day to do.

Actually, I’m not prepared to leave it running overnight to redo all this and thrash my SSD needlessly. I’ve stopped and disabled the Core on this machine for the moment.
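One possible stop-gap on a systemd-based distro like Arch is a drop-in override that keeps the Core from starting until the music mounts are actually present. This is only a sketch: the unit name roonserver and the /alib/ext4* mountpoints are assumptions from this thread, not something Roon documents.

```ini
# Hypothetical drop-in: /etc/systemd/system/roonserver.service.d/wait-for-music.conf
# "roonserver" and the /alib/ext4* paths are assumptions; adjust for your install.
[Unit]
# systemd adds Requires=/After= ordering on the mount units for these paths,
# so the service only starts once the NFS shares are actually mounted.
RequiresMountsFor=/alib/ext4a /alib/ext4b /alib/ext4c
```

With this in place (and `systemctl daemon-reload`), starting the service while the media PC is off should fail fast instead of letting Roon scan empty mountpoints.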

I suspect it’s an oversight/bug in Roon, possibly linked to an unmounted NFS share appearing empty, as opposed to an SMB share being inaccessible when the target is offline.
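That distinction is easy to demonstrate: an unmounted mountpoint is just an ordinary empty directory, so anything walking it sees zero files rather than an I/O error. A minimal illustration using a scratch directory (a stand-in for an unmounted /alib/ext4a, not a real mount):

```shell
# An unmounted mountpoint is indistinguishable from an empty folder to a scanner,
# so a library scan "succeeds" with zero tracks instead of failing with an error.
dir=$(mktemp -d)        # stand-in for an unmounted NFS mountpoint
ls -A "$dir" | wc -l    # prints 0
```

By contrast, an unreachable SMB mount typically returns errors on access, which a scanner can treat as "storage offline" rather than "library deleted".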

That’s plausible, let’s see what the Roon chaps can add.

Hi @evand,

I have a few questions:

  1. What kind of OS is the Ryzen running, is it Windows and if so which edition?

  2. What is the network setup like? What is the model/manufacturer of your networking gear and how is your Core connected and how are your NFS shares connected?

  3. Where is the media stored? You mentioned NFS, but what is it on, a NAS, another PC? If on another PC please provide details of the PC and how it’s connected to network.

  4. How can we reproduce this behavior on our end? What are the exact steps we need to take to get this reproducible in the lab? From your post it seems like:

  • Have NFS shares active
  • Shut down Core
  • Unmount NFS shares
  • Turn on Core
  • Mount NFS shares

Am I missing something here?

  • Both the Ryzen and the PC on which the media is stored run Arch Linux (kernel 5.4.1-arch1-1).
  • Network is CAT6 wired; both machines are on the network via the same Netgear GS108 switch, with DHCP provided by a BT router.
  • NFS shares were mounted via the Ryzen’s /etc/fstab:

/alib/ext4a nfs noacl,nocto,noatime,rsize=32768,wsize=32768 0 0
/alib/ext4b nfs noacl,nocto,noatime,rsize=32768,wsize=32768 0 0
/alib/ext4c nfs noacl,nocto,noatime,rsize=32768,wsize=32768 0 0
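For reference, these fstab lines are what `mount -a` iterates over: field 2 is the mountpoint and field 3 the filesystem type. A quick sketch of listing the NFS mountpoints from an fstab-style file — note the nas:/export/... device fields below are placeholders I’ve invented for illustration, since the server paths aren’t shown above:

```shell
# Hypothetical fstab sample; "nas:/export/..." device fields are placeholders.
cat > /tmp/fstab.sample <<'EOF'
nas:/export/ext4a  /alib/ext4a  nfs  noacl,nocto,noatime,rsize=32768,wsize=32768  0  0
nas:/export/ext4b  /alib/ext4b  nfs  noacl,nocto,noatime,rsize=32768,wsize=32768  0  0
nas:/export/ext4c  /alib/ext4c  nfs  noacl,nocto,noatime,rsize=32768,wsize=32768  0  0
EOF

# Field 3 is the fs type, field 2 the mountpoint: list what `mount -a` would attach.
awk '$3 == "nfs" { print $2 }' /tmp/fstab.sample
```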

To reproduce behaviour:

  • set up a Linux PC and mount the media drives under a folder, say /alib, via fstab (e.g. /alib/disk1, /alib/disk2, etc.)
  • configure and export NFS shares of the aforementioned mount points
  • configure the Core PC to mount the aforementioned shares via fstab
  • point Roon on the Core PC at the NFS share root mounted via fstab; in the example above it’d be /alib
  • let Roon build its database
  • shut down the Core PC
  • shut down the media PC
  • boot the Core PC, leaving the media PC off
  • start up the media PC
  • run mount -a on the Core PC, forcing a mount of the media PC shares in fstab

Thanks @evand,

Thanks for the clarification/steps, I will pass this information on to QA.


Hi @evand,

I spoke to QA regarding this issue and they were wondering if you could please provide a set of your Core logs by using these instructions?

Do you recall around what time + date you tried to mount the NFS share so that we can cross-reference in the logs?

@noris, I’d have to recreate the issue and copy the logs, because in my setup the logs are written to tmpfs rather than SSD. I’ll point Roon at a small subset of my library, trigger the issue, and grab the logs.



Reproduced the issue pointing at a small subsection of my library. fstab, Core logs and client screendump here:


Thanks for sending that over @evand, it has been attached to your case notes and is pending review by QA.

@noris, anything ever come of this?

Hello @evand,

I took a look at our internal tracker today, and I can see that your ticket is still in our review queue.

This means our technical team is still planning to look at this, but we don’t yet have a timeframe for when that’s going to happen.

Once the ticket has been reviewed, I’ll be sure to follow up. Thanks in advance for your patience!


Hi @evand,

QA spent some time trying to reproduce the issue this week, but they have not been able to get any consistent results so far. We’re still looking into this; if we have any further questions I’ll be sure to let you know.


This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.