Running roon core on Linux as a non-root user

Core Machine (Operating system/System info/Roon build number)

Linux x86_64 4.13.0-46-generic, custom built system

Network Details (Including networking gear model/manufacturer and if on WiFi/Ethernet)


Audio Devices (Specify what device you’re using and its connection type - USB/HDMI/etc.)


Description Of Issue

I am having issues running Roon core as a non-root user on a Linux box. I care about security; these days it is a must.
It looks like Roon core needs root access to mount network directories. From the debug log:

01/02 19:35:58 Debug: [roon/cifs] domount: PASSWD="******" sudo -nE /sbin/mount.cifs "//" "/mnt/RoonStorage_1c79e29bfc1bebb44d6dd3c3a81aafc883ce86cd" -o guest,nounix,iocharset=utf8,user="roon",domain="WORKGROUP",vers=2.1,uid=1001,gid=1000
01/02 19:35:59 Debug: [roon/cifs] returned 1: sudo: a password is required
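For reference, the failure above is sudo being run non-interactively (-n) without a cached password. In principle this could be avoided with a sudoers rule that allows just that one command without a password. This is only a sketch: it assumes the core runs as a user named "roon" and that the helper lives at /sbin/mount.cifs, as the log suggests.

```shell
# Hypothetical /etc/sudoers.d/roon entry (always edit via `visudo -f
# /etc/sudoers.d/roon` so a syntax error cannot lock you out).
# Lets the "roon" user run the CIFS mount helper as root with no password,
# which is exactly what the failing `sudo -nE /sbin/mount.cifs` call needs.
roon ALL=(root) NOPASSWD: /sbin/mount.cifs
```

A matching rule for the umount path would be needed for clean unmounts; whether you consider this an acceptable security trade-off is another question.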

To work around this I can perform the mount manually (the remote network path is defined in /etc/fstab with the “user” option), but that raises another issue: Roon core then treats the mount as a local directory and watches it for changes in real time, all the time. I want to avoid this and let my NAS drives spin down whenever possible.

Any other ideas?


I have moderate Linux skills and make no claims of being a guru, so take any suggestions I give with that in mind.

To make sure I understand your problem correctly, if you run Roon as root, then you can use the GUI Roon client to add folders and network storage because the Roon core process has the authority to make configuration changes to the operating system. If Roon is running with lesser privileges, this doesn’t work.

If this is an accurate description of the problem, then if you’re anything like me, you rarely mount folders (i.e. your storage locations are fairly static until you decide to make a configuration change). In my system, I manually set up an automatic mount in /etc/fstab on my NUC running RoonServer to connect to my remote storage on a separate Ubuntu Server. When I went to add this remote storage from the Roon GUI as a local folder on my NUC, Roon didn’t have to make any Ubuntu configuration changes to my NUC because I had already made them manually.

The thing that surprises me about your post is that Roon treats your remote mounted folder as a local directory and always tracks your changes. My RoonServer only does that to the local physical drive and gives me the option of manually rescanning my mounted remote drive. I don’t know why your system behaves differently.

On my system this behavior appears to be defined in the /var/roon/RoonServer/Database/Registry/Storage folder. The local drive is defined as “type”: “Attached” with “rescandelay”: 4 in the configuration files there. My remote drive is defined as “type”: “Share” with “rescandelay”: 0.

Possibly manually changing those values will give you the results you’re looking for. I’ll let you tell me how awesome or awful that suggestion is. :wink:
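If you do try it, the sketch below shows the kind of edit I mean, using a throwaway copy of a made-up file name (the real files live under /var/roon/RoonServer/Database/Registry/Storage). On a live system, stop RoonServer and back up the Database folder first; Roon may rewrite or reject hand-edited files.

```shell
#!/bin/sh
# Sketch: flip "rescandelay" from 4 to 0 in a storage definition file.
# Works on a disposable copy here; the file name and contents are
# simplified examples, not a full Roon configuration.
workdir=$(mktemp -d)
cat > "$workdir/loc_example" <<'EOF'
"rescandelay": 4,
"type": "Attached",
EOF
# Change the rescan delay in place.
sed -i 's/"rescandelay": 4/"rescandelay": 0/' "$workdir/loc_example"
grep '"rescandelay"' "$workdir/loc_example"   # -> "rescandelay": 0,
rm -rf "$workdir"
```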


Hi Darryl,

You understand it correctly. It is all about permissions; I’d prefer Roon core (and all my other software) not to run as the root user.

I have a static entry in /etc/fstab for my music library, which lives on a different device – a network-attached storage box with HDDs. I mount it over NFS for simplicity (SMB is… a Windows thingy :mask: ), and my Linux Roon core user can also mount this directory itself with a simple

mount /mnt/nfs/music

(the filesystem is defined in /etc/fstab with the “user” option, which permits any user to mount it).
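For reference, such an fstab entry looks roughly like this. The server name and export path here are made up; only the mount point and the “user” option come from my actual setup.

```shell
# Hypothetical /etc/fstab line (hostname "nas" and export path assumed):
# "user" lets any ordinary user mount it (and implies nosuid,nodev,noexec),
# "noauto" keeps it from being mounted automatically at boot.
# <source>           <mountpoint>     <type>  <options>       <dump> <pass>
nas:/export/music    /mnt/nfs/music   nfs     user,noauto,rw  0      0
```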

Once mounted that way, Roon core treats the location as a local directory; it has no clue the path is mounted over the network. And local directories are watched constantly with the real-time policy, which cannot be changed. I submitted a feature request for this, as it should be a piece of cake to implement.

In the meantime I had a look at the loc_* files in the Database/Registry/Storage directory. In my case the NFS location is described like this:

"id": "f87fe2d0-687e-47a4-bc85-83ee3856faa4",
"version": 2,
"rescandelay": 4,
"location": {
    "drive": {
        "type": "Attached",
        "volume": {
            "id": "LINUXROOT",
            "title": "/",
            "subtitle": null
        }
    },
    "isdir": true,
    "path": "/mnt/nfs/music/!!Classic"
},
"ignoreitunes": true,
"ignoreplaylists": true,
"ignorepatterns": [
As you can see, the type is Attached and rescandelay is set to 4.
I will manually edit the file and see how it works.
I wish we could set up most of Roon core from a text console… but maybe it is just me.

Thanks a lot for the hint!

After modifying the storage configuration file and replacing Attached with Share, the folder was completely gone from the Roon core settings page. I had to revert to Attached, and my watching interval is back to real time :confused:


Something just sounds wrong here.

Surely, Roon’s “Watched Folder” functionality is implemented by something like inotify. It would be grossly inefficient to be constantly polling the file system.

But if Roon is using inotify, then it’s not constantly accessing the remote-mounted directory (which is the behaviour you claim to be seeing).
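A quick way to see inotify in action on a local directory (this sketch assumes the inotify-tools package is installed for the inotifywait utility; the directory and file names are throwaway examples):

```shell
#!/bin/sh
# Sketch: inotify fires instantly for LOCAL filesystem changes, but it
# has no way to see writes made on the far side of an NFS mount.
dir=$(mktemp -d)
# Wait for a single "create" event and print just the file name.
inotifywait -q -e create --format '%f' "$dir" > "$dir.out" &
sleep 1                        # give the watch time to be established
touch "$dir/new-album.flac"    # local change -> event fires immediately
wait                           # watcher exits after the first event
cat "$dir.out"                 # shows: new-album.flac
rm -rf "$dir" "$dir.out"
```

Running the same watch against an NFS mount point and creating the file on the server side produces no event at all, which is consistent with the behaviour described later in this thread.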

Instead of replacing Attached with Share, did you try just setting rescandelay to 0, leaving everything else as-is? That might be worth a test.

Hi Jacques,
I am not sure how inotify events are handled over NFS, or whether the implementation is the same regardless of NFS version, etc. I need to dig deeper into the subject.


It seems to work, but in a different way than described.
With the rescan delay set to 0, Roon core says it is watching files in real time, yet when I put some new FLACs into the folder it has no clue about the changes until I manually run Force Rescan. That is fine by me, but I’d still prefer a setting to disable real-time updates.


I think you’ve probably got it now. “Realtime” watching is surely implemented via inotify. But inotify doesn’t work (at all) over NFS (the remote kernel has no way to communicate FS updates to the local kernel over NFS).

Supposedly, it does work over SMB, but that’s not been my experience.

