Roon Causing Constant NAS HDD Access

You can SSH from the Terminal app on your Mac. The form is “ssh userid@IP-address”, e.g. ssh MYID@192.168.0.13. Once you’re in, it’s like being on the machine. Super easy.
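
For example (using the made-up login from above; substitute your own NAS user and its LAN IP):

    ssh MYID@192.168.0.13   # opens a remote shell on the NAS
    uptime                  # anything you type now runs on the NAS itself
    exit                    # closes the session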

BTW - SSH was originally a Unix command, so it’s built into Linux and macOS alike. To do it in Windows one needs an extra piece of software (PuTTY is a common choice).

Thanks for that Slim!

After running the utility QNAP links to above, I discovered that it takes about 3 minutes for my NAS to settle down after quitting Roon; after that, the utility no longer reports anything to the Terminal screen or to the log it generates.
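
For context, the utility works by toggling the kernel’s block_dump flag and reading the kernel log (you can see it do this in the output below). A minimal sketch of that technique, assuming a kernel that still has the legacy vm.block_dump sysctl and running as root:

    #!/bin/sh
    # Sketch of the block-I/O tracing technique blkdevMonitor uses.
    # (The real script also stops QNAP's klogd.sh daemon first so nothing
    # drains the kernel ring buffer while it samples.)
    dmesg -c > /dev/null              # clear the ring buffer
    echo 1 > /proc/sys/vm/block_dump  # log every block I/O and dirtied inode
    sleep 60                          # sample one minute of disk activity
    echo 0 > /proc/sys/vm/block_dump  # stop logging
    dmesg | grep -E 'WRITE|READ|dirtied' > blkdev.log  # keep what was caught

Each captured line names the process and PID doing the I/O, which is what lets you pin the activity on smbd, Roon, or anything else.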

Do I post the log here, send it to Roon support or send it to QNAP support?

Since you’re here, might as well check with @support.

So I quit Roon, waited 5 minutes and started the utility blkdevMonitor_20151225.sh. I let it run for 5 minutes, but it didn’t generate any data. I stopped the utility, started Roon with Storage disabled and waited 15 minutes for things to settle. Then I started the utility again and let it run for just 3 minutes. This is just the 1st page of over 25 pages of the resulting log file for @support to review:

Hi John,
I have not encountered the problems you describe, but I find your situation interesting nonetheless. I have a QNAP TVS-471 NAS (my Roon Core resides there). I have Gigabit Ethernet as well, but hardwire both my MacBook Pro (using HQPlayer) and Oppo UDP-205 (processing via USB DAC); no WiFi on either, because I found that I would experience occasional dropouts or annoying artifacts. I have the recommended configuration that Roon describes in the following link:

https://kb.roonlabs.com/Roon_Server_on_NAS

Have you considered applying this configuration (i.e. placing your Roon Core on the NAS)? It works quite well for me. The problem you describe sounds like either a grounding issue or RF noise from your NAS and/or Mac. For starters, you might consider keeping your NAS on a separate electrical circuit from your computer or playback system.

Just some food for thought (perhaps you have already considered these ideas).

Good luck!

Gorm

Hi Gorm,

Thanks for the suggestions! My TS-431P doesn’t meet the recommended requirements. The “noise” is coming from the NAS itself, as the disks are being accessed constantly while Roon is running.

@support I just sent the blkdevMonitor_v2.log file to QNAP via a support ticket. If you want it as well, let me know the email address to send it to.

How old are your drives? Could a drive be failing?

Only when Roon is running? I quit Roon and the disk access stops. The drives are probably 2-3 years old.
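
For anyone who wants to rule a failing drive out: SMART data can be read over the same SSH session. A sketch, assuming smartctl is installed on the NAS and the two disks show up as /dev/sda and /dev/sdb (device names vary by model):

    smartctl -H /dev/sda   # overall health verdict (PASSED / FAILED)
    smartctl -A /dev/sda   # attributes: watch Reallocated_Sector_Ct and
                           # Current_Pending_Sector for signs of trouble
    smartctl -H /dev/sdb   # repeat for the second drive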

QNAP got back to me, saying only not to run Roon. They did not identify what was causing all the disk access; I don’t think they want to spend resources on it.

For now, I don’t run Roon unless I want to use it.

I have exactly the same issue. Roon core on Linux Mint 18.3, watched folders for storage on a QNAP TS-453A.

The NAS used to sleep when not actively in use, but over the past few months I can hear the disks being accessed every 10 seconds or so, just like the OP states.

I’m on my second year of Roon subscription. This issue was definitely not present until a few months ago.

The behavior is constant. Roon doesn’t need to be in active use (i.e. playing music or scanning for new material); it keeps going as long as Roon Server is running. I can submit logs to @support if required.

Glad to know I’m not crazy and that it isn’t some unusual set of circumstances on my end creating this behavior from Roon.

I just restarted my NAS and noticed this pop-up on my screen:

[screenshot of the pop-up]

Roon was not running at the time and had been shut down for about 12 hours.
I don’t know what this is; I did not create this Share on my NAS.
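
One way to see every share the NAS is actually exporting over SMB (an assumption on my part: that QTS keeps its Samba config at /etc/config/smb.conf, as recent firmware does):

    grep '^\[' /etc/config/smb.conf   # each [name] section is one SMB share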

Hello @John_Conte,

The tech team is looking over the logs generated by the QNAP, I will be sure to update this thread if we see anything that could be causing this issue to occur for other users.

-John

@John_Conte and @support I ran the QNAP utility. Here is my output, very similar to John’s above. RoonServer was running but idle.

===== Welcome to use blkdevMonitor_v2 on Fri Mar 23 06:16:55 GMT 2018 =====
Stop klogd.sh daemon… Done
Turn off/on VM block_dump & Clean dmesg
Countdown: 3 2 1
Start…
============= 0/100 test, Fri Mar 23 06:17:03 GMT 2018 ===============
<7>[3258877.001990] smbd(25162): dirtied inode 209715220 (smbXsrv_open_global.tdb) on dm-0
<7>[3258877.002003] smbd(25162): dirtied inode 209715220 (smbXsrv_open_global.tdb) on dm-0
<7>[3258877.002209] smbd(25162): dirtied inode 209715213 (locking.tdb) on dm-0
<7>[3258877.002215] smbd(25162): dirtied inode 209715213 (locking.tdb) on dm-0
<7>[3258887.034553] smbd(25162): dirtied inode 209715213 (locking.tdb) on dm-0
<7>[3258887.034562] smbd(25162): dirtied inode 209715213 (locking.tdb) on dm-0
<7>[3258902.002627] smbd(25162): dirtied inode 209715220 (smbXsrv_open_global.tdb) on dm-0
<7>[3258902.002640] smbd(25162): dirtied inode 209715220 (smbXsrv_open_global.tdb) on dm-0
<7>[3258902.002847] smbd(25162): dirtied inode 209715213 (locking.tdb) on dm-0
<7>[3258902.002852] smbd(25162): dirtied inode 209715213 (locking.tdb) on dm-0
<7>[3258912.023250] smbd(25162): dirtied inode 209715213 (locking.tdb) on dm-0
<7>[3258912.023260] smbd(25162): dirtied inode 209715213 (locking.tdb) on dm-0
0887] kworker/u8:2(12367): WRITE block 106691880 on dm-0 (8 sectors)
<7>[3258882.575663] jbd2/dm-0-8(8237): WRITE block 3854839904 on dm-0 (8 sectors)
<7>[3258882.612076] jbd2/dm-0-8(8237): WRITE block 3854839912 on dm-0 (8 sectors)
<7>[3258882.612101] jbd2/dm-0-8(8237): WRITE block 3854839920 on dm-0 (8 sectors)
<7>[3258882.612299] jbd2/dm-0-8(8237): WRITE block 3854839928 on dm-0 (8 sectors)
<7>[3258887.001579] kworker/u8:2(12367): WRITE block 6711337520 on dm-0 (8 sectors)
<7>[3258887.034268] kworker/u8:2(12367): WRITE block 106691880 on dm-0 (8 sectors)
<7>[3258887.034347] kworker/u8:2(12367): WRITE block 1768524688 on dm-0 (8 sectors)
<7>[3258892.575421] jbd2/dm-0-8(8237): WRITE block 3854839936 on dm-0 (8 sectors)
<7>[3258892.597460] jbd2/dm-0-8(8237): WRITE block 3854839944 on dm-0 (8 sectors)
<7>[3258892.597486] jbd2/dm-0-8(8237): WRITE block 3854839952 on dm-0 (8 sectors)
<7>[3258892.597691] jbd2/dm-0-8(8237): WRITE block 3854839960 on dm-0 (8 sectors)
<7>[3258897.033309] kworker/u8:2(12367): WRITE block 6710886656 on dm-0 (8 sectors)
<7>[3258897.056437] kworker/u8:2(12367): WRITE block 6710886664 on dm-0 (8 sectors)
<7>[3258897.056541] kworker/u8:2(12367): WRITE block 6711337520 on dm-0 (8 sectors)
<7>[3258897.056553] kworker/u8:2(12367): WRITE block 106691880 on dm-0 (8 sectors)
<7>[3258897.056643] kworker/u8:2(12367): WRITE block 1768524688 on dm-0 (8 sectors)
<7>[3258902.575183] jbd2/dm-0-8(8237): WRITE block 3854839968 on dm-0 (8 sectors)
<7>[3258902.611967] jbd2/dm-0-8(8237): WRITE block 3854839976 on dm-0 (8 sectors)
<7>[3258902.611992] jbd2/dm-0-8(8237): WRITE block 3854839984 on dm-0 (8 sectors)
<7>[3258902.612732] jbd2/dm-0-8(8237): WRITE block 3854839992 on dm-0 (8 sectors)
<7>[3258912.001982] kworker/u8:2(12367): WRITE block 6711337520 on dm-0 (8 sectors)
<7>[3258912.023007] kworker/u8:2(12367): WRITE block 106691880 on dm-0 (8 sectors)
<7>[3258912.023070] kworker/u8:2(12367): WRITE block 1768524688 on dm-0 (8 sectors)
<7>[3258917.582835] jbd2/dm-0-8(8237): WRITE block 3854840000 on dm-0 (8 sectors)
<7>[3258875.959803] md9_raid1(2207): WRITE block 1060216 on sda1 (1 sectors)
<7>[3258875.959834] md9_raid1(2207): WRITE block 1060216 on sdb1 (1 sectors)
<7>[3258876.036892] md9_raid1(2207): WRITE block 1060232 on sda1 (1 sectors)
<7>[3258876.036907] md9_raid1(2207): WRITE block 1060232 on sdb1 (1 sectors)
<7>[3258875.959810] md13_raid1(2236): WRITE block 1060256 on sda4 (1 sectors)
<7>[3258875.959840] md13_raid1(2236): WRITE block 1060256 on sdb4 (1 sectors)
<7>[3258876.052955] md13_raid1(2236): WRITE block 1060272 on sda4 (1 sectors)
<7>[3258876.052967] md13_raid1(2236): WRITE block 1060272 on sdb4 (1 sectors)
<7>[3258872.831878] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258872.831908] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258875.959806] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258875.959810] md13_raid1(2236): WRITE block 1060256 on sda4 (1 sectors)
<7>[3258875.959838] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258875.959840] md13_raid1(2236): WRITE block 1060256 on sdb4 (1 sectors)
<7>[3258876.052955] md13_raid1(2236): WRITE block 1060272 on sda4 (1 sectors)
<7>[3258876.052967] md13_raid1(2236): WRITE block 1060272 on sdb4 (1 sectors)
<7>[3258882.575706] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258882.575737] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258882.850640] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258882.850671] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258887.001620] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258887.001649] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258887.235535] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258887.235565] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258892.575462] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258892.575491] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258892.834401] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258892.834436] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258897.033353] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258897.033384] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258897.257295] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258897.257321] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258902.575226] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258902.575256] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258902.851158] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258902.851185] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258912.002022] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258912.002054] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258912.223942] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258912.223972] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258917.582876] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258872.831878] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258875.959803] md9_raid1(2207): WRITE block 1060216 on sda1 (1 sectors)
<7>[3258875.959806] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258875.959810] md13_raid1(2236): WRITE block 1060256 on sda4 (1 sectors)
<7>[3258876.036892] md9_raid1(2207): WRITE block 1060232 on sda1 (1 sectors)
<7>[3258876.052955] md13_raid1(2236): WRITE block 1060272 on sda4 (1 sectors)
<7>[3258882.575706] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258882.850640] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258887.001620] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258887.235535] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258892.575462] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258892.834401] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258897.033353] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258897.257295] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258902.575226] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258902.851158] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258912.002022] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258912.223942] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258917.582876] md1_raid1(8003): WRITE block 7794127504 on sda3 (1 sectors)
<7>[3258872.831908] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258875.959834] md9_raid1(2207): WRITE block 1060216 on sdb1 (1 sectors)
<7>[3258875.959838] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258875.959840] md13_raid1(2236): WRITE block 1060256 on sdb4 (1 sectors)
<7>[3258876.036907] md9_raid1(2207): WRITE block 1060232 on sdb1 (1 sectors)
<7>[3258876.052967] md13_raid1(2236): WRITE block 1060272 on sdb4 (1 sectors)
<7>[3258882.575737] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258882.850671] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258887.001649] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258887.235565] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258892.575491] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258892.834436] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258897.033384] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258897.257321] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258902.575256] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258902.851185] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258912.002054] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[3258912.223972] md1_raid1(8003): WRITE block 7794127504 on sdb3 (1 sectors)

============= 1/100 test, Fri Mar 23 06:17:43 GMT 2018 ===============

@support Is there any progress with this issue?

The constant writes to the NAS by Roon must have an adverse effect on the longevity of the drives.

Hello @eclectic,

Can you take a screenshot of your “Storage” tab in Roon settings? We have some NAS-related fixes lined up for the next release of Roon which may help things; however, I’d still like to pass on as much data to the team as possible to make sure we can nail down what is going on here.

-John

@john Thanks for the reply.

Is this what you want?

Hello @eclectic,

Yes, thanks!

-John

@John, here is my screenshot too. I figure more info is better.

@john, as I stated earlier in this thread, if I Disable my Storage completely, the NAS access continues even if I restart Roon. Here is that screenshot:

But if I restart my iMac and then start Roon with Storage Disabled, there is no NAS access! Perhaps this is a clue to what’s happening?
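
That would fit the smbd “locking.tdb” writes in the log above: an SMB client that still has a share mounted can keep smbd dirtying its .tdb files even after Roon is quit. You can check what the Mac is still holding open; a sketch using macOS’s built-in tools:

    smbutil statshares -a   # list every mounted SMB share and its attributes
    mount -t smbfs          # show smbfs mounts and which server each points to

If a share from the NAS is still mounted after quitting Roon, that connection alone could explain the residual disk activity.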