Roon Core idle activity

I am running Roon Server natively on my NAS. I would REALLY like the system to spin down its hard drives when I am not using Roon.
Is this doable?
I am running three drives in a stripe set plus an SSD for the Roon database. Roon keeps the system from resting the drives. RoonAppliance and RAATServer seem to be the culprits, writing to their respective log files (RoonServer_log.txt and RAATServer_log.txt).

When I stop Roon Server, the NAS spins its drives down after the expected timeout, exactly as intended.

Roon constantly checks for new metadata. Surely it’s worse to spin up and down all the time?

Has Roon finished analyzing your library? I think the disks will spin down as expected once analysis is complete.

Thanks for your input @Ludwig. Roon shouldn’t poke the drives if no data changes. Once indexing and library population are complete, it ought to “be quiet”. Otherwise the drives holding the internal database are kept spinning 24/7, and aside from wear and tear they draw power and generate heat.
I realize I may be asking for something that can’t be done while maintaining the Roon experience, but…

And yes, all indexing and grooming is finished.
I stopped Roon Server, and after 5 minutes the drives spun down and power consumption dropped from 45 W to 22 W… That’s where I want to be when not playing music.
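On a plain Linux box, the drive power state and idle timeout can be checked and set from the command line; a minimal sketch (the device name `/dev/sda` is an assumption, and on a QNAP the GUI spin-down setting should normally be used instead):

```shell
# Hypothetical sketch for a generic Linux host, not QNAP-specific.
# /dev/sda is an assumed device name; requires root.
hdparm -S 120 /dev/sda   # spin down after 120 * 5 s = 10 minutes of idle
hdparm -C /dev/sda       # report current power state (active/idle vs. standby)
```

`hdparm -C` is a convenient way to verify whether a drive actually reached standby after the timeout, without spinning it back up.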

Are you 100% sure about analysis being done?

The only way I know of is looking in Settings > Library:

And of course in the main screen top right:

QNAP has provided a script to identify which process is preventing drive spin-down. (Apologies for the long excerpt, but nearly all of these entries begin with RoonAppliance, and sometimes RAATServer:)

============= 8/100 test, Mon Sep  4 21:57:14 CEST 2017 ===============
<7>[66519.614814] RoonAppliance(27698): dirtied inode 2494338 (RoonServer_log.txt) on dm-0
<7>[66519.614824] RoonAppliance(27698): dirtied inode 2494338 (RoonServer_log.txt) on dm-0
<7>[66519.614826] RoonAppliance(27698): dirtied inode 2494338 (RoonServer_log.txt) on dm-0
<7>[66521.091747] Threadpool work(30696): dirtied inode 2496056 (000003.log) on dm-0
<7>[66521.091756] Threadpool work(30696): dirtied inode 2496056 (000003.log) on dm-0
<7>[66521.091758] Threadpool work(30696): dirtied inode 2496056 (000003.log) on dm-0
<7>[66534.624879] RoonAppliance(27698): dirtied inode 2494338 (RoonServer_log.txt) on dm-0
<7>[66534.624886] RoonAppliance(27698): dirtied inode 2494338 (RoonServer_log.txt) on dm-0
<7>[66534.624888] RoonAppliance(27698): dirtied inode 2494338 (RoonServer_log.txt) on dm-0
<7>[66549.634838] RoonAppliance(27698): dirtied inode 2494338 (RoonServer_log.txt) on dm-0
<7>[66549.634847] RoonAppliance(27698): dirtied inode 2494338 (RoonServer_log.txt) on dm-0
<7>[66549.634849] RoonAppliance(27698): dirtied inode 2494338 (RoonServer_log.txt) on dm-0
<7>[66551.206754] Threadpool work(30693): dirtied inode 2496056 (000003.log) on dm-0
<7>[66551.206763] Threadpool work(30693): dirtied inode 2496056 (000003.log) on dm-0
<7>[66551.206765] Threadpool work(30693): dirtied inode 2496056 (000003.log) on dm-0
<7>[66504.664389] jbd2/dm-0-8(4500): WRITE block 46608080 on dm-0 (8 sectors)
<7>[66504.664395] jbd2/dm-0-8(4500): WRITE block 46608088 on dm-0 (8 sectors)
<7>[66504.664398] jbd2/dm-0-8(4500): WRITE block 46608096 on dm-0 (8 sectors)
<7>[66504.664401] jbd2/dm-0-8(4500): WRITE block 46608104 on dm-0 (8 sectors)
<7>[66504.664405] jbd2/dm-0-8(4500): WRITE block 46608112 on dm-0 (8 sectors)
<7>[66504.664407] jbd2/dm-0-8(4500): WRITE block 46608120 on dm-0 (8 sectors)
<7>[66504.664617] jbd2/dm-0-8(4500): WRITE block 46608128 on dm-0 (8 sectors)
<7>[66509.664888] kworker/u8:0(11113): WRITE block 79694016 on dm-0 (8 sectors)
<7>[66509.670703] kworker/u8:0(11113): WRITE block 79694872 on dm-0 (8 sectors)
<7>[66509.670712] kworker/u8:0(11113): WRITE block 80215352 on dm-0 (8 sectors)
<7>[66509.670718] kworker/u8:0(11113): WRITE block 8 on dm-0 (8 sectors)
<7>[66509.670721] kworker/u8:0(11113): WRITE block 8388704 on dm-0 (8 sectors)
<7>[66509.688898] jbd2/dm-0-8(4500): WRITE block 46608136 on dm-0 (8 sectors)
<7>[66514.671225] kworker/u8:0(11113): WRITE block 11017040 on dm-0 (8 sectors)
<7>[66519.691559] jbd2/dm-0-8(4500): WRITE block 46608144 on dm-0 (8 sectors)
<7>[66519.693675] jbd2/dm-0-8(4500): WRITE block 46608152 on dm-0 (8 sectors)
<7>[66519.693812] jbd2/dm-0-8(4500): WRITE block 46608160 on dm-0 (8 sectors)
<7>[66526.701427] jbd2/dm-0-8(4500): WRITE block 46608168 on dm-0 (8 sectors)
<7>[66526.703589] jbd2/dm-0-8(4500): WRITE block 46608176 on dm-0 (8 sectors)
<7>[66526.703720] jbd2/dm-0-8(4500): WRITE block 46608184 on dm-0 (8 sectors)
<7>[66529.617205] kworker/u8:0(11113): WRITE block 11017040 on dm-0 (8 sectors)
<7>[66534.620513] kworker/u8:0(11113): WRITE block 79694016 on dm-0 (8 sectors)
<7>[66534.622512] kworker/u8:0(11113): WRITE block 79694872 on dm-0 (8 sectors)
<7>[66534.622545] kworker/u8:0(11113): WRITE block 11563288 on dm-0 (8 sectors)
<7>[66534.622558] jbd2/dm-0-8(4500): WRITE block 46608192 on dm-0 (8 sectors)
<7>[66539.696889] jbd2/dm-0-8(4500): WRITE block 46608200 on dm-0 (8 sectors)
<7>[66539.702918] jbd2/dm-0-8(4500): WRITE block 46608208 on dm-0 (8 sectors)
<7>[66539.703055] jbd2/dm-0-8(4500): WRITE block 46608216 on dm-0 (8 sectors)
<7>[66544.627207] kworker/u8:0(11113): WRITE block 11017040 on dm-0 (8 sectors)
<7>[66549.634874] jbd2/dm-0-8(4500): WRITE block 46608224 on dm-0 (8 sectors)
<7>[66549.640731] jbd2/dm-0-8(4500): WRITE block 46608232 on dm-0 (8 sectors)
<7>[66549.640853] jbd2/dm-0-8(4500): WRITE block 46608240 on dm-0 (8 sectors)
<7>[66554.634852] kworker/u8:0(11113): WRITE block 79694016 on dm-0 (8 sectors)
<7>[66506.272642] smbd(14635): dirtied inode 486539284 (smbXsrv_open_global.tdb) on dm-2
<7>[66506.272646] smbd(14635): dirtied inode 486539284 (smbXsrv_open_global.tdb) on dm-2
<7>[66511.275453] smbd(14635): dirtied inode 486539277 (locking.tdb) on dm-2
<7>[66511.275457] smbd(14635): dirtied inode 486539277 (locking.tdb) on dm-2
<7>[66521.280684] smbd(14635): dirtied inode 486539284 (smbXsrv_open_global.tdb) on dm-2
<7>[66521.280689] smbd(14635): dirtied inode 486539284 (smbXsrv_open_global.tdb) on dm-2
<7>[66526.282056] smbd(14635): dirtied inode 486539277 (locking.tdb) on dm-2
<7>[66526.282061] smbd(14635): dirtied inode 486539277 (locking.tdb) on dm-2
<7>[66536.290634] smbd(14635): dirtied inode 486539284 (smbXsrv_open_global.tdb) on dm-2
<7>[66536.290638] smbd(14635): dirtied inode 486539284 (smbXsrv_open_global.tdb) on dm-2
<7>[66541.294532] smbd(14635): dirtied inode 486539277 (locking.tdb) on dm-2
<7>[66541.294536] smbd(14635): dirtied inode 486539277 (locking.tdb) on dm-2
<7>[66551.297112] smbd(14635): dirtied inode 486539284 (smbXsrv_open_global.tdb) on dm-2
<7>[66551.297116] smbd(14635): dirtied inode 486539284 (smbXsrv_open_global.tdb) on dm-2
<7>[66556.297448] smbd(14635): dirtied inode 486539277 (locking.tdb) on dm-2
<7>[66505.864878] kworker/u8:0(11113): WRITE block 15569556688 on dm-2 (8 sectors)
<7>[66505.864904] kworker/u8:0(11113): WRITE block 15569559944 on dm-2 (8 sectors)
<7>[66510.866217] kworker/u8:0(11113): WRITE block 15569543048 on dm-2 (8 sectors)
<7>[66510.866233] jbd2/dm-2-8(4535): WRITE block 8670059136 on dm-2 (8 sectors)
<7>[66510.866243] jbd2/dm-2-8(4535): WRITE block 8670059144 on dm-2 (8 sectors)
<7>[66510.866902] jbd2/dm-2-8(4535): WRITE block 8670059152 on dm-2 (8 sectors)
<7>[66515.867540] kworker/u8:0(11113): WRITE block 15569256712 on dm-2 (8 sectors)
<7>[66516.682766] jbd2/dm-2-8(4535): WRITE block 8670059160 on dm-2 (8 sectors)
<7>[66516.682781] jbd2/dm-2-8(4535): WRITE block 8670059168 on dm-2 (8 sectors)
<7>[66516.683450] jbd2/dm-2-8(4535): WRITE block 8670059176 on dm-2 (8 sectors)
<7>[66520.868881] kworker/u8:0(11113): WRITE block 15569556688 on dm-2 (8 sectors)
<7>[66520.868910] kworker/u8:0(11113): WRITE block 15569559944 on dm-2 (8 sectors)
<7>[66525.870212] kworker/u8:0(11113): WRITE block 15569543048 on dm-2 (8 sectors)
<7>[66525.870233] jbd2/dm-2-8(4535): WRITE block 8670059184 on dm-2 (8 sectors)
<7>[66525.870243] jbd2/dm-2-8(4535): WRITE block 8670059192 on dm-2 (8 sectors)
<7>[66525.870867] jbd2/dm-2-8(4535): WRITE block 8670059200 on dm-2 (8 sectors)
<7>[66530.871519] kworker/u8:0(11113): WRITE block 15569256712 on dm-2 (8 sectors)
<7>[66531.694767] jbd2/dm-2-8(4535): WRITE block 8670059208 on dm-2 (8 sectors)
<7>[66531.694782] jbd2/dm-2-8(4535): WRITE block 8670059216 on dm-2 (8 sectors)
<7>[66531.695532] jbd2/dm-2-8(4535): WRITE block 8670059224 on dm-2 (8 sectors)
<7>[66535.872868] kworker/u8:0(11113): WRITE block 15569556688 on dm-2 (8 sectors)
<7>[66535.872893] kworker/u8:0(11113): WRITE block 15569559944 on dm-2 (8 sectors)
<7>[66540.874195] kworker/u8:0(11113): WRITE block 15569543048 on dm-2 (8 sectors)
<7>[66540.874225] jbd2/dm-2-8(4535): WRITE block 8670059232 on dm-2 (8 sectors)
<7>[66540.874236] jbd2/dm-2-8(4535): WRITE block 8670059240 on dm-2 (8 sectors)
<7>[66540.874827] jbd2/dm-2-8(4535): WRITE block 8670059248 on dm-2 (8 sectors)
<7>[66545.875531] kworker/u8:0(11113): WRITE block 15569256712 on dm-2 (8 sectors)
<7>[66546.690763] jbd2/dm-2-8(4535): WRITE block 8670059256 on dm-2 (8 sectors)
<7>[66546.690779] jbd2/dm-2-8(4535): WRITE block 8670059264 on dm-2 (8 sectors)
<7>[66546.691393] jbd2/dm-2-8(4535): WRITE block 8670059272 on dm-2 (8 sectors)
<7>[66550.876872] kworker/u8:0(11113): WRITE block 15569556688 on dm-2 (8 sectors)
<7>[66550.876905] kworker/u8:0(11113): WRITE block 15569559944 on dm-2 (8 sectors)
<7>[66555.878204] kworker/u8:0(11113): WRITE block 15569543048 on dm-2 (8 sectors)
<7>[66555.878226] jbd2/dm-2-8(4535): WRITE block 8670059280 on dm-2 (8 sectors)
<7>[66555.878237] jbd2/dm-2-8(4535): WRITE block 8670059288 on dm-2 (8 sectors)
<7>[66555.878857] jbd2/dm-2-8(4535): WRITE block 8670059296 on dm-2 (8 sectors)
<7>[66504.872583] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66509.664911] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66509.897927] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66514.671259] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66514.874248] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66519.691594] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66519.901593] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66526.701461] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66526.912460] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66529.617238] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66529.820233] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66534.620535] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66534.834569] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66539.696937] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66539.911923] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66544.627242] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66544.834229] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66549.634911] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66549.848572] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66554.634898] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
<7>[66554.840894] md2_raid1(4210): WRITE block 97321744 on sda3 (1 sectors)
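Excerpts like the one above come from the kernel’s `block_dump` facility (`echo 1 > /proc/sys/vm/block_dump`, then read `dmesg`), which QNAP’s script presumably wraps. A small sketch of summarizing such output to see which process dirties the disk most often (the `parse_writers` helper is hypothetical; the sample lines are taken from the excerpt above):

```shell
# Hypothetical helper: extract the process name from block_dump-style kernel
# messages and count occurrences, most frequent writer first.
parse_writers() {
  sed -n 's/^<7>\[[0-9. ]*\] \([^(]*\)(.*$/\1/p' | sort | uniq -c | sort -rn
}

# Demo on sample lines from the log excerpt; RoonAppliance tops the list.
parse_writers <<'EOF'
<7>[66519.614814] RoonAppliance(27698): dirtied inode 2494338 (RoonServer_log.txt) on dm-0
<7>[66534.624879] RoonAppliance(27698): dirtied inode 2494338 (RoonServer_log.txt) on dm-0
<7>[66549.634838] RoonAppliance(27698): dirtied inode 2494338 (RoonServer_log.txt) on dm-0
<7>[66521.091747] Threadpool work(30696): dirtied inode 2496056 (000003.log) on dm-0
<7>[66504.664389] jbd2/dm-0-8(4500): WRITE block 46608080 on dm-0 (8 sectors)
EOF
```

Note that `block_dump` output should be collected with syslog temporarily quieted, or the logging itself generates writes; remember to switch it off afterwards with `echo 0 > /proc/sys/vm/block_dump`.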

From your log, it looks like the offending Roon activity is RoonServer_log.txt. Are you running your Roon DB on a spinning disk? Roon’s DB (where its logs live as well) should be on an SSD for many reasons, this being one of them.

Thank you for your feedback @danny.
The Roon database is located on an SSD in the fourth bay of the NAS, and all the music is on the three spinning disks in the first bays. But disk activity is disk activity, and the server won’t spin its disks down while any one of them is being accessed. The same will most likely happen on a ROCK server, or any other server where the music files are local. (Perhaps not if they are on a USB drive?)

Is it really necessary to write log data while the server is idling? To me it’s inefficient and a waste of resources…

I consider it a bug on your NAS’s part if writing to an SSD spins up other drives.

Expected behaviour aside, what is the point of Roon writing to the log continuously, if you don’t mind explaining?
Can you say for sure that an internal spinning drive in a ROCK NUC spins down while log data is being written to the SSD/M.2 SSD?

Roon is doing a lot when it’s not playing, like metadata updates that don’t touch the spinning drives. However, if it is truly idle, then it writes resource usage status periodically to the log. The fact that nothing else is logged during these stats often helps us find the cause of issues that members raise.

ROCK does not spin down drives as a choice. Spinning up and down drives decreases their longevity quite a bit, and in some circumstances can use more power.


Thanks!
I’ll consider my options here, as it is not quite the same getting answers from QNAP (to say the least!)

Maybe you can get the QNAP to mount a tmpfs over that logs folder… It’ll help during true idle times, but not while the DB is updating metadata.
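A minimal sketch of that tmpfs idea, assuming a typical QNAP install path for RoonServer (the path is an assumption; check where your .qpkg actually keeps its Logs folder):

```shell
# Hypothetical sketch: keep Roon's log writes in RAM instead of on disk.
# LOGDIR is an assumed path; verify it on your own system first. Requires root.
# Note: logs mounted this way are lost on reboot or unmount.
LOGDIR="/share/CACHEDEV1_DATA/.qpkg/RoonServer/Logs"
mount -t tmpfs -o size=64m tmpfs "$LOGDIR"
```

The mount does not survive a reboot, so on a QNAP it would have to be reapplied from an autorun script; and Roon Labs support may ask for those logs, so this trades diagnosability for silence.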

It’s still very weird that it’d spin up drives unrelated to the drive being written to. I’d look into getting that fixed.
