System is crashing/restarting frequently since update to build 667

Core Machine (Operating system/System info/Roon build number)
QNAP TS-473, AMD R-Series RX-421ND quad-core 2.1 GHz CPU, 20GB RAM, running QTS 4.4.3,
Roon Database on SSD (using 10.6GB out of 400GB).
Roon Version 1.7 (build 667), QPKG-Version 2020-07-15
Media Library on NAS (around 57,000 tracks)
Qobuz Studio Sublime Subscription

Network Details (Including networking gear model/manufacturer and if on WiFi/Ethernet)
Unifi Managed Switch, Unifi AP-AC PROs, HP ProCurve Switches

Audio Devices (Specify what device you’re using and its connection type - USB/HDMI/etc.)
Linn Akurate DSM, Linn Majik DS, AirPlay-Devices

Description Of Issue
Dear support team,

I have been enjoying Roon for one and a half years now, but my setup has become close to unusable ever since I updated to build 667. It all started around two weeks ago when I noticed the RoonServer process using up to 90% of the NAS CPU for a few minutes on a daily basis (the system restarts every morning). I realized that Roon was running a background audio analysis covering a very large number of files (around 350k vs. my 57k local files), so I assume it now also includes Qobuz tracks.
However, even though the scan eventually completed, my system has been unstable ever since. When I browse for music or fill the queue, Roon randomly restarts and I am left looking at a white screen with the Roon logo, which I assume indicates the restart. When the service comes back, it may work for a while or restart again with the next action I take. I have even experienced restarts while music was playing!

The log shows errors such as:

Got a SIGSEGV while executing native code. This usually indicates a fatal error in the mono runtime or one of the native libraries used by your application.

Could not exec mono-hang-watchdog, expected on path ‘/share/CACHEDEV1_DATA/.qpkg/RoonServer/RoonServer/RoonMono/etc/…/bin/mono-hang-watchdog’ (errno 2)
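For what it's worth, errno 2 corresponds to ENOENT ("No such file or directory"), i.e. the watchdog helper was not found at the path Roon tried to exec. A sanity check along these lines could confirm that (the path below is a placeholder for illustration, not the real QPKG location):

```shell
# errno 2 = ENOENT: the mono-hang-watchdog helper does not exist at the
# path Roon tried to exec. This sketch tests a placeholder path; on the
# NAS you would substitute the actual path from the log line above.
WATCHDOG="/tmp/demo-qpkg/RoonMono/bin/mono-hang-watchdog"
if [ -x "$WATCHDOG" ]; then
  echo "watchdog present and executable"
else
  echo "watchdog missing or not executable (matches errno 2)"
fi
```

If the file really is absent from the QPKG, that would explain why the mono runtime cannot recover from hangs.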

I’ll be more than happy to share the full log with you (which clearly shows how often the problem actually occurs) and/or any additional information you might need.
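As a rough way to quantify how often the crash fires, one could count the SIGSEGV lines in the logs. The excerpt below is synthetic and the file path is just an illustration; on the NAS you would grep the real RoonServer log files instead:

```shell
# Build a synthetic log excerpt, then count SIGSEGV events in it.
LOG=/tmp/roon_excerpt.log
printf '%s\n' \
  'Critical: Got a SIGSEGV while executing native code.' \
  'Info: playback started' \
  'Critical: Got a SIGSEGV while executing native code.' \
  > "$LOG"
grep -c 'SIGSEGV' "$LOG"   # prints 2 (one per crash event)
```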

Thank you so much for helping me out here!

Looking forward to hearing from you,

Thomas

Hi everyone,

Sorry I haven’t heard back from you yet. In the meantime, here are the log files, which might help.

Hoping to hear from you soon - my system is currently unusable :frowning:

Regards,

Thomas

Thanks for sending the log file, @Thomas_Schauer, and sorry for the delay here. We’ll take a look and get back to you soon!

Hi Dylan,

Thanks for looking into it. I hope to hear from your team soon, as it has now been 7 days since I posted this serious problem. I certainly wouldn’t push this if it were just a minor glitch.

Regards,

Thomas

Hi @Thomas_Schauer,

Can you share a screenshot of Settings > Storage? It looks like there are some problems connecting to the storage location which may be causing the issues you’re seeing.

If you temporarily disable local watched folders and use Qobuz only, do you see the same problem occur?

Hi Dylan,

Here’s the screenshot you requested.


Will disable the local folders for now and get back to you…

Hi Dylan,

Unfortunately it has happened again, despite turning off the local library:

I have uploaded a new log to the share above for your analysis.

Kind regards,

Thomas

Thanks for sending along new logs — I’m checking with the team on that and will get back to you ASAP.

@Thomas_Schauer,

We have a meeting scheduled with our development team to discuss next steps on this.

In the latest logs I notice the same storage traces. I’m thinking that even though it’s disabled, the fact that the storage is included at all might be triggering this.

While I work with the team on this, can you try removing the watched folder from Settings > Storage completely? Try running with just Qobuz and let me know if that changes anything. While not a permanent solution, it will help our investigation by confirming that something in the communication with the storage location is causing this.

Also, just to confirm — Have you tried completely power cycling your QNAP?

Thanks!

Hi @Thomas_Schauer,

I spoke with the team about this today — They’re seeing some traces that seem to point to an incompatibility with another app or service running on the QNAP. Are you running Plex or similar on the QNAP as well?

Hi Dylan,

Thanks for your feedback. I completely removed the share and Roon has been stable all day long - the log looks perfectly clean. And yes, my QNAP reboots every morning!
Concerning your hint, I am not running Plex on the QNAP, and nothing has really changed in my setup in the past few months.
Just let me know if I can assist with further information.

Best regards,

Thomas

Hi @Thomas_Schauer,

I appreciate you giving that a try. That definitely seems to point us in the right direction — Something about the communication between Roon and the drive / the media stored on that drive is causing an issue. I’ve filed a report with our senior technical team and I’ll be in touch as soon as I have info from them.

Hi Dylan and Team,

I must say that I am quite disappointed that you haven’t gotten back to me yet. It has been 16 days since your last message and 27 since I reported my problem, which is serious for me as the system can’t be used properly.

In the end I was left to experiment on my own, over and over, to find a solution. I am used to that, but with the Roon community I had believed it would be different.
Ultimately, I think cleaning up my library and re-adding the music share helped; Roon has been running stably again recently.

Guys, please keep up the good work on the product, and maybe you can use my feedback to keep improving your support and communication process. I am putting all my trust in you :slight_smile:

Sorry for the lack of an update here, @Thomas_Schauer. You have my apologies for that.

This has been escalated to our senior QA and development team, and they currently have an open investigation into this. I’ve requested an update from them on the current status and I’ll be sure to update you as soon as I can. Thanks, and apologies for the ongoing trouble.