RoonServer High CPU and RAM Usage on Ubuntu 22.04 / Roon Remote Slow / No Response

Roon Core Machine

stephencooper@librenms /v/r/R/Logs> sudo inxi -b
[sudo] password for stephencooper:
System:
  Host: librenms Kernel: 5.15.0-79-generic x86_64 bits: 64 Console: pty pts/4
    Distro: Ubuntu 22.04.3 LTS (Jammy Jellyfish)
Machine:
  Type: Desktop System: ASUS product: All Series v: N/A serial: N/A
  Mobo: ASUSTeK model: RAMPAGE V EXTREME v: Rev 1.xx serial: 140932630402158
    UEFI: American Megatrends v: 3701 date: 03/31/2017
CPU:
  Info: 6-core Intel Core i7-5930K [MT MCP] speed (MHz): avg: 2509 min/max: 1200/3700
Graphics:
  Device-1: NVIDIA GM204 [GeForce GTX 980] driver: nvidia v: 535.86.05
  Device-2: NVIDIA GM204 [GeForce GTX 980] driver: nvidia v: 535.86.05
  Display: server: X.org v: 1.21.1.4 with: Xwayland v: 22.1.1 driver: X: loaded: nvidia
    gpu: nvidia,nvidia tty: 200x25
  Message: GL data unavailable in console for root.
Network:
  Device-1: Intel Ethernet I218-V driver: e1000e
  Device-2: Broadcom BCM4360 802.11ac Wireless Network Adapter driver: wl
Drives:
  Local Storage: total: 2.16 TiB used: 1.37 TiB (63.3%)
Info:
  Processes: 598 Uptime: 6h 20m Memory: 31.25 GiB used: 29.74 GiB (95.2%) Init: systemd
  runlevel: 5 Shell: Sudo inxi: 3.3.13
stephencooper@librenms /v/r/R/Logs>

Networking Gear & Setup Details

Unifi

UAP-AC-LR-Garage Up to date 192.168.2.89 Excellent
UAP-AC-IW-Front Left Bedroom Up to date 192.168.1.24 Excellent
UAP-AC-IW-Middle Bedroom Up to date 192.168.1.25 Excellent
UAP-AC-IW-TV Room Up to date 192.168.1.26 Excellent
UAP-IW-HD Master Bedroom Up to date 192.168.1.27 Excellent
USW-Lite-8-PoE Up to date 192.168.2.149 GbE
USW-Flex-Mini-4 Up to date 192.168.2.85 FE
USW-Flex-Mini-1 Up to date 192.168.2.217 GbE
USW-Flex-Mini-5 Up to date 192.168.2.137 GbE
USW-Flex-Mini-3 Up to date 192.168.2.66 GbE
USW-Flex-Mini-6 Offline 192.168.3.121 -
USW-Flex-Mini-2 Up to date 192.168.2.79 GbE
US-8-150W-Garage Up to date 192.168.3.105 GbE
gateway Up to date 192.168.1.1 GbE

Connected Audio Devices

I do not play audio on the Roon Server machine itself.

Bedroom: Yamaha MG10XU

stephencooper@stephens-mac-mini ~> aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: MGXU [MG-XU], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

sudo inxi -b
[sudo] password for stephencooper:
Sorry, try again.
[sudo] password for stephencooper:
System:
  Host: stephens-mac-mini Kernel: 6.4.11-200.fc38.x86_64 arch: x86_64 bits: 64 Console: pty pts/0
    Distro: Fedora release 38 (Thirty Eight)
Machine:
  Type: Laptop System: Apple product: Macmini7,1 v: 1.0 serial: C07R52JLG1HW
  Mobo: Apple model: Mac-35C5E08120C7EEAF v: Macmini7,1 serial: C07604403XVG3NCB0 UEFI: Apple
    v: 474.0.0.0.0 date: 08/23/2022
CPU:
  Info: dual core Intel Core i5-4278U [MT MCP] speed (MHz): avg: 971 min/max: 800/3100
Graphics:
  Device-1: Intel Haswell-ULT Integrated Graphics driver: i915 v: kernel
  Display: x11 server: X.org v: 1.20.14 with: Xwayland v: 22.1.9 driver: X: loaded: modesetting
    unloaded: fbdev,vesa dri: crocus gpu: i915 tty: 267x25 resolution: 3840x2160
  API: OpenGL Message: GL data unavailable in console for root.
Network:
  Device-1: Broadcom BCM4360 802.11ac Wireless Network Adapter driver: wl
  Device-2: Broadcom NetXtreme BCM57766 Gigabit Ethernet PCIe driver: tg3
  Device-3: Broadcom NetXtreme BCM57762 Gigabit Ethernet PCIe driver: tg3
Drives:
  Local Storage: total: 223.57 GiB used: 47.03 GiB (21.0%)
Info:
  Processes: 283 Uptime: 8h 4m Memory: total: 8 GiB available: 7.62 GiB used: 2.56 GiB (33.6%)
  igpu: 96 MiB Init: systemd target: graphical (5) Shell: fish inxi: 3.3.29

Home office:

Raspberry Pi with a Raspberry Pi DAC Pro, playing back via a Yamaha mixer

stephencooper@music:~ $ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Pro [RPi DAC Pro], device 0: Raspberry Pi DAC Pro HiFi pcm512x-hifi-0 [Raspberry Pi DAC Pro HiFi pcm512x-hifi-0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0


stephencooper@Stephens-Air ~> ssh stephencooper@music
Linux music 6.1.21-v8+ #1642 SMP PREEMPT Mon Apr  3 17:24:16 BST 2023 aarch64


The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Aug 26 17:43:52 2023 from 192.168.1.162
stephencooper@music:~ $ sudo inxi -b
System:    Host: music Kernel: 6.1.21-v8+ aarch64 bits: 32 Console: tty 0 Distro: Raspbian GNU/Linux 11 (bullseye)
Machine:   Type: ARM Device System: Raspberry Pi 4 Model B Rev 1.5 details: BCM2835 rev: d03115 serial: 100000008fe05ec8
CPU:       Info: Quad Core Model N/A [MCP] speed: 1800 MHz min/max: 600/1800 MHz
Graphics:  Device-1: bcm2711-hdmi0 driver: vc4_hdmi v: N/A
           Device-2: bcm2711-hdmi1 driver: vc4_hdmi v: N/A
           Device-3: bcm2711-vc5 driver: vc4_drm v: N/A
           Display: server: X.org 1.20.11 driver: loaded: modesetting unloaded: fbdev tty: 267x25
           Message: Advanced graphics data unavailable in console for root.
Network:   Message: No ARM data found for this feature.
Drives:    Local Storage: total: 931.51 GiB used: 3.45 GiB (0.4%)
Info:      Processes: 187 Uptime: 6m Memory: 7.7 GiB used: 671.6 MiB (8.5%) gpu: 76 MiB Init: systemd runlevel: 5 Shell: Bash
           inxi: 3.3.01

Number of Tracks in Library

2035

Description of Issue

The Roon Remote UI, on either my Apple iPhone or my Apple M2 MacBook Air, is unresponsive for 10-60 seconds, showing a spinning logo. The UI eventually responds, and playback can then continue.

Logging into the server, I can see that Roon is consuming all available RAM (up to 32 GB) and frequently running at up to 250% CPU.

I only have a small library, and I am only playing back on one audio zone at a time. The resource usage on the server seems unreasonable.
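
To quantify that, a quick snapshot of the Roon processes can be taken with ps (a rough sketch; the process names are assumed to match the binaries shown in the service tree below):

# Snapshot CPU and resident memory of the Roon server processes (names assumed)
ps -C RoonAppliance,RAATServer,RoonServer -o pid,comm,%cpu,%mem,rss --sort=-rss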

stephencooper@librenms ~> sudo systemctl status roonserver
● roonserver.service - RoonServer
     Loaded: loaded (/etc/systemd/system/roonserver.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2023-08-26 11:00:04 AEST; 6h ago
   Main PID: 1939 (start.sh)
      Tasks: 157 (limit: 38262)
     Memory: 1.3G
        CPU: 8h 10min 29.563s
     CGroup: /system.slice/roonserver.service
             ├─  1939 /bin/bash /opt/RoonServer/start.sh
             ├─  1969 /opt/RoonServer/RoonDotnet/RoonServer RoonServer.dll
             ├─655770 /opt/RoonServer/RoonDotnet/RoonAppliance RoonAppliance.dll -watchdogport=35553
             ├─655771 /opt/RoonServer/Server/processreaper 655770
             └─655879 /opt/RoonServer/RoonDotnet/RAATServer RAATServer.dll

Aug 26 17:54:59 librenms start.sh[1969]: 06:54:53.558 Debug: RoonServer, after RoonAppliance exit, exitcode: 137
Aug 26 17:54:59 librenms start.sh[1969]: Error
Aug 26 17:55:01 librenms start.sh[1969]: Initializing
Aug 26 17:55:01 librenms start.sh[1969]: 06:54:55.566 Debug: RoonServer, before attempting to start RoonAppliance binary at path: /opt/RoonServer/Server/../Appliance/RoonAppliance
Aug 26 17:55:01 librenms start.sh[1969]: 06:54:55.572 Debug: RoonServer, after starting RoonAppliance
Aug 26 17:55:01 librenms start.sh[1969]: Started
Aug 26 17:55:02 librenms start.sh[1969]: Not responding
Aug 26 17:55:03 librenms start.sh[655770]: aac_fixed decoder found, checking libavcodec version...
Aug 26 17:55:03 librenms start.sh[655770]: has mp3float: 1, aac_fixed: 1
Aug 26 17:55:07 librenms start.sh[1969]: Running
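
For context, the "exitcode: 137" above corresponds to SIGKILL (128+9). On a machine whose RAM is nearly exhausted, that is often the kernel OOM killer, although Roon’s own watchdog can also kill an unresponsive RoonAppliance. One way to check whether the OOM killer fired (a sketch; exact log wording varies by kernel):

sudo journalctl -k -b | grep -iE 'out of memory|oom-killer|killed process'
# or, without journald:
sudo dmesg -T | grep -iE 'oom|killed process'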

Logs https://pastebin.com/B04HEcfz

08/26 17:55:04 Info: Starting RAATServer v2.0 (build 1303) production on linuxx64
08/26 17:55:04 Info: Local time is 26/8/2023 5:55:04 pm, UTC time is 26/8/2023 7:55:04 am
08/26 17:55:04 Trace: [RAATServer] detected ALSA support
08/26 17:55:04 Trace: [bits] myinfo: {"os":"Linux 5.15.0-79-generic","platform":"linuxx64","machineversion":200001303,"branch":"production","appmodifier":"","appname":"RAATServer"}
08/26 17:55:05 Info: [RAATServer] creating RAAT__manager
08/26 17:55:05 Info: [RAATServer]     appdata_dir  = /var/roon/RAATServer
08/26 17:55:05 Info: [RAATServer]     unique_id    = 88c6de36-6cac-431c-a20c-87b15a12e613
08/26 17:55:05 Info: [RAATServer]     machine_id   = df83d17e-5676-cf00-25f2-d990461e9f65
08/26 17:55:05 Info: [RAATServer]     machine_name = librenms
08/26 17:55:05 Info: [RAATServer]     os_version   = Linux 5.15.0-79-generic
08/26 17:55:05 Info: [RAATServer]     vendor       = 
08/26 17:55:05 Info: [RAATServer]     model        = 
08/26 17:55:05 Info: [RAATServer]     service_id   = d7634b85-8190-470f-aa51-6cb5538dc1b9
08/26 17:55:05 Info: [RAATServer]     is_dev       = False
08/26 17:55:05 Trace: [raatmanager] starting
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=PCH,DEV=0 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA Intel PCH ALC1150 Analog  
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=PCH,DEV=1 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA Intel PCH ALC1150 Digital 
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia_1,DEV=3 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 0               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia_1,DEV=7 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 1               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia_1,DEV=8 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 2               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia_1,DEV=9 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 3               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia_1,DEV=10 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 4               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia_1,DEV=11 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 5               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia_1,DEV=12 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 6               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia,DEV=3 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 0               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia,DEV=7 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 1               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia,DEV=8 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 2               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia,DEV=9 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 3               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia,DEV=10 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 4               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia,DEV=11 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 5               
08/26 17:55:05 Trace: [raatmanager/linux] FOUND id=hw:CARD=NVidia,DEV=12 usb_id=
08/26 17:55:05 Trace: [raatmanager/linux]       vendor=                               name=HDA NVidia MI 6               
08/26 17:55:05 Trace: [raatmanager] initialized
08/26 17:55:05 Info: [RAATServer] running RAAT__manager
08/26 17:55:05 Warn: [raatmanager] update_bits, json string: {}
08/26 17:55:05 Trace: [raatmanager] starting discovery
08/26 17:55:05 Trace: [discovery] starting
08/26 17:55:05 Info: [discovery] [iface:lo:127.0.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:55:05 Info: [discovery] [iface:lo:127.0.0.1] multicast send socket is bound to 0.0.0.0:34226
08/26 17:55:05 Info: [discovery] [iface:eno1:192.168.0.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:55:05 Info: [discovery] [iface:eno1:192.168.0.2] multicast send socket is bound to 0.0.0.0:55379
08/26 17:55:05 Info: [discovery] [iface:eno1:192.168.2.10] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:55:05 Info: [discovery] [iface:eno1:192.168.2.10] multicast send socket is bound to 0.0.0.0:43982
08/26 17:55:05 Info: [discovery] [iface:eno1:192.168.2.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:55:05 Info: [discovery] [iface:eno1:192.168.2.2] multicast send socket is bound to 0.0.0.0:51004
08/26 17:55:05 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:55:05 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast send socket is bound to 0.0.0.0:57663
08/26 17:55:05 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:55:05 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast send socket is bound to 0.0.0.0:40657
08/26 17:55:05 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:55:05 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast send socket is bound to 0.0.0.0:54679
08/26 17:55:05 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:55:05 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast send socket is bound to 0.0.0.0:47325
08/26 17:55:05 Info: [discovery] [iface:docker0:172.17.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:55:05 Info: [discovery] [iface:docker0:172.17.0.1] multicast send socket is bound to 0.0.0.0:33425
08/26 17:55:05 Info: [discovery] unicast socket is bound to 0.0.0.0:9003
08/26 17:55:05 Trace: [raatmanager] starting server
08/26 17:55:05 Info: [jsonserver] listening on port 9200
08/26 17:55:05 Trace: [raatmanager] announcing
08/26 17:55:05 Debug: [discovery] broadcast op is complete
08/26 17:55:05 Debug: [easyhttp] [1] POST to https://bits.roonlabs.net/1/q/roon.base.,roon.internet_discovery.,roon.raatserver. returned after 1194 ms, status code: 200, request body size: 143 B
08/26 17:55:05 Trace: [bits] updated bits, in 1264ms
08/26 17:55:05 Trace: [inetdiscovery] added device raatserver/88c6de36-6cac-431c-a20c-87b15a12e613 in addr:__ADDR__
08/26 17:55:05 Trace: [inetdiscovery] added service com.roonlabs.raatserver.tcp for device raatserver/88c6de36-6cac-431c-a20c-87b15a12e613
08/26 17:55:07 Trace: [jsonserver] [127.0.0.1:43966] accepted connection
08/26 17:55:07 Trace: [jsonserver] [127.0.0.1:43966] GOT[LL] [1] {"request":"enumerate_devices","subscription_id":"0"}
08/26 17:55:07 Trace: [jsonserver] [127.0.0.1:43966] SENT [1] [nonfinal] {"status": "Success", "devices": [{"type": "alsa", "device_id": "hw:CARD=PCH,DEV=0", "name": "HDA Intel PCH ALC1150 Analog"}, {"type": "alsa", "device_id": "hw:CARD=PCH,DEV=1", "name": "HDA Intel PCH ALC1
08/26 17:55:10 Trace: [ipaddresses] enumerating addresses
08/26 17:55:10 Trace: [ipaddresses]    FOUND   lo 127.0.0.1
08/26 17:55:10 Trace: [ipaddresses]    FOUND   eno1 192.168.2.2
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED wlp6s0: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    FOUND   br-5cb0ce3c7e7b 172.18.251.129
08/26 17:55:10 Trace: [ipaddresses]    FOUND   br-8650ad45985e 172.18.251.1
08/26 17:55:10 Trace: [ipaddresses]    FOUND   br-978d53c7177a 172.21.0.1
08/26 17:55:10 Trace: [ipaddresses]    FOUND   br-dcdfe75106a9 172.20.0.1
08/26 17:55:10 Trace: [ipaddresses]    FOUND   docker0 172.17.0.1
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED vethe4bb7ae: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED veth7eaf70e: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED veth86e3ec6: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED vethf9e2aea: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED veth556eb4e: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED vethc05db13: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED veth273c976: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED veth25756cc: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED veth95509da: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED veth414051a: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED veth4e624a2: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED vetha849b08: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED vethf50a5d5: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED vethd718142: no ipv4
08/26 17:55:10 Trace: [ipaddresses]    SKIPPED veth660d6ed: no ipv4
08/26 17:55:11 Debug: [easyhttp] [2] POST to https://discovery.roonlabs.net/1/register returned after 839 ms, status code: 200, request body size: 657 B
08/26 17:55:11 Trace: [inetdiscovery] registered 1 devices, 1 services
08/26 17:55:15 Trace: [RAATServer] refreshing @ 10s
08/26 17:55:15 Trace: [raatmanager] announcing
08/26 17:55:16 Debug: [discovery] broadcast op is complete
08/26 17:56:11 Trace: [RAATServer] network reachability changed, refreshing discovery
08/26 17:56:11 Trace: [raatmanager] updating network interfaces
08/26 17:56:11 Trace: [discovery] stopping
08/26 17:56:11 Trace: closing multicast
08/26 17:56:11 Trace: [discovery] closing unicast send socket
08/26 17:56:11 Trace: [discovery] closing unicast recv socket
08/26 17:56:11 Trace: [discovery] starting
08/26 17:56:11 Info: [discovery] [iface:lo:127.0.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:56:11 Info: [discovery] [iface:lo:127.0.0.1] multicast send socket is bound to 0.0.0.0:34061
08/26 17:56:11 Info: [discovery] [iface:eno1:192.168.0.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:56:11 Info: [discovery] [iface:eno1:192.168.0.2] multicast send socket is bound to 0.0.0.0:34865
08/26 17:56:11 Info: [discovery] [iface:eno1:192.168.2.10] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:56:11 Info: [discovery] [iface:eno1:192.168.2.10] multicast send socket is bound to 0.0.0.0:39673
08/26 17:56:11 Info: [discovery] [iface:eno1:192.168.2.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:56:11 Info: [discovery] [iface:eno1:192.168.2.2] multicast send socket is bound to 0.0.0.0:40571
08/26 17:56:11 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:56:11 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast send socket is bound to 0.0.0.0:49258
08/26 17:56:11 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:56:11 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast send socket is bound to 0.0.0.0:36550
08/26 17:56:11 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:56:11 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast send socket is bound to 0.0.0.0:42160
08/26 17:56:11 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:56:11 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast send socket is bound to 0.0.0.0:37425
08/26 17:56:11 Info: [discovery] [iface:docker0:172.17.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 17:56:11 Info: [discovery] [iface:docker0:172.17.0.1] multicast send socket is bound to 0.0.0.0:38384
08/26 17:56:11 Info: [discovery] unicast socket is bound to 0.0.0.0:9003
08/26 17:56:11 Trace: [raatmanager] announcing
08/26 17:56:11 Debug: [discovery] broadcast op is complete
08/26 17:56:17 Debug: [easyhttp] [3] POST to https://discovery.roonlabs.net/1/register returned after 659 ms, status code: 200, request body size: 657 B
08/26 17:56:17 Trace: [inetdiscovery] registered 1 devices, 1 services
08/26 18:00:05 Trace: [RAATServer] network reachability changed, refreshing discovery
08/26 18:00:05 Trace: [raatmanager] updating network interfaces
08/26 18:00:05 Trace: [discovery] stopping
08/26 18:00:05 Trace: closing multicast
08/26 18:00:05 Trace: [discovery] closing unicast send socket
08/26 18:00:05 Trace: [discovery] closing unicast recv socket
08/26 18:00:05 Trace: [discovery] starting
08/26 18:00:05 Info: [discovery] [iface:lo:127.0.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:00:05 Info: [discovery] [iface:lo:127.0.0.1] multicast send socket is bound to 0.0.0.0:40638
08/26 18:00:05 Info: [discovery] [iface:eno1:192.168.0.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:00:05 Info: [discovery] [iface:eno1:192.168.0.2] multicast send socket is bound to 0.0.0.0:50797
08/26 18:00:05 Info: [discovery] [iface:eno1:192.168.2.10] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:00:05 Info: [discovery] [iface:eno1:192.168.2.10] multicast send socket is bound to 0.0.0.0:54940
08/26 18:00:05 Info: [discovery] [iface:eno1:192.168.2.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:00:05 Info: [discovery] [iface:eno1:192.168.2.2] multicast send socket is bound to 0.0.0.0:57921
08/26 18:00:05 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:00:05 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast send socket is bound to 0.0.0.0:34747
08/26 18:00:05 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:00:05 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast send socket is bound to 0.0.0.0:44927
08/26 18:00:05 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:00:05 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast send socket is bound to 0.0.0.0:35235
08/26 18:00:05 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:00:05 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast send socket is bound to 0.0.0.0:58179
08/26 18:00:05 Info: [discovery] [iface:docker0:172.17.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:00:05 Info: [discovery] [iface:docker0:172.17.0.1] multicast send socket is bound to 0.0.0.0:43434
08/26 18:00:05 Info: [discovery] unicast socket is bound to 0.0.0.0:9003
08/26 18:00:05 Trace: [raatmanager] announcing
08/26 18:00:05 Debug: [discovery] broadcast op is complete
08/26 18:00:11 Debug: [easyhttp] [4] POST to https://discovery.roonlabs.net/1/register returned after 965 ms, status code: 200, request body size: 657 B
08/26 18:00:11 Trace: [inetdiscovery] registered 1 devices, 1 services
08/26 18:03:12 Trace: [RAATServer] network reachability changed, refreshing discovery
08/26 18:03:12 Trace: [raatmanager] updating network interfaces
08/26 18:03:12 Trace: [discovery] stopping
08/26 18:03:12 Trace: closing multicast
08/26 18:03:12 Trace: [discovery] closing unicast send socket
08/26 18:03:12 Trace: [discovery] closing unicast recv socket
08/26 18:03:12 Trace: [discovery] starting
08/26 18:03:12 Info: [discovery] [iface:lo:127.0.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:03:12 Info: [discovery] [iface:lo:127.0.0.1] multicast send socket is bound to 0.0.0.0:58887
08/26 18:03:12 Info: [discovery] [iface:eno1:192.168.0.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:03:12 Info: [discovery] [iface:eno1:192.168.0.2] multicast send socket is bound to 0.0.0.0:55926
08/26 18:03:12 Info: [discovery] [iface:eno1:192.168.2.10] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:03:12 Info: [discovery] [iface:eno1:192.168.2.10] multicast send socket is bound to 0.0.0.0:40070
08/26 18:03:12 Info: [discovery] [iface:eno1:192.168.2.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:03:12 Info: [discovery] [iface:eno1:192.168.2.2] multicast send socket is bound to 0.0.0.0:52236
08/26 18:03:12 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:03:12 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast send socket is bound to 0.0.0.0:38445
08/26 18:03:12 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:03:12 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast send socket is bound to 0.0.0.0:40054
08/26 18:03:12 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:03:12 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast send socket is bound to 0.0.0.0:59604
08/26 18:03:12 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:03:12 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast send socket is bound to 0.0.0.0:49786
08/26 18:03:12 Info: [discovery] [iface:docker0:172.17.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:03:12 Info: [discovery] [iface:docker0:172.17.0.1] multicast send socket is bound to 0.0.0.0:54939
08/26 18:03:12 Info: [discovery] unicast socket is bound to 0.0.0.0:9003
08/26 18:03:12 Trace: [raatmanager] announcing
08/26 18:03:12 Debug: [discovery] broadcast op is complete
08/26 18:03:18 Debug: [easyhttp] [5] POST to https://discovery.roonlabs.net/1/register returned after 824 ms, status code: 200, request body size: 657 B
08/26 18:03:18 Trace: [inetdiscovery] registered 1 devices, 1 services
08/26 18:05:34 Trace: [RAATServer] network reachability changed, refreshing discovery
08/26 18:05:34 Trace: [raatmanager] updating network interfaces
08/26 18:05:34 Trace: [discovery] stopping
08/26 18:05:34 Trace: closing multicast
08/26 18:05:34 Trace: [discovery] closing unicast send socket
08/26 18:05:34 Trace: [discovery] closing unicast recv socket
08/26 18:05:34 Trace: [discovery] starting
08/26 18:05:34 Info: [discovery] [iface:lo:127.0.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:05:34 Info: [discovery] [iface:lo:127.0.0.1] multicast send socket is bound to 0.0.0.0:34111
08/26 18:05:34 Info: [discovery] [iface:eno1:192.168.0.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:05:34 Info: [discovery] [iface:eno1:192.168.0.2] multicast send socket is bound to 0.0.0.0:45402
08/26 18:05:34 Info: [discovery] [iface:eno1:192.168.2.10] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:05:34 Info: [discovery] [iface:eno1:192.168.2.10] multicast send socket is bound to 0.0.0.0:46294
08/26 18:05:34 Info: [discovery] [iface:eno1:192.168.2.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:05:34 Info: [discovery] [iface:eno1:192.168.2.2] multicast send socket is bound to 0.0.0.0:42569
08/26 18:05:34 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:05:34 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast send socket is bound to 0.0.0.0:42335
08/26 18:05:34 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:05:34 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast send socket is bound to 0.0.0.0:52962
08/26 18:05:34 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:05:34 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast send socket is bound to 0.0.0.0:34400
08/26 18:05:34 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:05:34 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast send socket is bound to 0.0.0.0:51020
08/26 18:05:34 Info: [discovery] [iface:docker0:172.17.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:05:34 Info: [discovery] [iface:docker0:172.17.0.1] multicast send socket is bound to 0.0.0.0:41966
08/26 18:05:34 Info: [discovery] unicast socket is bound to 0.0.0.0:9003
08/26 18:05:34 Trace: [raatmanager] announcing
08/26 18:05:34 Debug: [discovery] broadcast op is complete
08/26 18:05:40 Debug: [easyhttp] [6] POST to https://discovery.roonlabs.net/1/register returned after 924 ms, status code: 200, request body size: 657 B
08/26 18:05:40 Trace: [inetdiscovery] registered 1 devices, 1 services
08/26 18:06:35 Trace: [RAATServer] network reachability changed, refreshing discovery
08/26 18:06:35 Trace: [raatmanager] updating network interfaces
08/26 18:06:35 Trace: [discovery] stopping
08/26 18:06:35 Trace: closing multicast
08/26 18:06:35 Trace: [discovery] closing unicast send socket
08/26 18:06:35 Trace: [discovery] closing unicast recv socket
08/26 18:06:35 Trace: [discovery] starting
08/26 18:06:35 Info: [discovery] [iface:lo:127.0.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:06:35 Info: [discovery] [iface:lo:127.0.0.1] multicast send socket is bound to 0.0.0.0:59942
08/26 18:06:35 Info: [discovery] [iface:eno1:192.168.0.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:06:35 Info: [discovery] [iface:eno1:192.168.0.2] multicast send socket is bound to 0.0.0.0:45415
08/26 18:06:35 Info: [discovery] [iface:eno1:192.168.2.10] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:06:35 Info: [discovery] [iface:eno1:192.168.2.10] multicast send socket is bound to 0.0.0.0:59982
08/26 18:06:35 Info: [discovery] [iface:eno1:192.168.2.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:06:35 Info: [discovery] [iface:eno1:192.168.2.2] multicast send socket is bound to 0.0.0.0:42198
08/26 18:06:35 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:06:35 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast send socket is bound to 0.0.0.0:37270
08/26 18:06:35 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:06:35 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast send socket is bound to 0.0.0.0:36404
08/26 18:06:35 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:06:35 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast send socket is bound to 0.0.0.0:49615
08/26 18:06:35 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:06:35 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast send socket is bound to 0.0.0.0:52043
08/26 18:06:35 Info: [discovery] [iface:docker0:172.17.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:06:35 Info: [discovery] [iface:docker0:172.17.0.1] multicast send socket is bound to 0.0.0.0:53197
08/26 18:06:35 Info: [discovery] unicast socket is bound to 0.0.0.0:9003
08/26 18:06:35 Trace: [raatmanager] announcing
08/26 18:06:35 Debug: [discovery] broadcast op is complete
08/26 18:06:40 Debug: [easyhttp] [7] POST to https://discovery.roonlabs.net/1/register returned after 649 ms, status code: 200, request body size: 657 B
08/26 18:06:40 Trace: [inetdiscovery] registered 1 devices, 1 services
08/26 18:07:18 Trace: [RAATServer] network reachability changed, refreshing discovery
08/26 18:07:18 Trace: [raatmanager] updating network interfaces
08/26 18:07:18 Trace: [discovery] stopping
08/26 18:07:18 Trace: closing multicast
08/26 18:07:18 Trace: [discovery] closing unicast send socket
08/26 18:07:18 Trace: [discovery] closing unicast recv socket
08/26 18:07:18 Trace: [discovery] starting
08/26 18:07:18 Info: [discovery] [iface:lo:127.0.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:07:18 Info: [discovery] [iface:lo:127.0.0.1] multicast send socket is bound to 0.0.0.0:48764
08/26 18:07:18 Info: [discovery] [iface:eno1:192.168.0.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:07:18 Info: [discovery] [iface:eno1:192.168.0.2] multicast send socket is bound to 0.0.0.0:51108
08/26 18:07:18 Info: [discovery] [iface:eno1:192.168.2.10] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:07:18 Info: [discovery] [iface:eno1:192.168.2.10] multicast send socket is bound to 0.0.0.0:35989
08/26 18:07:18 Info: [discovery] [iface:eno1:192.168.2.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:07:18 Info: [discovery] [iface:eno1:192.168.2.2] multicast send socket is bound to 0.0.0.0:33350
08/26 18:07:18 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:07:18 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast send socket is bound to 0.0.0.0:50185
08/26 18:07:18 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:07:18 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast send socket is bound to 0.0.0.0:56910
08/26 18:07:18 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:07:18 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast send socket is bound to 0.0.0.0:42873
08/26 18:07:18 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:07:18 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast send socket is bound to 0.0.0.0:45153
08/26 18:07:18 Info: [discovery] [iface:docker0:172.17.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:07:18 Info: [discovery] [iface:docker0:172.17.0.1] multicast send socket is bound to 0.0.0.0:49749
08/26 18:07:18 Info: [discovery] unicast socket is bound to 0.0.0.0:9003
08/26 18:07:18 Trace: [raatmanager] announcing
08/26 18:07:18 Debug: [discovery] broadcast op is complete
08/26 18:07:23 Debug: [easyhttp] [8] POST to https://discovery.roonlabs.net/1/register returned after 658 ms, status code: 200, request body size: 657 B
08/26 18:07:23 Trace: [inetdiscovery] registered 1 devices, 1 services
08/26 18:11:21 Trace: [RAATServer] network reachability changed, refreshing discovery
08/26 18:11:21 Trace: [raatmanager] updating network interfaces
08/26 18:11:21 Trace: [discovery] stopping
08/26 18:11:21 Trace: closing multicast
08/26 18:11:21 Trace: [discovery] closing unicast send socket
08/26 18:11:21 Trace: [discovery] closing unicast recv socket
08/26 18:11:21 Trace: [discovery] starting
08/26 18:11:21 Info: [discovery] [iface:lo:127.0.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:11:21 Info: [discovery] [iface:lo:127.0.0.1] multicast send socket is bound to 0.0.0.0:40958
08/26 18:11:21 Info: [discovery] [iface:eno1:192.168.0.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:11:21 Info: [discovery] [iface:eno1:192.168.0.2] multicast send socket is bound to 0.0.0.0:47898
08/26 18:11:21 Info: [discovery] [iface:eno1:192.168.2.10] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:11:21 Info: [discovery] [iface:eno1:192.168.2.10] multicast send socket is bound to 0.0.0.0:51102
08/26 18:11:21 Info: [discovery] [iface:eno1:192.168.2.2] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:11:21 Info: [discovery] [iface:eno1:192.168.2.2] multicast send socket is bound to 0.0.0.0:35390
08/26 18:11:21 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:11:21 Info: [discovery] [iface:br-5cb0ce3c7e7b:172.18.251.129] multicast send socket is bound to 0.0.0.0:48671
08/26 18:11:21 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:11:21 Info: [discovery] [iface:br-8650ad45985e:172.18.251.1] multicast send socket is bound to 0.0.0.0:32975
08/26 18:11:21 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:11:21 Info: [discovery] [iface:br-978d53c7177a:172.21.0.1] multicast send socket is bound to 0.0.0.0:33758
08/26 18:11:21 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:11:21 Info: [discovery] [iface:br-dcdfe75106a9:172.20.0.1] multicast send socket is bound to 0.0.0.0:44276
08/26 18:11:21 Info: [discovery] [iface:docker0:172.17.0.1] multicast recv socket is bound to 0.0.0.0:9003
08/26 18:11:21 Info: [discovery] [iface:docker0:172.17.0.1] multicast send socket is bound to 0.0.0.0:58678
08/26 18:11:21 Info: [discovery] unicast socket is bound to 0.0.0.0:9003
08/26 18:11:21 Trace: [raatmanager] announcing
08/26 18:11:21 Debug: [discovery] broadcast op is complete
08/26 18:11:27 Debug: [easyhttp] [9] POST to https://discovery.roonlabs.net/1/register returned after 937 ms, status code: 200, request body size: 657 B
08/26 18:11:27 Trace: [inetdiscovery] registered 1 devices, 1 services

I would post the RoonServer logs here, but the file is too large and the forum won’t accept the upload.

Several of us have been reporting this problem, and Roon support is looking into it. My report:

Hi Rooners, is anyone looking at this?

The thread @Fernando_Pereira posted is tagged [Investigation], which indicates that it is with the Roon QA team. However, Roon won’t commit to timelines, so you’ll have to be patient.

Fortunately, the issue doesn’t affect all Linux cores; my server on Ubuntu 22.04 is fine.

I think you meant “Fortunately, the issue does not affect all Linux cores; my server on Ubuntu 22.04 is fine” :upside_down_face: In my experience, you can have an Ubuntu server that runs Roon well for months and then suddenly starts chewing through memory, or exactly the same library moved to different hardware triggers the issue. Problems like this are really hard to track down (speaking from decades of experience with complex software and hardware systems). For example, there could be a memory leak tied to a race condition that only arises under very particular circumstances in the surrounding hardware and software environment.

Just in case anyone hasn’t tried.

I permanently disable swap.

Never have an issue.

I’m running Windows now for various reasons, but my other two Core machines never had, and still don’t have, issues.

:man_shrugging:

Disable Swap

Run the following command to disable Swap:

sudo swapoff -a

Now remove the Swap file:

sudo rm /swap.img

The next thing we need to do is modify the fstab file so that the Swap file is not re-created after a system reboot.

Remove the following line from /etc/fstab:

/swap.img none swap sw 0 0
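
If you’d rather not edit the file by hand, a sed one-liner can comment the entry out instead (a sketch; it assumes the line starts exactly with /swap.img, and keeps a backup copy first):

sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i 's|^/swap.img|#/swap.img|' /etc/fstab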

Check Swap is disabled

Run the following command to confirm swap is disabled; if it prints nothing, swap is off.

sudo swapon --show

I certainly did. :smile:

Hi everyone,

QA is requesting volunteers willing to set an environment variable on their Ubuntu machine in order to test a workaround as we investigate this issue.

Anyone who is experiencing symptoms and running the main production build of Roon would be helpful to our investigation. Please let us know and we’ll follow up with additional details and instructions.

Hi @connor, I am running the main production build on 22.04.3 with the latest 6.2.0 kernel. I occasionally experience high memory and CPU utilization, and I am very willing to test setting an environment variable on my server. Thanks!

Hi @Andreas_Philipp1,

Wonderful! Specifically, we’re looking to set the DOTNET_GCRetainVM environment variable to 0 and then run a Roon session to see how CPU and RAM perform.

Here’s an overview of using assignment expressions to set values for environment variables.
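
For anyone who wants to scope the variable to the Roon service only rather than system-wide, a systemd drop-in is one way to do it (a sketch; it assumes the roonserver.service unit shown earlier in this thread, and the drop-in file name is arbitrary):

sudo mkdir -p /etc/systemd/system/roonserver.service.d
printf '[Service]\nEnvironment=DOTNET_GCRetainVM=0\n' | sudo tee /etc/systemd/system/roonserver.service.d/dotnet-gc.conf
sudo systemctl daemon-reload
sudo systemctl restart roonserver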

Thank you for your help!

Will do and report back…

Hi @connor, I am reporting back on my experience with the environment variable as requested last week.

To give you some context, first some details on my server:

Asus Prime H310i-Plus R2.0 mini-ITX motherboard, Intel Core i5 8600K processor, 16 GB RAM. 250 GB Samsung 970 EVO Plus NVMe M.2 SSD for the OS and Roon, Samsung 860 EVO 2 TB SSD for 3210 local albums, mostly in Apple Lossless.

The hardware is installed in a passively cooled HDPlex case, and it is connected via Ethernet to my router. I have configured two endpoints – iMac with the Roon app and RPi 4 with DietPi – both WiFi-connected. IPv6 is switched off on the router.

Since setting this machine up in 2020, I have been running Ubuntu Server: first 18.04, then 20.04, and last year a minimal 22.04 server install. I have the Roon Server Logs directory configured as a RAM disk, and some months ago I settled on configuring swappiness=0.
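
For anyone wanting to replicate that setup, the two pieces would typically be a tmpfs mount for the Logs directory and a persisted vm.swappiness sysctl (a sketch; the Logs path is assumed from a standard /var/roon install, adjust to yours):

# /etc/fstab entry mounting the Roon Logs directory as a RAM disk (path assumed)
tmpfs  /var/roon/RoonServer/Logs  tmpfs  defaults,size=256M  0  0

# keep the kernel from swapping, persisted across reboots
echo 'vm.swappiness=0' | sudo tee /etc/sysctl.d/99-swappiness.conf
sudo sysctl -p /etc/sysctl.d/99-swappiness.conf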

My Roon database at this moment manages around 291k tracks; 56k of these are local, the rest from Tidal and Qobuz.

In addition to Roon, the machine runs Network UPS Tools to monitor the UPS it is connected to, and Pi-hole. I monitor the SMART status of the NVMe SSD and the thermal parameters of the machine, as it sits in a location where the ambient temperature more often than not reaches well above 90 °F.

The environment variable you asked for was set up in the /etc/environment file:

andreas@symphony:~$ cat /etc/environment 
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
DOTNET_GCRetainVM=0

After reboot, printenv shows that the variable is correctly set.
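
Worth noting: /etc/environment is applied to login sessions via PAM, and whether a systemd service picks it up can depend on the distribution and unit configuration, so it is worth confirming the variable actually reached the running RoonAppliance process (a sketch; the process name is assumed):

# show the environment of the running RoonAppliance process
sudo cat /proc/"$(pgrep -x RoonAppliance)"/environ | tr '\0' '\n' | grep DOTNET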

Since this configuration change, the machine has been up for 2d 18:20h, and during the weekend Roon has been heavily used.

  1. Two hours after rebooting, htop showed Roon during playback to one endpoint with a resident memory size of 5741M and very low processor usage, which is what I commonly see after restarting the Roon server.

  2. This hadn’t changed 17 hours after the reboot:

  3. Saturday morning, shortly after the previous htop screenshot, Roon triggered an update of the Qobuz Storage Library. This process is heavy on the CPU and always leads to an increase in the reported RES memory size. I took two corresponding htop screenshots to document that moment:

  4. Saturday afternoon, at 1d 1:26h uptime since the reboot, the RES memory size had climbed to 7982M during music reproduction to one endpoint.

  5. Sunday morning (yesterday), at 1d 17:40h uptime, the RES size was reported as 8157M. This was after several hours of the server being idle during the night. At 3 AM the server ran its daily database backup to a drive connected to the router, and the increase in reported RES memory size came as a surprise to me. I had previously noticed that running a backup would always lower the reported RES memory size, especially during the first 3-4 days after a server restart. Not anymore with the environment variable set.

  6. Several hours later, at 1d 21:24h uptime, the reported RES memory size had decreased to 7483M during continuous playback to one endpoint:

  7. Sunday afternoon I saved several Qobuz albums into my database; in my experience, this is what most often triggers the update process of the Qobuz Storage Library. As you can see in the next htop output, this again keeps one of the processor cores at 100% and again increases the reported RES memory size:

  8. Several hours later, after the Qobuz Storage Library update completed and during playback to one endpoint, the reported RES memory size had again decreased slightly:

  9. This is the last data point for now, from this morning, after the 3 AM backup and an otherwise idle night for the Roon server. The reported RES memory size has grown to 9267M:

I will continue to monitor the behavior of the server and will report back in a couple of days.
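
To make the data points easier to compare, a simple logging loop can capture the RoonAppliance resident set size over time (a rough sketch; process name and sampling interval are assumptions):

# append a timestamped RSS sample (in KiB) every 5 minutes
while true; do
  echo "$(date '+%F %T') $(ps -C RoonAppliance -o rss=) KiB" >> ~/roon-rss.log
  sleep 300
done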

I should like to make some additional comments:

  1. The user experience during these 2.5 days since the reboot has been fine, with the exception of the times when Storage Library updates are running. When this happens, all interaction with the user interface on my iMac and Android phone becomes tedious. Page loads, searches, the time from click to start of music reproduction, etc., become unbearably slow. This has always been the case and cannot be attributed to the environment variable experimentally set on the OS.

  2. In my experience, after several days of runtime, all server processing begins to slow down, even if the reported RES memory size has not grown much. While monitoring Roon server activity by tailing the log file, I can see that Storage Library updates and metadata updates run much slower than after a reboot. I suspect this is what many users with sizable libraries experience several days after the last restart of the Roon server process. A restart of the Roon server always brings things back to normal; I usually restart it after 6–7 days. The effect is not noticeable with a smaller library; I don’t remember distinctly noticing it until the library grew beyond 80k tracks.

  3. With a growing database of Tidal and/or Qobuz tracks, the storage library update process becomes a nuisance. It is triggered by nearly every save of a new album into the database, its runtime seems to keep increasing, and it gets even slower once server uptime exceeds 4 or 5 days. As mentioned before, while this process runs, all user interaction on the remotes is unbearably slow.

  4. During normal playback of music on the server, I never see any runaway CPU or RAM usage. Playback is well behaved, but I should add that I don’t use any DSP and usually play to only a single endpoint.

I didn’t anticipate reporting back so soon after this morning’s write-up, but I saved two Qobuz albums into my library and deleted the very same albums from my Tidal favorites, which once again triggered the dreaded Qobuz Storage Library update process. I notice that it runs distinctly slower than yesterday, very slow indeed. It has now been running for nearly an hour and is still going. RES memory size has gone up and the server has had one core at 100% the whole time, which wouldn’t matter if user interaction with the remotes weren’t affected. Everything is slowed down right now: clicking on artists and waiting for the artist page to load, loading discographies, loading album pages, clicking start play, and so on…

Why would this process run so much faster on a newly started Roon server than on a machine that has been running for some days and has been heavily used in the meantime? It is not a lack of RAM…

This will eventually stop, and user interaction will be normal, or at least reasonably fast, again… but as soon as another album is saved into or deleted from the database, the process will be triggered again and will run slower still…

Again, my hunch is that this effect is what other users with bigger databases are experiencing when they report slowness along with increased RAM and processor usage on their systems, and I also believe that Linux is not the only affected platform.

Edit: This process is still running… it has been nearly three hours since it started, and user interaction with Roon is still very much affected…

I just started to play an album… from clicking the blue ‘Play now’ button to the start of music reproduction took about 20 seconds. And without restarting the server, this will only get worse.

None of this is caused by the recently set environment variable. It has always been this way; it just may have happened sooner than before, given that the server was restarted only three days ago…

Edit 2: After about 4 hours the Qobuz Storage Library update process ended; user interaction is back to normal. This is streaming a 96/24 album from Qobuz:

Did this also happen before setting the DOTNET_GCRetainVM=0 environment variable?

The theory of the problem that we’re trying to investigate with this environment variable implies that it will affect anyone running RoonServer, regardless of platform. It also implies that users with more free RAM on their system will actually tend to see more apparent memory leaking: what’s actually happening is that RoonAppliance is just kinda consuming all the memory available to it.

It sounds like the changes from DOTNET_GCRetainVM can be summarized something like this:

  1. the memory leak has improved
  2. the performance otherwise has tanked badly enough that it’s not worth it

If you don’t mind running another test for us, can you try setting DOTNET_gcServer=0 and report back?

I’m reasonably confident that it will improve the memory leak, so I’m most interested in effects on performance generally.
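
For anyone else following along, swapping the test variable in /etc/environment would look roughly like this (a sketch; it assumes the /etc/environment approach used above, followed by a reboot or service restart so the new value is picked up):

sudo cp /etc/environment /etc/environment.bak
sudo sed -i 's/^DOTNET_GCRetainVM=0$/DOTNET_gcServer=0/' /etc/environment
sudo reboot   # or: sudo systemctl restart roonserver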

Hi @ben, of course I will gladly do the suggested test. I’m currently without Internet access, as there has been some problem at the NAP of the Americas, or so I have been told.

I had to restart the Roon server last night: the Qobuz and Tidal storage library processing kicked off again and ran for several hours, so I simply restarted, and within about 20 minutes those processes were done. The reported RES memory size of RoonAppliance never went above 9400M.

And yes, the problem with the storage library processing is an old one; it has nothing to do with the environment variable test. On a server that has been running for a few days, these processes perform slower and slower, while still keeping one processor core at 100%. Memory use goes up and stays up, and the only way to recover snappy performance is a server restart.

With a growing database, the storage library processing becomes something of a limiting factor, as if the computing cost of the process were growing exponentially… What baffles me most, though, is that this processing always slows down massively after some days of server uptime. It feels as if the system were reading massively fragmented data… restart the server, and all is back to normal…

As soon as I am back online I’ll start testing with the new environment configuration.

Hi @connor, I am willing to help.

OK, I did that in /etc/environment.

What am I supposed to monitor for?