EarlyAccess: Roon 2.65 Build 1645 : RAM usage feedback

2020 M1 Mac

Mac OS Tahoe 26.4

16GB RAM

Computer is on 16-18 hours a day, Roon running relentlessly.

Looking at Activity Monitor on build 1645, RoonAppliance lingers continually at 1.36GB while Roon flickers between around 830MB and 860MB.

Apologies for not first setting a baseline on the prior build…my bad, y’all. I’ll continue to keep an eye on it.

1 Like

On this topic, there is also a quite detailed look at RAM on macOS with B1644 here:

1 Like

Thanks for the feedback. A little above my pay grade, I’m afraid. I’m a simple man, merely responding to Roon’s request for EA build RAM usage feedback. Appreciate it, though. Have a good day!

I thought it would be interesting for Roon Labs, as they may not look in Tinkering.

2 Likes

Quick update, a full day later… The RAM usage is now routinely hovering around three gigabytes for each of the two processes. Quite an increase. Is this still within normal usage parameters? Thanks!

For grins and giggles, try this and see if the numbers are close enough for a quick sanity check.

How close does my contrived math get to the real memory reported by Mac OS?

1. Get real memory size for RoonAppliance

  • RoonAppliance is the main process to pay attention to (a quick scripted way to read its memory follows this list).
  • RoonServer is started by RoonAppliance, and its memory usage is relatively small for an always-on process that runs in the background.
  • You may want to ignore the process named Roon, because it’s the frontend client that you can exit when you’re done playing music.
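
If you prefer a script to Activity Monitor, here is a rough Python sketch (not anything official; it assumes the third-party psutil package is installed) that prints the resident memory of the three processes named above, which is roughly what Activity Monitor reports as real memory:

```python
# Print resident memory for the Roon processes, using psutil (pip install psutil).
import psutil

for proc in psutil.process_iter(['name', 'memory_info']):
    if proc.info['name'] in ('RoonAppliance', 'RoonServer', 'Roon'):
        rss_gb = proc.info['memory_info'].rss / 1024**3
        print(f"{proc.info['name']}: {rss_gb:.2f} GB resident")
```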

2. Calculate approximate memory use from music library

After several edits, this is turning into an interesting (silly?) exercise of making a real number fit a back-of-the-napkin calculation, comparing apples and pineapples.

Edit (2026-04-04): don’t try to calculate it this way, because it doesn’t reflect the current state of memory growing every day.

Try taking the log of the sum of items in your music library, then multiply by 0.75

log (3932 + 7812 + 43452 + 152) * 0.75

Try it at quickmath.com
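
If you’d rather not open quickmath.com, here is the same arithmetic in a couple of lines of Python. I’m assuming a base-10 log, since that’s the choice that makes the example land near the ~3.5 GB figure mentioned below; the four numbers are just the item counts from the formula above.

```python
# Back-of-the-napkin estimate from step 2, assuming a base-10 log.
import math

items = 3932 + 7812 + 43452 + 152       # item counts from the library overview
estimate_gb = math.log10(items) * 0.75
print(f"~{estimate_gb:.2f} GB")          # ~3.56 GB for these counts
```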

3. Compare Mac OS stats with Roon library memory estimate

Real memory size vs. the proxy calculation from step two above. Memory use fluctuates in relation to the size of your music library and server management activity.

Feels like a couple of guys from Tinkering might really be trying to get your attention, Roon! LOL

My answer to your question today is, maybe? :grimacing:

The math above assumed that memory could be estimated to fall within a predictable range, for example 3.5 GB ± 0.25 GB. But that sort of approximation doesn’t fit the way my Roon’s memory grows daily, as you can see in the screenshot below.

I’m gonna leave Roon alone for a week - no restarts, no music playing - starting Monday and see what the memory size looks like on Saturday. Maybe it will have settled into a normal range when idling.

1 Like

As you experiment with this, if you don’t mind, feel free to ping me with updates and results. We are starting to slowly land memory-related perf improvements. Some of those changes reduce the number of allocations required for certain operations, which should hopefully lead to less memory/GC pressure and free up some CPU cycles.

1 Like

Maybe we could have these stats available for ROCK?
CPU utilisation & load, RAM usage, CPU & system temp, and fan speed, on a web page.

I’ll double-check when I am home, but yesterday I saw memory in use up to 7GB with a 40k-track library (not my full local library, nor with Qobuz). I discounted this initially as I was importing some local files, but I think it stayed up there. This is on a clean install and database with nothing to clean up.

I’ve stopped looking at memory use TBH; with 32GB it’s there to be used. Just as long as things are stable and managed correctly.

Running DietPi with Roon server installed.

Edit: my mistake. I had left Qobuz logged in but disabled, with 145k tracks still in the database :man_facepalming:

Great idea, perhaps now would be a good time to finally add it?

Checking ROCK’s memory use is fairly easy.

Access the log files and upload them to an AI assistant of your choice. Ask the AI to extract the memory entries, compile them in date/time order, and plot a graph.

Here is a 1-hour example (the drop-off just after 18:00 was me restarting Roon). DietPi only keeps an hour’s logs by default, but ROCK should keep continuous logs (20 files usually, and the longer it has been in use, the more data points).
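
If you’d rather skip the AI step, a short script can do the extraction and plotting directly. This is only a sketch under a couple of assumptions: it expects timestamped lines that report physical memory in megabytes (along the lines of the stats quoted later in this thread), and your build’s exact wording and log location may differ, so adjust the regex and the path to match your own RoonServer logs.

```python
# Sketch: pull physical-memory figures out of Roon server logs and plot them.
# Assumes lines with an "MM/DD HH:MM:SS" timestamp and a "<n>mb Physical"
# style stats entry; adjust LOG_DIR and STATS_RE for your setup.
import re
from datetime import datetime
from pathlib import Path

import matplotlib.pyplot as plt

LOG_DIR = Path('Logs')                     # folder holding RoonServer_log*.txt
STATS_RE = re.compile(r'^(\d{2}/\d{2} \d{2}:\d{2}:\d{2}).*?(\d+)\s*mb Physical',
                      re.IGNORECASE)

times, physical_mb = [], []
for log_file in sorted(LOG_DIR.glob('RoonServer_log*.txt')):
    for line in log_file.read_text(errors='ignore').splitlines():
        match = STATS_RE.match(line)
        if match:
            times.append(datetime.strptime(match.group(1), '%m/%d %H:%M:%S'))
            physical_mb.append(int(match.group(2)))

plt.plot(times, physical_mb)
plt.xlabel('Time')
plt.ylabel('Physical memory (MB)')
plt.title('RoonServer memory over time')
plt.show()
```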

1 Like

Ok, done that - initial results for the last 2 days.
The period of heavy memory usage corresponds to the scheduled metadata and library updates at 2am.

The timeline here has been corrupted, but it covers 4 days, since the application of the new ROCK build.

Some of the analysis:

:bar_chart: Spike correlation (memory ↔ operations)

:red_circle: Spike #1 — ~Apr 5 (first big rise to ~14–15 GB)

:chart_increasing: What happens in memory

  • Rapid climb from ~7 → ~14–15 GB

  • Short oscillations at the top

  • Then a drop


:magnifying_glass_tilted_left: What the logs show at the same time

1. Massive metadata queue processing

  • Queue sizes in the tens of thousands:

    • q size=73198
  • Batch processing:

    • processing batch of 70 tracks

:backhand_index_pointing_right: This means:

  • Large in-memory structures for metadata objects

  • Temporary allocations per batch


2. Heavy metadata API traffic

  • Repeated calls like:

    • /updatemetadata

    • /getmetadata

:backhand_index_pointing_right: These:

  • Deserialize large JSON payloads

  • Build object graphs in memory

  • Spike managed + unmanaged memory


3. Search index rebuild activity

  • Example:

    • “added … album / track / performer documents”

:backhand_index_pointing_right: Indexing:

  • Allocates large in-memory inverted indexes

  • Often causes sharp temporary spikes


:brain: Conclusion for Spike #1

Primary driver: Metadata ingestion + indexing


:red_circle: Spike #2 — ~Apr 6 (second peak)

:chart_increasing: Memory behaviour

  • Same pattern:

    • rise → oscillate → drop

:magnifying_glass_tilted_left: Logs during this window

1. Continuous metadata refresh cycles

  • Repeated:

    • _ReadyForFullRefresh

    • _SpinQueue processing

:backhand_index_pointing_right: Indicates:

  • Ongoing refresh of entire library metadata

2. Search index churn (remove + add)

  • Example:

    • “removed … documents”

    • then “added … documents”

:backhand_index_pointing_right: This is key:

  • Remove + rebuild = double memory pressure

    • old structures still in memory

    • new ones being built


:brain: Conclusion for Spike #2

Primary driver: Full metadata refresh + index rebuild


:red_circle: Micro-spikes (throughout)

:chart_increasing: Behaviour

  • Short jagged spikes inside larger plateaus

:magnifying_glass_tilted_left: Corresponding operations

1. Audio analysis jobs

  • Example:

    • analysis completed … FLAC 48kHz 24bit

:backhand_index_pointing_right: These:

  • Load full audio chunks

  • Run DSP / loudness analysis

  • Temporary buffers → spikes


2. Database flush + compute bursts

  • Frequent:

    • [dbperf] flush …

    • [library/compute]

:backhand_index_pointing_right: These:

  • Trigger object graph churn

  • Cause GC pressure (visible in stats line)


:yellow_circle: The drop events (very important)

After each spike, memory drops sharply.

:magnifying_glass_tilted_left: What happens in logs

1. GC activity visible in stats

  • Example:

    • 8.89% of runtime in GC pauses

2. End of batch processing

  • endmutation

  • queue drains

  • fewer active objects


:brain: Interpretation

Memory drops happen when:

  • batch completes

  • GC runs

  • temporary structures are released


:puzzle_piece: Putting it all together

:brain: Mapping (clean summary)

| Memory pattern | Log activity | Cause |
| --- | --- | --- |
| Large spike | Huge metadata queue + API pulls | Object creation + deserialization |
| Plateau jitter | Search indexing | In-memory index structures |
| Sharp spikes | Audio analysis | DSP buffers |
| Double spikes | Remove + rebuild index | Overlapping allocations |
| Drop | GC + batch completion | Memory reclaimed |

:bullseye: Final takeaway

You now have a direct causal chain:

Memory spikes = metadata ingestion + indexing + analysis workloads

And critically:

Every spike aligns with intensive, bounded operations that later complete


:green_circle: Final verdict (with correlation)

  • :white_check_mark: Every spike has a clear workload cause

  • :white_check_mark: Every spike is followed by release

  • :white_check_mark: No uncontrolled growth

This is healthy high-throughput behaviour, not a leak.

1 Like

So this is a NUC7i7DNK, the same board as a Rev B Nucleus+, but with 16GB RAM and a 250GB NVMe SSD.
Library size is 106,867 tracks, 7,924 albums from 3,838 artists and 538 composers.

With the first major spike on Apr 5 at ~14–15 GB:

  1. Am I running out of RAM for ROCK to operate?
  2. Do I need to increase RAM to 32GB (2x16GB)?
  3. Or is ROCK just using the available memory during the scheduled update window, and once complete the memory is returned, things are stable, and Roon is available for use?
1 Like

I have a similar-sized library; the server was using 23GB of RAM out of 48GB. 23GB seems like a lot of RAM usage.

I mostly use one zone, and it outputs to HQPlayer on the same machine.

If I restart Roon Server, RAM usage starts at around 6GB.

Edit: I should probably add that I’m running Linux Mint 22.3.

Just put my latest log files back through ChatGPT, and you can see last night’s ‘scheduled activity’ between 2am and 6am, with circa 7-8GB used and then released, as there was a restart following the application of the latest build, B1647.
I also restarted Roon Server yesterday morning, as I was having an issue with a Chromecast Audio endpoint, which had disappeared again.
Memory usage prior to that was circa 5GB following the scheduled activity.

Will repeat this for the new B1647 release, to see if there is any difference on my ROCK server and with my Library.

From yesterday through to today I’ve compiled a chart from 8021 data points. I’m seeing a general improvement with this latest EA update. Came home to see only 3.3GB in use after adding several albums to my local library; this action did produce a spike prior to the update.

Chart to follow shortly

• 2026-04-08 22:24:03: [REBOOT & UPDATE] RoonServer starts using v2.65 (build 1645) earlyaccess.
• 2026-04-09 06:06:41: [REBOOT & UPDATE] RoonServer starts using v2.65 (build 1647) earlyaccess.
• 2026-04-09 07:41:12: [MANUAL RESTART] RoonServer restarts again on build 1647.
• 2026-04-09 07:44:00: [MANUAL RESTART] A final rapid restart on build 1647.

The next issue I need to look at is the Roon Ready implementation on my FiiO R7 that has recently started crashing. Seems to coincide with using the last couple of EA builds. Got worse with the last EA build. Just crashed now in B1647.

1 Like

Seen this on my Audalytic RD70 Streaming DAC as well. Only since the updated Roon Bridge 2.6 was released in my case.

Hmm :thinking:

New data points added.

Library size on this server is 43,646 local tracks.
For context, there are 1,294 unidentified tracks (96 albums). Some of these albums should have been identified, because when identified manually some match immediately.

  1. Massive Metadata Syncing
    The logs from the early hours of April 10th (around 01:17 AM to 01:30 AM) show Roon making huge batch requests to metadataserver.roonlabs.net. It is downloading updated metadata, images, and track information for thousands of items in your library.

  2. Library Sorting & “Dirty” Item Rebuilds
    As Roon downloads this new metadata, it flags parts of your database as “dirty” (meaning out of date). The logs show Roon constantly re-sorting albums and genres item-by-item in the background (e.g., LibraryAlbum:65 dirty and LibraryGenre:45 dirty items).

  3. Memory Caching Behavior
    To process these library updates quickly without wearing out your hard drive, Roon caches the database in your system’s RAM. Your memory stats at 06:04 AM show:
    Physical Memory: 5248 MB
    Managed Memory: 1349 MB
    Unmanaged Memory: 3899 MB
    The massive ~3.9 GB of “Unmanaged Memory” indicates that Roon is intentionally holding onto large blocks of data from background database operations and downloaded metadata images/cache in RAM.

Conclusion
When you aren’t using Roon to play music, it takes advantage of the downtime to sync with the cloud, re-index your 43,646 tracks, and cache the results. Because Roon is designed to be as fast as possible when you do open the app to search or browse, it will aggressively use whatever free RAM is available to hold onto this background data rather than letting it go immediately.
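
As a quick arithmetic check on the stats quoted above, the managed and unmanaged figures account for essentially the whole physical footprint, with unmanaged memory making up roughly three quarters of it:

```python
# Managed + unmanaged adds up to the physical figure from the 06:04 stats.
physical_mb, managed_mb, unmanaged_mb = 5248, 1349, 3899

print(managed_mb + unmanaged_mb)                      # 5248
print(f"{unmanaged_mb / physical_mb:.0%} unmanaged")  # 74%
```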

So this indicates the background work schedule is kicking in at 1am. It should end at 5am and, in my view, needs to release (GC) the memory it used.

It’s now 6:48am and 5.4GB is still in use.

This gives me concerns for folks running a similar-sized library on a Nucleus One, which ships with 4GB of memory. Multiple crashes would occur. Or is Roon clever enough to know how much RAM is available?

As it stands, Roon shows no signs of a GC. I won’t manually restart things; I will use Roon ARC whilst at work and will check again in 10 hours to see what’s happened.

1 Like