Is it normal for Roon's database size/structure to be so large? [yes]

I agree, we have other features we are waiting for such as mobile sync and improved internet radio. Much better use of their time when the backup system already works.


Notice I was not talking about an internal Roon backup, but a backup of my whole computer (outside of Roon, as an OS function), which of course also includes the Roon application and database directories. Roon’s choice of DB, which by design creates a structure of many thousands of numbered directories, is not really what OSes such as Windows are meant to handle (not in a home environment), and so it takes an inordinate amount of time for the OS to back up those structures…

I’d exclude the Roon directories from the normal backups.


Yep, that is where I am at as well.

edit: I asked the question because this was my first encounter with LevelDB. Whatever its advantages for Roon (besides it being free), it had this obvious disadvantage for the end user…

I believe it is multi-platform and is good at updating across a wide address space rather than drilling down. If you think of your library and its links, I guess it is good at that reach-across: as you add a track to a playlist, it also updates all the associated metadata across many hits in your library.


Is this really something to cry about, having to make a backup that takes some time?

Here’s why Roon uses LevelDB:

You don’t know my use situation. Yes, it really is something I have had to adjust for. Marketing speak about this particular database can always be countered with equally impressive-sounding marketing speak about some other DB choice. Whatever its real advantages, I note its cost (as in free) and its effect on my file system and backups.

Ok, you have my sympathy. Perhaps you should look to another tool that isn’t data rich and doesn’t provide a near instantaneous data-driven interface. Your backups will be a lot quicker.


There is no such thing as a one-size-fits-all database. Roon has made a choice in which fast read access across many data structures trumps peripheral actions such as backup. Really, don’t start second-guessing architectural decisions without all the facts.


Perhaps you should just admit that yes, there is a downside to Roon’s choice. They are big boys, they know this; why do you seem so resistant?

Second guessing? You mean like everyone around here does about their recent 1.6 UI updates? :yum:

Really, what’s the beef? Roon’s choice has a real and unusual end user impact. Whatever its positives, it has negatives. That’s the real world…

You’re right. You should post a feature request that Roon modify its architecture to meet your backup time requirements.

Imagine the stress and complications it would cause if the Roon database looked anything like mine:
41G RoonServer/

I imagine that would take some time to copy.


LevelDB is what we chose. It is not SQL… no SQL library would come even close to performing here. SQL databases normally implement something like LevelDB internally for their indexes.
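To illustrate the point about indexes: a log-structured key-value store like LevelDB keeps its keys sorted, which makes point lookups and prefix/range scans cheap, the same job a SQL index does internally. Here is a toy sketch of that access pattern using a sorted Python list rather than LevelDB itself; the key names are invented for illustration and are not Roon's actual schema.

```python
import bisect

# Toy sorted keyspace, standing in for the sorted keys a store like
# LevelDB maintains. Key names are invented for illustration.
keys = sorted([
    "album/0001/title", "album/0001/artwork",
    "album/0002/title",
    "artist/0042/name", "artist/0042/photo",
])

def prefix_scan(sorted_keys, prefix):
    """Return all keys starting with `prefix` via two binary searches,
    the cheap range-scan a sorted keyspace makes possible."""
    lo = bisect.bisect_left(sorted_keys, prefix)
    hi = bisect.bisect_left(sorted_keys, prefix + "\xff")
    return sorted_keys[lo:hi]

print(prefix_scan(keys, "album/0001/"))
# → ['album/0001/artwork', 'album/0001/title']
```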

Lots of software uses LevelDB – Autodesk AutoCAD and Google Chrome come to mind.

The other thing to note is that you have so many files because we save your artist photos and album artwork as individual files. We use a hashed deep directory structure to avoid having directories with many thousands of files, which can cause performance issues.
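The hashed fan-out idea can be sketched like this. The two-level depth and path shape below are assumptions for illustration only, not Roon's actual layout; the point is that hashing spreads many files evenly across small directories.

```python
import hashlib

def artwork_path(image_id: str) -> str:
    """Place a file under a two-level hashed directory fan-out so no
    single directory accumulates thousands of entries. The exact depth
    and path shape are illustrative, not Roon's actual scheme."""
    digest = hashlib.sha1(image_id.encode()).hexdigest()
    return f"{digest[:2]}/{digest[2:4]}/{digest}"

print(artwork_path("album-12345-cover"))
```

With 2 hex characters per level, each level fans out across at most 256 subdirectories, so even millions of images leave every directory manageably small.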

This is not an area we are changing, or even entertaining a change.

If your backup software is breaking, I suggest you use better software for your backups. If it’s just slow, well, that’s the reality of having all your data local.


Excuse me for reviving a very old thread, but, to add a slightly different perspective: in my previous experience, applications that store their databases as a few very big files can be a total showstopper for anyone running incremental backup software. A tiny change, such as correcting a single spelling mistake somewhere, may update only a few bytes in one of those huge database files, yet the online backup software sees a multi-GB file changing frequently and needing to be constantly backed up. With a cloud backup service (e.g. Crashplan or Carbonite, both of which I use), that can be so painful and impractical, especially given that domestic upload speed is usually lower than download speed, as to require such low-granularity databases to be excluded from the cloud backups. (The Evernote note-taking app is a real-life case where I have very reluctantly had to exclude such a database from my cloud backups.)

In my opinion you’ve done exactly the right and most backup-friendly thing here Danny in terms of having high granularity in the database structure.

Good backup software will still send only the changes in a large file and do versioning. (For example, Synology’s HyperBackup stores backups as 32MB chunks and only the affected chunk(s) are updated. It then reference counts the assorted chunks, and keeps indexes to say which chunks are associated with which version so you can do point-in-time restores of older versions easily.)
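The chunking idea can be sketched as follows: hash fixed-size chunks, and after a small edit only the affected chunk's hash changes, so only that chunk needs re-uploading, whereas a whole-file tool would re-send everything. This is a toy with tiny chunks; the 32 MB figure and reference counting mentioned above are HyperBackup specifics.

```python
import hashlib

CHUNK = 1024  # toy chunk size; HyperBackup reportedly uses 32 MB

def chunk_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size chunk; a chunking backup tool uploads only
    chunks whose hash it has not already stored."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

db = bytearray(b"x" * (10 * CHUNK))   # stand-in for a 10-chunk database file
before = chunk_hashes(bytes(db))
db[5 * CHUNK + 3] = ord("y")          # fix one "spelling mistake"
after = chunk_hashes(bytes(db))

changed = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
print(changed)  # → [5]: one chunk to re-upload, not the whole file
```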

Fair point, but when choosing an online backup solution there are so many other key features a user is looking for, and so much balancing of those features against cost, that it is still helpful for something like Roon not to introduce an additional requirement the user has to accommodate. Because of that I’m grateful for Roon’s quite granular database.

Maybe I should go and check whether either of my current backup solutions (Crashplan and Carbonite) is smarter at large-file backup than I thought. Certainly a few years ago, when I first set things up, I had an issue; I should file support tickets and recheck, though. Thanks for the info/insight.

As the live Roon DB is constantly being updated, the “snapshot” taken by the backup software may not be consistent, which can result in a corrupted DB when restored.

Thus, I would suggest:

  • Excluding the live Roon DB from the backup system.
  • Setting up Roon to perform a scheduled nightly backup (they are incremental, and I have 99 days’ worth).
  • Including those backup folders in your overall cloud-based backup system.
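The exclude-the-live-DB step above can be sketched with Python's shutil for anyone scripting their own mirror. All paths and folder names here are assumptions; point them at your actual Roon install and backup target, and note that "Database" is an assumed name for the live DB folder.

```python
import shutil

def mirror_excluding_live_db(src: str, dst: str) -> None:
    """Copy a tree while skipping the live database directory.
    'Database' is an assumed folder name, used here for illustration;
    Roon's own scheduled backup folders are copied as normal files."""
    shutil.copytree(
        src, dst,
        ignore=shutil.ignore_patterns("Database"),  # skip the live DB
        dirs_exist_ok=True,
    )
```

In practice most backup tools offer exclusion patterns directly, so this is only a sketch of the principle: the live DB stays out, Roon's own consistent backups go in.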

Seconded. I debated discussing this in my first reply, but the example given was Evernote’s database being a problem for backups, rather than Roon’s. With any database, you really need to use the supplied backup tool to get a consistent export, and then back that export up with whatever your preferred backup tool is.

Roon makes it super easy to set up automatic backups of the metadata database as well, so no excuse not to. The backups don’t take up much additional space, either. In my case, I have Roon write the backups to a “Roon Backups” share on my NAS, and then the NAS’ backup software gets that up to AWS S3.

Noted. Thanks for the specific recommendation. Excluding the live DB and including automatically generated backups sounds like the way to go.

What is the format of the automatically created backups? Are they something like a big zip file to seed the process and then smaller incrementals on top of that or are they still quite granular as in the live DB?

I do realise that with Carl’s suggested scheme the initial seed (full) backup being a single very big file wouldn’t be a particular issue anyway since it would only be created once and it wouldn’t matter much if it took a day or two to make it up to cloud storage. Yes, in that case the seed would be out of sync with the increments for the first day or two as far as the cloud backups were concerned but that’s a transient issue. My question is motivated by curiosity and wish for a reasonably complete understanding rather than any concern about it being an issue.