When powering up my Roon server this morning I needed about 5 reboots to get Roon working:
Accessing Roon from my Windows client presented the famous “There was an issue loading your database” error, either immediately or within a minute of use.
For the moment it seems to be running OK again, without my doing anything other than rebooting.
I have downloaded today’s log files, and in the latest ‘crash’ log file (RoonServer_log_01) I see:
07/04 11:32:48 Error: [broker/database] corruption detected: Corruption: corrupted compressed block contents
07/04 11:32:48 Warn: [broker] detected corrupt database, notifying client
07/04 11:32:48 Warn: [broker] detected corrupt database, halting broker threads
07/04 11:32:48 Critical: Library.EndMutation: LevelDb.Exception: Corruption: corrupted compressed block contents
at LevelDb.Database._CheckError(IntPtr err)
at LevelDb.Database.Write(WriteBatch batch)
at LevelDb.Transaction.Commit(Boolean trace)
at Sooloos.Broker.Music.MusicDatabase.Flush()
at Sooloos.Broker.Music.Library.EndMutation()
07/04 11:32:48 Trace: [dbperf] flush 129544 bytes, 0 ops in 4 ms (cumulative 147163670 bytes, 64647 ops in 17253 ms)
07/04 11:32:48 Trace: [metadatasvc] REQ [536] https://metadataserver.roonlabs.net/md/4/updatemetadata?uid=7b64ae83-bef5-4392-b703-b65625ccfe54&lid=&token=5ca1b151-ff98-4bda-b30a-1acb3aaf976c&metadataid[]=6100313138313736353132393134333032&metadataid[]=6100313138313736353132393134333030&metadataid[]=6100313138313736353132393134333031&metadataid[]=6100313138313736353132393134333036&metadataid[]=6100313138313736353132393134333034&metadataid[]=6100313138313736353132393134333039&metadataid[]=6100313138313736353132393134333037&metadataid[]=6100313138313736353132393134333033&metadataid[]=6100313138313736353132393134323939&metadataid[]=6100313138313736353132393134333038&metadataid[]=6100313138313736353132393134333035&metadataid[]=7b004d5430303033303135333434&metadataid[]=7b004d5430303032313539323130&metadataid[]=7b004d5430303032393230353338&metadataid[]=7b004d5430303034313430363432&metadataid[]=7b004d5430303038333331363530&metadataid[]=7b004d5430303032313338333338&metadataid[]=7b004d5430303038393436313632&metadataid[]=7b004d5430303033333531333934&metadataid[]=7b004d5430303030343132323431&metadataid[]=7b004d5430303032303934373338&metadataid[]=7b004d5430303034363334333637&metadataid[]=7b004d5430303030393933393230&metadataid[]=7b004d5430303032363439323536&metadataid[]=7b004d5430303032303436383531&metadataid[]=7b004d5430303033393736353230&metadataid[]=7b004d5430303033323734363337&metadataid[]=7b004d5430303039303030353731&metadataid[]=7b004d5430303034363935363837&metadataid[]=7b004d5430303032383631303532&metadataid[]=7b004d5430303034393936313035&metadataid[]=7b004d5430303034323730373934&metadataid[]=7b004d5430303031353537343538&metadataid[]=7b004d5430303031343338393038&metadataid[]=7b004d5430303035313035343131&metadataid[]=7b004d5430303031393239373536&metadataid[]=7b004d5430303035333437363030&metadataid[]=7b004d5430303030383939383532&metadataid[]=7b004d5430303032363630383032&metadataid[]=7b004d5430303031353630313337&metadataid[]=7b004d5430303031343039343632&metadataid[]=7b004d5430303039333036323932&metadataid[]=7b004d5430303031373735343834&metadataid[]=7b004d5430303031343638363439&metadataid[]=7b004d5430303032333132323537&metadataid[]=7b004d5430303031313436393035&metadataid[]=7b004d5430303038353534313534&metadataid[]=7b004d5430303031363830343532&metadataid[]=7b004d5430303035353438303534&metadataid[]=7b004d5430303033303534343534&metadataid[]=7b004d5430303033383438323531&metadataid[]=7b004d5430303035313334383032&metadataid[]=3e01155f94bc5f9ceb4b886f27a3bf36a686&metadataid[]=79004d5730303030343533393934&metadataid[]=79004d5730303030303831343233
07/04 11:32:48 Trace: [zone HQPlayer] Suspend
07/04 11:32:48 Info: [zone HQPlayer] Canceling Pending Sleep
07/04 11:32:48 Trace: [zone KEF LSX II] Suspend
07/04 11:32:48 Info: [zone KEF LSX II] Canceling Pending Sleep
07/04 11:32:48 Trace: [zone CXNv2 (be)] Suspend
07/04 11:32:48 Info: [zone CXNv2 (be)] Canceling Pending Sleep
07/04 11:32:48 Trace: [leveldb] closing /var/roon/RoonServer/Database/Core/7b64ae83bef54392b703b65625ccfe54/transport/zone_1601d935088f5a8521851f0b72f30228de66.db temporarily
07/04 11:32:48 Trace: [leveldb] closing /var/roon/RoonServer/Database/Core/7b64ae83bef54392b703b65625ccfe54/transport/zone_16014b93547471cc484093b79451f9179bb4.db temporarily
07/04 11:32:48 Trace: [leveldb] closing /var/roon/RoonServer/Database/Core/7b64ae83bef54392b703b65625ccfe54/transport/zone_160121b5788e01f1925eb86f7ecdb86b783d.db temporarily
07/04 11:32:53 Info: [stats] 36555mb Virtual, 4400mb Physical, 1302mb Managed, 324 Handles, 116 Threads
07/04 11:33:08 Info: [stats] 36611mb Virtual, 4400mb Physical, 1305mb Managed, 319 Handles, 121 Threads
07/04 11:33:09 Info: [remoting/serverconnectionv2] Client disconnected: 192.168.0.156:58922
I have used the Roon Log Uploader to upload the complete Roon log files.
Today’s files are RoonServer_log.05.txt, .04, .03, .02, .01 and RoonServer_log.txt, with 05 being the earliest of today and RoonServer_log.txt the latest (from the session that has been running fine during the day).
I have run disk checks on my system disks and on both music (data) disks; no errors were reported.
I am running a ‘test’ conversion with dBpoweramp of all my AIFF and FLAC files (DSF files are not taken into account, but those have not changed in over a year).
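For reference, this is roughly the kind of integrity check I mean; a minimal Python sketch (the music path is an example, and it assumes the flac command-line tool is installed) that runs FLAC’s built-in decode test over a folder tree and reports any files that fail:

# Minimal sketch: walk a music folder and run "flac -t" (decode test) on every
# FLAC file, listing the ones that fail. The path below is an example, not my
# actual library location; the flac CLI is assumed to be on the PATH.
import subprocess
from pathlib import Path

MUSIC_ROOT = Path("/path/to/music")  # example path, replace with the real library root

failed = []
for flac_file in MUSIC_ROOT.rglob("*.flac"):
    # "flac -t -s" decodes the file silently without writing any output and
    # returns a non-zero exit code if the stream is damaged.
    result = subprocess.run(["flac", "-t", "-s", str(flac_file)],
                            capture_output=True, text=True)
    if result.returncode != 0:
        failed.append(flac_file)
        print(f"FAILED: {flac_file}")

print(f"{len(failed)} file(s) failed the decode test.")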
I have also tested my memory modules with PassMark MemTest86; 0 errors were reported.
Reference: How to Test RAM: Make Sure Bad Memory Isn’t Crashing Your PC | Tom’s Hardware
So I am running out of ideas about what to do next. I am reluctant to restore a backup, as I keep at most about 20 daily backups, and any ‘real’ error in the database will be present in my backups as well.
Could someone please take a look at today’s log files to see whether you can trace a possible reason for the troublesome startup this morning? Thanks in advance.
Dirk