Backing up internal storage to Synology NAS

You could try the following. I mount my ROCK SSD to a convenient location: I have a share that is primarily for backup mounts on my DS416, and I mount my ROCK SSD there as well.

The following rsync job is set up in my Task Scheduler. I run it manually; I never got around to defining an automatic trigger:

[screenshot of the Task Scheduler rsync task]


You're a scholar and a gentleman. Very much appreciated.

You're welcome, happy to help.

Hey, I have a question for the esteemed folks on this thread…

I'd ideally like to treat the music folder on my Synology 918+ as master and have it mirrored to two different ROCKs' local storage (so every night after I've added new folders to my NAS's music folder, the next morning they're in my ROCKs' local storage and library). Each ROCK is visible on my local network, at 192.168.1.100 and 192.168.10.100. Ideally I'd be able to use Active Backup to copy from the NAS to each ROCK's USB-attached storage, but I can't figure out how to go in that direction, since I (obviously) can't install the Active Backup agent on a ROCK. Is there a way to achieve this? Am I doing something crazy by treating the NAS as master? (Don't worry, I also keep quarterly offsite backups on air-gapped hard drives just to be darn sure, because I don't trust a NAS for backups.)

Oh my good gosh. This took me a while to figure out (mounting my remote ROCK CIFS shares, working out what exactly happened based on directory names, etc.), but this was pretty rookie stuff, and now that I'm through it it's just amazing. Master on NAS, automagically copied immediately to my home ROCK and my remote ROCK. Dang. Ninja-style stuff. Thank you @ACvitus !

EDIT: there's one issue. I'm trying to back up my NAS-based FLAC masters to two locations: one is a local ROCK whose internal storage is mounted as a CIFS share, and the other is a remote ROCK whose internal storage is mounted as a CIFS share over a site-to-site OpenVPN tunnel. The problem is that Cloud Sync can only add one WebDAV sync to the same localhost:5005, and you can't add two backup tasks to the same WebDAV Cloud Sync connection that share the same local folder. And you obviously can't run WebDAV on ROCK. I suppose I could daisy-chain the two and copy from one ROCK mount to the other over the tunnel, but that intuitively sounds fragile. Perhaps I should use a different approach: I do have a Synology in each location, and I could mount the remote ROCK's storage on the remote Synology and then run WebDAV on that Synology… I'm not very deep in this stuff, so I'm not sure whether that would work.

This can be done. It took me forever to figure this out and I finally got it thanks to some other poster here.

  1. In File Services, enable rsync
  2. In File Station, mount a remote folder pointing at your ROCK's InternalStorage folder
  3. In Task Scheduler, create a task that runs the following command (you may need to adjust paths/host names):

rsync -av --delete --exclude ‘@eaDir’ /volume1/Library/ /volume1/ROCK/Library

Initially, I recommend setting the task up to email you and running it manually to make sure everything is working OK. For example, I had to fix up some folder names with special characters. Once you're satisfied it works as desired, schedule the task to run as frequently as you please. I have it running at 1-minute intervals! It's practically real time.
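Before scheduling anything, a dry run is a safe way to check what the job would do. A minimal self-contained sketch (temp directories stand in for the real /volume1 paths; -n means nothing is actually copied or deleted):

```shell
# Dry-run sketch: -n reports what rsync WOULD copy or delete, without
# touching the destination. Temp dirs stand in for the real volumes.
src=$(mktemp -d); dst=$(mktemp -d)
echo demo > "$src/album.flac"
rsync -avn --delete --exclude '@eaDir' "$src/" "$dst/"
ls -A "$dst"            # prints nothing: the dry run copied nothing
rm -rf "$src" "$dst"
```

Once the dry-run output looks right, drop the -n and let the scheduled task take over.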


Um, you're amazing, thank you. Wish I could sticky this. Others will need this. Amazing.

I did eventually figure out that the single quotes in the script you pasted need to be replaced with standard single quotes, because I kept getting @eaDir folders copied anyway. I think this forum software replaces single quotes with "fancy beginning and end quotes" for visual display. But that's on me, not you, and I figured it out, which made me feel clever.

I wish there were a way to "terminate after 15 minutes" or pause or something like that, so I could test better before letting the whole several-GB job run. But, like you, I'll run it manually with full logging and email alerts for everything for a while. This is AWESOME. Again, thank you. I sound like a little kid, but this is so much better than running Cloud Sync (which I hear is being deprecated anyway?).
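For the "terminate after 15 minutes" wish: if the DSM shell has GNU coreutils `timeout` available (an assumption worth checking on your box), you can wrap the job with it; adding `--partial` keeps partially transferred files so the next run resumes rather than restarting them:

```shell
# Sketch, assuming `timeout` exists on the NAS: kill the job after
# 15 minutes; --partial keeps half-sent files so a later run picks up
# where this one stopped. Paths are the ones from the task above.
timeout 15m rsync -av --partial --delete --exclude '@eaDir' \
    /volume1/Library/ /volume1/ROCK/Library
```

`timeout` exits with status 124 when it had to kill the command, so the Task Scheduler email-on-error option will tell you when a run was cut short.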


This is a great thread. Whenever I set up any kind of sync, I use the exclusion list below, and I suggest you use something similar for your rsync task. In addition to creating @eaDir folders, Synology can create #snapshot and #recycle folders depending on how your shares are configured, and .DS_Store files can show up if you ever mount shares on a Mac. You're never going to regret excluding these, but you might regret not doing it :slight_smile:

@eaDir
#snapshot
#recycle
.DS_Store
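One way to keep every sync job using the same list is to put it in a file and point rsync at it with `--exclude-from` (a sketch; the demo below uses temp directories standing in for the real volumes, and the excludes-file location is just an example):

```shell
# Shared exclude list: every rsync job reads the same file instead of
# repeating four --exclude flags. Self-contained demo with temp dirs.
tmp=$(mktemp -d)
printf '%s\n' '@eaDir' '#snapshot' '#recycle' '.DS_Store' > "$tmp/excludes.txt"
mkdir -p "$tmp/src/@eaDir" "$tmp/src/Album" "$tmp/dst"
touch "$tmp/src/Album/track.flac" "$tmp/src/.DS_Store"
rsync -a --delete --exclude-from="$tmp/excludes.txt" "$tmp/src/" "$tmp/dst/"
ls -A "$tmp/dst"        # only "Album" survives the copy
rm -rf "$tmp"
```

A plain-text file also sidesteps the forum's smart-quote mangling entirely.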


As always, additive, on point, and gentlemanly. Had a bunch of .DS_Store files, glad to be rid of them in my copies. Cleaning things up left & right. Gracias.

Parenthetically: I feel like a bit of a stud when I have stuff like this running in the background, traversing a CIFS mount of a ROCK-attached USB disk over an OpenVPN tunnel.

rsync -av --delete --exclude '@eaDir' --exclude '#snapshot' --exclude '#recycle' --exclude '.DS_Store' /volume1/music/ /volume1/MyMounts/HomeROCK/1_44_1-42218_SPK_DD5661978392C_b66cf5ac-27e3-4c3d-8e42-645be011x20b-p1/music

Old me would not have believed I could do this.


Ok, been running for a while, and I have a question for those who know more about this stuff than me (@EvilGnome6 @gTunes and other smart folks here). Does rsync send the whole file even though it's already on the destination?

Current status:

  1. rsync from my NAS to my local ROCK's USB-attached storage seems to be running like clockwork. But it did take a while even though the destination was already exactly what the source was: 4.5 hours for the initial sync of nearly 1 TB, while incremental syncs take 20 seconds. But it works: I have tried adding files and deleting files, and it's quick. Just works, as advertised. I have it set to run once a day, and I run it manually when I have a batch of new files. It builds up log files and sends me emails on errors; I'll keep those for a while just so I can see what's going on. But I assume (as the log file suggests) that it actually sent 1 TB over the local network with the -av --delete --exclude options, even though the destination files were already identical (and seemed to be, by my own inspection of timestamps, file sizes, etc.).
  2. rsync from my NAS in home A to my remote ROCK's USB-attached storage in home B over site-to-site OpenVPN seems very slow, I'm assuming for similar reasons: I think it's trying to send 1 TB of files across the tunnel even though the content is the same at the destination. It doesn't seem to be changing timestamps in most cases (though it is changing them in a few cases). It also doesn't seem to be traversing the file structure in any natural order I can understand, so it's hard to tell how much progress it's making. But it shows no evidence of changing anything: file names and file structure seem to be the same as in my pre-seeded content. I can't tell whether it's overwriting each file with an identical one, but I assume that's what it is doing.

Should I just let it be, accept that it'll take a few days to a week for the initial upload, and not worry about it? Or should I try to understand rsync options like --partial-dir and -c? I thought that if I'd pre-seeded 100% of the content I'd be in good shape. But house B is more than an hour away, so I'm not going to drive back and forth to try to pre-seed it differently.

Thanks!

Did you use the same command-line options for both scenarios? I haven't used rsync in a while, but looking at the man page, -a expands to -rlptgoD. Most of that looks right, but I don't know if you want "p" for either scenario: that's "permissions". "g" is group, and that probably doesn't make sense either. "o" is owner. "D" is devices.

I don't think this is your perf issue, but I don't know that you want to be trying to clone any permission stuff across systems; that tends to help only if you know exactly what you're doing.
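If dropping the permission bits is the goal, one option (a sketch, not something the man page prescribes) is to spell out just the parts of -a you want: -rlt keeps recursion, symlinks, and modification times (the thing rsync compares), while skipping perms, group, owner, and device files:

```shell
# -a is shorthand for -rlptgoD. -rlt keeps recursion (r), symlinks (l),
# and mod times (t), but skips perms/group/owner/devices, which rarely
# map cleanly onto a CIFS mount of a ROCK drive.
rsync -rlt --delete --exclude '@eaDir' /volume1/Library/ /volume1/ROCK/Library
```

Keeping -t matters: without preserved mod times, every later run would think every file had changed.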

Regarding perf, the first and most obvious question is "What do you expect it to be doing, and are you sure it's not doing exactly that?"

If you'd specified -c, I'm fairly certain it would have to pull file contents from remote to local in order to compute checksums. Without that option, I think it just compares file sizes and modification times. Did whatever you did to seed your remote data set preserve modification times? Should be easy to check.
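An easy way to check that from the NAS itself is a dry run with --itemize-changes, which prints one flag string per file explaining why rsync wants to send it. A self-contained sketch (temp dirs stand in for the real mounts) where the content is identical but the seed copy lost the timestamp:

```shell
# Why does rsync want to resend a file? -n (dry run) plus -i
# (--itemize-changes) answers per file. Here content and size match and
# only the mtime differs, so the flag string starts ">f..t" - the "t"
# means rsync would update the timestamp.
src=$(mktemp -d); dst=$(mktemp -d)
echo same > "$src/a.flac"; echo same > "$dst/a.flac"
touch -d '2020-01-01' "$src/a.flac"     # simulate a seed that lost mtimes
rsync -rtni "$src/" "$dst/"
rm -rf "$src" "$dst"
```

A string like ">f+++++++++" would instead mean the file is treated as brand new on the destination.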

For some of this kind of stuff, I use Syncthing. I don't think it's right for your NAS → ROCK scenario, but I would use it for the remote stuff. Or I'd use Synology's Drive Server / Drive ShareSync. Both of those would give you remote sync without the need for a site-to-site VPN (each has a redirector). Syncthing has a steep learning curve, but once you're in, you're in. I use it to do full private-cloud-type stuff: syncing the "Documents" directories on our computers to our "Home" directories on the Synology. I've tried using Synology Drive on user machines in the past and can't stand it; it has bugs and is a resource hog. Once you've got Syncthing up, a lot of problems look like Syncthing problems.

Ok… commercial over. In case you do want to jump ship from rsync for the remote scenario, that's what I'd suggest you look at. Short of that, though, check to see if different mod times are causing your issue, because if mod times don't match and you're in "push" mode, I assume it'll just overwrite the file (and you are in push mode, right?).

Before I got rsync working directly on my NAS, I was synchronizing my NAS and ROCK using Robocopy on a Windows box. Other than the issues I mentioned with special characters in folder names, no other files were re-copied. As @gTunes mentioned, it may have to do with the -a switch. I honestly donā€™t know. This is my first foray into rsync.

Seriously? Nice! Would not have guessed that based on your posts!

Cool. None of us are experts, which I kind of love. Permission to muck about and fail. Worst case scenario it takes a week I suppose, and after that it should be pretty straightforward for the smaller adds/edits I can envision. Not sure I should chase another solution before I get this one working, and I know it works even if initial is inefficient.

Thanks all!

EDIT: on thinking about it, I bet it would have worked if I'd first seeded the remote drive locally using rsync instead of a plain network file copy; so what looks like needless re-copying now is actually copying it needs to do. You live and learn.


Also, I discovered something clever if you're ever in this situation where you're not sure what's going on with rsync over a slow connection / big job in Synology's Task Scheduler. If you want to see where it is working as it traverses the file structure, you can go in DSM to Resource Monitor / Connections / Accessed Files, and you'll see the file most recently touched by rsync (which runs as a process under the DSM Desktop Service, assuming you're running it through Task Scheduler). If you then go to that directory in the target file system, you'll actually see files populate, get a sense of how fast they're going, and get some reassurance that something is happening. I don't know exactly what order the file system is traversed in, so this is nothing like an ability to estimate % completion, but it does give you a sense of progress, i.e. that something is occurring. Which is frankly what I needed.
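A complementary trick when running the same job by hand from an SSH session (assumption: DSM's rsync is version 3.1 or newer, which added this flag; --progress is the older per-file fallback) is to let rsync report its own overall progress:

```shell
# Whole-transfer progress on one line: bytes done, percentage, rate, ETA.
# Destination path is the HomeROCK mount from the job earlier in the thread.
rsync -av --info=progress2 /volume1/music/ \
    /volume1/MyMounts/HomeROCK/1_44_1-42218_SPK_DD5661978392C_b66cf5ac-27e3-4c3d-8e42-645be011x20b-p1/music
```

Unlike the Resource Monitor view, this gives an actual byte count and ETA, though the ETA is only as good as rsync's guess at the remaining work.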


Question… and I think the answer is going to be "try it and report back", but I figured I'd see if anyone had any ideas.

Currently my way of getting files from my NAS music master to my two cores' USB-attached SSDs is the Task Scheduler rsync jobs listed above. One is local, and I have it run hourly. The other goes across a VPN tunnel to another house. Both are set up via remote CIFS mounts on the local Synology, so the remote house's job looks like:

rsync -av --delete --exclude '@eaDir' --exclude '#snapshot' --exclude '#recycle' --exclude '.DS_Store' /volume1/music/ /volume1/homes/admin/RemoteRock/1_44_1-42218_SSK_DD5641988763D_c57f2da1-6301-49bc-6299-3841eef904e3-p1/music

(Again, RemoteRock is a CIFS mount of a NUC in another state that has a local IP address through an OpenVPN tunnel)

They're on different companies' cable plans, each with guaranteed upload speeds in the range of 20 Mbps. I currently get more like 7 Mbps measured at the UniFi console, and much less in reality: it took 9 days for my 600 GB music share to traverse the connection. (I was stupid and didn't start the copy locally, but once it was going I was lazy and decided to see how it did.) I'm sure there's a bunch of overhead, and I'm sure there's a bottleneck here or there. Mostly it's fine. It works, which is amazing. I'm never adding more than 3-4 albums a day, and that is infrequent. So I should probably leave well enough alone.

But… I also have a crappy little Synology DS220j that sits in my remote house and that I use as a Hyper Backup vault for important stuff on my primary home NAS. So I have backup rotation etc. in case my house burns down, or a guest goes into my primary home server closet, takes my DS918+, and drop-kicks it.

But what I'm wondering is whether I should mount the remote ROCK via SMB on the remote Synology and then use syntax more like rsync -av /volume1/music/ rsyncuser@192.168.10.232::/volume1/mounts/secondhomerock/1_44_1-42218_SSK_DD5641988763D_c57f2da1-6301-49bc-6299-3841eef904e3-p1/music instead, relying on the fact that the second-home Synology can act as an rsync daemon.

In other words, is there an advantage in speed/reliability to letting the CIFS mount be a local one and having rsync traverse the tunnel, or should I keep it like it is, where the CIFS mount is what traverses the tunnel and rsync does all its work within the local Synology? (Sorry if I'm not explaining this with exactly the right words; I hope it's clear.)
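One detail about the double-colon form worth knowing before committing to it: what follows `::` is an rsync *module* name defined in the daemon's rsyncd.conf, not an absolute path (on a Synology, the modules are whatever shares its rsync service exposes, so the exact name will differ per box). A self-contained sketch with a throwaway local daemon shows the shape:

```shell
# Minimal rsync daemon on a high port; "music" is a module defined in
# rsyncd.conf, and the client addresses it as host::music, never as an
# absolute path. Temp dirs stand in for real shares.
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/served"
echo demo > "$tmp/src/a.flac"
cat > "$tmp/rsyncd.conf" <<EOF
port = 8873
pid file = $tmp/rsyncd.pid
[music]
    path = $tmp/served
    read only = false
    use chroot = false
EOF
rsync --daemon --config="$tmp/rsyncd.conf"
sleep 1
rsync -a --port=8873 "$tmp/src/" localhost::music/   # daemon (::) form
kill "$(cat "$tmp/rsyncd.pid")"
rm -rf "$tmp"
```

The daemon form skips SSH entirely, which is exactly why it should only run inside the VPN tunnel, as described below.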

Also, note that I'm not encrypting anything over rsync, because I'm relying on my site-to-site VPN as my source of security. I rotate that password periodically and try to assume that it can't be a worse vulnerability than a frontal attack on either home's UniFi IPS detection, firewall, ARC port, etc.

Thanks for any thoughts. Never thought I'd be trying to figure stuff like this out for myself.

I suspect that rsync isn't well suited to transferring data over WAN connections (lots of acknowledgements over a high-latency connection will slow down transfer rates). If you have a second Synology at the remote house, I might try running ShareSync to sync the music over to the second NAS and then rsync it over to your second ROCK from there.

So ShareSync looks cool, and I can see in the docs that I can actually get ShareSync to write to a mount on the remote NAS, so I don't have to do a two-step process. But if that's a good method for remote, why wouldn't it be a good method for local? In other words, if I'm copying localNAS → RemoteROCK this way, why shouldn't I use the same method for localNAS → localROCK? Is it impossible to use that tool from one NAS to a mount on itself?

I'm not a great administrator, so having a single approach for both would be far better for me and all my many abilities to remember (and not remember) how to do things.

EDIT: got it, my guess was right. ShareSync is pretty great at keeping folders in sync across two Synology NASes, but it can't sync to a local drive. I tried to see if I could fool it by getting it to connect to localhost:5001, but no such luck. Back to the drawing board. Perhaps rsync locally, and ShareSync to the remote NAS with a destination of the CIFS-mounted folder on the remote ROCK. Having just sent 600 GB over this connection with rsync, and having a connection that works, I may leave well enough alone for now. (I'm not sure how Cox & Xfinity are going to love me after this one; data caps aren't yet in effect where I live, but I'm not sure I want to get involved.) On my list of stuff to do.

localhost:5005 has worked for me using Synology's Cloud Sync, to designate the NAS as the "cloud".

Update. I think I've reached the acceptance phase.

  1. ShareSync to keep the two NAS folders in two-way sync (it's possible, though unlikely, that I'll do ripping in both locations), with Intelliversioning on in case I do something dumb.
  2. In each location, a 4x/day rsync job to keep the local NAS copied to the ROCK-attached USB SSD.
  3. Hyper Backup with weekly snapshots kept for a year.
  4. ~1x/year copy to a spinning drive that I keep in an (obviously air-gapped) fireproof box in the basement; it's on my home maintenance list.

I'm not done with this yet; I'm just doing the local initial sync of the two Synology drives via Drive ShareSync. After that I've got to bring the drive out to the remote house and then set up rsync out there. But this feels like a plan. I've learned a bunch.

The advantage of this is that it actually applies to much more than just music; there are also a lot of scanned family-history docs etc. that are too big to pay to keep in Google Drive and that I can't bear the thought of losing forever in a fire. And I'm too lazy to figure out, on top of all this, how to deal with Glacier backup or whatever. So for now this works, I think.
