Backing up internal storage to Synology NAS

You could try the following. I mount my ROCK SSD to a convenient location; in my case I have a share that is primarily for backup mounts to my DS416, but I mount my ROCK SSD there as well.

The following rsync job is set up in my Task Scheduler. I run it manually; I never got around to defining an automatic trigger:

[screenshot: the rsync task in DSM Task Scheduler]


You’re a scholar and a gentleman. Very much appreciated.

You’re welcome, happy to help.

Hey, I have a question for the esteemed folks on this thread…

I’d ideally like to treat my music folder on my Synology 918+ as master, and have it mirrored to two different ROCKs’ local storage (so every night after I’ve added new folders to my NAS’s music folder, the next morning they’re on each ROCK’s local storage and library). Each one is visible on my local network, at 192.168.1.100 and 192.168.10.100. Ideally I’d be able to use Active Backup to copy from the NAS to the ROCK’s USB-attached storage, but I can’t figure out how to go in that direction, since I can’t install the Active Backup Agent on the ROCK (obviously). Is there a way to achieve this? Am I doing something crazy by treating the NAS as master? (Don’t worry, I also keep quarterly offsite backups on air-gapped hard drives just to be darn sure, because I don’t trust NAS for backups.)

Oh my good gosh. This took me a while to figure out (mounting my remote ROCK cifs shares, what exactly happened based on directory names, etc) but this was pretty rookie stuff and now that I’m through it it’s just amazing. Master on NAS, automagically copies immediately to my home ROCK and my remote ROCK. Dang. Ninja style stuff. Thank you @ACvitus !

EDIT: there’s one issue. I’m trying to back up my NAS-based FLAC masters to two locations: 1 is a local ROCK, whose internal storage is mounted as a CIFS share, and the other is a remote ROCK, whose internal storage is mounted as a CIFS share over a site-to-site OpenVPN tunnel. The problem is that CloudSync can only add 1 WebDAV sync to the same localhost:5005, and you can’t add two backup tasks to the same WebDAV CloudSync task that share the same local folder. And you obviously can’t run WebDAV on ROCK. I suppose I could daisy chain the two and copy from one ROCK mount to the other over the tunnel, but that intuitively sounds fragile. Perhaps I should use a different approach - I do have a Synology in each location, and I could mount the remote ROCK’s storage on the remote synology, and then run WebDAV on that Synology… I’m not very deep in this stuff, so not sure if that would work.

This can be done. It took me forever to figure this out and I finally got it thanks to some other poster here.

  1. In File Services, enable rsync
  2. In File Station, map a remote folder to your ROCK’s InternalStorage folder
  3. In Task Scheduler, create a task that runs the following command (you may need to adjust paths/host names):

rsync -av --delete --exclude ‘@eaDir’ /volume1/Library/ /volume1/ROCK/Library

Initially, I recommend setting it up to email you and running it manually to make sure everything is working OK. For example, I had to fix up some folder names with special characters. Once you’re satisfied it works as desired, schedule the task to run as frequently as you please. I have it running at 1-minute intervals! It’s practically real time.


Um, you’re amazing, thank you. Wish I could sticky this. Others will need this. Amazing.

I did eventually figure out that the single quotes in the script you pasted need to be replaced with standard single quotes, because I kept getting @eaDir files. I think this forum software replaces single quotes with “fancy beginning and end quotes” for visual display. But that’s on me, not you, and I figured it out, which made me feel clever.

I wish there were a way to “terminate after 15 minutes” or pause or something like that, so I could test better before letting the whole several-GB job run. But, like you, I’ll do it manually with full logging and email alerts for everything for a while. This is AWESOME. Again, thank you. I sound like a little kid, but this is so much better than running Cloud Sync (which I hear is being deprecated anyway?).
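There actually is a way to get that “terminate after 15 minutes” behavior (a sketch, assuming your DSM build ships the coreutils/BusyBox timeout command): timeout wraps any command and kills it after a set duration, and rsync’s --partial keeps partially transferred files so the next run resumes instead of starting over.

```shell
# Kill the sync after 15 minutes (timeout exits with code 124 if it fired).
# --partial keeps half-finished files so the next run can resume them.
# GNU timeout accepts the "15m" suffix; BusyBox's wants plain seconds (900).
timeout 15m rsync -av --partial --delete --exclude '@eaDir' \
  /volume1/Library/ /volume1/ROCK/Library
```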


This is a great thread. Whenever I set up any kind of sync, I use the exclusion list below. I suggest you use something similar for your rsync task. In addition to creating @eaDir folders, Synology can create #snapshot and #recycle folders depending on how you have your shares configured. .DS_Store folders can show up if you ever mount shares on a Mac. You’re never going to regret excluding these, but you might regret not doing it :)

@eaDir
#snapshot
#recycle
.DS_Store


As always, additive, on point, and gentlemanly. Had a bunch of .DS_Store files, glad to be rid of them in my copies. Cleaning things up left & right. Gracias.

Parenthetically: I feel like a bit of a stud when I have stuff like this running in the background, traversing a CIFS mount of a ROCK-attached USB disk over an OpenVPN tunnel.

rsync -av --delete --exclude '@eaDir' --exclude '#snapshot' --exclude '#recycle' --exclude '.DS_Store' /volume1/music/ /volume1/MyMounts/HomeROCK/1_44_1-42218_SPK_DD5661978392C_b66cf5ac-27e3-4c3d-8e42-645be011x20b-p1/music

Old me would not have believed I could do this.


Ok, been running for a while - and I have a question for those who know more about this stuff than me (@EvilGnome6 @gTunes and other smart folks here). Does rsync send the whole file even though it’s already on the destination?

Current status:

  1. rsync from my NAS to ROCK’s USB-attached storage seems to be running like clockwork. But it did take a while even though the destination was exactly what the source already was: 4.5 hours for the initial sync of nearly 1TB, while incremental syncs take 20 seconds. But it works - I have tried adding files and deleting files, and it’s quick. Just works, as advertised. I have it set to run once a day, and will run it manually when I have a batch of new files. It builds up log files and sends me emails on errors; I’ll keep those for a while just so I can see what’s going on. But I assume (as is suggested by the log file) that it actually sent 1TB over the local network with the -av --delete --exclude options, even though the destination files were already identical (and seemed to be by my own inspection of timestamps, file sizes, etc).
  2. rsync from my NAS in home A to my remote ROCK’s USB-attached storage in home B over site-to-site OpenVPN seems very slow, I’m assuming for similar reasons - I think it’s trying to send 1TB of files across the tunnel even though the content is the same at the destination. It doesn’t seem to be changing timestamps in most cases (though it is changing them in a few cases). It also doesn’t seem to be traversing the file structure in any natural order that I can understand, so it’s hard to tell how much progress it’s making. But it shows no evidence of changing anything - file names and file structure seem to be the same as in my pre-seeded content. I can’t tell if it’s overwriting each file with an identical one, but I assume that’s what it’s doing.

Should I just let it be, accept that it’ll take a few days to a week for the initial upload and not worry about it? Or should I try to understand rsync options like --partial-dir and -c? I thought if I’d pre-seeded 100% of the content I’d be in good shape. But house B is > 1 hour away, so I’m not going to drive back and forth to try to pre-seed it differently.

Thanks!

Did you use the same command line options for both scenarios? I haven’t used rsync in a while, but looking at the man page, -a expands to -rlptgoD. Most of that looks right, but I don’t know if you want “p” for either scenario - that’s “permissions”. “g” is group, and that probably doesn’t make sense either. “o” is owner. “D” is devices.

I don’t think this is your perf issue but I don’t know if you want to be trying to clone any permission stuff across systems - that tends to only help if you know exactly what you’re doing.

Regarding perf - the first and most obvious question is “What do you expect it to be doing, and are you sure it’s not doing exactly that?”

If you’d specified -c, I’m fairly certain it would have to pull file contents from remote to local in order to compute checksums. Without that option, I think it just compares file sizes and modification times. Did whatever you do to seed your remote data set preserve modification times? Should be easy to check.

For some of this kind of stuff, I use syncThing. I don’t think it’s right for your NAS → ROCK scenario, but I would use it for the remote stuff. Or I’d use Synology’s Drive Server/Drive ShareSync. Both of those would give you remote sync without the need for a site-to-site VPN (each has a redirector). syncThing has a steep learning curve, but once you’re in, you’re in. I use it to do full private cloud type stuff - sync our “Documents” directories on computers to our “Home” directories on Synology. I’ve tried using Synology Drive on user machines in the past and can’t stand it - it has bugs and is a resource hog. Once you’ve got syncThing up, a lot of problems look like syncThing problems.

Ok… commercial over. In case you do want to jump ship from rsync for the remote scenario, that’s what I’d suggest you look at. Short of that, though, check to see if different mod times are causing your issue, because if mod times don’t match and you’re in “push” mode, I assume it’ll just overwrite the file (and you are in push mode, right?).

Before I got rsync working directly on my NAS, I was synchronizing my NAS and ROCK using Robocopy on a Windows box. Other than the issues I mentioned with special characters in folder names, no other files were re-copied. As @gTunes mentioned, it may have to do with the -a switch. I honestly don’t know. This is my first foray into rsync.

Seriously? Nice! Would not have guessed that based on your posts!

Cool. None of us are experts, which I kind of love. Permission to muck about and fail. Worst case scenario it takes a week I suppose, and after that it should be pretty straightforward for the smaller adds/edits I can envision. Not sure I should chase another solution before I get this one working, and I know it works even if initial is inefficient.

Thanks all!

EDIT: on thinking about it, I bet it would have worked if I’d first seeded the remote drive locally using rsync instead of just a network file copy; so now it doesn’t really look like it’s copying, but it is, because it needs to. You live and learn.


Also, I discovered something clever if you’re ever in this situation where you’re not sure what’s going on with rsync over a slow connection / big job in Synology Task Scheduler. If you want to see where it is as it traverses the file structure, go in DSM to Resource Monitor / Connections / Accessed Files and you’ll be able to see the file most recently touched by rsync (which runs as a process under the DSM Desktop Service, assuming you’re running it through Task Scheduler). If you go to that directory in the target file system, you’ll actually see files populate, get a sense of how fast they’re going, and get some reassurance that something is happening. I don’t know in exactly what order the file system is traversed, so this is nothing like an ability to estimate % completion, but it does give you a sense of progress, i.e. that something is occurring. Which is frankly what I needed.


Question… and I think the answer is going to be “try it and report back”, but I figured I’d see if anyone had any ideas.

Currently my way of getting files from my NAS music master to my 2 cores’ USB-attached SSDs is the task scheduler rsync jobs listed above. One is local, and I have it run hourly. The other goes across a VPN tunnel to another house. Both are set up via remote CIFS mounts on the local Synology, so the remote house’s job looks like:

rsync -av --delete --exclude '@eaDir' --exclude '#snapshot' --exclude '#recycle' --exclude '.DS_Store' /volume1/music/ /volume1/homes/admin/RemoteRock/1_44_1-42218_SSK_DD5641988763D_c57f2da1-6301-49bc-6299-3841eef904e3-p1/music

(Again, RemoteRock is a CIFS mount of a NUC in another state that has a local IP address through an OpenVPN tunnel)

They’re both on different companies’ cable plans, with guaranteed upload speeds in the range of 20Mbps. I currently get more like 7Mbps (measured at the Unifi console), and it’s much less in reality - it took 9 days for my 600GB music share to traverse the connection. (I was stupid and didn’t start the copy locally, but once it was going I was lazy and decided to see how it did.) I’m sure there’s a bunch of overhead, and I’m sure there’s a bottleneck here or there. Mostly it’s fine. It works, which is amazing. I’m never adding more than 3-4 albums a day, and that’s infrequent. So I should probably leave well enough alone.

But… I also have a crappy little Synology ds220j that sits in my Remote House that I use as a Hyper Backup Vault for important stuff on my primary home NAS. So I have backup rotation etc in case my house burns down or a guest goes into my primary home server closet and takes my DS918+ and drop kicks it.

But what I’m wondering is whether I should mount the remote ROCK via SMB on the remote Synology and then use syntax more like rsync -av /volume1/music/ rsyncuser@192.168.10.232::/volume1/mounts/secondhomerock/1_44_1-42218_SSK_DD5641988763D_c57f2da1-6301-49bc-6299-3841eef904e3-p1/music instead, relying on the fact that the second-home Synology can act as an rsync daemon.

In other words, is there an advantage in speed / reliability to allowing the CIFS mount to be a local one and have rsync traverse the tunnel, or should I keep it like it is where it’s the CIFS mount that is traversing the tunnel and rsync is doing all its work within the local synology? (Sorry if I’m not explaining exactly with right words, hope this is clear)

Also, note that I’m not encrypting anything over rsync, because I’m relying on my site-to-site VPN as my source of security. I rotate that password periodically and try to assume that it can’t be a worse vulnerability than a frontal attack on either home’s Unifi IPS detection, firewall, ARC port, etc.

Thanks for any thoughts. Never thought I’d be trying to figure stuff like this out for myself.

I suspect that rsync isn’t well suited for transferring data over WAN connections (lots of acknowledgements over a high latency connection will slow down transfer rates). If you have a second Synology at the Remote House, I might try running ShareSync to sync the music over to the second NAS and rsync it over to your second ROCK from there.

So ShareSync looks cool, and I can see in the docs that I can actually get ShareSync to write to a mount on the remote NAS, so I don’t have to do a two-step process. But if that’s a good method for remote, why wouldn’t it be a good method for local? In other words, if I’m copying localNAS → RemoteROCK, why shouldn’t I use the same method for localNAS → localROCK? Is it impossible to use that tool from one NAS to a mount on itself?

I’m not a great administrator, so having a single approach for both would be far better for me and all my many abilities to remember (and not remember) how to do things.

EDIT: got it, my guess was right - ShareSync is pretty great at keeping folders in sync across two Synology NASes, but it can’t sync to a local drive. I tried to see if I could fool it by getting it to connect to localhost:5001, but no such luck. Back to the drawing board. Perhaps rsync locally, and ShareSync to the remote NAS with a destination of the CIFS-mounted folder on the remote ROCK. Having just sent 600GB over this connection with rsync, and having a connection that works, I may leave well enough alone for now (I’m not sure how Cox & Xfinity are going to love me after this one; data caps aren’t yet in effect where I live, but I’m not sure I want to push it). On my list of stuff to do.

localhost:5005 has worked for me using Synology’s Cloud Sync, to designate the NAS as the “cloud”

Update. I think I’ve reached the acceptance phase.

  1. ShareSync to keep the two NAS folders in 2-way sync (it’s possible, though unlikely, that I’ll do ripping in both locations), with Intelliversioning on in case I do something dumb
  2. In each location, 4x/day rsync job to keep the local NAS copied to the ROCK-attached USB SSD
  3. Hyperbackup with weekly snapshots for a year
  4. ~1x/year copy to spinning drive that I keep in an (obviously air-gapped) fireproof box kept in the basement; it’s on my home maintenance list

I’m not done with this yet; I’m just doing the local initial sync of the two Synology drives via Drive ShareSync. After that I’ve gotta bring the drive out to the remote house, then set up rsync out there. But this feels like a plan. I’ve learned a bunch.

The advantage of this is that it actually applies to much more than just music; there’s also a lot of scanned family history docs etc. that are too big to pay to keep in Google Drive and which I can’t bear the thought of being lost forever in a fire. And I’m too lazy to figure out in addition to all this how to deal with Glacier backup or whatever. So for now this works, I think.
